34% of Organizations are Already Using or Implementing AI Application Security Tools

Thirty-four percent of organizations are either already using or implementing artificial intelligence (AI) application security tools to mitigate the accompanying risks of generative AI (GenAI), according to a new survey from Gartner, Inc. Over half (56%) of respondents said they are also exploring such solutions.

The Gartner Peer Community survey was conducted from April 1 to April 7 among 150 IT and information security leaders at organizations where GenAI or foundation models are in use, planned for use, or being explored.

Twenty-six percent of survey respondents said they are currently implementing or using privacy-enhancing technologies (PETs), while 25% said the same of ModelOps and 24% of model monitoring.

“IT and security and risk management leaders must, in addition to implementing security tools, consider supporting an enterprise-wide strategy for AI TRiSM (trust, risk and security management),” said Avivah Litan, Distinguished VP Analyst at Gartner. “AI TRiSM manages data and process flows between users and companies who host generative AI foundation models, and must be a continuous effort, not a one-off exercise, to continuously protect an organization.”

IT Is Ultimately Responsible for GenAI Security
While 93% of IT and security leaders surveyed said they are at least somewhat involved in their organization’s GenAI security and risk management efforts, only 24% said they own this responsibility.

Among the respondents who do not own the responsibility for GenAI security and/or risk management, 44% reported that ultimate responsibility for GenAI security rested with IT, while 20% said it was owned by their organization’s governance, risk and compliance department.

Top-of-Mind Risks
The risks associated with GenAI are significant and constantly evolving. Survey respondents indicated that undesirable outputs and insecure code are among their top-of-mind risks when using GenAI:

* 58% of respondents are concerned about incorrect or biased outputs.
* 57% of respondents are concerned about leaked secrets in AI-generated code.

“Organizations that don’t manage AI risk will witness their models not performing as intended and, in the worst case, causing human or property damage,” said Litan. “This will result in security failures, financial and reputational loss, and harm to individuals from incorrect, manipulated, unethical or biased outcomes. AI malperformance can also cause organizations to make poor business decisions.”
