With AI adoption accelerating across enterprises, what do you view as the top information security challenge that leaders should address? How can CISOs, VPs, and Directors align governance, risk, and security investments to enable AI innovation without creating new exposures?

1.4k views · 1 Upvote · 9 Comments
Director of Engineering7 hours ago

You need to invest as much in a clear process, with controls and guardrails, as in the technology itself. Many companies move from POC to production without knowing how to scale, and ensuring you have strong governance embedded in that process is just as important.
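A guardrail built into the process can be as simple as an automated gate that AI output must clear before release. The sketch below is illustrative only; the deny-list and escalation message are hypothetical examples, not a real product's API.

```python
# Minimal sketch of an output guardrail: AI responses must pass an
# automated check before release, otherwise they are escalated for
# human review. The deny-list here is illustrative, not exhaustive.
BLOCKED_TERMS = {"password", "api_key", "internal use only"}

def passes_guardrails(response: str) -> bool:
    """Return True if the response contains no deny-listed terms."""
    lowered = response.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)

def release(response: str) -> str:
    """Release a response, or escalate it when a guardrail trips."""
    if not passes_guardrails(response):
        return "ESCALATED: blocked pending human review"
    return response
```

In practice this gate would sit alongside logging and approval workflows, so that what scales from POC to production is the governed process, not just the model.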

Director of Engineering11 hours ago

AI is evolving territory, and most importantly, all organizations need to be aware that the data they emit should not be used to train a model. All compliance requirements need to be followed, because it is the company's responsibility to meet risk and governance standards. Anonymizing the data is good practice. Using AI does not replace manual intervention: the process should require that a human set of eyes reviews AI-generated responses before they are used.
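The anonymization practice mentioned above can be sketched as a redaction pass that strips common PII patterns before any text reaches a model. This is a minimal illustration, not a complete PII detector; the regex patterns are assumptions covering only a few obvious formats.

```python
import re

# Minimal sketch: redact common PII patterns before sending text to a
# model. The patterns are illustrative, not an exhaustive PII detector.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def anonymize(text: str) -> str:
    """Replace matched PII patterns with placeholder tokens."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(anonymize("Contact Jane at jane.doe@example.com or 555-123-4567."))
# -> Contact Jane at [EMAIL] or [PHONE].
```

A production pipeline would pair a pass like this with a proper PII-detection service and keep the human-review step for anything the automated redaction might miss.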

Senior Principal Architect in Software5 days ago

One of the biggest security challenges with AI adoption is ensuring the integrity, confidentiality, and responsible use of the data and machine learning models behind it. As organizations move quickly to innovate, risks such as data poisoning, model manipulation, and ungoverned "shadow AI" across the organization can undermine trust if not addressed early. Security leaders should work to embed AI governance into existing frameworks, drawing on standards like the NIST AI RMF. Prioritizing investments in data protection, data privacy, model monitoring, and AI red teaming can strengthen resilience without slowing down innovation. By approaching AI as both a valuable asset and a potential risk vector, leaders can put the right guardrails in place to support adoption that is safe, scalable, and sustainable.

Chief Data Officer5 days ago

Before adopting AI, it is important to be clear about what it can truly solve and what it cannot. This helps set realistic expectations, measure ROI properly, and avoid unnecessary risk or disappointment when the technology is not addressing a real business problem.

AI also brings a lot of uncertainty. Its performance can be hard to predict, regulations are still evolving, and outcomes can be biased or unexpected. To manage this, organizations need a cross-functional team that brings together technical, business, legal, and leadership expertise.

They also need a governance model with clear roles and decision processes, plus the ability to anticipate regulatory changes before they become constraints. Just as important, the data must be clean, reliable, and compliant.

Principal Investigator6 days ago

Verifying the data behind AI results is a top priority, but it requires expertise in the subject matter of the answers, so I would call it expert fact verification. There is a real possibility that results from AI are incorrect or simply wrong for a variety of reasons, especially since the technology is new to the public at this scale. Only time will tell how reliable AI-sourced adoption will prove over the next five to ten years.
