With AI adoption accelerating across enterprises, what do you view as the top information security challenge that leaders should address? How can CISOs, VPs, and Directors align governance, risk, and security investments to enable AI innovation without creating new exposures?

5.8k views · 1 Upvote · 13 Comments
Head, Software Engineering, Cloud and Digital Transformation · a day ago

The key will be continuously keeping up with emerging concepts and AI technology, and with new AI-enabled threats in the market. A few comments on this:

1. Enterprises will use ML models from different vendors, and employees can accidentally expose data by pasting sensitive information into LLM prompts. Security teams will need to evaluate how that data is protected once it reaches a vendor's model.

2. Defining new security controls, and identifying and deploying solutions such as a RAG security gateway.

3. Watching for new security threats and maintaining protection and remediation plans. A recent example: in August, browser-extension-based man-in-the-middle attacks became widespread.
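The prompt-side exposure in point 1 can be sketched as a simple outbound filter: scan prompts for sensitive tokens before they leave the enterprise boundary. The patterns and function names below are illustrative assumptions, not any vendor's API; a real deployment would sit behind an egress proxy and use an enterprise DLP classifier rather than regexes alone.

```python
import re

# Illustrative patterns only -- a real DLP system uses trained classifiers.
SENSITIVE_PATTERNS = {
    "email":   re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn":     re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def redact_prompt(prompt: str) -> tuple[str, list[str]]:
    """Redact sensitive tokens before the prompt is sent to an external LLM."""
    findings = []
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(prompt):
            findings.append(label)
            prompt = pattern.sub(f"[REDACTED-{label.upper()}]", prompt)
    return prompt, findings

clean, hits = redact_prompt("Contact jane.doe@corp.com, SSN 123-45-6789")
# hits: ["email", "ssn"]; the cleaned prompt carries redaction placeholders
```

Logging the findings (not the raw values) also gives the security team the visibility into shadow-AI usage that the comment above calls for.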

AI Governance Strategist in Travel and Hospitality · 4 days ago

A lot of great points here on data leakage, shadow AI, bias, and governance.
From my perspective, most of these risks seem to collapse into what I call the 3 A’s:

Alignment → WHY is it doing this? (intent)
Agency → WHO decides? (control)
Autonomy → HOW MUCH can it act alone? (independence)

Shadow AI often reflects unclear agency (e.g., who approved new AI features?).
Data leaks often come from excessive autonomy (no guardrails on access).
Bias often shows up as poor alignment (optimizing for patterns, not fairness).
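One way to make the 3 A's concrete is as three pre-execution checks on a proposed AI action. Everything below is a hypothetical sketch of that framing: the field names, allow-lists, and autonomy scale are assumptions for illustration, not an established policy schema.

```python
from dataclasses import dataclass

@dataclass
class ProposedAction:
    intent: str          # Alignment: WHY -- the declared purpose of the action
    approved_by: str     # Agency: WHO -- the accountable human or role
    autonomy_level: int  # Autonomy: HOW MUCH -- 0 = suggest only, 2 = act alone

# Assumed governance inputs; a real org would source these from policy.
ALLOWED_INTENTS = {"summarize_ticket", "draft_reply"}
APPROVERS = {"ai-governance-board", "team-lead"}
MAX_AUTONOMY = 1  # anything above this needs a human in the loop

def three_a_check(action: ProposedAction) -> list[str]:
    """Return the list of A's the action fails; an empty list means it may proceed."""
    failures = []
    if action.intent not in ALLOWED_INTENTS:
        failures.append("alignment")  # optimizing for the wrong thing
    if action.approved_by not in APPROVERS:
        failures.append("agency")     # shadow AI: nobody approved this feature
    if action.autonomy_level > MAX_AUTONOMY:
        failures.append("autonomy")   # no guardrails on acting alone
    return failures
```

The mapping in the comment falls out directly: an unapproved feature fails on agency, unrestricted access fails on autonomy, and a misstated objective fails on alignment.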

This lens has helped me make sense of the landscape, but I’d be curious — which “A” do you see as the toughest challenge in your organization?

Director of Operations in Banking · 22 days ago

The top security challenge with AI adoption is uncontrolled data exposure—whether through shadow AI, sensitive data being fed into models, or lack of oversight on third-party tools. To address this, leaders should establish clear AI use policies, embed AI into existing risk assessments, and invest in controls for data leakage, access management, and monitoring.

CISOs, VPs, and Directors can enable innovation without new exposures by creating governance guardrails early, offering safe environments for experimentation, and aligning security investments around protecting the organization’s most critical data.

CISO in Insurance (except health) · 25 days ago

To effectively implement AI, it is crucial to adopt a proactive and strategic approach. The first step involves prohibiting the use of AI-related services until a comprehensive framework is established. Subsequently, collaborate with the Chief Information Officer (CIO) and the business to gain a thorough understanding of their requirements. The third step entails pre-defining and reaching an agreement with specific AI service providers for various functions, ensuring that data security measures are robustly implemented.

For instance, if development teams require access to ten distinct solutions, it is essential to engage with the CIO/CTO to establish a common toolset among all teams. This strategic alignment not only facilitates the management of third-party risks and exposures but also enhances operational efficiency.
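The pre-agreed vendor approach above amounts to an allow-list enforced at the network or gateway layer. A minimal sketch, assuming a registry that maps each business function to its approved services (the function and vendor names are made up for illustration):

```python
# Hypothetical pre-approved registry, agreed with the CIO/CTO per business function.
APPROVED_AI_SERVICES = {
    "code-assistant": {"vendor-a-copilot"},
    "chat":           {"vendor-b-chat"},
    "translation":    {"vendor-c-translate"},
}

def is_request_allowed(function: str, service: str) -> bool:
    """Egress-gateway check: only pre-agreed services per business function pass."""
    return service in APPROVED_AI_SERVICES.get(function, set())

# A request to an unvetted tool is blocked, surfacing shadow AI for review
# rather than silently letting data flow to an unknown third party.
```

Consolidating ten team-specific tools into one such registry is what turns third-party risk management from ad hoc review into a single enforceable control point.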

Director of Engineering · a month ago

You need to invest in a clear process, with controls and guardrails, as much as in the technology itself. Many companies move from POC to production without knowing how to scale; encapsulating strong governance within that process is just as important.
