92% of Organizations Lack Visibility Into AI Identities: The Ungoverned Workforce

Target Audience: CISOs, Security Architects, Identity Teams
Category: Research / Identity & Access Management

Executive Summary:
New research from Cybersecurity Insiders (in collaboration with Saviynt) reveals that while 71% of CISOs confirm AI tools have access to core systems like Salesforce and SAP, only 16% report this access is governed effectively. 92% of respondents lack full visibility into AI identities, 95% doubt their ability to detect or contain misuse, and only 5% feel confident they could contain a compromised AI agent. This is quantitative validation of the “ungoverned workforce” risk your company’s AI Control Plane framework addresses.


Evidence Tier: Secondary Verified (Cybersecurity Insiders research, reported by HEAL Security)

The Numbers That Should Keep You Up at Night

New research published April 21, 2026, from Cybersecurity Insiders (in collaboration with Saviynt) provides the most comprehensive quantification to date of the AI identity governance gap.

| Metric | Finding |
| --- | --- |
| AI tools have access to core systems (Salesforce, SAP, etc.) | 71% of organizations |
| Access is “governed effectively” | Only 16% |
| Full visibility into AI identities | Only 8% (92% lack visibility) |
| Confidence in ability to detect or contain AI identity misuse | Only 5% |
| No formal access policy enforcement for AI identities | 86% |
| Unsanctioned AI tools already running in environment | 75% |

What the Research Means

Holger Schulze, founder of Cybersecurity Insiders, stated:

“This is no longer a future-state problem. AI already has access to business-critical systems, often with more autonomy and less oversight than any security team would knowingly approve. If organizations cannot identify those accounts, understand their privileges, and enforce policy around them, they do not really control the environments those systems operate in.”

The Nature of the Risk

AI identities differ from traditional service accounts in five critical ways:

| Traditional Service Account | AI Identity |
| --- | --- |
| Static permissions | Dynamic, API-invoking |
| Human management | Autonomous operation |
| Known inventory | Often unsanctioned; discovered after deployment |
| Audit trail (if configured) | Persistent credentials with logging gaps |
| Limited scope | Cross-application operation |

Why Your AI Control Plane Framework Is the Answer

Your company’s AI Control Plane framework (Section 8 of your profile) directly addresses each governance gap identified in the research:

| Gap | AI Control Plane Layer |
| --- | --- |
| Lack of visibility into AI identities | Layer 1: Identity & Credentials |
| No policy enforcement | Layer 2: Permissions & Scoping |
| Uncontrolled API invocation | Layer 3: Orchestration & MCP |
| No containment confidence | Layer 4: Validation Gates |
| Audit gaps | Layer 5: Observability & Audit |
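To make the Layer 2 (Permissions & Scoping) gap concrete, here is a minimal, deny-by-default policy gate sketch. The identity names, action strings, and in-memory `POLICY` table are all hypothetical illustrations; a real deployment would resolve both identity and policy from an IGA platform or policy decision point, not a dict.

```python
"""Sketch of a deny-by-default authorization gate for AI identities.

Assumption: each AI identity is scoped to an explicit allow-list of
actions. Anything not on the list -- including identities the security
team has never inventoried -- is denied.
"""

# Hypothetical allow-lists, keyed by AI identity name.
POLICY: dict[str, set[str]] = {
    "sales-copilot": {"salesforce:read"},
    "finance-agent": {"sap:read", "sap:post_invoice"},
}


def authorize(identity: str, action: str) -> bool:
    """Permit only explicitly allow-listed actions; unknown identities get an empty set."""
    return action in POLICY.get(identity, set())


if __name__ == "__main__":
    # A scoped identity can do only what it was granted.
    print(authorize("sales-copilot", "salesforce:read"))    # allowed
    print(authorize("sales-copilot", "salesforce:delete"))  # denied
    # An unsanctioned tool falls through to deny by default --
    # the enforcement gap the research flags for 86% of organizations.
    print(authorize("shadow-llm-tool", "sap:read"))         # denied
```

The design choice that matters is the fall-through: an identity absent from the policy table is denied rather than inheriting ambient permissions, which is the inverse of how most unsanctioned AI tools operate today.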

📌 Notably Absent

The research does not differentiate between types of AI identities (internal AI agents vs. third-party AI SaaS tools). The 95% containment doubt figure suggests the problem is pervasive across categories.

Actionable Recommendations (SHALL / SHOULD)

| Control | Priority |
| --- | --- |
| SHALL: Inventory all AI identities with access to core systems within 30 days | Immediate |
| SHALL: Implement continuous discovery for unsanctioned AI tools | 30 days |
| SHALL: Enforce formal access policies for AI identities (no exceptions) | 60 days |
| SHOULD: Establish automated AI identity behavior baselining | 90 days |
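The first SHALL (inventory AI identities) can be bootstrapped from an existing service-account export. The sketch below assumes a hypothetical list of account records exported from an IdP or IGA tool; the field names (`name`, `systems`) and the keyword heuristic are illustrative, not a vendor API.

```python
"""Sketch: bootstrap an AI-identity inventory from a service-account export.

Assumptions: accounts arrive as dicts with a `name` and a `systems` list;
AI identities are flagged by naming-convention keywords. A real inventory
would also inspect token scopes, OAuth app grants, and owner metadata.
"""

# Illustrative keywords that often show up in AI tool account names.
AI_MARKERS = ("agent", "copilot", "gpt", "llm", "bot")

# Core systems called out in the research.
CORE_SYSTEMS = {"salesforce", "sap"}


def looks_like_ai_identity(account: dict) -> bool:
    """Heuristic: flag accounts whose name suggests an AI tool."""
    name = account.get("name", "").lower()
    return any(marker in name for marker in AI_MARKERS)


def inventory_ai_identities(accounts: list[dict]) -> list[dict]:
    """Return flagged accounts, annotated with whether they touch core systems."""
    flagged = [a for a in accounts if looks_like_ai_identity(a)]
    for account in flagged:
        systems = {s.lower() for s in account.get("systems", [])}
        account["touches_core"] = bool(systems & CORE_SYSTEMS)
    return flagged


if __name__ == "__main__":
    sample = [
        {"name": "sales-copilot-svc", "systems": ["salesforce"]},
        {"name": "backup-job", "systems": ["sap"]},
        {"name": "support-llm-agent", "systems": ["zendesk"]},
    ]
    for acct in inventory_ai_identities(sample):
        print(acct["name"], "core access:", acct["touches_core"])
```

A name-based heuristic will miss deliberately unsanctioned tools, which is why the second SHALL pairs it with continuous discovery rather than a one-time sweep.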

The Bottom Line

The research confirms what your company’s threat intelligence has been documenting empirically: AI identities are the ungoverned workforce, and security teams lack even basic visibility. The time for “we’ll address this later” has passed. 71% of organizations already have AI in core systems. Governance must catch up now.
