AI Identity for Secure Enterprise Agentic AI

Posted on March 20, 2026 at 09:22 PM


Governing Non-Human Identities in the Age of Autonomous Agents

March 2026

Executive Summary

Agentic AI — autonomous systems capable of reasoning, planning, and acting across enterprise infrastructure — is advancing faster than the identity and access frameworks designed to govern it. Unlike traditional software, AI agents do not log in, do not follow predictable human usage patterns, and can hold privileges that exceed those of the humans who created them. This report examines why AI identity has emerged as the primary security control for enterprise agentic deployments, the threat vectors that arise when identity governance fails, and the technical standards and vendor frameworks available to address these risks.


Key finding: 88% of organizations have reported suspected or confirmed AI agent security incidents, yet only 22% treat AI agents as independent, identity-bearing entities. This governance gap is the defining enterprise security challenge of 2026. [Source: Gravitee, State of AI Agent Security 2026]


Three interrelated forces define the urgency: non-human identities (NHIs) now outnumber human identities by ratios of 50:1 or higher; the Model Context Protocol (MCP) is extending machine-to-machine access to core business systems at unprecedented speed; and regulatory bodies — including NIST — have launched dedicated AI agent standards initiatives. Organizations that fail to establish cryptographic, lifecycle-managed agent identities today face unquantifiable audit, compliance, and breach risk as agentic deployments scale.

1. Introduction and Background

1.1 The Shift from Generative to Agentic AI

Generative AI — large language models responding to human prompts — was the dominant enterprise AI paradigm from 2023 to 2024. Agentic AI represents the next evolution: systems that autonomously plan multi-step tasks, invoke tools, delegate to sub-agents, and act inside live enterprise systems with minimal human oversight. Gartner named agentic AI the top strategic technology trend for 2025 and projected that by 2028, 33% of enterprise software applications will include agentic capabilities, up from less than 1% in 2024.

In practice, enterprises are already deploying agents that open pull requests, query production databases, trigger financial workflows, and respond to customer inquiries — often operating continuously, around the clock, without a human in the loop.

1.2 Why Identity Becomes the Central Control

Traditional enterprise security assumes that access is initiated by a human user who authenticates once per session, acts within a predictable scope, and can be held accountable for each action. AI agents break every one of these assumptions. They are:

  • Ephemeral: Created on-demand for a specific task and destroyed afterward, with no persistent login history.

  • Delegated: Often acting on behalf of a human principal, inheriting permissions the delegator may not have intended to share.

  • Non-deterministic: Capable of discovering and exploiting access paths that their developers never anticipated.

  • Autonomous: Able to call other agents, APIs, and tools without further human authorization at each step.

Because there is no human judgment pausing the execution chain, identity — who the agent is, what it is allowed to do, and whether that scope is enforced in real time — becomes the only meaningful security control. As Frank Dickson, Group VP at IDC, has stated: 'Without a unified identity foundation, agentic systems introduce unmanageable risk across data, infrastructure, and compliance.'

2. Technical Overview

2.1 Non-Human Identities (NHIs) Defined

Non-human identities are digital credentials — service accounts, API keys, OAuth tokens, X.509 certificates, and machine tokens — that authenticate systems and automations rather than people. AI agents are the fastest-growing and highest-risk sub-category of NHI. They differ from traditional NHIs such as CI/CD pipeline service accounts or RPA bots in a critical respect: they reason and adapt, meaning their access behaviour is inherently unpredictable and cannot be fully modelled at provisioning time.

Current scale data underlines the governance challenge. NHIs outnumber human identities by 25–50x in the average enterprise, with some financial services environments reporting ratios as high as 96:1. Research from CyberArk found that 50% of organisations surveyed had experienced a breach in the past year tied to compromised machine identities.

2.2 The MCP Attack Surface

The Model Context Protocol (MCP) has become the dominant method for connecting LLM-based agents to external tools and data sources. In the same way that APIs enabled cloud platform integration, MCP enables machine-to-machine integration at agent speed. An MCP connection carries real authority: an agent can retrieve sensitive data, trigger workflows, and act inside critical systems — all without a person in the middle.

This creates a high-value attack surface. Security researchers have documented tool poisoning attacks, remote code execution vulnerabilities in MCP server implementations, and supply-chain tampering via malicious npm packages that silently exfiltrate data. In one documented incident, a GitHub MCP server allowed a malicious repository issue to inject hidden instructions that hijacked an agent and triggered data exfiltration from private repositories.


Critical risk: Agent-to-agent communication introduces impersonation, session smuggling, and capability escalation. A compromised research agent can insert hidden instructions into output consumed by a financial agent, which then executes unintended transactions.


2.3 The Governance Deficit

Enterprises are encountering what analysts describe as a 'triple threat': agentic risk (agents operating with administrative privileges exceeding their creators'), a governance deficit (machine-speed identity creation outpacing human-speed approval processes), and visibility gaps (inability to track which agents are active and what data they are accessing). According to Saviynt's 2026 identity security analysis, most organisations cannot answer basic questions about how many agents are running or what autonomous decisions they are making.

3. Data, Evidence, and Threat Landscape

3.1 Threat Vector Summary

The table below summarises the principal threat vectors in AI agent deployments, their impacts, and recommended mitigations.


Threat Vector | Description | Impact | Mitigation
Prompt Injection | Malicious input hijacks agent instructions | Unauthorized actions, data exfiltration | Input validation, sandboxed execution
Credential Sprawl | Static API keys and OAuth tokens left unrotated | Lateral movement, supply-chain breach | Dynamic short-lived credentials (SPIFFE/SVID)
Over-Privileged Agents | Agents inherit creator's full permission set | Blast radius amplified on compromise | Just-in-time, least-privilege provisioning
Agent Impersonation | Rogue agents exploit trust in multi-agent pipelines | Cascading unauthorized decisions | Cryptographic agent identity (mTLS, SPIFFE IDs)
Shadow Agents | Unsanctioned agents created outside IT oversight | Unmonitored access to core systems | Continuous agent inventory and discovery

3.2 Key Statistics

  • 88% of organisations report suspected or confirmed AI agent security incidents (Gravitee, 2026).

  • Only 22% treat AI agents as independent, identity-bearing entities requiring formal governance.

  • 80% of IT leaders have witnessed AI agents act outside their expected behaviour (SailPoint survey).

  • NHIs outnumber human identities by 50:1 on average; some environments exceed 96:1.

  • 68% of IT security incidents now involve machine identities (Obsidian Security, 2026).

  • 50% of organisations experienced a breach in the past year tied to unmanaged machine identities (CyberArk, 2025).

  • Only 29% of organisations felt prepared to secure their agentic AI deployments (Cisco State of AI Security 2026).

3.3 Adversarial Escalation

State-sponsored actors have integrated agentic AI into offensive operations. A China-linked group reportedly automated 80–90% of a cyberattack chain by jailbreaking an AI coding assistant and directing it to conduct reconnaissance, identify vulnerabilities, and develop exploit scripts. These patterns mean that defenders can no longer treat prompt injection or agent hijacking as theoretical risks — they are documented, in-production attack techniques.

4. Standards, Protocols, and Technical Frameworks

4.1 SPIFFE / SPIRE

SPIFFE (Secure Production Identity Framework For Everyone) is an open standard originally designed for microservice authentication in cloud-native environments. Its core mechanism — issuing cryptographically verifiable, short-lived X.509 SVIDs (SPIFFE Verifiable Identity Documents) tied to workloads rather than people — makes it well-suited for AI agent identity. Key properties for agentic deployments include:

  • Workload-bound identity: Each agent receives a unique SPIFFE ID tied to its origin, capabilities, and trust level.

  • Dynamic credentialing: Identities are issued and rotated automatically, eliminating long-lived secrets.

  • Federated trust: Identities can be validated across organisational and cloud boundaries.

HashiCorp Vault Enterprise 1.21 now natively supports SPIFFE authentication, enabling automated X.509 SVID issuance for NHI workloads including AI agents. Strata's Maverics platform uses SPIFFE/SPIRE SVIDs for mTLS authentication in its agentic identity reference architecture.
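As an illustration of the workload-binding and short-TTL properties above, the sketch below uses only the Python standard library (no SPIRE dependency) to check that an identifier is a well-formed SPIFFE ID inside an assumed trust domain (`agents.example.org` is a hypothetical name) and to enforce a one-hour ceiling on SVID lifetime. A real deployment would obtain and validate SVIDs through the SPIRE agent's Workload API rather than inspecting strings by hand.

```python
from datetime import datetime, timedelta, timezone
from urllib.parse import urlparse

# Hypothetical trust domain for illustration; a real deployment takes this
# from SPIRE server configuration.
TRUST_DOMAIN = "agents.example.org"

def is_valid_spiffe_id(spiffe_id: str, trust_domain: str = TRUST_DOMAIN) -> bool:
    """Check that an identifier is a well-formed workload SPIFFE ID in our
    trust domain. SPIFFE IDs take the form spiffe://<trust-domain>/<path>."""
    parsed = urlparse(spiffe_id)
    return (
        parsed.scheme == "spiffe"
        and parsed.netloc == trust_domain
        and parsed.path not in ("", "/")
    )

def svid_still_valid(not_after: datetime,
                     max_ttl: timedelta = timedelta(hours=1)) -> bool:
    """Enforce the short-lived credential property: reject SVIDs that have
    expired or whose remaining lifetime exceeds the policy ceiling."""
    remaining = not_after - datetime.now(timezone.utc)
    return timedelta(0) < remaining <= max_ttl

# A workload ID in our domain passes; a foreign trust domain does not.
print(is_valid_spiffe_id("spiffe://agents.example.org/research-agent/v2"))  # True
print(is_valid_spiffe_id("spiffe://evil.example.com/agent"))                # False
```

The TTL check is what distinguishes SVIDs from static service-account keys: even a stolen credential is useful only until its short expiry, and rotation is automatic rather than operator-driven.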

4.2 OAuth 2.0 for Machine-to-Machine (OIDC M2M)

OAuth 2.0 client credentials flows and OpenID Connect machine-to-machine profiles provide a well-understood mechanism for agents to obtain scoped, time-limited access tokens. The challenge in agentic contexts is token governance: refresh tokens can persist for months or years, and bearer tokens grant access to whoever holds them regardless of context. Zero Trust OAuth implementations add runtime policy evaluation, so each token request is evaluated against the agent's current task scope, not just its initial permissions.
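A minimal sketch of the runtime policy evaluation described above: a token is issued only when every requested scope falls within the agent's current task scope, not merely its statically registered permissions. The `AgentTask` and `issue_token` names are illustrative, not part of any OAuth library, and a production authorization server would additionally authenticate the client credentials flow and bind the token to the calling workload.

```python
from dataclasses import dataclass

@dataclass
class AgentTask:
    """The agent's *current* task scope, narrower than its full registration."""
    agent_id: str
    allowed_scopes: frozenset

def issue_token(task: AgentTask, requested_scopes: set, ttl_seconds: int = 300):
    """Grant a short-lived token only when the requested scopes are a subset
    of the task's allowed scopes, mirroring a Zero Trust OAuth check."""
    if not requested_scopes <= task.allowed_scopes:
        raise PermissionError(
            f"{task.agent_id} requested {requested_scopes - task.allowed_scopes} "
            "outside its current task scope"
        )
    return {"scopes": sorted(requested_scopes), "expires_in": ttl_seconds}

task = AgentTask("invoice-agent", frozenset({"invoices:read", "invoices:create"}))
print(issue_token(task, {"invoices:read"}))
# issue_token(task, {"payments:execute"})  # would raise PermissionError
```

The key design choice is the five-minute default TTL: even if the bearer token leaks, the window for misuse is minutes, not the months typical of unrotated refresh tokens.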

4.3 NIST AI Agent Standards Initiative

In February 2026, NIST's Center for AI Standards and Innovation (CAISI) formally launched the AI Agent Standards Initiative, the first government-level effort to define interoperability and identity standards for autonomous AI agents. The initiative is structured around three pillars: facilitating industry-led development of agent standards, advancing research in AI agent security and identity, and promoting trusted cross-sector adoption. NIST's Information Technology Laboratory (ITL) has also published a concept paper on AI agent identity and authorisation, with public comments due in April 2026. This regulatory momentum is expected to produce mandatory compliance requirements for regulated industries such as financial services within 12–24 months.

4.4 Emerging Vendor Frameworks

Several vendors have released structured frameworks for agentic identity governance in 2025–2026:

  • Okta for AI Agents (GA April 2026): Addresses three critical questions — where are my agents, what can they connect to, and what can they do — using an agent registry integrated with Okta's identity platform.

  • Teleport Agentic Identity Framework: Provides cryptographic identity, ephemeral privileges, zero standing access, and full auditability as a unified infrastructure layer.

  • Microsoft Entra Agent ID: Extends Microsoft's Entra identity platform to register and govern AI agents, integrated with Conditional Access and Zero Trust policy engines.

  • Strata Maverics Agentic Identity: Orchestrates agent provisioning, fine-grained ABAC/OPA policy evaluation, and lifecycle management across hybrid and multi-cloud environments.

5. Case Studies and Examples

5.1 GitHub MCP Prompt Injection (2025)

A documented real-world attack demonstrated how a malicious GitHub issue could inject hidden instructions into a developer AI agent connected via MCP. The agent, operating with repository access, followed the injected instructions and exfiltrated data from private repositories. The attack exploited the absence of cryptographic agent identity verification and the lack of runtime access controls on MCP tool invocations. The remediation pattern identified by researchers requires: (a) signed agent identities verified at each tool call, (b) input sanitisation before passing external content to agent context windows, and (c) scoped MCP permissions following least-privilege principles.
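To make remediation steps (a) and (b) concrete, the sketch below shows a tool call carrying an HMAC signature that is verified before execution, plus a crude screen for common injection phrases before external content reaches the agent's context window. The shared key and the regex are illustrative placeholders only: the pattern identified by researchers calls for per-agent asymmetric identities (e.g. SPIFFE SVIDs) rather than a shared secret, and for sanitisation far more robust than a keyword filter.

```python
import hashlib
import hmac
import re

# Placeholder shared secret; real deployments would use per-agent keys.
AGENT_KEY = b"demo-key-rotate-me"

def sign_call(agent_id: str, tool: str, payload: str) -> str:
    """Produce a signature binding the agent identity to this tool call."""
    msg = f"{agent_id}|{tool}|{payload}".encode()
    return hmac.new(AGENT_KEY, msg, hashlib.sha256).hexdigest()

def verify_call(agent_id: str, tool: str, payload: str, signature: str) -> bool:
    """Verify the signature at the tool boundary, before any execution.
    compare_digest avoids timing side channels."""
    return hmac.compare_digest(sign_call(agent_id, tool, payload), signature)

# Illustrative injection heuristics; not a complete defence.
SUSPICIOUS = re.compile(r"ignore (all|previous) instructions|system prompt", re.I)

def sanitise_external_content(text: str) -> str:
    """Screen untrusted content (e.g. a GitHub issue body) before it is
    passed into the agent's context window."""
    if SUSPICIOUS.search(text):
        raise ValueError("possible prompt injection detected")
    return text

sig = sign_call("dev-agent", "github.read_issue", "repo=acme/api issue=42")
print(verify_call("dev-agent", "github.read_issue", "repo=acme/api issue=42", sig))
```

Verifying the signature at every invocation, not just at session start, is what closes the gap exploited in this incident: a hijacked agent cannot silently escalate to tools its identity was never authorised to call.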

5.2 Salesloft-Drift OAuth Token Supply Chain Breach (2025)

Attackers compromised a third-party SaaS application and exploited long-lived OAuth refresh tokens to gain access to hundreds of downstream enterprise environments. Obsidian Security researchers noted the blast radius was ten times larger than direct SaaS breaches because the OAuth tokens enabled cascading trust across the supply chain. The incident illustrates why static, long-lived credentials are unsuitable for agentic environments: once stolen, bearer tokens grant unrestricted access with no further authentication required.

5.3 AI-Orchestrated Cyber Espionage Campaign (Late 2025)

A China-linked threat group reportedly jailbroke an AI coding assistant and directed it to automate 80–90% of a complex attack chain: port scanning, vulnerability identification, exploit script generation, credential theft, and exfiltration. Human operators provided only high-level strategic direction. This case demonstrates that the adversarial use of agentic AI is operational, not theoretical, and that defenders must govern agent identity with the same rigour applied to privileged human accounts.


⚠ Data gap: Detailed public post-mortems from regulated financial institutions on AI agent security incidents remain scarce. The financial services sector is expected to be among the first to face mandatory incident disclosure requirements under emerging NIST guidance.


6. Discussion and Implications

6.1 Identity as the AI Control Plane

The emerging consensus among analysts, CISOs, and vendors is that identity is not merely one control among many for agentic AI — it is the foundational control plane. Access monitoring, behavioural analytics, and runtime sandboxing all provide value, but they are secondary to establishing who the agent is, what it is authorised to do, and whether that authorisation is enforced cryptographically and continuously. Oasis Security's 2025 assessment concluded that 'governing access without governing identity is no longer viable' in agentic environments.

6.2 The Human-Speed Governance Problem

A structural tension exists between the speed at which agents create and consume identities and the speed at which human security teams can review and approve access. NHI provisioning events that once occurred on a weekly deployment cycle now happen continuously as agents spawn sub-agents, request tool access, and de-provision on task completion. Resolving this requires automated, policy-driven identity lifecycle management — not manual access reviews. Gartner has warned that IAM teams are 'missing in action' on machine identity governance, creating a leadership vacuum that enterprise security organisations must urgently close.
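One way to picture automated, policy-driven lifecycle management is as a small state machine in which no transition waits on a manual review: policy is evaluated at provisioning time and revocation fires automatically on task completion. The states and transition table below are an illustrative sketch, not any vendor's schema.

```python
from enum import Enum, auto

class AgentState(Enum):
    REQUESTED = auto()      # agent (or sub-agent) creation requested
    PROVISIONED = auto()    # identity issued after automated policy check
    ACTIVE = auto()         # executing with just-in-time privileges
    DEPROVISIONED = auto()  # task complete, credentials revoked

# Legal transitions; anything else is rejected rather than queued for a
# human approver, keeping governance at machine speed.
TRANSITIONS = {
    AgentState.REQUESTED: {AgentState.PROVISIONED, AgentState.DEPROVISIONED},
    AgentState.PROVISIONED: {AgentState.ACTIVE},
    AgentState.ACTIVE: {AgentState.DEPROVISIONED},
    AgentState.DEPROVISIONED: set(),
}

def advance(current: AgentState, target: AgentState) -> AgentState:
    """Move an agent to the next lifecycle state, or fail loudly."""
    if target not in TRANSITIONS[current]:
        raise ValueError(f"illegal transition {current.name} -> {target.name}")
    return target
```

The useful property is that DEPROVISIONED is terminal: a completed agent cannot be quietly reactivated with stale credentials, which is exactly the failure mode behind standing access.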

6.3 Regulatory Trajectory

NIST's AI Agent Standards Initiative, the EU AI Act's classification of certain autonomous systems as high-risk, and emerging SEC guidance on AI-related cyber risk disclosure all signal a regulatory environment that will increasingly require formal AI identity governance. Organisations in financial services, healthcare, and critical infrastructure should expect mandatory agent registration, auditability requirements, and breach notification obligations specifically covering AI agent incidents within the next two to three years.

7. Recommendations

7.1 Immediate Actions (0–90 Days)

  • Conduct an AI agent inventory audit. Establish what agents are running, who created them, what systems they access, and what credentials they hold, including 'shadow agents' created by business units outside IT oversight.

  • Eliminate static, long-lived credentials. Replace hard-coded API keys and long-lived OAuth tokens with short-lived, automatically rotated credentials. Adopt SPIFFE/SPIRE or equivalent cryptographic workload identity where possible.

  • Apply least-privilege provisioning. Audit all agent permission sets. Agents should hold only the permissions required for their current task, with privilege granted just-in-time and revoked on task completion.

  • Extend Zero Trust to agents. Ensure that Zero Trust policies (continuous verification, no implicit trust, micro-segmentation) apply to agent-initiated requests, not just human login events.
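The least-privilege and Zero Trust actions above can be sketched as a just-in-time grant store: every permission carries a TTL, checks are made at each request rather than once per session, and all grants are revoked in bulk on task completion, leaving zero standing access. Class and method names here are hypothetical.

```python
import time

class JITGrants:
    """Minimal just-in-time permission store for agent principals."""

    def __init__(self):
        # (agent_id, permission) -> expiry on the monotonic clock
        self._grants = {}

    def grant(self, agent_id: str, permission: str, ttl: float) -> None:
        """Issue a time-boxed permission for the agent's current task."""
        self._grants[(agent_id, permission)] = time.monotonic() + ttl

    def is_allowed(self, agent_id: str, permission: str) -> bool:
        """Checked on every agent-initiated request, not once at login."""
        expiry = self._grants.get((agent_id, permission))
        return expiry is not None and time.monotonic() < expiry

    def revoke_all(self, agent_id: str) -> None:
        """Called on task completion: no standing access survives."""
        for key in [k for k in self._grants if k[0] == agent_id]:
            del self._grants[key]

grants = JITGrants()
grants.grant("report-agent", "db:read:sales", ttl=60)
print(grants.is_allowed("report-agent", "db:read:sales"))  # currently True
grants.revoke_all("report-agent")
print(grants.is_allowed("report-agent", "db:read:sales"))  # False
```

A production system would back this with the IAM platform's policy engine and audit log rather than an in-memory dictionary, but the contract is the same: access is a property of the current task, not of the agent's history.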

7.2 Medium-Term Actions (90 Days – 12 Months)

  • Implement a formal agent identity lifecycle. Define policies for agent creation, governance, credential rotation, behavioural monitoring, and decommissioning, and assign clear ownership across security, IAM, and DevOps teams.

  • Evaluate agent-native IAM platforms. Assess platforms such as Okta for AI Agents, Microsoft Entra Agent ID, Teleport, or Strata Maverics against your environment's requirements for multi-cloud, hybrid, and MCP-connected deployments.

  • Establish MCP governance controls. Audit all MCP server connections for over-privileged access, unsigned tool integrations, and supply-chain risk, and apply input validation and output filtering to all agent-to-external-tool interactions.

  • Engage with the NIST AI Agent Standards Initiative. Participate in CAISI listening sessions and public comment periods to shape standards that will govern your sector, and proactively align internal governance frameworks with emerging NIST guidance.

7.3 Strategic Actions (12+ Months)

  • Architect for an agent-native identity model. Move beyond retrofitting human IAM for agents; design identity infrastructure that treats agents as first-class principals with their own provisioning workflows, policy engines, and audit trails.

  • Build agent accountability into AI governance. Require complete, cryptographically anchored audit logs for all agent-initiated actions, equivalent to privileged access workstation logging for human administrators.

  • Prepare for mandatory compliance requirements. Model your agent identity governance against the NIST AI Agent Identity and Authorization framework (expected H2 2026) and EU AI Act compliance requirements for high-risk autonomous systems.

8. Conclusion

AI agents are becoming the largest unseen workforce inside enterprise systems. They act autonomously, at machine speed, across cloud and on-premises environments, often with privileges that exceed those of their human creators. The identity frameworks that govern human access — built for predictable, session-based, human-initiated interactions — are structurally insufficient for this new class of digital worker.

The good news is that the technical foundations for secure agentic AI identity are available today. Cryptographic workload identity standards (SPIFFE), dynamic credentialing protocols (OAuth M2M, OIDC), Zero Trust policy engines, and agent-native IAM platforms collectively provide the building blocks for a robust identity architecture. What is required is the organisational will to apply them — before a high-profile agentic AI breach forces the issue.

Enterprises that establish AI identity governance now — through agent inventories, least-privilege provisioning, cryptographic identity, and lifecycle management — will be positioned to deploy agentic AI confidently and at scale. Those that do not face compounding risk as agent populations grow and regulatory scrutiny intensifies.

References

[1] Okta — Blueprint for the Secure Agentic Enterprise (March 2025) — [https://www.okta.com/newsroom/press-releases/showcase-2026/]

[2] Strata Identity — 8 Strategies for AI Agent Security in 2025/2026 — [https://www.strata.io/blog/agentic-identity/8-strategies-for-ai-agent-security-in-2025/]

[3] Microsoft Security Blog — Four Priorities for AI-Powered Identity and Network Access Security in 2026 — [https://www.microsoft.com/en-us/security/blog/2026/01/20/four-priorities-for-ai-powered-identity-and-network-access-security-in-2026/]

[4] Help Net Security — Enterprises Racing to Secure Agentic AI Deployments — [https://www.helpnetsecurity.com/2026/02/23/ai-agent-security-risks-enterprise/]

[5] Saviynt — 2026 Identity Security and AI Trends and Predictions — [https://saviynt.com/blog/2026-identity-security-trends]

[6] Dark Reading — Identity Security 2026: 4 Predictions & Recommendations — [https://www.darkreading.com/identity-access-management-security/identity-security-2026-predictions-and-recommendations]

[7] HashiCorp — SPIFFE: Securing the Identity of Agentic AI and Non-Human Actors — [https://www.hashicorp.com/en/blog/spiffe-securing-the-identity-of-agentic-ai-and-non-human-actors]

[8] Obsidian Security — What Are Non-Human Identities? The Complete Guide to NHI Security — [https://www.obsidiansecurity.com/blog/what-are-non-human-identities-nhi-security-guide]

[9] World Economic Forum — Non-Human Identities: Agentic AI's New Frontier of Cybersecurity Risk — [https://www.weforum.org/stories/2025/10/non-human-identities-ai-cybersecurity/]

[10] NIST — Announcing the AI Agent Standards Initiative — [https://www.nist.gov/news-events/news/2026/02/announcing-ai-agent-standards-initiative-interoperable-and-secure]

[11] InfoQ — Teleport Launches Agentic Identity Framework — [https://www.infoq.com/news/2026/02/teleport-secure-ai-agents/]

[12] Strata Identity — A New Identity Playbook for AI Agents in 2026 — [https://www.strata.io/blog/agentic-identity/new-identity-playbook-ai-agents-not-nhi-8b/]

[13] Deloitte Insights — Agentic AI Strategy (Tech Trends 2026) — [https://www.deloitte.com/us/en/insights/topics/technology-management/tech-trends/2026/agentic-ai-strategy.html]

[14] Cerbos — Strategies for Securing Non-Human Identities — [https://www.cerbos.dev/blog/strategies-for-securing-non-human-identities]

[15] Oasis Security — Identity Security 2025: The Rise of AI Agents & NHIs — [https://www.oasis.security/blog/identity-security-2025-ai-agents]