82% of Security Executives Are Confident in Their AI Agent Policies. Over Half of Their Deployed Agents Are Running Without Oversight.
The State of AI Agent Security 2026 Report found that 82% of executives feel confident their existing policies cover AI agents — while more than half of deployed agents operate without monitoring or logging. That gap is producing an 88% enterprise incident rate.
- ai
- security
- governance
- agentic-ai
- compliance
Two statistics from the State of AI Agent Security 2026 Report belong in the same sentence, because together they define the shape of the current problem: 82% of security executives feel confident that their existing policies protect against unauthorized agent actions. And more than half of all deployed agents operate without security oversight or logging.
Same organizations. Same data set. Both statistics are simultaneously true.
The consequence — also from the same report — is a third number: 88% of organizations confirmed or suspected at least one AI agent security incident in the last twelve months. In healthcare, the figure reached 92.7%.
The confidence is not dishonest. The security programs are real. The policies exist, the API gateways are deployed, the access reviews happen. The problem is that those programs were built for a different composition of infrastructure — one where the primary callers are humans and the primary credentials are sessions. AI agents are neither, and most governance programs were never extended to cover them.
Why a genuine governance program produces a false sense of coverage
The 82% confidence figure makes structural sense once you understand how enterprise security programs are scoped.
An access management policy defines how user accounts are provisioned, reviewed, and deprovisioned. An API gateway configuration was written when the primary callers were browser applications and partner integrations operated by people. A SIEM was tuned to detect anomalous human login patterns — not a service account making 500 API calls in 90 seconds. A quarterly access review covers the list of employees with privileged access.
None of those controls are wrong. They were designed well for their intended scope. The scope simply does not extend to the infrastructure that is now carrying the risk.
FireTail's April 2026 analysis documents the operational reality: AI agents are being deployed by individual business teams — engineering, product, finance, operations — without passing through the same approval gates as a new application. They run under existing service account credentials, produce logs that do not feed into security monitoring pipelines, and are never formally registered in the systems that track what infrastructure exists and who is responsible for it.
The result is a specific pattern: the security team's dashboard is accurate for what it covers. AI agents are simply not in scope. The executive rates confidence against the coverage that exists, not against the coverage that the current infrastructure requires.
Only 21% of executives have complete visibility into agent permissions, tool usage, or data access patterns, per the same report. The 79% who do not cannot measure a coverage gap that is not represented in their monitoring.
The credential architecture that converts a single incident into a broad exposure
The specific technical failure behind most AI agent incidents is not sophisticated. It is a shared credential.
The Gravitee report found that 45.6% of organizations rely on shared API keys for agent-to-agent authentication. Another 27.2% use custom hardcoded authorization logic. Only 21.9% treat AI agents as independent, identity-bearing entities with their own credentials and scopes.
A shared API key has no individual identity. When ten agents — a customer support bot, an internal research agent, a finance reconciliation workflow, a code review tool — all authenticate using the same key, every request through that key looks identical at the API layer. The gateway sees one client. It cannot distinguish which agent made which request, cannot scope access differently per agent, and cannot produce an audit trail that answers the core incident investigation question: which agent was involved, what did it access, what was the scope of the compromise?
When one of those agents is compromised — through a prompt injection that causes it to issue data export requests, through a leaked key in a configuration file, through a tool misuse event that chains across an over-permissioned credential — the blast radius is not bounded to that agent. It is bounded by the full scope of the shared key, which in practice tends to be broad because the team provisioning a shared credential does not know every agent's exact access requirements and defaults to wider access.
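The attribution failure is easy to see in miniature. Below is a sketch of the same traffic logged under a shared key versus per-agent credentials; the agent names, paths, and key values are hypothetical, not taken from any particular gateway:

```python
# Hypothetical gateway access log: several agents behind one shared API key.
# Every entry carries the same client identifier, so nothing at the API
# layer attributes a suspicious request to a specific agent.
shared_key_log = [
    {"client": "shared-key-7f3a", "path": "/billing/export", "status": 200},
    {"client": "shared-key-7f3a", "path": "/kb/search",      "status": 200},
    {"client": "shared-key-7f3a", "path": "/billing/export", "status": 200},
]

# The same traffic with per-agent credentials: the export requests are
# immediately attributable to one named caller.
per_agent_log = [
    {"client": "support-bot",   "path": "/kb/search",      "status": 200},
    {"client": "finance-recon", "path": "/billing/export", "status": 200},
    {"client": "finance-recon", "path": "/billing/export", "status": 200},
]

def suspects(log, path):
    """Which distinct clients issued requests to a sensitive path?"""
    return {entry["client"] for entry in log if entry["path"] == path}

print(suspects(shared_key_log, "/billing/export"))  # {'shared-key-7f3a'}
print(suspects(per_agent_log, "/billing/export"))   # {'finance-recon'}
```

With the shared key, the investigation's answer is the key itself, which is to say no answer at all; with per-agent credentials, the same query names the compromised caller.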
CybelAngel's 2026 API security risk analysis found that missing or misconfigured authentication is the leading category of API incidents — accounting for 17% of all cases. A material share of those are not credentials stolen through advanced attacks. They are credentials that were never properly scoped, abandoned when pilots ended, or left in configuration files with no rotation cadence.
The Gravitee report notes the compound problem directly: most agents ship with more access than they need, and when a customer support agent can read the entire knowledge base, query billing systems, and modify account settings, the blast radius of a single compromise grows exponentially. The problem is not the agent's capabilities — it is that no scoping was enforced at the credential level.
What "no oversight" looks like in operational terms
The half of deployed agents running without security oversight is not a vague policy gap. It is a specific set of absent infrastructure controls.
No per-agent identity in access logs. Standard gateway logs capture the authenticating credential. With a shared key, those logs show one client. Reconstructing what any individual agent did — which every incident investigation and every compliance audit in a regulated industry requires — means cross-referencing gateway logs, application logs, and agent framework logs to assemble a picture that was never designed to be assembled. IBM's 2025 Cost of a Data Breach report puts mean time to identify a breach at 194 days; absent per-agent audit records, that timeline extends further, particularly when the trail runs through a shared credential with no per-caller differentiation.
No rate limit differentiation by agent. A customer support agent handling 20 API calls per conversation and a batch reconciliation agent generating 500 calls per run have fundamentally different traffic profiles. Without per-agent rate limit configuration, organizations either set limits that constrain legitimate workflows or raise limits globally to accommodate the most demanding agent — removing the protection for all callers. Neither is a governed configuration; both are workarounds for infrastructure that was designed around human-driven traffic patterns and never asked to differentiate machine callers.
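Per-agent limits become straightforward once each caller has its own identity to key on. A minimal sketch, assuming a token-bucket limiter keyed by a hypothetical agent name, with rates and bursts chosen purely for illustration:

```python
import time

class TokenBucket:
    """Minimal token bucket; rate and burst are set per client."""
    def __init__(self, rate_per_sec: float, burst: int):
        self.rate = rate_per_sec
        self.capacity = burst
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at the burst size.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# Different profiles for different agents, instead of one global limit.
limits = {
    "support-bot": TokenBucket(rate_per_sec=1, burst=20),    # conversational
    "batch-recon": TokenBucket(rate_per_sec=50, burst=500),  # bursty batch runs
}

def admit(agent: str) -> bool:
    bucket = limits.get(agent)
    return bucket.allow() if bucket else False  # unknown agents are rejected

# The batch agent's 500-call run is admitted in full without raising the
# support bot's limit; the support bot is throttled at its own burst.
batch_admitted = sum(admit("batch-recon") for _ in range(500))
support_admitted = sum(admit("support-bot") for _ in range(500))
print(batch_admitted, support_admitted)
```

The design point is that the key of the `limits` map is an agent identity: without per-agent credentials there is nothing to key on, and the only available lever is the global limit the paragraph above describes.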
No scope enforcement below the service-account level. The finance reconciliation agent needs read access to transaction records. It does not need write access, or access to HR endpoints, or access to customer PII collections. But if it runs under a service account provisioned for a broader integration — the path of least resistance when deploying an agent quickly — it operates with excess privilege from day one, and that scope is never revisited because no formal process exists for reviewing per-agent access scope after the fact.
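Enforced below the service-account level, that scoping amounts to a deny-by-default lookup of collection and HTTP method per agent. A sketch, with hypothetical agent and collection names:

```python
# Hypothetical per-agent scopes: collection -> set of allowed HTTP methods.
SCOPES = {
    "finance-recon": {"transactions": {"GET"}},
    "support-bot":   {"kb": {"GET"}, "tickets": {"GET", "POST"}},
}

def authorized(agent: str, collection: str, method: str) -> bool:
    """Deny by default: access requires an explicit collection+method grant."""
    return method in SCOPES.get(agent, {}).get(collection, set())

assert authorized("finance-recon", "transactions", "GET")       # its actual job
assert not authorized("finance-recon", "transactions", "POST")  # no writes
assert not authorized("finance-recon", "hr_records", "GET")     # no HR access
```

The broad-service-account failure mode is the opposite default: everything the account can reach is implicitly granted, and nothing forces the grant list to be written down per agent.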
No credential lifecycle. Machine credentials provisioned for agents often have no expiry date, no owner field, and no deprovisioning workflow tied to the agent's lifecycle. When an agent pilot ends, the agent stops running. The credential does not expire. It sits in a config file or environment variable with valid access to production APIs, available to anyone who finds the file — potentially for months or years after the pilot that created it concluded.
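A lifecycle check is equally simple once every credential carries an owner, an issue date, and an expiry. A sketch; the field names and the 90-day rotation window are illustrative, not any particular product's schema:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class AgentCredential:
    agent: str
    owner: str                 # a human accountable for the credential
    issued_at: datetime
    expires_at: datetime       # no credential without an expiry

def stale(creds, now, max_age=timedelta(days=90)):
    """Flag credentials past expiry or older than the rotation window."""
    return [c.agent for c in creds
            if c.expires_at <= now or now - c.issued_at > max_age]

now = datetime(2026, 4, 1, tzinfo=timezone.utc)
creds = [
    AgentCredential("support-bot", "alice",
                    now - timedelta(days=30), now + timedelta(days=60)),
    AgentCredential("old-pilot", "bob",
                    now - timedelta(days=400), now + timedelta(days=9999)),
]
print(stale(creds, now))  # ['old-pilot'] — the pilot ended, the key did not
```

The check is trivial; what most deployments are missing is the inventory it runs against, because credentials provisioned ad hoc never acquire the owner and expiry fields in the first place.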
The adoption rate makes this an immediate problem
Gartner's projection that 40% of enterprise applications will integrate task-specific AI agents by end of 2026 — up from under 5% in 2025 — is not a future planning horizon. It is the current deployment trajectory, measured in months. The organizations now registering 88% incident rates are those that reached meaningful agent deployment first. Organizations accelerating their deployments today are following the same adoption curve with the same governance gap, unless the infrastructure was updated before deployment — not after.
Shadow AI compounds the scale problem. Research aggregated by Vectra AI found that 98% of organizations report unsanctioned AI use, with shadow AI growing at 340% year-over-year. Each unsanctioned agent deployment adds a credential, a traffic pattern, and an access scope to the enterprise's actual risk surface — none of which appears in the security program's inventory. The gap between the governance program's view of agent deployments and the actual number of agents operating against enterprise APIs is not closing on its own.
The EU AI Act, effective August 2026, adds a regulatory dimension: enterprises will need to demonstrate documented governance over AI systems — including agents — deployed within their operations. That requires knowing what agents exist, what they can access, and whether access has been reviewed. For most organizations, those three questions currently have no reliable answer.
What closing the gap requires at the API layer
The confidence-coverage gap closes at the API gateway — not because the gateway is the only control, but because it is the only control point that every agent must traverse to do anything useful: API access, tool invocation, data retrieval. Enforcing controls there provides coverage regardless of which team deployed the agent, which framework it runs in, or whether it was provisioned through a formal approval process.
The architecture that closes the gap has four requirements:
Per-agent identity at the gateway. Every agent authenticates with a credential that belongs to it — not a shared pool credential. A named client record with its own API key or certificate, its own access profile, and its own audit record. Zerq implements this through per-client credential management: each agent is a named client with explicitly defined scopes, rate limits, and the same RBAC enforcement that applies to every other caller. The gateway enforces those credentials identically whether the agent calls via REST or through the Gateway MCP.
Scoped access aligned to actual agent function. Per-collection and per-HTTP-method scoping means the finance agent gets read access to transaction endpoints — and nothing else. Zerq's access policy model lets access requirements be specified at the scope the agent actually needs, not at the broadest level that covers all cases. When scope needs to change, it changes in gateway configuration — not by reissuing credentials across a dozen environment variables.
Structured audit records capturing agent identity. Not just endpoint, status code, and timestamp — the minimum for human-traffic logs. Per-call records with client identity, request parameters, response status, and for MCP-routed calls, the tool name. Zerq's request logging writes structured records that support compliance queries across agent sessions: which agent, which endpoints, in what order, with what outcomes. Incident investigations that require hours of log correlation against shared-key deployments run in under a minute.
Credential lifecycle with rotation built in. Scheduled rotation and external secret references (via Vault or equivalent) mean credentials do not accumulate indefinitely with valid access. An agent pilot credential does not remain live for a year after the pilot ends — it is rotated on the same cadence as every other credential in the gateway, not left in a config file that outlasts the project it was created for.
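To put the audit requirement in concrete terms: a per-call record that carries client identity, method, status, and (for MCP-routed calls) tool name makes the "which agent, which endpoints, in what order" query a one-liner. A sketch with hypothetical record fields, not Zerq's actual log schema:

```python
# Hypothetical structured audit records with agent identity on every call.
audit_log = [
    {"client": "finance-recon", "endpoint": "/transactions", "method": "GET",
     "status": 200, "ts": "2026-03-01T09:00:02Z", "tool": None},
    {"client": "support-bot", "endpoint": "/kb/search", "method": "GET",
     "status": 200, "ts": "2026-03-01T09:00:05Z", "tool": "search_kb"},
    {"client": "finance-recon", "endpoint": "/transactions", "method": "GET",
     "status": 403, "ts": "2026-03-01T09:01:10Z", "tool": None},
]

def session_trace(log, client):
    """Which endpoints did one agent touch, in what order, with what outcome?"""
    return [(e["ts"], e["endpoint"], e["status"])
            for e in sorted(log, key=lambda e: e["ts"])
            if e["client"] == client]

for ts, endpoint, status in session_trace(audit_log, "finance-recon"):
    print(ts, endpoint, status)
```

Against a shared-key log, the same question requires correlating gateway, application, and framework logs by timestamp; against per-agent records, it is a filter.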
The access review already has the right process. It just has not been extended to machines.
The quarterly access review asks the right questions: who has access, is it still needed, is it the right scope. The problem is not the process — it is that the population it covers stops at human users.
Gartner's AI governance spending forecast projects $492 million in 2026, surpassing $1 billion by 2030. The investment is real. The question is whether it is directed at infrastructure that covers the actual risk — per-agent identity, scoped credentials, lifecycle management, structured audit — or at policies that are confident on paper and blind to the agents actually operating against enterprise APIs.
The 88% incident rate tells the story of organizations that deployed agents before extending governance to cover them. The 14.4% with full security and IT approval for agent deployments — those that treated agent deployment as a governed infrastructure event — have a materially different incident profile.
The path from 82% confidence to actual coverage is not a new governance program. It is extending the program that already works for humans — per-client identity, scoped access, lifecycle management, structured audit — to the callers that were not there when the program was designed.
Zerq gives every AI agent, MCP client, and application a named identity, scoped credentials, enforced rate limits, and a complete audit trail from day one — with no separate deployment path for non-human callers. Agent credentials rotate on schedule, access is scoped per collection and per HTTP method, and every request produces a structured record that supports incident investigation and compliance audit. See how Zerq handles access control and agent credentials and client management, or request a demo to assess your current agent credential coverage.