The shadow admin plane problem: why AI agents need the same RBAC as your human operators
When AI agents manage your API platform through a side door — separate credentials, no RBAC, no audit trail — you have built a shadow admin plane. Here's the architectural fix.
- ai
- security
- governance
- mcp
There is a version of AI-assisted platform management that looks like this: an AI agent in your IDE can create collections, configure proxies, update workflows, and provision partner access. It does this through a management API that has no connection to your identity provider, uses a shared service account token, and writes no audit log that your compliance team can read.
This is the shadow admin plane problem. And it is increasingly common as teams move fast to add AI tooling to their operations.
The problem is not the AI agent. The problem is giving it a side door.
What a shadow admin plane looks like in practice
The pattern usually starts innocuously. A platform engineer wants to automate API catalog operations — creating collections for new products, updating workflow definitions, provisioning access for new partners. They write a script. The script needs credentials. They create a service account token with broad admin access. The token goes into a CI/CD variable or a local config file.
Later, the same token gets used for an AI assistant in the team's IDE. The AI can now list, create, update, and delete platform resources — with the same permissions as a human admin, but with no connection to the identity system that governs human access, and no audit entry that distinguishes what the AI did from what a human did.
When an auditor asks "who changed this workflow definition on Tuesday?", the answer is "the service account," which tells them nothing.
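The attribution failure is easy to see in miniature. A minimal sketch, with hypothetical actor and resource names: when everything runs under one shared service-account token, every audit entry collapses to the same identity, while per-identity tokens keep humans and agents distinguishable.

```python
# Sketch of the attribution failure caused by a shared service-account token.
# All actor and resource names are hypothetical.

shared_token_log = [
    {"actor": "svc-platform-admin", "action": "update", "resource": "workflow/payments"},
    {"actor": "svc-platform-admin", "action": "delete", "resource": "collection/partners"},
    {"actor": "svc-platform-admin", "action": "create", "resource": "proxy/orders"},
]

# "Who changed the workflow definition on Tuesday?" has exactly one answer:
print({entry["actor"] for entry in shared_token_log})  # one shared identity

# With per-identity tokens, the same query separates the human from the agent:
attributed_log = [
    {"actor": "alice@example.com", "action": "update", "resource": "workflow/payments"},
    {"actor": "ide-agent:alice@example.com", "action": "create", "resource": "proxy/orders"},
]
print(sorted({entry["actor"] for entry in attributed_log}))
```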
Why separate AI credentials fail the compliance test
Compliance frameworks — SOC 2, ISO 27001, HIPAA, FedRAMP — share a common requirement: the ability to attribute every privileged action to an authenticated identity, with evidence of what that identity was permitted to do.
A shared service account token fails this in two ways:
Attribution is impossible. Multiple agents, scripts, and humans may use the same token. When something goes wrong, you cannot determine who or what caused it.
Separation of duties is absent. A viewer should be able to read configuration but not change it. A modifier should be able to update workflows but not delete them. An auditor should be able to read the audit log but not touch live config. If your AI agent uses an admin token, it has all permissions — the same anti-pattern as giving every human employee admin access.
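Separation of duties reduces to a role-to-permissions map with deny-by-default lookups. A minimal sketch, with illustrative role names and actions rather than any specific product's model:

```python
# Separation of duties as a role-to-permissions map.
# Role names and actions are illustrative, not a specific product's model.

ROLE_PERMISSIONS = {
    "viewer":   {"read"},
    "modifier": {"read", "create", "update"},   # can change, but not delete
    "auditor":  {"read_audit_log"},             # no access to live config
    "admin":    {"read", "create", "update", "delete", "read_audit_log"},
}

def is_allowed(role: str, action: str) -> bool:
    """Deny by default: unknown roles and unlisted actions are refused."""
    return action in ROLE_PERMISSIONS.get(role, set())

# An AI agent provisioned as a modifier can update but not delete:
assert is_allowed("modifier", "update")
assert not is_allowed("modifier", "delete")
# A read-only assistant cannot change anything:
assert not is_allowed("viewer", "create")
```

Handing an AI agent an admin token skips this table entirely, which is exactly the anti-pattern the paragraph above describes.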
The right model: same OIDC, same RBAC, same audit log
The governance model for AI agent platform access should be identical to the model for human access:
Same OIDC session. The AI agent authenticates using the same identity provider as your human operators. Its token has the same structure, the same expiry, and the same claim validation. There is no "AI credentials" concept — there is just a token issued by your IdP, with appropriate roles.
Same RBAC. The agent's token grants the same roles as a human with equivalent responsibilities would have. A read-only AI assistant gets viewer permissions. An automation that creates collections gets modifier permissions. Neither gets admin access by default.
Same audit log. Every action the AI agent takes — list collections, create a proxy, update a workflow — appears in the same audit log as equivalent human actions. The entry includes the identity (the token subject), the action, the affected resource, and the timestamp. Compliance teams query one log, not two.
What "one auth model for both UI and automation" means
When the management API enforces OIDC + RBAC, and the same API is exposed to both the admin UI and AI tooling (via MCP or direct API calls), you get one auth model by construction — not by policy.
The AI agent uses the same endpoint as the UI. It presents a token to the same authentication middleware. That middleware enforces the same role checks. The same audit log records the action.
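That single request path can be sketched end to end. Everything below is illustrative (claims shape, role names, actions), and the authentication step is abbreviated to reading an already-validated subject, but the structure is the point: authenticate, authorize, audit, in one pipeline shared by every caller.

```python
# One request path for both the admin UI and an AI agent.
# Claims shape, role names, and actions are illustrative.

AUDIT_LOG: list[dict] = []

ROLES = {"viewer": {"read"}, "modifier": {"read", "create", "update"}}

def handle(claims: dict, action: str, resource: str) -> str:
    # 1. Authenticate: same claim checks for every caller
    #    (signature and expiry validation omitted here for brevity).
    subject = claims["sub"]
    # 2. Authorize: same role model for every caller.
    if action not in ROLES.get(claims.get("role", ""), set()):
        raise PermissionError(f"{subject} may not {action} {resource}")
    # 3. Audit: same log for every caller.
    AUDIT_LOG.append({"subject": subject, "action": action, "resource": resource})
    return "ok"

ui_session = {"sub": "alice@example.com", "role": "modifier"}
agent = {"sub": "ide-agent:alice@example.com", "role": "viewer"}

handle(ui_session, "update", "workflow/payments")   # allowed, audited
handle(agent, "read", "workflow/payments")          # allowed, audited
try:
    handle(agent, "update", "workflow/payments")    # viewer role: denied
except PermissionError as denied:
    print(denied)
```

Because there is only one `handle` path, there is no way for an agent to take an unaudited or unauthorized action that the UI could not also take: the guarantee holds by construction.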
This is not a limitation on what AI agents can do. It is a guarantee that what they do is governed — the same way everything else is governed.
The practical benefit: AI becomes auditable, not risky
Regulated industries are cautious about AI-assisted operations not because AI is inherently risky, but because unaudited, unattributed changes to production configuration are risky — regardless of whether a human or an AI makes them.
The fix is not to block AI from platform management. It is to bring AI into the same governance model that makes human platform management acceptable to your compliance programme.
Once an AI agent's actions are in the same audit log, subject to the same RBAC, and authenticated through the same IdP as your human operators, the compliance question changes from "can we allow AI to manage this?" to "what role level is appropriate for this automation?" That is a tractable operational question, not a blanket security concern.
What to check in your current setup
- Does your platform management API validate tokens from your corporate OIDC/IdP, or does it accept a separate set of service account credentials?
- Are AI agents and automation scripts using the same token type and role model as human operators, or do they use a shared admin key?
- When an AI agent creates or updates a platform resource, does that action appear in the same audit log your compliance team queries?
- Can you revoke an AI agent's access by revoking its token at the IdP — the same way you would offboard a human operator?
If any answer points to a side door (separate credentials, a shared admin key, or a second log), the shadow admin plane problem is already present.
Zerq's management MCP uses the same OIDC authentication and RBAC as the admin UI. Every action by an AI agent, automation script, or human operator lands in the same audit log. See how platform automation works or request a demo to review your current management access model.