
One Audit Log for Humans and AI — Why Separating Them Is a Compliance Mistake

Every change made via Zerq's management API — whether from the admin UI, a script, or an AI agent via Copilot — shows up in the same audit log with the same identity fields. Compliance and ops see who changed what and when in one trail, with the originating surface recorded as metadata rather than split into a separate log. Here is why that matters and how it works.

  • compliance
  • audit
  • ai
  • governance
  • operations
Zerq team

When compliance teams ask for an audit trail, they have a specific question in mind: "who changed what, when, and was it authorised?" That question does not care whether the person who made the change was sitting at a browser, running an automation script, or asking an AI assistant to do it. The change happened. It needs to be in the record.

The operational temptation when AI tooling is introduced is to route AI-initiated changes through a separate audit mechanism — a different log table, a different retention policy, or simply no audit trail at all because "it is just a tool." This is a compliance mistake. Separating human-originated changes from AI-originated changes creates exactly the kind of gap that compliance reviews and forensic investigations are designed to expose.

Zerq's audit model is designed around one principle: any change to the platform — regardless of who or what initiated it — produces a single, consistent audit record in the same trail.

What goes in the management audit log

The management audit log captures every configuration change to the platform. Not API traffic (that is a separate log). Configuration changes:

  • Collections created, updated, or deleted
  • Proxies added, modified, or removed
  • Workflow definitions created or changed
  • Policies created, updated, assigned, or unassigned
  • Client records created, modified, suspended, or deleted
  • Credentials issued, rotated, or revoked
  • Access profiles created or modified
  • User accounts added, modified, or removed
  • Role assignments changed
  • Rate limit settings updated

Every one of these operations, from every surface that can perform them, produces an audit record with the same structure:

{
  "timestamp": "2026-04-06T14:32:17.441Z",
  "actor": {
    "id": "usr_alice_01",
    "email": "[email protected]",
    "role": "editor"
  },
  "action": "policy.update",
  "resource": {
    "type": "policy",
    "id": "pol_payments_standard",
    "name": "Payments Standard"
  },
  "changes": {
    "rate_limit_rpm": { "from": 500, "to": 750 },
    "burst_limit": { "from": 50, "to": 75 }
  },
  "source": "management_ui",
  "session_id": "sess_a1b2c3d4"
}

The source field distinguishes where the action came from — management_ui, management_api, or copilot — but this is metadata on an otherwise identical record structure. The actor identity, the change, and the timestamp are the same regardless of source.
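Because the record structure is identical across surfaces, downstream tooling can treat `source` as an ordinary filter field rather than a reason to query a different system. A minimal sketch (the field names follow the example record above; the helper shape is illustrative, not the real API):

```python
# Three audit records that differ only in "source" and "timestamp".
# Actor, action, and resource fields share one schema across surfaces.
base = {
    "actor": {"id": "usr_alice_01", "role": "editor"},
    "action": "policy.update",
    "resource": {"id": "pol_payments_standard"},
}

records = [
    {**base, "source": src, "timestamp": ts}
    for src, ts in [
        ("management_ui", "2026-04-06T14:32:17.441Z"),
        ("management_api", "2026-04-06T02:00:03.120Z"),
        ("copilot", "2026-04-06T14:45:09.002Z"),
    ]
]

# One query answers "who changed this policy?" regardless of surface.
actors = {r["actor"]["id"] for r in records}
sources = {r["source"] for r in records}
```

Every surface lands in the same collection, so a compliance query never has to know which surface was used.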

The three sources that produce identical records

The admin UI

An operator logs into the management console, navigates to a policy, and changes the rate limit from 500 to 750 RPM. The change is saved. An audit record is written with source: management_ui, the operator's identity from their OIDC session, the before and after values, and the timestamp.

This is the baseline case everyone expects. The audit record answers: Alice changed the Payments Standard policy rate limit at 14:32.

The management API (scripts and automation)

Your CI/CD pipeline calls the management API to apply a configuration change as part of a deployment. A platform engineer runs a script to bulk-update rate limits across 20 client profiles. An integration test suite creates a collection, runs a test, and tears it down.

Every management API call that modifies platform state produces an audit record with source: management_api. The actor is the service account or user whose API key authenticated the request. The change record is identical in structure to the UI-originated record.

This matters because automation often runs at times when no human is at a keyboard — deployment pipelines run overnight, scripts run on schedules. Without API-originated changes in the audit trail, you have gaps: "there was no change on Tuesday evening" when actually a deployment applied three policy updates at 02:00.
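A pipeline-originated change might be constructed like the sketch below. The endpoint path, payload fields, and header are assumptions for illustration — check the management API reference for the real shapes:

```python
import json

def build_policy_update(policy_id: str, rate_limit_rpm: int) -> dict:
    """Construct the management API request a CI pipeline might send.

    Hypothetical path and payload; the request is authenticated with a
    service-account API key, so the resulting audit record's actor is
    that service account, with source "management_api".
    """
    return {
        "method": "PATCH",
        "path": f"/management/v1/policies/{policy_id}",
        "body": {"rate_limit_rpm": rate_limit_rpm},
        "headers": {"Authorization": "Bearer $ZERQ_API_KEY"},
    }

req = build_policy_update("pol_payments_standard", 750)
print(json.dumps(req["body"]))
```

The point is not the request shape but the consequence: the 02:00 deployment change appears in the same trail, attributed to the service account that authenticated it.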

Copilot (AI-initiated changes)

An operator asks Zerq Copilot for Management: "Raise the rate limit on the Payments Standard policy to 750 RPM." Copilot interprets the request, maps it to a management API call, and executes it under the operator's OIDC session. The same audit record is written — actor.id: usr_alice_01, action: policy.update, changes: { rate_limit_rpm: { from: 500, to: 750 } } — with source: copilot.

The audit record does not say "an AI did this." It says Alice did this, through Copilot, at 14:32. The identity is Alice's. The authorisation was Alice's role. Copilot was the surface through which Alice acted — the same way the UI is the surface through which Alice acts when she clicks through the console.

This is the correct model for AI-initiated platform changes. The AI assistant does not have its own identity in the audit log. It acts under the authenticated user's identity with the authenticated user's permissions.
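The identity model can be sketched in a few lines. The helper name and session shape are hypothetical; the invariant it illustrates is the one described above — the actor is always the authenticated user, never the tool:

```python
def write_audit_record(session: dict, action: str,
                       changes: dict, source: str) -> dict:
    """Actor identity always comes from the user's session.

    The surface (ui, api, copilot) is recorded only as metadata;
    there is no separate "AI identity" in the trail.
    """
    return {
        "actor": {"id": session["user_id"], "role": session["role"]},
        "action": action,
        "changes": changes,
        "source": source,
    }

alice_session = {"user_id": "usr_alice_01", "role": "editor"}
rec = write_audit_record(
    alice_session,
    "policy.update",
    {"rate_limit_rpm": {"from": 500, "to": 750}},
    source="copilot",
)
```

Whatever surface calls this path, the record answers "who?" with a human (or service-account) identity.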

Why separating human and AI audit records is a compliance mistake

It creates two sources of truth for the same question

If you have one audit trail for UI changes and a separate log for AI-initiated changes, the answer to "show me all changes to the Payments Standard policy in Q2" requires querying two systems, reconciling their formats, and merging the results. Any compliance query that spans both surfaces requires manual work that introduces errors and gaps.

Under SOX, SOC 2, PCI DSS, and most financial services audit frameworks, "show me all changes to production configuration in this period" needs to produce a complete, unambiguous record. "Here is most of it, and here is a separate file for the AI-initiated changes" is not a complete record.

AI-initiated changes need human accountability, not AI accountability

The right framing for AI-assisted platform management is not "the AI changed the configuration." It is "Alice, authorised by her editor role, used Copilot to change the configuration." The accountability is human. The audit record should reflect that.

When a compliance auditor asks "who authorised this rate limit change?", the answer should be "Alice, at 14:32, using editor permissions" — not "Copilot did it at 14:32." The first answer has accountability. The second creates ambiguity about who bears responsibility and whether the change was authorised through an appropriate human decision.

A unified audit trail with the human actor as the identity on Copilot-initiated changes resolves this cleanly. The AI assistant is a tool Alice used. The audit record shows Alice's decision, not the tool's invocation.

Separation enables audit evasion

This is the uncomfortable point: if AI-initiated changes are not in the main audit trail, a bad actor who gains access to an AI assistant and uses it to make configuration changes has a lower-visibility path than one who uses the UI directly. UI changes appear in the primary audit trail that compliance teams review. AI changes, in a separate or absent audit trail, may not be reviewed at all.

A unified audit trail eliminates this asymmetry. Every surface — UI, API, Copilot — produces records in the same trail with the same retention and the same access controls. There is no lower-visibility path.

The forensic scenario: "who changed this policy?"

The practical value of a unified audit log is most visible in the forensic scenario. Something has gone wrong. A rate limit that should have been 500 RPM was 750 RPM during the incident window. The question is: who changed it, when, and was that change authorised?

With a unified audit log, the query is:

actor: any
resource: pol_payments_standard
action: policy.update
time: 2026-04-01T00:00:00Z to 2026-04-06T23:59:59Z

The result is a list of every change to that policy in the window, regardless of whether it came from the UI, the API, or Copilot. Each record has the actor identity, their role at the time, the before and after values, and the source.
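That query is a single filter over one collection. A sketch, assuming the record shape shown earlier (sample records are illustrative):

```python
trail = [
    {"timestamp": "2026-04-02T09:15:00.000Z", "source": "management_ui",
     "actor": {"id": "usr_alice_01"}, "action": "policy.update",
     "resource": {"id": "pol_payments_standard"}},
    {"timestamp": "2026-04-04T02:00:03.120Z", "source": "management_api",
     "actor": {"id": "svc_deploy"}, "action": "policy.update",
     "resource": {"id": "pol_payments_standard"}},
    {"timestamp": "2026-04-06T14:32:17.441Z", "source": "copilot",
     "actor": {"id": "usr_alice_01"}, "action": "policy.update",
     "resource": {"id": "pol_payments_standard"}},
]

def query(records, resource_id, action, start, end):
    """Any actor, any source: ISO-8601 UTC timestamps sort as strings."""
    return [r for r in records
            if r["resource"]["id"] == resource_id
            and r["action"] == action
            and start <= r["timestamp"] <= end]

hits = query(trail, "pol_payments_standard", "policy.update",
             "2026-04-01T00:00:00Z", "2026-04-06T23:59:59Z")
```

One query, one result set: the UI change, the overnight deployment, and the Copilot change all come back together.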

With a fragmented audit trail, the same question requires checking multiple systems, possibly discovering that one of them has no record at all (for example, if Copilot changes were never retained), and accepting that the answer may be incomplete.

The forensic question "who changed this?" should always have a single, complete answer. That requires a single audit trail.

What the audit record enables beyond forensics

The unified audit log is not just for incident investigation. It supports:

Quarterly access reviews. The audit log shows which identities (human users, service accounts, AI-assisted sessions) made changes in the review period. Identities that appear in the audit log but are no longer active employees or approved integrations are candidates for deprovisioning.

Change velocity monitoring. How many configuration changes were made in a given period? A spike in change frequency (more changes in one day than in the previous month) is an anomaly worth investigating.

Role compliance verification. Did any viewer-role account produce write events in the audit log? That should be impossible if RBAC is working correctly — but the audit log is where you verify it.
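The role compliance check is itself a one-pass scan of the unified trail. A sketch — the write-verb set and the `role.verb` action convention are assumptions based on the `policy.update` example above:

```python
# Hypothetical set of action verbs that count as writes.
WRITE_VERBS = {"create", "update", "delete", "assign", "revoke"}

def viewer_write_violations(records):
    """Audit records where a viewer-role actor performed a write.

    A non-empty result means RBAC failed somewhere -- the audit log
    is where that failure becomes visible.
    """
    return [r for r in records
            if r["actor"]["role"] == "viewer"
            and r["action"].rsplit(".", 1)[-1] in WRITE_VERBS]

sample = [
    {"actor": {"id": "usr_bob_02", "role": "viewer"},
     "action": "policy.update"},
    {"actor": {"id": "usr_alice_01", "role": "editor"},
     "action": "policy.update"},
]
violations = viewer_write_violations(sample)
```

Because every surface writes to the same trail, this check covers UI, API, and Copilot activity in one pass; with a fragmented trail it would have to be run per system.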

Regulatory evidence packaging. When a regulator asks for evidence of change management controls — "show us that configuration changes go through an approval process and are logged" — the audit trail is the primary evidence. A complete, consistent trail is audit-ready evidence. A fragmented trail requires manual assembly.


Zerq's management audit log captures every configuration change — from the UI, the management API, or Copilot — in a single structured trail with consistent identity fields. See Monitoring & Analytics for the full observability capability, or request a demo to walk through how the audit trail integrates with your compliance review process.