How to connect Claude, Cursor, and ChatGPT to your enterprise APIs — without a security incident
MCP makes it easy to give AI tools access to your APIs. It also makes it easy to give them too much access, with no audit trail and no rate limits. Here's how to do it right.
- ai
- mcp
- developer-experience
- security
The Model Context Protocol (MCP) has made it trivial for AI tools like Claude, Cursor, and ChatGPT to discover and call your APIs. What it has not made trivial is doing that safely.
89% of developers now use AI tools in their workflow. But only 24% design their APIs with AI consumers in mind. The gap between those two numbers is where most MCP security incidents start.
The good news: the security model for AI agent API access is the same model you already use for human API access. The fix is not a new tool — it is applying the same gateway to a new type of consumer.
What MCP actually does
MCP gives AI clients — Claude Desktop, Cursor, ChatGPT, or any MCP-compatible agent — a standard way to discover what tools are available and invoke them. A tool in MCP terms maps directly to an API operation: list endpoints, get endpoint details, execute a request.
Without a gateway in the middle, an MCP server typically:
- Holds credentials directly (usually static API keys in a config file)
- Has no rate limiting — a looping agent can make thousands of calls
- Produces no structured audit log compatible with your SIEM
- Has broader permissions than any individual human user would be given
With a gateway in the middle, every MCP call goes through the same enforcement as every REST call: credential validation, rate limiting, RBAC, structured logging.
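A minimal sketch of that shared enforcement path, in Python. The `Profile` shape, the `enforce` function, and the status strings are illustrative, not any particular gateway's API; a real gateway does this in middleware, not application code:

```python
import json
import time
from dataclasses import dataclass, field

@dataclass
class Profile:
    client_id: str
    token: str
    allowed_products: set
    rate_limit_per_min: int
    calls: list = field(default_factory=list)  # timestamps of recent calls

def enforce(profile: Profile, token: str, product: str) -> str:
    """Run an MCP tool call through the same checks as a REST request."""
    # 1. Credential validation
    if token != profile.token:
        return "401 invalid credential"
    # 2. RBAC: is this product in the profile's scope?
    if product not in profile.allowed_products:
        return "403 product not in profile"
    # 3. Rate limiting: sliding one-minute window
    now = time.time()
    profile.calls = [t for t in profile.calls if now - t < 60]
    if len(profile.calls) >= profile.rate_limit_per_min:
        return "429 rate limited"
    profile.calls.append(now)
    # 4. Structured audit log, same shape as REST entries
    print(json.dumps({"client": profile.client_id, "product": product, "at": now}))
    return "200 forwarded to upstream"
```

A looping agent hits the same 429 a looping script would; there is no separate "AI path" to harden.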
The three-component model that works
Component 1: Your existing API gateway as the MCP endpoint.
The MCP server should be a thin layer on top of your gateway — not a separate service with its own credentials and access model. When the AI tool calls execute_endpoint, that call goes through the gateway, not around it.
This means the AI tool uses the same client ID and profile as any other API consumer. Its access is scoped. Its calls are rate-limited. Its requests appear in the same logs as everything else.
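The thin-layer idea can be sketched as a pure translation table: every MCP tool call becomes a gateway request, and nothing else. Here `gateway_request` is a stand-in for your HTTP client, and the paths are hypothetical:

```python
def handle_tool_call(tool: str, args: dict, gateway_request) -> dict:
    """Translate an MCP tool call into a gateway request; no direct upstream access."""
    routes = {
        "list_endpoints":   ("GET",  f"/apis/{args.get('collection', '')}/endpoints"),
        "get_endpoint":     ("GET",  f"/apis/{args.get('collection', '')}/endpoints/{args.get('endpoint', '')}"),
        "execute_endpoint": ("POST", f"/apis/{args.get('collection', '')}/endpoints/{args.get('endpoint', '')}/execute"),
    }
    if tool not in routes:
        return {"error": f"unknown tool: {tool}"}
    method, path = routes[tool]
    # The gateway applies auth, rate limits, RBAC, and logging on this hop;
    # the MCP layer adds nothing but translation.
    return gateway_request(method, path, args.get("body"))
```

Because the layer holds no credentials and no routing logic of its own, there is nothing in it worth attacking: compromising the MCP server gains an attacker no more than compromising any other gateway client.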
Component 2: Scoped profiles per AI consumer.
Different AI tools should have different access levels. An IDE assistant helping a developer test APIs needs read access to endpoint definitions and the ability to make test calls. An internal chatbot answering customer support queries needs a narrower set of operations. A CI/CD automation needs different permissions again.
Profiles let you express this: each profile gets access to specific API products, with its own rate limit, its own credential, and its own audit trail.
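One way to express the three consumers above as profiles. The field names and values here are illustrative, not a specific gateway's schema:

```json
{
  "profiles": {
    "developer-readonly": {
      "products": ["internal-apis"],
      "permissions": ["list", "describe", "execute-test"],
      "rate_limit": "60/min"
    },
    "support-chatbot": {
      "products": ["support-kb"],
      "permissions": ["list", "describe", "execute"],
      "rate_limit": "30/min"
    },
    "ci-automation": {
      "products": ["deploy-apis"],
      "permissions": ["execute"],
      "rate_limit": "600/hour"
    }
  }
}
```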
Component 3: The same audit trail as your REST traffic.
When your security team asks "what did the AI agent do yesterday between 14:00 and 16:00," the answer should come from the same structured log query they use for any other investigation — not from a separate AI observability tool that nobody set up correctly.
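With a single shared log, that question becomes an ordinary filter. A sketch assuming JSONL entries with `client` and `at` (epoch seconds) fields; your SIEM would run the equivalent query natively:

```python
import json

def agent_activity(log_lines, client_id, start, end):
    """Return every log entry for one client within a time window."""
    out = []
    for line in log_lines:
        entry = json.loads(line)
        if entry["client"] == client_id and start <= entry["at"] < end:
            out.append(entry)
    return out
```

The point is not the code but the absence of code: no AI-specific collector, no second log pipeline to maintain.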
Setting up Claude Desktop with your gateway MCP endpoint
The configuration is straightforward once your gateway exposes an MCP endpoint. In Claude Desktop's MCP client config:
```json
{
  "mcpServers": {
    "my-company-apis": {
      "type": "http",
      "url": "https://your-gateway.example.com/mcp",
      "headers": {
        "X-Client-ID": "claude-assistant",
        "X-Profile-ID": "developer-readonly",
        "Authorization": "Bearer <token>"
      }
    }
  }
}
```
The key details:
- X-Client-ID: identifies the AI consumer, same as any other client
- X-Profile-ID: scopes access to a specific set of APIs and permissions
- Authorization: the credential for this profile — rotated on the same schedule as every other credential
Cursor and ChatGPT use the same Streamable HTTP transport. The config format varies slightly by client, but the auth model is identical.
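To make the transport concrete, here is a sketch that builds (but does not send) the JSON-RPC request behind a `tools/list` call over Streamable HTTP, reusing the headers from the config above. The URL and custom header names are the same assumptions as in that example:

```python
import json

def mcp_list_tools_request(url, client_id, profile_id, token):
    """Build a Streamable HTTP request for MCP tools/list; no network call."""
    return {
        "url": url,
        "method": "POST",
        "headers": {
            "Content-Type": "application/json",
            "Accept": "application/json, text/event-stream",
            "X-Client-ID": client_id,
            "X-Profile-ID": profile_id,
            "Authorization": f"Bearer {token}",
        },
        "body": json.dumps({"jsonrpc": "2.0", "id": 1, "method": "tools/list"}),
    }
```

Whichever client sends it, the gateway sees the same thing: an authenticated HTTP request from a known client ID under a known profile.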
What the AI tool can do — and what it cannot
Once connected, the AI tool can:
- List your API collections — see what API products exist and which it has access to
- List endpoints — browse operations within a collection
- Get endpoint details — see parameters, schemas, authentication requirements
- Execute an endpoint — make a real API call, through the gateway, with full enforcement
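On the wire, an "execute an endpoint" action is a standard MCP `tools/call` request. The tool name matches the one used above; the argument names and values are illustrative:

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "execute_endpoint",
    "arguments": {
      "collection": "billing",
      "endpoint": "list-invoices",
      "query": { "limit": 10 }
    }
  }
}
```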
What it cannot do (if your profiles are configured correctly):
- Access API products outside its assigned profile
- Exceed its rate limit, no matter how fast it loops
- Make calls that do not appear in your audit log
- Escalate its own permissions
The question to ask before going live
Before you give any AI tool access to a production API, ask: if a junior contractor had this same credential and these same permissions, would you be comfortable with that?
If the answer is yes, you have scoped the access correctly. If the answer is no — the access is too broad, the credential never rotates, or you have no way to audit what it does — fix that first.
The access model for AI tools is not a new problem. It is the partner access management problem, applied to a new type of consumer.
Zerq's gateway MCP endpoint lets AI tools discover and call your APIs with the same credentials, rate limits, and audit trail as your REST consumers. See how it works or request a demo to connect your first AI tool safely.