From Docker Compose to Multi-Replica Kubernetes: Scaling Zerq Without Rewriting Anything
Docker Compose for development. Kubernetes for production with multi-replica scaling, rolling updates, and HA data stores. The gateway config, policies, and API products stay identical — only the orchestration layer changes. Here is the concrete path.
- architecture
- kubernetes
- docker
- deployment
- operations
The standard recommendation for Zerq deployments is: Docker Compose for development and early environments, Kubernetes for production. This is not about Kubernetes being harder or better — it is about what each tool is actually good at.
Docker Compose is excellent for spinning up a complete local stack in seconds, iterating on configuration, and running integration tests without cluster overhead. Kubernetes gives you multi-replica horizontal scaling, rolling updates with health checks, auto-scaling on load metrics, and the declarative infrastructure model that production environments need.
The question teams always ask is: "how much do I have to rewrite when I move?" The answer is: the orchestration manifests, and nothing else. Gateway configuration, API products, policies, workflows, and credentials all live in MongoDB — they travel with you to Kubernetes without modification.
This post is the concrete path, not the abstract principle.
The Compose baseline
A minimal Docker Compose setup for Zerq includes four services:
```yaml
services:
  gateway:
    image: zerq/gateway:latest
    ports:
      - "8080:8080"   # API traffic
      - "8443:8443"   # API traffic (TLS)
    environment:
      MONGODB_URI: mongodb://mongodb:27017/zerq
      REDIS_URL: redis://redis:6379
    depends_on:
      - mongodb
      - redis

  management:
    image: zerq/management:latest
    ports:
      - "3000:3000"
    environment:
      MONGODB_URI: mongodb://mongodb:27017/zerq
    depends_on:
      - mongodb

  mongodb:
    image: mongo:7
    volumes:
      - mongodb_data:/data/db

  redis:
    image: redis:7-alpine
    volumes:
      - redis_data:/data

volumes:
  mongodb_data:
  redis_data:
```
This runs correctly on a developer laptop. The gateway serves traffic, the management UI is available for configuration, MongoDB persists config and audit data, and Redis handles distributed rate limit state.
What this does not provide: HA, rolling updates, horizontal scaling, health check integration with load balancers, pod auto-scaling, or resource limits that prevent a runaway service from taking down the host. Compose is fine for dev; these properties are what you get from Kubernetes.
What moves to Kubernetes unchanged
Before the Kubernetes manifests, the important point: everything in MongoDB moves unchanged.
When you migrate from Compose to Kubernetes, you are changing how containers are scheduled and how traffic reaches them. You are not changing:
- The API products (collections, proxies, endpoints) you have configured
- The access policies and client credentials you have issued
- The workflow definitions attached to your proxies
- The audit records accumulated since you started
If you are migrating a development environment to a staging or production Kubernetes cluster, you can either:
- Point the new deployment at the same MongoDB instance (if network-accessible)
- Export and restore the data to a new MongoDB deployment using standard mongodump/mongorestore
Either way, the API products your team built in Compose appear in Kubernetes without re-entry.
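If you take the export-and-restore route, the standard tooling is enough. A minimal sketch — the source and target URIs below are placeholders for your actual environments, not real endpoints:

```bash
# Dump the zerq database from the Compose MongoDB
# (assumes port 27017 is published to the host).
mongodump --uri="mongodb://localhost:27017/zerq" --out=./zerq-backup

# Restore into the production replica set, restricting the
# restore to the zerq database's namespaces.
mongorestore --uri="mongodb+srv://user:[email protected]" \
  --nsInclude="zerq.*" ./zerq-backup
```

Run the restore before pointing the Kubernetes deployment at the new cluster, so the gateway's first config load already sees your products.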
The Kubernetes topology
A production Kubernetes deployment typically looks like:
Namespace and separation
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: zerq
```
Run all Zerq components in a dedicated namespace. This scopes RBAC, network policies, and resource quotas cleanly.
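The resource-quota point can be made concrete. A sketch of a namespace quota — the numbers here are illustrative, not recommendations; size them to your cluster:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: zerq-quota
  namespace: zerq
spec:
  hard:
    requests.cpu: "8"        # sum of all pod CPU requests in the namespace
    requests.memory: 16Gi
    limits.cpu: "16"
    limits.memory: 32Gi
    pods: "50"               # upper bound on HPA scale-out, see below
```

A quota like this caps what the autoscaler can consume, so a traffic spike on the gateway cannot starve other tenants of the cluster.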
Gateway Deployment — multiple replicas
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: zerq-gateway
  namespace: zerq
spec:
  replicas: 3
  selector:
    matchLabels:
      app: zerq-gateway
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0   # zero-downtime rolling updates
  template:
    metadata:
      labels:
        app: zerq-gateway
    spec:
      containers:
        - name: gateway
          image: zerq/gateway:latest
          ports:
            - containerPort: 8080
            - containerPort: 8443
          env:
            - name: MONGODB_URI
              valueFrom:
                secretKeyRef:
                  name: zerq-secrets
                  key: mongodb-uri
            - name: REDIS_URL
              valueFrom:
                secretKeyRef:
                  name: zerq-secrets
                  key: redis-url
          resources:
            requests:
              memory: "128Mi"
              cpu: "100m"
            limits:
              memory: "512Mi"
              cpu: "1000m"
          readinessProbe:
            httpGet:
              path: /health/ready
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 5
          livenessProbe:
            httpGet:
              path: /health/live
              port: 8080
            initialDelaySeconds: 10
            periodSeconds: 10
```
Why maxUnavailable: 0: This ensures rolling updates never take a pod down before a replacement is ready. Combined with Go's fast startup (seconds, not the minutes a JVM might need), new pods reach their readiness probe quickly, so updates proceed without traffic drops.
Why 3 replicas minimum: With 2 replicas, a rolling update (surge 1, unavailable 0) temporarily runs 3 pods. With 3 as the base, you always have 2 healthy pods serving traffic during an update, never 1.
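Rolling updates cover planned deploys; node drains and cluster upgrades are voluntary disruptions governed separately. A PodDisruptionBudget sketch that keeps at least 2 of the 3 gateway pods up while nodes are drained:

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: zerq-gateway-pdb
  namespace: zerq
spec:
  minAvailable: 2          # never evict below 2 serving pods
  selector:
    matchLabels:
      app: zerq-gateway
```

Without a PDB, a cluster upgrade is free to evict all replicas on a node at once, which defeats the careful rolling-update settings above.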
Horizontal Pod Autoscaler
```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: zerq-gateway-hpa
  namespace: zerq
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: zerq-gateway
  minReplicas: 3
  maxReplicas: 20
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 80
```
Because the gateway binary is stateless on its own (state lives in MongoDB and Redis), scaling out is clean. New pods join the cluster, connect to the shared data stores, and start serving traffic. No warmup state to synchronise.
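If scale-in churn becomes a problem on bursty traffic, autoscaling/v2 also accepts a behavior stanza under the HPA spec. A sketch that slows scale-down — the window and policy values are illustrative, tune them to your traffic shape:

```yaml
# Goes under spec: of the HorizontalPodAutoscaler above.
behavior:
  scaleDown:
    stabilizationWindowSeconds: 300   # wait 5 min of low load before shrinking
    policies:
      - type: Pods
        value: 1                      # remove at most 1 pod per minute
        periodSeconds: 60
```

This keeps brief traffic dips from repeatedly tearing down and recreating pods.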
Service and Ingress
```yaml
apiVersion: v1
kind: Service
metadata:
  name: zerq-gateway
  namespace: zerq
spec:
  selector:
    app: zerq-gateway
  ports:
    - name: http
      port: 80
      targetPort: 8080
    - name: https
      port: 443
      targetPort: 8443
  type: ClusterIP
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: zerq-gateway-ingress
  namespace: zerq
  annotations:
    nginx.ingress.kubernetes.io/proxy-read-timeout: "300"
    nginx.ingress.kubernetes.io/proxy-body-size: "10m"
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - api.yourdomain.com
      secretName: zerq-tls
  rules:
    - host: api.yourdomain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: zerq-gateway
                port:
                  number: 80
```
MongoDB — production options
For production, MongoDB should run as a replica set for HA. You have three practical options:
**MongoDB Atlas in your cloud tenant** — easiest to operate, available in your region of choice for data residency. Connect via the Atlas connection string in your Kubernetes secret.
**MongoDB Community Operator** — runs a replica set inside your Kubernetes cluster, managed through a custom resource:
```yaml
apiVersion: mongodbcommunity.mongodb.com/v1
kind: MongoDBCommunity
metadata:
  name: zerq-mongodb
  namespace: zerq
spec:
  members: 3
  type: ReplicaSet
  version: "7.0.0"
  security:
    authentication:
      modes: ["SCRAM"]
  users:
    - name: zerq-app
      db: admin
      passwordSecretRef:
        name: mongodb-password
      roles:
        - name: readWrite
          db: zerq
  statefulSet:
    spec:
      volumeClaimTemplates:
        - metadata:
            name: data-volume
          spec:
            accessModes: ["ReadWriteOnce"]
            storageClassName: fast
            resources:
              requests:
                storage: 50Gi
```
**Self-managed replica set on dedicated VMs** — appropriate for environments where Kubernetes does not manage stateful workloads, or where the MongoDB cluster is shared with other services.
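Whichever option you pick, the gateway only sees a connection string. A sketch of the secret for an in-cluster replica set — the host names here assume the Community Operator's `<name>-svc` headless-service naming convention, so verify them against your actual deployment:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: zerq-secrets
  namespace: zerq
type: Opaque
stringData:
  # Replica-set URI listing all three members; <password> is a placeholder.
  mongodb-uri: "mongodb://zerq-app:<password>@zerq-mongodb-0.zerq-mongodb-svc.zerq.svc.cluster.local:27017,zerq-mongodb-1.zerq-mongodb-svc.zerq.svc.cluster.local:27017,zerq-mongodb-2.zerq-mongodb-svc.zerq.svc.cluster.local:27017/zerq?replicaSet=zerq-mongodb&authSource=admin"
```

The `replicaSet` parameter is what lets the driver fail over to a new primary automatically when the old one goes down.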
Redis — HA with Sentinel or Cluster
For distributed rate limiting, Redis needs to be available across gateway replicas. For production:
```yaml
# Bitnami Redis chart values (helm install -f redis/values.yaml)
architecture: replication
auth:
  enabled: true
  existingSecret: redis-secret
  existingSecretPasswordKey: redis-password
replica:
  replicaCount: 2
sentinel:
  enabled: true
  masterSet: zerq-redis
```
Redis Sentinel provides automatic failover if the primary goes down. Rate limit counters are ephemeral — if Redis restarts without persistence, counters reset. This is a known operational property: a brief window after Redis recovery where clients may exceed their quota before counters rebuild. For most deployments this is acceptable; if it is not, enable AOF persistence so counters survive a restart, or use a Redis Cluster topology so counters are sharded across nodes and a single node failure loses only a fraction of them.
Secrets management
Do not put MongoDB connection strings or Redis passwords in ConfigMaps. Use Kubernetes Secrets at minimum:
```bash
kubectl create secret generic zerq-secrets \
  --namespace zerq \
  --from-literal=mongodb-uri="mongodb+srv://user:[email protected]/zerq" \
  --from-literal=redis-url="redis://:password@zerq-redis-sentinel:26379/0"
```
For environments with a secrets manager (Vault, AWS Secrets Manager, Azure Key Vault), use the External Secrets Operator to synchronise secrets into Kubernetes Secrets without embedding credentials in Kubernetes manifests.
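With the External Secrets Operator, the zerq-secrets Secret is synchronised rather than created by hand. A sketch assuming a ClusterSecretStore named vault-backend and a zerq/prod path in your secrets manager — both names are hypothetical:

```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: zerq-secrets
  namespace: zerq
spec:
  refreshInterval: 1h          # re-sync from the secrets manager hourly
  secretStoreRef:
    name: vault-backend        # hypothetical ClusterSecretStore
    kind: ClusterSecretStore
  target:
    name: zerq-secrets         # the Kubernetes Secret the gateway mounts
  data:
    - secretKey: mongodb-uri
      remoteRef:
        key: zerq/prod         # hypothetical path in the secrets manager
        property: mongodb-uri
    - secretKey: redis-url
      remoteRef:
        key: zerq/prod
        property: redis-url
```

Rotating a credential in the secrets manager then propagates to the cluster on the next refresh, with no manifest change.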
The observability difference at scale
In a single Compose instance, all logs come from one process. In a multi-replica Kubernetes deployment, logs come from multiple pods with different pod IDs. The structured logs that Zerq produces become more valuable at scale because they carry product, partner, and endpoint dimensions — not just pod identity.
When you are debugging elevated error rates on a specific endpoint across a 10-pod deployment, filtering log aggregation by product=payments and status=502 gives you the affected calls across all pods immediately. The observability design that works on one instance works at any replica count.
What does not change at all
To close the loop on the original question — here is what is identical between your Docker Compose dev environment and your Kubernetes production environment:
| Thing | Same in Compose | Same in K8s |
|---|---|---|
| API products, proxies, endpoints | ✓ | ✓ |
| Auth policies, rate limits | ✓ | ✓ |
| Workflow definitions | ✓ | ✓ |
| Client credentials | ✓ | ✓ |
| Audit record format | ✓ | ✓ |
| Management UI behaviour | ✓ | ✓ |
| MongoDB connection string format | ✓ | ✓ |
What changes: docker-compose.yml becomes Deployments, Services, Ingress, HPAs, and a MongoDB replica set configuration. The platform does not know or care about the difference.
See Zerq's architecture page for the full deployment options overview, or request a demo to walk through your specific Kubernetes topology and data store requirements.