Version: 2.0 (Current)

Scaling zrok frontends

The Linux, Docker, and Kubernetes deployment guides describe a single-host model where one frontend process handles all public share traffic. This page explains how to run multiple frontend instances for higher throughput and availability.

How the dynamic frontend works

Each zrok2 access dynamicProxy process:

  1. Loads a Ziti identity from a JSON file and connects to the Ziti overlay
  2. Subscribes to an AMQP exchange (dynamicProxy) using an ephemeral queue bound to its frontend token as the routing key
  3. Queries the controller via gRPC (dynamicProxyController Ziti service) for the initial set of share mappings
  4. Listens on an HTTP/HTTPS address for incoming requests
  5. Routes requests by matching the Host header against its in-memory mapping table, proxying to the share's backend through the Ziti overlay

The AMQP queue is unique per process instance — when the controller publishes a mapping update for a frontend token, every instance subscribed to that token receives an independent copy. Instances do not compete for messages.
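The fan-out semantics can be modeled in a few lines (a toy sketch, not zrok2 code): each instance binds its own ephemeral queue to the shared frontend token, so a published update reaches every instance instead of one competing consumer.

```python
from collections import defaultdict

class FanoutByRoutingKey:
    """Toy model of the dynamicProxy exchange: one ephemeral queue
    per frontend instance, bound by frontend token."""

    def __init__(self):
        # token -> list of per-instance queues bound to that token
        self.queues = defaultdict(list)

    def bind(self, token):
        queue = []  # ephemeral, unique to this instance
        self.queues[token].append(queue)
        return queue

    def publish(self, token, update):
        # Every queue bound to the token gets an independent copy.
        for queue in self.queues[token]:
            queue.append(update)

exchange = FanoutByRoutingKey()
instance_a = exchange.bind("token-1")
instance_b = exchange.bind("token-1")

exchange.publish("token-1", {"host": "abc.share.example.com", "share": "abc"})

# Both instance queues now hold their own copy of the mapping update.
print(instance_a == instance_b)
```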

Scaling approaches

Option A: Multiple instances of one frontend (simplest)

Run multiple zrok2 access dynamicProxy processes that share the same frontend token and Ziti identity. Place a load balancer in front of them.

                 ┌─ Frontend Instance A (same token, same identity)
Load Balancer ───┤
                 └─ Frontend Instance B (same token, same identity)

Each instance:

  • Uses the same frontend.yaml (with a different bind_address if co-located)
  • Loads the same public.json Ziti identity file (read-only — no locking)
  • Receives identical AMQP mapping updates independently
  • Maintains its own in-memory mapping table

This is the simplest approach. No additional zrok2 admin commands are needed. The frontend token and identity file can be copied to additional hosts.
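For two co-located instances, the only difference between the two frontend.yaml files may be the bind address (a sketch; field values are illustrative, so check your own frontend.yaml for the exact layout):

```yaml
# instance-a/frontend.yaml
frontend_token: "<shared-token>"
identity: "public"
bind_address: "0.0.0.0:8080"

# instance-b/frontend.yaml — identical except for the port
frontend_token: "<shared-token>"
identity: "public"
bind_address: "0.0.0.0:8081"
```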

Option B: Separate frontends per instance

Create distinct frontend records in the controller, each with its own token and optionally its own Ziti identity. Map each to the same namespace(s).

# Create additional frontends (each gets a unique token)
zrok2 admin create frontend --dynamic <public-ziti-id> frontend-2
zrok2 admin create frontend --dynamic <public-ziti-id> frontend-3

# Map them to the same namespace
zrok2 admin create namespace-frontend public <frontend-2-token>
zrok2 admin create namespace-frontend public <frontend-3-token>

Each frontend can share the same Ziti identity (public.json) or use separate identities. Separate identities provide stronger isolation — if one identity is compromised, the others are unaffected.

To create a separate identity for each frontend:

# Create a new identity for the second frontend
zrok2 admin create identity public-2

# Create the frontend using the new identity's Ziti ID
zrok2 admin create frontend --dynamic <public-2-ziti-id> frontend-2

Then configure each frontend's frontend.yaml with its own frontend_token, identity, and controller.identity_path.
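A sketch of the second frontend's config, using the fields named above (the token and identity path are hypothetical placeholders):

```yaml
# frontend-2/frontend.yaml
frontend_token: "<frontend-2-token>"
identity: "public-2"
controller:
  identity_path: "/etc/zrok2/public-2.json"
```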

Which approach to choose

| Concern | Option A (shared) | Option B (separate) |
|---|---|---|
| Setup complexity | Lowest — copy files | More admin commands |
| Identity isolation | Shared | Independent |
| Namespace flexibility | All instances serve the same namespaces | Each frontend can serve different namespaces |
| AMQP routing | All instances share one routing key | Each has its own routing key |
| Monitoring | Instances are indistinguishable | Each frontend has a unique token in logs |

For most deployments, Option A is sufficient. Use Option B when you need per-frontend namespace isolation, distinct monitoring identifiers, or defense in depth for the Ziti identity.

Load balancer configuration

Place a Layer 4 (TCP) or Layer 7 (HTTP) load balancer in front of the frontend instances. The load balancer must:

  • Forward the Host header unchanged (the frontend uses it for routing)
  • Support WebSocket upgrade (for zrok2 share connections)
  • Use sticky sessions if your frontends serve stateful backends (optional)

For TLS termination, either:

  • Terminate TLS at the load balancer and forward plaintext to the frontends
  • Pass TLS through to the frontends (each must have the certificate)

Example: Caddy

*.share.example.com {
    reverse_proxy frontend-a:8080 frontend-b:8080
}
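Example: Nginx

An equivalent Nginx configuration (a sketch; upstream and server names are illustrative) must forward the Host header unchanged and enable WebSocket upgrades explicitly:

```nginx
upstream zrok2_frontends {
    server frontend-a:8080;
    server frontend-b:8080;
}

server {
    listen 443 ssl;
    server_name *.share.example.com;

    location / {
        proxy_pass http://zrok2_frontends;
        proxy_http_version 1.1;

        # Forward the Host header unchanged; the frontend routes on it.
        proxy_set_header Host $host;

        # Allow WebSocket upgrade for zrok2 share connections.
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```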

Example: Docker Compose with Caddy

Plain Docker Compose does not load balance across replicas on a single port — you need a reverse proxy. Remove ports: from the frontend service, scale it, and let Caddy (or Nginx/Traefik) route to replicas via Docker DNS:

services:
  zrok2-frontend:
    image: openziti/zrok2:latest
    command: ["access", "public", "/config/frontend.yaml"]
    deploy:
      replicas: 3
    volumes:
      - zrok2-config:/config:ro
    # No ports: — Caddy handles ingress

  caddy:
    image: caddy:2-alpine
    ports:
      - "0.0.0.0:443:443"
    command: caddy reverse-proxy --from :443 --to zrok2-frontend:8080

Docker DNS resolves zrok2-frontend to all replica IPs, and Caddy round-robins across them.

Example: Kubernetes

The Kubernetes guide supports scaling via the frontend.replicaCount value in the Helm chart.
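For example, a Helm values override (assuming the chart exposes the frontend.replicaCount value as described; chart and release names depend on your installation):

```yaml
# values-override.yaml (sketch)
frontend:
  replicaCount: 3
```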

Other components

  • zrok2-controller — multiple controller instances can share the same PostgreSQL database. Each publishes AMQP mapping updates independently. Place a load balancer in front for the API endpoint.
  • zrok2-metrics-bridge — can read fabric.usage events from a file (single Ziti controller) or from an AMQP queue (multiple Ziti controllers). The AMQP source mode supports scaling across a multi-controller Ziti deployment.