docs/madr/decisions/094-zone-proxy-deployment-model.md (28 additions & 23 deletions)
@@ -22,7 +22,7 @@ This architectural change requires revisiting the deployment model for zone prox
**Scope of this document**: This MADR focuses on **deployment tooling** — how users deploy zone proxies via Helm, Konnect UI, and Terraform.
**Single-mesh focus**: This document assumes **single-mesh-per-zone as the default** deployment pattern.
- For multi-mesh scenarios, deploy additional zone proxies using separate Helm releases with a dedicated zone-proxy chart. This can be packaged either as an independent `kuma-zone-proxy` chart (separate release cycle, full independence) or as a subchart of the main kuma chart (single repo, tighter coupling). See the multi-mesh deployment guide for details.
+ For multi-mesh scenarios, deploy additional zone proxies using separate Helm releases with a dedicated `kuma-zone-proxy` chart. A multi-mesh deployment guide will be provided separately.
This document addresses the following questions:
@@ -38,7 +38,7 @@ Note: Whether zone ingress and egress share a single deployment is addressed in
| Per-mesh Services | **Yes** - each mesh gets its own Service/LoadBalancer for mTLS isolation |
| Namespace placement | **kuma-system** |
| Deployment mechanism | **Helm-managed** (current pattern extended for mesh-scoped zone proxies) |
- | Helm release structure | **Single release** (CP + zone proxy in one chart) |
+ | Helm release structure | **Subchart** — single release for default mesh; standalone release for additional meshes |
| Question | Decision |
|----------|----------|
@@ -355,6 +355,7 @@ zoneProxy:
  mesh: my-very-long-mesh-name-that-exceeds-limits
  service:
    name: zp-long-mesh # Override when auto-generated name is too long
+     spec: {} # Additional Service spec fields if needed
```
**Cost implication**: More LoadBalancers = higher cloud cost.
@@ -390,9 +391,13 @@ zoneProxy:
  mesh: default
```
- #### Helm Release Structure Options
+ #### Helm Release Structure: Subchart Approach
- ##### Option 1: Single Release (CP + Zone Proxy) — Recommended for Single-Mesh
+ The zone proxy is packaged as a `kuma-zone-proxy` subchart of the main `kuma` chart. This single packaging naturally supports both single-mesh and multi-mesh deployment modes.
+
+ ##### Single Release (Default — Single-Mesh)
+
+ The `kuma-zone-proxy` subchart is a dependency of the main `kuma` chart. A single `helm install` deploys everything:
```bash
helm install kuma kuma/kuma -f values.yaml
```
@@ -419,53 +424,53 @@ zoneProxy:
**Best for**: Single-mesh deployments (the default), teams preferring simplicity, GitOps workflows.
- ##### Option 2: Two Releases (CP separate from Zone Proxy)
+ ##### Standalone Release (Multi-Mesh)
+
+ For additional meshes, install the same `kuma-zone-proxy` chart as a separate Helm release:
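The install command itself is elided in this diff; a sketch of what such a standalone release could look like, assuming the `kuma-zone-proxy` chart is published under the same `kuma` repo alias used above (the release name `zp-payments` and the `payments` mesh are purely illustrative):

```shell
# Illustrative only: a standalone zone-proxy release for an additional mesh
helm install zp-payments kuma/kuma-zone-proxy \
  --namespace kuma-system \
  --set zoneProxy.mesh=payments
```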
| **Failure blast radius** | ✅ Bad zone proxy config doesn't break CP |
| **GitOps** | ✅ Clear separation of concerns |
- **Best for**: Production environments wanting CP stability isolation.
+ **Best for**: Multi-mesh deployments, production environments wanting CP stability isolation.
##### Recommendation
- **Option 1 (single release)** for single-mesh deployments. Option 2 when CP stability isolation is required or for multi-mesh scenarios.
+ **Subchart approach**: single release for the default mesh (simple, one `helm install`); standalone release of the same `kuma-zone-proxy` chart for additional meshes.
- ##### Helm Chart Structure: Pod Spec Passthrough
+ ##### Helm Chart Structure: Passthrough Values
**Problem**: Current Helm charts enumerate Kubernetes fields one by one — the ingress chart alone is 292 lines for ~15 fields. Users are blocked when they need an unsupported field (e.g., `shareProcessNamespace`, `initContainers`, `topologySpreadConstraints`), creating a cat-and-mouse problem where the chart never fully covers the Kubernetes API surface.
- **Solution**: The zone proxy chart accepts raw `podSpec`, `containers`, and `serviceSpec` sections. Helm's `merge` overlays user values onto sensible defaults. Adding any PodSpec, container, or Service field requires zero template changes.
+ **Solution**: The zone proxy chart groups passthrough values by Kubernetes resource. Helm's `merge` overlays user values onto sensible defaults. Adding any PodSpec, container, or Service field requires zero template changes.
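Inside a template, such an overlay might be rendered as in the following sketch. This is not the chart's actual code: the template path, the default fields, and the values path are illustrative, and `deepCopy` is used because Helm's `merge` mutates its first argument.

```yaml
# Hypothetical templates/deployment.yaml fragment (illustrative, not the real chart)
{{- $defaults := dict "terminationGracePeriodSeconds" 30 }}
{{- /* User-supplied podSpec wins on conflict: merge gives the first dict precedence */}}
{{- $podSpec := merge (deepCopy (.Values.zoneProxy.deployment.podSpec | default dict)) $defaults }}
spec:
  template:
    spec:
{{ toYaml $podSpec | indent 6 }}
```

Because the merged dictionary is dumped verbatim with `toYaml`, any valid PodSpec field a user sets flows through without a per-field template change.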
```yaml
- # values.yaml — open-ended passthrough
+ # values.yaml — resource-grouped passthrough
zoneProxy:
  enabled: true
  mesh: default
-   podSpec: {} # ANY valid PodSpec field (nodeSelector, tolerations, initContainers, etc.)
-   containers: {} # ANY container field (resources, lifecycle, securityContext, env, etc.)
-   serviceSpec: {} # ANY Service spec field (externalTrafficPolicy, loadBalancerSourceRanges, etc.)
+   service:
+     name: "" # Override when auto-generated name exceeds 63 chars
+     spec: {} # ANY Service spec field (externalTrafficPolicy, loadBalancerSourceRanges, etc.)
+   deployment:
+     name: "" # Override the auto-generated Deployment name if needed
+     podSpec: {} # ANY valid PodSpec field (nodeSelector, tolerations, initContainers, etc.)
```