docs: update GKE ambient guide to remove manual ResourceQuota step (Fix istio/istio#56376) #16660
base: master
Changes from all commits: 4bc4815, 26baa00, bd30dac, 65bd2df, 4a9b3a6, 911fa08, 12cedb2, 7cb276f, 8dd2737, cbf32dc, cb82172
@@ -17,12 +17,29 @@ Certain Kubernetes environments require you to set various Istio configuration o
### Google Kubernetes Engine (GKE)

+#### Platform profile
+
+When using GKE you must append the correct `platform` value to your installation commands, as GKE uses nonstandard locations for CNI binaries, which requires Helm overrides.
+
+#### istioctl ambient
+
+{{< text syntax=bash >}}
+$ istioctl install --set profile=ambient --set values.cni.platform=gke
+{{< /text >}}
+
+#### Helm ambient
+
+{{< text syntax=bash >}}
+$ helm install istio-cni charts/cni --set profile=ambient --set values.cni.platform=gke
+{{< /text >}}

#### Namespace restrictions

-On GKE, any pods with the [system-node-critical](https://kubernetes.io/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/) `priorityClassName` can only be installed in namespaces that have a [ResourceQuota](https://kubernetes.io/docs/concepts/policy/resource-quotas/) defined. By default in GKE, only `kube-system` has a defined ResourceQuota for the `node-critical` class. The Istio CNI node agent and `ztunnel` both require the `node-critical` class, and so in GKE, both components must either:
+On GKE, any pods with the [system-node-critical](https://kubernetes.io/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/) `priorityClassName` can only be installed in namespaces that have a [ResourceQuota](https://kubernetes.io/docs/concepts/policy/resource-quotas/) defined. The Istio CNI node agent and `ztunnel` both require the `node-critical` class.
+
+By default in GKE, only `kube-system` has a defined ResourceQuota for the `node-critical` class. Installing Istio with the `ambient` profile creates a ResourceQuota in the `istio-system` namespace.

-- Be installed into `kube-system` (_not_ `istio-system`)
-- Be installed into another namespace (such as `istio-system`) in which a ResourceQuota has been manually created, for example:
+To install Istio in any other namespace, you must manually create a ResourceQuota:

{{< text syntax=yaml >}}
apiVersion: v1
@@ -41,30 +58,6 @@ spec:
      - system-node-critical
{{< /text >}}
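For reference, the diff elides the middle of this manifest. A complete ResourceQuota consistent with the surrounding context would look like the sketch below (the object name `istio-critical-pods` and the `pods: 1000` limit are illustrative values, not taken from this diff):

{{< text syntax=yaml >}}
apiVersion: v1
kind: ResourceQuota
metadata:
  name: istio-critical-pods
  namespace: istio-system
spec:
  hard:
    pods: 1000
  scopeSelector:
    matchExpressions:
    - operator: In
      scopeName: PriorityClass
      values:
      - system-node-critical
{{< /text >}}

Whether the quota was created by the `ambient` profile or manually, you can confirm it exists with:

{{< text syntax=bash >}}
$ kubectl get resourcequota -n istio-system
{{< /text >}}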
-#### Platform profile
-
-When using GKE you must append the correct `platform` value to your installation commands, as GKE uses nonstandard locations for CNI binaries, which requires Helm overrides.
-
-{{< tabset category-name="install-method" >}}
-
-{{< tab name="Helm" category-value="helm" >}}
-
-{{< text syntax=bash >}}
-$ helm install istio-cni istio/cni -n istio-system --set profile=ambient --set global.platform=gke --wait
-{{< /text >}}
-
-{{< /tab >}}
-
-{{< tab name="istioctl" category-value="istioctl" >}}
-
-{{< text syntax=bash >}}
-$ istioctl install --set profile=ambient --set values.global.platform=gke
-{{< /text >}}
-
-{{< /tab >}}
-
-{{< /tabset >}}
### Amazon Elastic Kubernetes Service (EKS)

If you are using EKS:

@@ -265,7 +258,9 @@ The following configurations apply to all platforms, when certain {{< gloss "CNI
1. Cilium currently defaults to proactively deleting other CNI plugins and their config, and must be configured with
   `cni.exclusive = false` to properly support chaining. See [the Cilium documentation](https://docs.cilium.io/en/stable/helm-reference/) for more details.
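    For example, a minimal sketch of enabling chaining support on an existing Cilium install (assumes the standard `cilium/cilium` Helm chart deployed in `kube-system`; your release name and namespace may differ):

    {{< text syntax=bash >}}
    $ helm upgrade cilium cilium/cilium -n kube-system --reuse-values --set cni.exclusive=false
    {{< /text >}}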
Review discussion on this hunk:

Reviewer: Are these newlines important?
Author: No it's not.
Author: And I haven't changed anything in those lines.
Reviewer: You might have some formatter that did, because it's in your commit.
1. Cilium's BPF masquerading is currently disabled by default, and has issues with Istio's use of link-local IPs for Kubernetes health checking. Enabling BPF masquerading via `bpf.masquerade=true` is not currently supported, and results in non-functional pod health checks in Istio ambient. Cilium's default iptables masquerading implementation should continue to function correctly.
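    To confirm which masquerading implementation a cluster is using, one option (a sketch; assumes the Cilium agent runs as a DaemonSet named `cilium` in `kube-system`) is to query the agent's status output:

    {{< text syntax=bash >}}
    $ kubectl -n kube-system exec ds/cilium -- cilium status | grep Masquerading
    {{< /text >}}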
1. Due to how Cilium manages node identity and internally allow-lists node-level health probes to pods,
   applying any default-DENY `NetworkPolicy` in a Cilium CNI install underlying Istio in ambient mode will cause `kubelet` health probes (which are by default silently exempted from all policy enforcement by Cilium) to be blocked. This is because Istio uses a link-local SNAT address for kubelet health probes, which Cilium is not aware of, and Cilium does not have an option to exempt link-local addresses from policy enforcement.
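    A possible mitigation (a sketch, not part of this PR) is to explicitly allow the probe traffic with a cluster-wide Cilium policy. The address `169.254.7.127` below is the link-local SNAT address documented for Istio ambient kubelet probes; verify it against your Istio version before applying:

    {{< text syntax=yaml >}}
    apiVersion: "cilium.io/v2"
    kind: CiliumClusterwideNetworkPolicy
    metadata:
      name: "allow-ambient-hostprobes"
    spec:
      description: "Allows SNAT-ed kubelet health check probes into ambient pods"
      endpointSelector: {}
      ingress:
      - fromCIDR:
        - "169.254.7.127/32"
    {{< /text >}}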