248 changes: 96 additions & 152 deletions content/en/docs/ambient/install/platform-prerequisites/index.md
Certain Kubernetes environments require you to set various Istio configuration options to support them.

### Google Kubernetes Engine (GKE)

#### Platform profile

When using GKE you must append the correct `platform` value to your installation commands, as GKE uses nonstandard locations for CNI binaries, which requires Helm overrides.

#### Helm

```bash
helm install istio-cni istio/cni -n istio-system --set profile=ambient --set global.platform=gke --wait
```

#### istioctl

```bash
istioctl install --set profile=ambient --set values.global.platform=gke
```

#### Namespace restrictions

On GKE, any pods with the [system-node-critical](https://kubernetes.io/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/) `priorityClassName` can only be installed in namespaces that have a [ResourceQuota](https://kubernetes.io/docs/concepts/policy/resource-quotas/) defined. The Istio CNI node agent and `ztunnel` both require the `node-critical` class.

By default in GKE, only `kube-system` has a defined ResourceQuota for the `node-critical` class. Installing Istio with the `ambient` profile creates a ResourceQuota in the `istio-system` namespace.

To install Istio in any other namespace, you must manually create a ResourceQuota:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: gcp-critical-pods
  namespace: istio-system
spec:
  hard:
    pods: 1000
  scopeSelector:
    matchExpressions:
    - operator: In
      scopeName: PriorityClass
      values:
      - system-node-critical
```
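Before installing into a namespace other than `kube-system`, you can check whether a suitable quota already exists there. This is a sketch that requires access to a live cluster; `istio-system` is assumed to be your target namespace:

```shell
# List any ResourceQuotas defined in the namespace you plan to install into
kubectl get resourcequota -n istio-system
```

If the command returns no resources, apply a manifest like the one above first.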

### Amazon Elastic Kubernetes Service (EKS)

If you are using Amazon's VPC CNI with pod ENI trunking enabled, and you use pod-attached security groups, you must set `POD_SECURITY_GROUP_ENFORCING_MODE=standard`, or pod health probes will fail. There is an [open issue on the VPC CNI component](https://github.com/aws/amazon-vpc-cni-k8s/issues) tracking this incompatibility.

You can check if you have pod ENI trunking enabled by running the following command:

```bash
kubectl set env daemonset aws-node -n kube-system --list | grep ENABLE_POD_ENI
```

You can check if you have any pod-attached security groups in your cluster by running the following command:

```bash
kubectl get securitygrouppolicies.vpcresources.k8s.aws
```

You can set `POD_SECURITY_GROUP_ENFORCING_MODE=standard` by running the following command, and recycling affected pods:

```bash
kubectl set env daemonset aws-node -n kube-system POD_SECURITY_GROUP_ENFORCING_MODE=standard
```
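As an illustrative sketch, the two environment checks above can be interpreted like this. The sample value of `out` is an assumption, not output captured from a real cluster; on a real cluster you would populate it with `kubectl set env daemonset aws-node -n kube-system --list`:

```shell
# Hypothetical sample of the aws-node daemonset's env list
out="ENABLE_POD_ENI=true
POD_SECURITY_GROUP_ENFORCING_MODE=strict"

# Trunking enabled means the incompatibility can apply to you
if printf '%s\n' "$out" | grep -q '^ENABLE_POD_ENI=true$'; then
  echo "pod ENI trunking is enabled"
fi

# Anything other than "standard" means affected pods need the mode changed and a restart
if ! printf '%s\n' "$out" | grep -q '^POD_SECURITY_GROUP_ENFORCING_MODE=standard$'; then
  echo "enforcing mode is not standard"
fi
```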

### k3d

When using [k3d](https://k3d.io/) with the default Flannel CNI, you must append the correct `platform` value to your installation commands, as k3d uses nonstandard locations for CNI configuration and binaries which requires some Helm overrides.

1. Create a cluster with Traefik disabled so it doesn't conflict with Istio's ingress gateways:

```bash
k3d cluster create --api-port 6550 -p '9080:80@loadbalancer' -p '9443:443@loadbalancer' --agents 2 --k3s-arg '--disable=traefik@server:*'
```

2. Set `global.platform=k3d` when installing Istio charts. For example:

#### Helm

```bash
helm install istio-cni istio/cni -n istio-system --set profile=ambient --set global.platform=k3d --wait
```

#### istioctl

```bash
istioctl install --set profile=ambient --set values.global.platform=k3d
```

### K3s

When using [K3s](https://k3s.io/) and one of its bundled CNIs, you must append the correct `platform` value to your installation commands, as K3s uses nonstandard locations for CNI configuration and binaries which requires some Helm overrides. For the default K3s paths, Istio provides built-in overrides based on the `global.platform` value.

#### Helm

```bash
helm install istio-cni istio/cni -n istio-system --set profile=ambient --set global.platform=k3s --wait
```

#### istioctl

```bash
istioctl install --set profile=ambient --set values.global.platform=k3s
```

However, these locations may be overridden in K3s, [according to K3s documentation](https://docs.k3s.io/cli/server#k3s-server-cli-help). If you are using K3s with a custom, non-bundled CNI, you must manually specify the correct paths for those CNIs, e.g. `/etc/cni/net.d` - [see the K3s docs for details](https://docs.k3s.io/networking/basic-network-options#custom-cni). For example:

#### Helm

```bash
helm install istio-cni istio/cni -n istio-system --set profile=ambient --wait --set cniConfDir=/var/lib/rancher/k3s/agent/etc/cni/net.d --set cniBinDir=/var/lib/rancher/k3s/data/current/bin/
```

#### istioctl

```bash
istioctl install --set profile=ambient --set values.cni.cniConfDir=/var/lib/rancher/k3s/agent/etc/cni/net.d --set values.cni.cniBinDir=/var/lib/rancher/k3s/data/current/bin/
```
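If you started K3s with a non-default `--data-dir`, the two directory flags can be derived from that base path. This is a sketch assuming the standard K3s layout shown above; `K3S_DATA_DIR` is a variable introduced here for illustration:

```shell
# Base data dir: /var/lib/rancher/k3s by default, or whatever --data-dir was set to
K3S_DATA_DIR="${K3S_DATA_DIR:-/var/lib/rancher/k3s}"
CNI_CONF_DIR="${K3S_DATA_DIR}/agent/etc/cni/net.d"
CNI_BIN_DIR="${K3S_DATA_DIR}/data/current/bin/"

# Print the flags to append to the istioctl install command above
echo "--set values.cni.cniConfDir=${CNI_CONF_DIR} --set values.cni.cniBinDir=${CNI_BIN_DIR}"
```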

### MicroK8s

If you are installing Istio on [MicroK8s](https://microk8s.io/), you must append the correct `platform` value to your installation commands, as MicroK8s [uses non-standard locations for CNI configuration and binaries](https://microk8s.io/docs/change-cidr). For example:

#### Helm

```bash
helm install istio-cni istio/cni -n istio-system --set profile=ambient --set global.platform=microk8s --wait
```

#### istioctl

```bash
istioctl install --set profile=ambient --set values.global.platform=microk8s
```

### minikube

If you are using [minikube](https://kubernetes.io/docs/tasks/tools/install-minikube/) with the [Docker driver](https://minikube.sigs.k8s.io/docs/drivers/docker/),
you must append the correct `platform` value to your installation commands, as minikube with Docker uses a nonstandard bind mount path for containers.
For example:

#### Helm

```bash
helm install istio-cni istio/cni -n istio-system --set profile=ambient --set global.platform=minikube --wait
```

#### istioctl

```bash
istioctl install --set profile=ambient --set values.global.platform=minikube
```

### Red Hat OpenShift

OpenShift requires that the `ztunnel` and `istio-cni` components be installed in the `kube-system` namespace, and that you set `global.platform=openshift` for all charts.

#### Helm

You must pass `--set global.platform=openshift` to **every** chart you install, for example with the `istiod` chart:

```bash
helm install istiod istio/istiod -n istio-system --set profile=ambient --set global.platform=openshift --wait
```

In addition, you must install `istio-cni` and `ztunnel` in the `kube-system` namespace, for example:

```bash
helm install istio-cni istio/cni -n kube-system --set profile=ambient --set global.platform=openshift --wait
helm install ztunnel istio/ztunnel -n kube-system --set profile=ambient --set global.platform=openshift --wait
```
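After both charts install, a quick sanity check is to confirm the node agents actually landed in `kube-system`. The daemonset names below assume the chart defaults (`istio-cni-node` and `ztunnel`); adjust them if you overrode the release names:

```shell
# Both daemonsets should be scheduled and ready in kube-system
kubectl get daemonset istio-cni-node ztunnel -n kube-system
```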

#### istioctl

```bash
istioctl install --set profile=openshift-ambient --skip-confirmation
```

## CNI plugins

The following configurations apply to all platforms when certain CNI plugins are used:
### Cilium

1. Cilium currently defaults to proactively deleting other CNI plugins and their config, and must be configured with
`cni.exclusive = false` to properly support chaining. See [the Cilium documentation](https://docs.cilium.io/en/stable/helm-reference/) for more details.
2. Cilium's BPF masquerading is currently disabled by default, and has issues with Istio's use of link-local IPs for Kubernetes health checking. Enabling BPF masquerading via `bpf.masquerade=true` is not currently supported, and results in non-functional pod health checks in Istio ambient. Cilium's default iptables masquerading implementation should continue to function correctly.
3. Due to how Cilium manages node identity and internally allow-lists node-level health probes to pods,
applying any default-DENY `NetworkPolicy` in a Cilium CNI install underlying Istio in ambient mode will cause `kubelet` health probes (which are by-default silently exempted from all policy enforcement by Cilium) to be blocked. This is because Istio uses a link-local SNAT address for kubelet health probes, which Cilium is not aware of, and Cilium does not have an option to exempt link-local addresses from policy enforcement.
This can be resolved by applying the following `CiliumClusterwideNetworkPolicy`:

```yaml
apiVersion: "cilium.io/v2"
kind: CiliumClusterwideNetworkPolicy
metadata:
  name: "allow-ambient-hostprobes"
spec:
  description: "Allows SNAT-ed kubelet health check probes into ambient pods"
  endpointSelector: {}
  ingress:
  - fromCIDR:
    - "169.254.7.127/32"
```

This policy override is *not* required unless you already have other default-deny `NetworkPolicies` or `CiliumNetworkPolicies` applied in your cluster.
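If you do need the override, it can be applied and inspected like any other cluster-scoped Cilium resource. This sketch requires a live cluster, and the manifest filename is illustrative:

```shell
# Apply the policy above, saved locally as a manifest file
kubectl apply -f allow-ambient-hostprobes.yaml

# Confirm the policy is present cluster-wide
kubectl get ciliumclusterwidenetworkpolicies.cilium.io
```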
