|
# Enabling IPv6

## Overview

CAPA enables you to create IPv6 Kubernetes clusters on Amazon Web Services (AWS).

Only single-stack IPv6 clusters are supported. However, CAPA utilizes dual-stack infrastructure (e.g. a dual-stack VPC) to support IPv6; at the time of writing, this is the only mode of operation.

> **IMPORTANT NOTE**: Dual-stack clusters are not yet supported.

## Prerequisites

The instance types for control plane and worker machines must be [Nitro-based](https://docs.aws.amazon.com/ec2/latest/instancetypes/ec2-nitro-instances.html) in order to support IPv6. To see a list of Nitro instance types in your region, run the following command:

```bash
aws ec2 describe-instance-types \
  --filters Name=hypervisor,Values=nitro \
  --query="InstanceTypes[*].InstanceType"
```
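
If you only need to verify whether one particular instance type is Nitro-based, a query along the following lines should also work (`m5.large` below is just a placeholder):

```bash
# Show the hypervisor for a single instance type; Nitro-based types report "nitro".
aws ec2 describe-instance-types \
  --instance-types m5.large \
  --query "InstanceTypes[*].Hypervisor"
```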

## Creating IPv6 EKS-managed Clusters

To quickly deploy an IPv6 EKS cluster, use the [IPv6 EKS cluster template](https://raw.githubusercontent.com/kubernetes-sigs/cluster-api-provider-aws/refs/heads/main/templates/cluster-template-eks-ipv6.yaml).
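
As a sketch, a cluster manifest can be generated from that template with `clusterctl`. The flavor name below is assumed from the template file name, the cluster name and counts are placeholders, and the environment variables required by the template are expected to be set:

```bash
# Generate a cluster manifest from the EKS IPv6 flavor (flavor name assumed from the template file name).
clusterctl generate cluster my-ipv6-cluster \
  --flavor eks-ipv6 \
  --kubernetes-version v1.22.6 \
  --worker-machine-count 2 \
  > my-ipv6-cluster.yaml
```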

<aside class="note warning">

<h1>Warning</h1>

You can't define custom Pod CIDRs on EKS with IPv6. EKS automatically assigns an address range from the unique local
address range `fc00::/7`.

</aside>

**Note**: All addons **must** be enabled. A working cluster configuration looks like this:

```yaml
kind: AWSManagedControlPlane
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
metadata:
  name: "${CLUSTER_NAME}-control-plane"
spec:
  network:
    vpc:
      ipv6: {}
  region: "${AWS_REGION}"
  sshKeyName: "${AWS_SSH_KEY_NAME}"
  version: "${KUBERNETES_VERSION}"
  addons:
    - name: "vpc-cni"
      version: "v1.11.0-eksbuild.1"
      # this is important, otherwise environment property update will not work
      conflictResolution: "overwrite"
    - name: "coredns"
      version: "v1.8.7-eksbuild.1"
    - name: "kube-proxy"
      version: "v1.22.6-eksbuild.1"
```
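
Addon versions must be compatible with the cluster's Kubernetes version. If you need to look up the available versions, the AWS CLI can list them; `1.22` below is just an example matching the template above:

```bash
# List available vpc-cni addon versions for a given Kubernetes version.
aws eks describe-addon-versions \
  --addon-name vpc-cni \
  --kubernetes-version 1.22 \
  --query "addons[].addonVersions[].addonVersion"
```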

## Creating IPv6 Self-managed Clusters

To quickly deploy an IPv6 self-managed cluster, use the [IPv6 cluster template](https://raw.githubusercontent.com/kubernetes-sigs/cluster-api-provider-aws/refs/heads/main/templates/cluster-template-ipv6.yaml).

When creating a self-managed cluster, you can define the Pod and Service CIDRs yourself. For example, you can use the ULA IPv6 range `fd01::/48` for pod networking and `fd02::/112` for service networking, as shown in the sketch below.
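
A minimal sketch of the corresponding `Cluster` network configuration, assuming those example ranges:

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: "${CLUSTER_NAME}"
spec:
  clusterNetwork:
    pods:
      cidrBlocks:
        - fd01::/48
    services:
      cidrBlocks:
        - fd02::/112
```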

<aside class="note warning">

<h1>Warning</h1>

**Action required**: Since the coredns pods run on the single-stack IPv6 pod network, they will fail to resolve non-cluster DNS queries
via the IPv4 upstream nameserver in `/etc/resolv.conf`.

The workaround is to edit the `coredns` ConfigMap to use the Route53 Resolver nameserver `fd00:ec2::253`, by changing the `forward . /etc/resolv.conf` line to `forward . fd00:ec2::253 /etc/resolv.conf`.

```bash
kubectl -n kube-system edit cm/coredns
```
</aside>
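
In the Corefile stored in that ConfigMap, the change amounts to:

```diff
-    forward . /etc/resolv.conf
+    forward . fd00:ec2::253 /etc/resolv.conf
```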

### CNI IPv6 support

By default, no CNI plugin is installed when provisioning a self-managed cluster. You need to install your own CNI solution that supports IPv6, for example, Calico with VXLAN.

You can find the guides for enabling [IPv6](https://docs.tigera.io/calico/latest/networking/ipam/ipv6#ipv6) and [VXLAN](https://docs.tigera.io/calico/latest/networking/configuring/vxlan-ipip) support for Calico in the official documentation. Alternatively, you can use the customized Calico manifest for IPv6 provided [here](https://raw.githubusercontent.com/kubernetes-sigs/cluster-api-provider-aws/refs/heads/main/test/e2e/data/cni/calico_ipv6.yaml).
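
For example, the customized manifest can be applied directly to the workload cluster (this assumes your current kubeconfig context points at the workload cluster):

```bash
# Install the IPv6-customized Calico manifest on the workload cluster.
kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/cluster-api-provider-aws/refs/heads/main/test/e2e/data/cni/calico_ipv6.yaml
```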

## IPv6 CIDR Allocations

### AWS-assigned IPv6 VPC CIDR

To request AWS to automatically assign an IPv6 CIDR from an AWS-defined address pool, use the following setting:

```yaml
spec:
  network:
    vpc:
      ipv6: {}
```

### BYOIPv6 VPC CIDR

To define your own IPv6 address pool and CIDR, set the following values:

```yaml
spec:
  network:
    vpc:
      ipv6:
        poolId: pool-id
        cidrBlock: "2009:1234:ff00::/56"
```

The address pool must already be provisioned and contain the IPv6 CIDR you specify.
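
To see which BYOIP IPv6 pools are provisioned in your account, you can list them with the AWS CLI:

```bash
# List the provisioned IPv6 address pools in the current region.
aws ec2 describe-ipv6-pools
```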

### BYO IPv6 VPC

If you have a VPC that is already IPv6-enabled (i.e. a dual-stack VPC) and you would like to use it, define it in the `AWSCluster` spec:

```yaml
spec:
  network:
    vpc:
      id: vpc-1234567890abcdefg
      cidrBlock: 10.0.0.0/16
      ipv6:
        cidrBlock: "2001:1234:ff00::/56"
        egressOnlyInternetGatewayId: eigw-1234567890abcdefg
```

This has to be done explicitly because otherwise it would break in the following two scenarios:
- During an upgrade from 1.5 to >=2.0 where the VPC is IPv6-enabled but CAPA has only recently become aware of it.
- During a migration of the VPC from IPv4-only to dual-stack (CAPA would see that IPv6 is enabled and enforce it, even though that may not have been the user's intention).