
Commit 9edbe77

docs: add dualstack cluster support documentation
Add a new dualstack cluster template and documentation updates for IPv6 and dualstack cluster configurations. Additionally, documentation for configuring the API load balancer's target group IP type is added. New cluster templates and a Calico manifest are included for creating dualstack clusters.
1 parent 9920b8a commit 9edbe77

7 files changed, +11958 -30 lines changed

Lines changed: 1 addition & 0 deletions

# Enabling IPv6

docs/book/src/topics/ipv6-enabled-cluster.md

Lines changed: 247 additions & 30 deletions

## Overview

CAPA enables you to create IPv6 and dualstack (IPv4 + IPv6) Kubernetes clusters on Amazon Web Services (AWS) on a dualstack network infrastructure.

## Prerequisites

The instance types for control plane and worker machines must support IPv6. To see a list of instance types that support IPv6 in your region, run the following command:

```bash
aws ec2 describe-instance-types \
  --region <region> \
  --filters "Name=network-info.ipv6-supported,Values=true" \
  --query 'InstanceTypes[].InstanceType'
```

If you want to check whether a specific instance type supports IPv6, run the following command:

```bash
aws ec2 describe-instance-types \
  --region <region> \
  --instance-types <instance-type> \
  --query 'InstanceTypes[0].NetworkInfo.Ipv6Supported'
```

## Enabling IPv6 capabilities

To instruct CAPA to configure IPv6 capabilities for the network infrastructure, you must explicitly define `spec.network.vpc.ipv6` in either `AWSCluster` (for self-managed clusters) or `AWSManagedControlPlane` (for EKS clusters). See [IPv6 CIDR Allocations](#ipv6-cidr-allocations) for the different IPv6 CIDR configuration options.

```yaml
spec:
  network:
    vpc:
      ipv6: {}
```

**Note:** By default, CAPA provisions a dualstack infrastructure (i.e. a dualstack VPC and subnets). However, your Kubernetes cluster can be configured as either IPv6-only or dualstack depending on your pod/service CIDR configuration.

## Creating IPv6 EKS-managed Clusters

To quickly deploy an IPv6 EKS cluster, use the [IPv6 EKS cluster template](https://raw.githubusercontent.com/kubernetes-sigs/cluster-api-provider-aws/refs/heads/main/templates/cluster-template-eks-ipv6.yaml).

<aside class="note warning">

<h1>Warning</h1>

EKS currently only supports IPv6-only clusters (not dualstack). You can't define custom Pod CIDRs on EKS with IPv6; EKS automatically assigns an address range from the unique local address range `fc00::/7`.

</aside>

**Notes**: All addons **must** be enabled. A working IPv6 cluster configuration defines `spec.network.vpc.ipv6` and all addons as follows:

```yaml
kind: AWSManagedControlPlane
# ... (metadata and other fields omitted)
spec:
  # ...
  version: "${KUBERNETES_VERSION}"
  addons:
    - name: "vpc-cni"
      version: "v1.11.0-eksbuild.1" # Note: check for the latest compatible version
      # this is important, otherwise environment property update will not work
      conflictResolution: "overwrite"
    - name: "coredns"
      version: "v1.8.7-eksbuild.1" # Note: check for the latest compatible version
    - name: "kube-proxy"
      version: "v1.22.6-eksbuild.1" # Note: check for the latest compatible version
```
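
The addon versions above are pinned examples. To look up the newest addon versions that are compatible with your cluster's Kubernetes version, one option (a sketch, shown for `vpc-cni`; repeat for `coredns` and `kube-proxy`) is the AWS CLI:

```bash
# List the available vpc-cni addon versions for a given Kubernetes version
aws eks describe-addon-versions \
  --addon-name vpc-cni \
  --kubernetes-version <kubernetes-version> \
  --query 'addons[].addonVersions[].addonVersion'
```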
## Creating IPv6 Self-managed Clusters

To quickly deploy an IPv6 self-managed cluster, use the [IPv6 cluster template](https://raw.githubusercontent.com/kubernetes-sigs/cluster-api-provider-aws/refs/heads/main/templates/cluster-template-ipv6.yaml).

When creating a self-managed cluster, you can define the IPv6 Pod and Service CIDRs. For example, you can use the ULA IPv6 range `fd01::/48` for pod networking and `fd02::/112` for service networking.

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: "${CLUSTER_NAME}"
spec:
  clusterNetwork:
    pods:
      cidrBlocks:
        - fd01::/48
    services:
      cidrBlocks:
        - fd02::/112
```

<aside class="note warning">

<h1>Warning</h1>

**DNS64/NAT64**: If you are configuring CAPA to create dualstack private subnets (the default) for an IPv6 cluster and need IPv6-only pods to reach IPv4-only internet services, you must enable [DNS64/NAT64](https://docs.aws.amazon.com/vpc/latest/userguide/nat-gateway-nat64-dns64.html) yourself, as CAPA does not do so.
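
The AWS guide linked above covers the details; as a rough sketch with the AWS CLI (the subnet, route table, and NAT gateway IDs are placeholders for your cluster's private-subnet resources), this amounts to enabling DNS64 on each private subnet and routing the well-known NAT64 prefix through the NAT gateway:

```bash
# Enable DNS64 on a private subnet
aws ec2 modify-subnet-attribute \
  --subnet-id <private-subnet-id> \
  --enable-dns64

# Route the NAT64 prefix through the subnet's NAT gateway
aws ec2 create-route \
  --route-table-id <private-route-table-id> \
  --destination-ipv6-cidr-block 64:ff9b::/96 \
  --nat-gateway-id <nat-gateway-id>
```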
**CoreDNS**: Since CoreDNS pods run on the single-stack IPv6 pod network, they will fail to resolve non-cluster DNS queries via the IPv4 upstream nameserver in `/etc/resolv.conf`.

The workaround is to edit the `coredns` ConfigMap in the `kube-system` namespace to use the Route53 Resolver IPv6 nameserver `fd00:ec2::253`, changing the `forward . /etc/resolv.conf` line to `forward . fd00:ec2::253 /etc/resolv.conf`:

```bash
kubectl -n kube-system edit cm/coredns
```
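
For reference, after the edit the relevant fragment of the Corefile should look roughly like this (a sketch; all other plugins stay unchanged):

```
.:53 {
    # ... other plugins unchanged ...
    forward . fd00:ec2::253 /etc/resolv.conf
    # ... other plugins unchanged ...
}
```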
**Note**: This CoreDNS workaround is NOT required for dualstack clusters where pods have both IPv4 and IPv6 addresses.

</aside>
## Creating Dualstack Self-managed Clusters

To quickly deploy a dualstack self-managed cluster, use the [dualstack cluster template](https://raw.githubusercontent.com/kubernetes-sigs/cluster-api-provider-aws/refs/heads/main/templates/cluster-template-dualstack.yaml).
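
For example, assuming the template above is published as the `dualstack` flavor of the AWS provider (a sketch, not a required workflow), you can generate and apply the manifest with `clusterctl`:

```bash
clusterctl generate cluster "${CLUSTER_NAME}" \
  --infrastructure aws \
  --flavor dualstack \
  --kubernetes-version "${KUBERNETES_VERSION}" \
  --control-plane-machine-count 3 \
  --worker-machine-count 3 \
  > dualstack-cluster.yaml
kubectl apply -f dualstack-cluster.yaml
```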
When creating a self-managed cluster, you can define both IPv4 and IPv6 Pod and Service CIDRs. For example:

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: "${CLUSTER_NAME}"
spec:
  clusterNetwork:
    pods:
      cidrBlocks:
        - 192.168.0.0/16
        - fd01::/48
    services:
      cidrBlocks:
        - 172.30.0.0/16
        - fd02::/112
```
## Cloud Controller Manager IPv6 Support for Self-managed Clusters

<aside class="note warning">
<h1>Warning</h1>

The AWS Cloud Controller Manager (CCM) does not currently support dualstack Load Balancers. Services of type `LoadBalancer` in a dualstack cluster will be assigned addresses from only one IP family.
</aside>

**Node IP addresses**: You need to provide a cloud-config to the CCM via a ConfigMap that sets `NodeIPFamilies` to include IPv6. This instructs the CCM to consider IPv6 addresses on the node's network interfaces; otherwise, the CCM will only consider the node's IPv4 address. In that case nodes get only an IPv4 address, and new pods with `hostNetwork: true` will only pick up the node's IPv4 address.

For example, provide the following ConfigMap to `cloud-controller-manager-addon`:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cloud-config
  namespace: kube-system
data:
  cloud-config.conf: |
    [Global]
    NodeIPFamilies=ipv4
    NodeIPFamilies=ipv6
```

And then provide the `cloud-config.conf` to the CCM DaemonSet as follows:
```yaml
spec:
  containers:
    - name: aws-cloud-controller-manager
      image: registry.k8s.io/provider-aws/cloud-controller-manager:v1.28.3
      args:
        - --v=2
        - --cloud-provider=aws
        - --use-service-account-credentials=true
        - --configure-cloud-routes=false
        - --cloud-config=/etc/kubernetes/cloud-config.conf # Define cloud-config file path
      volumeMounts:
        - name: cloud-config
          mountPath: /etc/kubernetes/cloud-config.conf
          subPath: cloud-config.conf
  hostNetwork: true
  volumes:
    - name: cloud-config
      configMap:
        name: cloud-config
```
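
Once the CCM runs with this configuration, each node should report both an IPv4 and an IPv6 address. A quick way to verify (a sketch):

```bash
# Show the addresses the CCM has set on each node
kubectl get nodes -o custom-columns='NAME:.metadata.name,ADDRESSES:.status.addresses[*].address'
```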
## CNI IPv6 Support for Self-managed Clusters

By default, no CNI plugin is installed when provisioning a self-managed cluster. You need to install your own CNI solution that supports IPv6, for example Calico with VXLAN. You can find the guides to enable [IPv6](https://docs.tigera.io/calico/latest/networking/ipam/ipv6) and [VXLAN](https://docs.tigera.io/calico/latest/networking/configuring/vxlan-ipip) support for Calico in their official documentation.

**Important notes for Calico with IPv6**:
- Calico supports IPv6 with VXLAN encapsulation only (IP-in-IP is not supported for IPv6).
- VXLAN for IPv6 requires kernel version ≥ 4.19.1 (or Red Hat kernel ≥ 4.18.0).
- If you are using Calico as the CNI provider, ensure the CNI ingress rules allow VXLAN for cross-subnet communication. You can set the rule in the `AWSCluster` resource, for example:
```yaml
spec:
  network:
    cni:
      cniIngressRules:
        # If using Calico as CNI provider, this rule is required.
        # Note: Calico currently supports IPv6 with VXLAN.
        - description: "VXLAN (calico)"
          protocol: udp
          fromPort: 4789
          toPort: 4789
```
## IPv6 CIDR Allocations

CAPA supports several methods of allocating an IPv6 CIDR to the cluster VPC.

### AWS-assigned IPv6 VPC CIDR

To request AWS to automatically assign an IPv6 CIDR from an AWS-defined address pool, use the following setting:

```yaml
spec:
  network:
    vpc:
      ipv6: {}
```

By default, Amazon provides one fixed-size (`/56`) IPv6 CIDR block to a VPC.

### Bring-your-own IPv6 VPC CIDR (EC2)

If you own an IPv6 address space, you can import it into the AWS EC2 IPv6 address pool (see the [guide](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-byoip.html#byoip-requirements)). After importing it, you can assign `/56` ranges from the space to individual VPCs in the same account.

To define your own IPv6 address pool and CIDR, set the following values:

```yaml
spec:
  network:
    vpc:
      ipv6:
        # ... (address pool settings omitted)
        cidrBlock: "2009:1234:ff00::/56"
```

### Bring-your-own IPv6 VPC CIDR via VPC Address Manager (VPC IPAM)

If you want to allocate an IPv6 CIDR to the VPC from an existing VPC IPAM pool, define the pool ID and a prefix length as follows:

```yaml
spec:
  network:
    vpc:
      ipv6:
        ipamPool:
          id: ipam-pool-id
          netmaskLength: 56
```

By default, if you omit `netmaskLength`, CAPA sets it to the default of `56`.

### Bring-your-own IPv6 VPC

If you have an existing dualstack VPC that you would like to use, you must explicitly provide the IPv6 CIDR block and egress-only internet gateway ID:

```yaml
spec:
  network:
    vpc:
      id: vpc-1234567890abcdefg
      cidrBlock: 10.0.0.0/16
      ipv6:
        cidrBlock: "2001:1234:ff00::/56"
        egressOnlyInternetGatewayId: eigw-1234567890abcdefg
```

This has to be done explicitly to express the user's intention to use the IPv6 capabilities of the VPC.
## Mixing subnets of different IP families

CAPA allows you to define the AZs the subnets should be created in, the number of subnets per AZ, and whether a subnet is IPv4, dualstack, or IPv6-only. For example:
```yaml
spec:
  network:
    subnets:
      # This creates a dualstack public subnet in us-east-1a
      # Both cidrBlock defined + isIpv6==true
      - cidrBlock: 10.0.0.0/20
        isIpv6: true
        isPublic: true
        availabilityZone: us-east-1a
        id: ${CLUSTER_NAME}-subnet-public-us-east-1a
      # This creates a dualstack public subnet in us-east-1b
      # Both cidrBlock defined + isIpv6==true
      - cidrBlock: 10.0.16.0/20
        isIpv6: true
        isPublic: true
        availabilityZone: us-east-1b
        id: ${CLUSTER_NAME}-subnet-public-us-east-1b
      # This creates an IPv4 private subnet in us-east-1a
      # Only cidrBlock defined + isIpv6==false (default)
      - cidrBlock: 10.0.128.0/20
        isPublic: false
        availabilityZone: us-east-1a
        id: ${CLUSTER_NAME}-subnet-private-us-east-1a
      # This creates an IPv6-only private subnet in us-east-1a
      # cidrBlock is undefined + isIpv6==true
      - isPublic: false
        isIpv6: true
        availabilityZone: us-east-1a
        id: ${CLUSTER_NAME}-subnet-private-1-us-east-1a
      # This creates an IPv4 private subnet in us-east-1b
      # Only cidrBlock defined + isIpv6==false (default)
      - cidrBlock: 10.0.144.0/20
        isPublic: false
        availabilityZone: us-east-1b
        id: ${CLUSTER_NAME}-subnet-private-us-east-1b
      # This creates an IPv6-only private subnet in us-east-1b
      # cidrBlock is undefined + isIpv6==true
      - isPublic: false
        isIpv6: true
        availabilityZone: us-east-1b
        id: ${CLUSTER_NAME}-subnet-private-1-us-east-1b
    vpc:
      cidrBlock: 10.0.0.0/16
      # The VPC IPv6 CIDR will be allocated by AWS.
      ipv6: {}
  region: us-east-1
```
A subnet IP specification is defined as follows (applies to managed VPCs only):

| Subnet Type | `isIpv6` | `cidrBlock` | `ipv6CidrBlock` | Notes |
|-------------|----------|-------------|-----------------|-------|
| **IPv4-only** | `false` or omitted | Required | N/A | Traditional IPv4 subnet |
| **Dualstack** | `true` | Required | Optional | Auto-assigned from VPC CIDR if omitted |
| **IPv6-only** | `true` | Omitted/empty | Optional | Auto-assigned from VPC CIDR if omitted |
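
For instance, a dualstack subnet with an explicitly chosen IPv6 block could be declared as below (a sketch; the `ipv6CidrBlock` value is an illustrative placeholder and must be carved out of the VPC's IPv6 CIDR):

```yaml
spec:
  network:
    subnets:
      # Dualstack subnet with explicit IPv4 and IPv6 CIDRs
      - cidrBlock: 10.0.32.0/20
        ipv6CidrBlock: "2001:db8:1234:1a03::/64"
        isIpv6: true
        isPublic: true
        availabilityZone: us-east-1a
        id: ${CLUSTER_NAME}-subnet-public-2-us-east-1a
```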
## IPv6 support for Local and Wavelength zones

According to the AWS docs, the state of IPv6 support is as follows:

- ❌ No IPv6 support for Wavelength zones. See [reference](https://docs.aws.amazon.com/wavelength/latest/developerguide/wavelength-quotas.html#vpc-considerations).
- Limited support for Local zones, which requires a dedicated IPv6 CIDR for the Local Zone network border group. See [reference](https://docs.aws.amazon.com/local-zones/latest/ug/how-local-zones-work.html#considerations).

Thus, CAPA currently does not support creating IPv6-enabled subnets in Local and Wavelength zones.

However, if you have an existing VPC with IPv6-only or dualstack subnets in Local zones, you can define them in the cluster spec:
```yaml
spec:
  network:
    subnets:
      - id: "cluster-subnet-private-us-east-1-nyc-1a"
      - id: "cluster-subnet-public-us-east-1-nyc-1a"
      - id: "cluster-subnet-private-us-east-1-wl1-was-wlz-1"
      - id: "cluster-subnet-public-us-east-1-wl1-was-wlz-1"
    vpc:
      id: vpc-1234567890abcdefg
      cidrBlock: 10.0.0.0/16
      ipv6:
        cidrBlock: "2001:1234:ff00::/56"
        egressOnlyInternetGatewayId: eigw-1234567890abcdefg
```
