
Commit ddacd99

docs: add documentations for enabling IPv6 in non-eks clusters
This combines existing docs for IPv6 EKS clusters with non-EKS ones, and also properly register the topic page into the documentation TOC.
1 parent a21f6b9 commit ddacd99

4 files changed: +143, -100 lines changed


docs/book/src/SUMMARY_PREFIX.md

Lines changed: 1 addition & 0 deletions
@@ -29,6 +29,7 @@
- [Upgrades](./topics/rosa/upgrades.md)
- [External Auth Providers](./topics/rosa/external-auth.md)
- [Support](./topics/rosa/support.md)
- [Enabling IPv6](./topics/ipv6-enabled-cluster.md)
- [Bring Your Own AWS Infrastructure](./topics/bring-your-own-aws-infrastructure.md)
- [Specifying the IAM Role to use for Management Components](./topics/specify-management-iam-role.md)
- [Using external cloud provider with EBS CSI driver](./topics/external-cloud-provider-with-ebs-csi-driver.md)

docs/book/src/topics/bring-your-own-aws-infrastructure.md

Lines changed: 6 additions & 0 deletions
@@ -29,6 +29,12 @@ In order to have Cluster API consume existing AWS infrastructure, you will need
* Route table associations that provide connectivity to the Internet through a NAT gateway (for private subnets) or the Internet gateway (for public subnets)
* VPC endpoints for `ec2`, `elasticloadbalancing`, `secretsmanager` and `autoscaling` (if using MachinePools) when the private subnets do not have a NAT gateway

If you enable IPv6 for the workload cluster, you will need to meet the following additional requirements (see the CLI sketch after this list):
- An IPv6 CIDR associated with the VPC (i.e. a dual-stack VPC).
- An egress-only internet gateway for IPv6 egress traffic from private subnets (only needed if the nodes require access to the Internet).
- In the route table associated with private subnets, a route that sends all internet-bound IPv6 traffic (`::/0`) to the egress-only internet gateway.
- (Optional) DNS64 enabled on the private subnets so that IPv6-only workloads can reach IPv4-only services via NAT64.
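The following AWS CLI sketch illustrates one way to satisfy these requirements on an existing VPC; the VPC, route table, and gateway IDs are placeholders to replace with your own values:

```bash
# Associate an Amazon-provided IPv6 CIDR with the existing VPC (making it dual-stack).
aws ec2 associate-vpc-cidr-block \
  --vpc-id vpc-0123456789abcdef0 \
  --amazon-provided-ipv6-cidr-block

# Create an egress-only internet gateway for IPv6 egress from private subnets.
aws ec2 create-egress-only-internet-gateway --vpc-id vpc-0123456789abcdef0

# In the private route table, send all internet-bound IPv6 traffic through it.
aws ec2 create-route \
  --route-table-id rtb-0123456789abcdef0 \
  --destination-ipv6-cidr-block ::/0 \
  --egress-only-internet-gateway-id eigw-0123456789abcdef0
```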
You will need the ID of the VPC and subnet IDs that Cluster API should use. This information is available via the AWS Management Console or the AWS CLI.

Note that there is no need to create an Elastic Load Balancer (ELB), security groups, or EC2 instances; Cluster API will take care of these items.
Lines changed: 0 additions & 100 deletions
@@ -1,101 +1 @@
# IPv6 Enabled Cluster

CAPA supports IPv6 enabled clusters. Dual stack clusters are not yet supported, but
dual VPC, meaning both ipv6 and ipv4 are defined, is supported and in fact, it's the
only mode of operation at the writing of this doc.

Upcoming feature will be IPv6 _only_.

## Managed Clusters

### How to set up

Two modes of operations are supported. Request AWS to generate and assign an address
or BYOIP which is Bring Your Own IP. There must already be a provisioned pool and a
set of IPv6 CIDRs for that.

#### Automatically Generated IP

To request AWS to assign a set of IPv6 addresses from an AWS defined address pool,
use the following setting:

```yaml
kind: AWSManagedControlPlane
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
metadata:
  name: "${CLUSTER_NAME}-control-plane"
spec:
  network:
    vpc:
      ipv6: {}
```

#### BYOIP ( Bring Your Own IP )

To define your own IPv6 address pool and CIDR set the following values:

```yaml
spec:
  network:
    vpc:
      ipv6:
        poolId: pool-id
        cidrBlock: "2009:1234:ff00::/56"
```

If you have a VPC that is IPv6 enabled and you would like to use it, please define it in the config:

```yaml
spec:
  network:
    vpc:
      ipv6: {}
```

This has to be done explicitly because otherwise, it would break in the following two scenarios:
- During an upgrade from 1.5 to >=2.0 where the VPC is ipv6 enabled, but CAPA was only recently made aware
- During a migration on the VPC, switching it from only IPv4 to Dual Stack ( it would see that ipv6 is enabled and
enforce it while doing that would not have been the intention of the user )

### Requirements

The use of a Nitro enabled instance is required. To see a list of nitro instances in your region
run the following command:

```bash
aws ec2 describe-instance-types --filters Name=hypervisor,Values=nitro --region us-west-2 | grep "InstanceType"
```

This will list all available Nitro hypervisor based instances in your region.

All addons **must** be enabled. A working cluster configuration looks like this:

```yaml
kind: AWSManagedControlPlane
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
metadata:
  name: "${CLUSTER_NAME}-control-plane"
spec:
  network:
    vpc:
      ipv6: {}
  region: "${AWS_REGION}"
  sshKeyName: "${AWS_SSH_KEY_NAME}"
  version: "${KUBERNETES_VERSION}"
  addons:
    - name: "vpc-cni"
      version: "v1.11.0-eksbuild.1"
      conflictResolution: "overwrite" # this is important, otherwise environment property update will not work
    - name: "coredns"
      version: "v1.8.7-eksbuild.1"
    - name: "kube-proxy"
      version: "v1.22.6-eksbuild.1"
```

You can't define custom POD CIDRs on EKS with IPv6. EKS automatically assigns an address range from a unique local
address range of `fc00::/7`.

## Unmanaged Clusters

Unmanaged clusters are not supported at this time.
Lines changed: 136 additions & 0 deletions
@@ -0,0 +1,136 @@
# Enabling IPv6

## Overview

CAPA enables you to create IPv6 Kubernetes clusters on Amazon Web Services (AWS).

Only single-stack IPv6 clusters are supported. However, CAPA utilizes dual-stack infrastructure (e.g. a dual-stack VPC) to support IPv6. In fact, it is the only mode of operation at the time of writing.

> **IMPORTANT NOTE**: Dual-stack clusters are not yet supported.

## Prerequisites

The instance types for control plane and worker machines must be [Nitro-based](https://docs.aws.amazon.com/ec2/latest/instancetypes/ec2-nitro-instances.html) in order to support IPv6. To see a list of Nitro instance types in your region, run the following command:

```bash
aws ec2 describe-instance-types \
  --filters Name=hypervisor,Values=nitro \
  --query="InstanceTypes[*].InstanceType"
```

## Creating IPv6 EKS-managed Clusters

To quickly deploy an IPv6 EKS cluster, use the [IPv6 EKS cluster template](https://raw.githubusercontent.com/kubernetes-sigs/cluster-api-provider-aws/refs/heads/main/templates/cluster-template-eks-ipv6.yaml).
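As an illustrative sketch (the cluster name, Kubernetes version, and worker count below are placeholder values, and any variables referenced by the template, such as `AWS_REGION` and `AWS_SSH_KEY_NAME`, must be set in your environment), you could render and apply the template with `clusterctl`:

```bash
# Render a cluster manifest from the EKS IPv6 template and apply it to the management cluster.
clusterctl generate cluster ipv6-eks-demo \
  --from https://raw.githubusercontent.com/kubernetes-sigs/cluster-api-provider-aws/refs/heads/main/templates/cluster-template-eks-ipv6.yaml \
  --kubernetes-version v1.27.3 \
  --worker-machine-count 2 \
  > ipv6-eks-demo.yaml

kubectl apply -f ipv6-eks-demo.yaml
```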
<aside class="note warning">

<h1>Warning</h1>

You can't define custom Pod CIDRs on EKS with IPv6. EKS automatically assigns an address range from the unique local
address range `fc00::/7`.

</aside>

**Note**: All addons **must** be enabled. A working cluster configuration looks like this:

```yaml
kind: AWSManagedControlPlane
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
metadata:
  name: "${CLUSTER_NAME}-control-plane"
spec:
  network:
    vpc:
      ipv6: {}
  region: "${AWS_REGION}"
  sshKeyName: "${AWS_SSH_KEY_NAME}"
  version: "${KUBERNETES_VERSION}"
  addons:
    - name: "vpc-cni"
      version: "v1.11.0-eksbuild.1"
      # this is important, otherwise environment property update will not work
      conflictResolution: "overwrite"
    - name: "coredns"
      version: "v1.8.7-eksbuild.1"
    - name: "kube-proxy"
      version: "v1.22.6-eksbuild.1"
```

## Creating IPv6 Self-managed Clusters

To quickly deploy an IPv6 self-managed cluster, use the [IPv6 cluster template](https://raw.githubusercontent.com/kubernetes-sigs/cluster-api-provider-aws/refs/heads/main/templates/cluster-template-ipv6.yaml).

When creating a self-managed cluster, you can define the Pod and Service CIDRs. For example, you can use the ULA IPv6 range `fd01::/48` for pod networking and `fd02::/112` for service networking, as in the sketch below.
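As a minimal sketch of where those CIDRs go, they map onto the `clusterNetwork` fields of the Cluster API `Cluster` object (the cluster name below is a placeholder, and your template may already set these fields):

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: ipv6-cluster              # placeholder name
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["fd01::/48"]   # ULA range for pod networking
    services:
      cidrBlocks: ["fd02::/112"]  # ULA range for service networking
```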
<aside class="note warning">

<h1>Warning</h1>

**Action required**: Since coredns pods run on the single-stack IPv6 pod network, they will fail to resolve non-cluster DNS queries
via the IPv4 upstream nameserver in `/etc/resolv.conf`.

Here are the workaround options:
- Edit the `coredns` deployment and add `hostNetwork: true`, so it can leverage host routes for the IPv4 network.
  ```bash
  kubectl -n kube-system patch deploy/coredns \
    --type=merge -p '{"spec": {"template": {"spec":{"hostNetwork": true}}}}'
  ```
- Edit the `coredns` ConfigMap to use the Route53 Resolver nameserver `fd00:ec2::253`, by changing the `forward . /etc/resolv.conf` directive to `forward . fd00:ec2::253 /etc/resolv.conf`.
  ```bash
  kubectl -n kube-system edit cm/coredns
  ```

</aside>

### CNI IPv6 support

By default, no CNI plugin is installed when provisioning a self-managed cluster. You need to install your own CNI solution that supports IPv6, for example, Calico with VXLAN.

You can find the guides for enabling [IPv6](https://docs.tigera.io/calico/latest/networking/ipam/ipv6#ipv6) and [VXLAN](https://docs.tigera.io/calico/latest/networking/configuring/vxlan-ipip) support for Calico in their official documentation. Alternatively, you can use the customized Calico manifest [here](https://raw.githubusercontent.com/kubernetes-sigs/cluster-api-provider-aws/refs/heads/main/test/e2e/data/cni/calico_ipv6.yaml) for IPv6.
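For example, the customized manifest above can be applied directly to the workload cluster (assuming your kubeconfig points at the workload cluster):

```bash
kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/cluster-api-provider-aws/refs/heads/main/test/e2e/data/cni/calico_ipv6.yaml
```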
## IPv6 CIDR Allocations

### AWS-assigned IPv6 VPC CIDR

To request AWS to automatically assign an IPv6 CIDR from an AWS-defined address pool, use the following setting:

```yaml
spec:
  network:
    vpc:
      ipv6: {}
```

### BYOIPv6 VPC CIDR

To define your own IPv6 address pool and CIDR, set the following values:

```yaml
spec:
  network:
    vpc:
      ipv6:
        poolId: pool-id
        cidrBlock: "2009:1234:ff00::/56"
```

The pool and its set of IPv6 CIDRs must already be provisioned.
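If you are unsure which pools are available, one way to list the BYOIP IPv6 pools in your account and region (a sketch; adjust the query to your needs) is:

```bash
# List the IDs of BYOIP IPv6 address pools available in this account and region.
aws ec2 describe-ipv6-pools --query "Ipv6Pools[*].PoolId"
```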
### BYO IPv6 VPC

If you have a VPC that is IPv6 enabled (i.e. a dual-stack VPC) and you would like to use it, define it in the `AWSCluster` spec:

```yaml
spec:
  network:
    vpc:
      id: vpc-1234567890abcdefg
      cidrBlock: 10.0.0.0/16
      ipv6:
        cidrBlock: "2001:1234:ff00::/56"
        egressOnlyInternetGatewayId: eigw-1234567890abcdefg
```

This has to be done explicitly because otherwise it would break in the following two scenarios:
- During an upgrade from 1.5 to >=2.0 where the VPC is IPv6 enabled but CAPA only recently became aware of it.
- During a migration of the VPC from IPv4-only to dual stack (CAPA would see that IPv6 is enabled and enforce it, even though that was not the user's intention).
