
Commit 8816d87

docs: add documentation for enabling IPv6 in non-EKS clusters
This combines the existing docs for IPv6 EKS clusters with those for non-EKS clusters, and also properly registers the topic page in the documentation TOC.

4 files changed: +137 −101 lines changed

docs/book/src/SUMMARY_PREFIX.md

Lines changed: 1 addition & 0 deletions
@@ -29,6 +29,7 @@
  - [Upgrades](./topics/rosa/upgrades.md)
  - [External Auth Providers](./topics/rosa/external-auth.md)
  - [Support](./topics/rosa/support.md)
+ - [Enabling IPv6](./topics/ipv6-enabled-cluster.md)
  - [Bring Your Own AWS Infrastructure](./topics/bring-your-own-aws-infrastructure.md)
  - [Specifying the IAM Role to use for Management Components](./topics/specify-management-iam-role.md)
  - [Using external cloud provider with EBS CSI driver](./topics/external-cloud-provider-with-ebs-csi-driver.md)

docs/book/src/topics/bring-your-own-aws-infrastructure.md

Lines changed: 6 additions & 0 deletions
@@ -29,6 +29,12 @@ In order to have Cluster API consume existing AWS infrastructure, you will need
  * Route table associations that provide connectivity to the Internet through a NAT gateway (for private subnets) or the Internet gateway (for public subnets)
  * VPC endpoints for `ec2`, `elasticloadbalancing`, `secretsmanager` and `autoscaling` (if using MachinePools) when the private subnets do not have a NAT gateway

+ If you enable IPv6 for the workload cluster, you will need to meet the following additional requirements:
+ - An IPv6 CIDR associated with the VPC (i.e. a dual-stack VPC).
+ - An egress-only internet gateway for IPv6 egress traffic from private subnets (only needed if the nodes require access to the Internet).
+ - In the route table associated with private subnets, a route that sends all internet-bound IPv6 traffic (`::/0`) to the egress-only internet gateway.
+ - (Optional) DNS64 enabled for private subnets, to allow IPv6-only workloads to access IPv4-only services via NAT64.
+
  You will need the ID of the VPC and subnet IDs that Cluster API should use. This information is available via the AWS Management Console or the AWS CLI.

  Note that there is no need to create an Elastic Load Balancer (ELB), security groups, or EC2 instances; Cluster API will take care of these items.

docs/book/src/topics/eks/ipv6-enabled-cluster.md

Lines changed: 0 additions & 101 deletions
This file was deleted.
docs/book/src/topics/ipv6-enabled-cluster.md

Lines changed: 130 additions & 0 deletions
@@ -0,0 +1,130 @@
# Enabling IPv6

## Overview

CAPA enables you to create IPv6 Kubernetes clusters on Amazon Web Services (AWS).

Only single-stack IPv6 clusters are supported. However, CAPA uses dual-stack infrastructure (e.g. a dual-stack VPC) to support IPv6; at the time of writing, this is the only mode of operation.

> **IMPORTANT NOTE**: Dual-stack clusters are not yet supported.

## Prerequisites

The instance types for control plane and worker machines must be [Nitro-based](https://docs.aws.amazon.com/ec2/latest/instancetypes/ec2-nitro-instances.html) in order to support IPv6. To see a list of Nitro instance types in your region, run the following command:

```bash
aws ec2 describe-instance-types \
  --filters Name=hypervisor,Values=nitro \
  --query="InstanceTypes[*].InstanceType"
```

## Creating IPv6 EKS-managed Clusters

To quickly deploy an IPv6 EKS cluster, use the [IPv6 EKS cluster template](https://raw.githubusercontent.com/kubernetes-sigs/cluster-api-provider-aws/refs/heads/main/templates/cluster-template-eks-ipv6.yaml), for example by rendering it with `clusterctl` as shown below.
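
A minimal sketch of rendering the template with `clusterctl generate cluster`, assuming the `eks-ipv6` flavor is available from your configured provider repository (the flavor name is inferred from the `cluster-template-<flavor>.yaml` naming convention); the cluster name and versions are placeholders:

```bash
# Set the variables the template expects (values are examples).
export AWS_REGION=us-west-2
export AWS_SSH_KEY_NAME=default

# Render the IPv6 EKS cluster template into a manifest and apply it
# to the management cluster.
clusterctl generate cluster my-ipv6-cluster \
  --flavor eks-ipv6 \
  --kubernetes-version v1.22.6 \
  --worker-machine-count 3 > my-ipv6-cluster.yaml

kubectl apply -f my-ipv6-cluster.yaml
```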

<aside class="note warning">

<h1>Warning</h1>

You can't define custom Pod CIDRs on EKS with IPv6. EKS automatically assigns an address range from the unique local address range `fc00::/7`.

</aside>

**Note**: all addons **must** be enabled. A working cluster configuration looks like this:

```yaml
kind: AWSManagedControlPlane
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
metadata:
  name: "${CLUSTER_NAME}-control-plane"
spec:
  network:
    vpc:
      ipv6: {}
  region: "${AWS_REGION}"
  sshKeyName: "${AWS_SSH_KEY_NAME}"
  version: "${KUBERNETES_VERSION}"
  addons:
    - name: "vpc-cni"
      version: "v1.11.0-eksbuild.1"
      # this is important, otherwise the environment property update will not work
      conflictResolution: "overwrite"
    - name: "coredns"
      version: "v1.8.7-eksbuild.1"
    - name: "kube-proxy"
      version: "v1.22.6-eksbuild.1"
```

## Creating IPv6 Self-managed Clusters

To quickly deploy an IPv6 self-managed cluster, use the [IPv6 cluster template](https://raw.githubusercontent.com/kubernetes-sigs/cluster-api-provider-aws/refs/heads/main/templates/cluster-template-ipv6.yaml).

When creating a self-managed cluster, you can define the Pod and Service CIDRs yourself. For example, you can use the ULA IPv6 range `fd01::/48` for pod networking and `fd02::/112` for service networking, as in the sketch below.
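
A minimal sketch of how these CIDRs map onto the `clusterNetwork` fields of the Cluster API `Cluster` object; the cluster name is a placeholder:

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: "${CLUSTER_NAME}"
spec:
  clusterNetwork:
    pods:
      # ULA range used for pod networking
      cidrBlocks: ["fd01::/48"]
    services:
      # ULA range used for service networking
      cidrBlocks: ["fd02::/112"]
```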

<aside class="note warning">

<h1>Warning</h1>

**Action required**: Since the coredns pods run on the single-stack IPv6 pod network, they will fail to resolve non-cluster DNS queries via the IPv4 upstream nameserver in `/etc/resolv.conf`.

The workaround is to edit the `coredns` ConfigMap to use the Route 53 Resolver nameserver `fd00:ec2::253`, by changing the `forward . /etc/resolv.conf` directive to `forward . fd00:ec2::253 /etc/resolv.conf`:

```bash
kubectl -n kube-system edit cm/coredns
```
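
After the edit, the `forward` directive in the Corefile should read as follows (illustrative excerpt; the surrounding directives are unchanged):

```text
.:53 {
    # ... other directives unchanged ...
    forward . fd00:ec2::253 /etc/resolv.conf
}
```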
</aside>

### CNI IPv6 support

By default, no CNI plugin is installed when provisioning a self-managed cluster. You need to install your own CNI solution that supports IPv6, for example Calico with VXLAN.

You can find guides for enabling [IPv6](https://docs.tigera.io/calico/latest/networking/ipam/ipv6#ipv6) and [VXLAN](https://docs.tigera.io/calico/latest/networking/configuring/vxlan-ipip) support for Calico in their official documentation. Alternatively, you can use the customized IPv6 Calico manifest [here](https://raw.githubusercontent.com/kubernetes-sigs/cluster-api-provider-aws/refs/heads/main/test/e2e/data/cni/calico_ipv6.yaml).
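
For instance, to install the customized manifest (a sketch; the kubeconfig path is a placeholder for your workload cluster's kubeconfig):

```bash
# Apply the customized IPv6 Calico manifest to the workload cluster.
kubectl --kubeconfig ./my-ipv6-cluster.kubeconfig apply -f \
  https://raw.githubusercontent.com/kubernetes-sigs/cluster-api-provider-aws/refs/heads/main/test/e2e/data/cni/calico_ipv6.yaml
```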

## IPv6 CIDR Allocations

### AWS-assigned IPv6 VPC CIDR

To request that AWS automatically assign an IPv6 CIDR from an AWS-defined address pool, use the following setting:

```yaml
spec:
  network:
    vpc:
      ipv6: {}
```

### BYOIPv6 VPC CIDR

To define your own IPv6 address pool and CIDR, set the following values:

```yaml
spec:
  network:
    vpc:
      ipv6:
        poolId: pool-id
        cidrBlock: "2009:1234:ff00::/56"
```

The address pool and its IPv6 CIDRs must already be provisioned.
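
One way to look up the ID of a provisioned BYOIP IPv6 pool is the `describe-ipv6-pools` command (a sketch; the query expression assumes the default output shape):

```bash
# List BYOIP IPv6 address pools and their CIDRs in the current region.
aws ec2 describe-ipv6-pools \
  --query "Ipv6Pools[*].{PoolId:PoolId,Cidrs:PoolCidrBlocks[*].Cidr}"
```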

### BYO IPv6 VPC

If you have an IPv6-enabled (i.e. dual-stack) VPC that you would like to use, define it in the `AWSCluster` spec:

```yaml
spec:
  network:
    vpc:
      id: vpc-1234567890abcdefg
      cidrBlock: 10.0.0.0/16
      ipv6:
        cidrBlock: "2001:1234:ff00::/56"
        egressOnlyInternetGatewayId: eigw-1234567890abcdefg
```

This has to be done explicitly because otherwise it would break in the following two scenarios:
- During an upgrade from v1.5 to >=v2.0 where the VPC is already IPv6-enabled but CAPA has only recently become aware of it.
- During a migration of the VPC from IPv4-only to dual-stack (CAPA would see that IPv6 is enabled and enforce it, which may not have been the user's intention).
