Commit 4ee7948

Merge pull request #270511 from dramasamy/dual-stack

[NotReleaseSpecific] NAKS dual-stack

2 parents e4b6ca7 + 48c09b9
File tree: 2 files changed (+206 −0 lines)
articles/operator-nexus/TOC.yml (2 additions, 0 deletions)

```diff
@@ -137,6 +137,8 @@
       - name: Nexus Kubernetes cluster
         expanded: false
         items:
+          - name: Create dual-stack cluster
+            href: howto-kubernetes-cluster-dual-stack.md
           - name: Understand agent pools
             href: howto-kubernetes-cluster-agent-pools.md
           - name: Connect to the cluster
```
articles/operator-nexus/howto-kubernetes-cluster-dual-stack.md (204 additions, 0 deletions — new file)

@@ -0,0 +1,204 @@
---
title: Create dual-stack Azure Operator Nexus Kubernetes cluster
description: Learn how to create a dual-stack Azure Operator Nexus Kubernetes cluster.
author: dramasamy
ms.author: dramasamy
ms.service: azure-operator-nexus
ms.topic: how-to
ms.date: 03/28/2024
ms.custom: template-how-to-pattern, devx-track-bicep
---

# Create dual-stack Azure Operator Nexus Kubernetes cluster

In this article, you learn how to create a dual-stack Nexus Kubernetes cluster. Dual-stack networking enables both IPv4 and IPv6 communication in a Kubernetes cluster, allowing for greater flexibility and scalability in network communication. This guide focuses on the configuration aspects and provides examples to help you create a dual-stack Nexus Kubernetes cluster effectively.

In a dual-stack Kubernetes cluster, both the nodes and the pods are configured with an IPv4 and an IPv6 network address. Any pod that runs on a dual-stack cluster is assigned both an IPv4 and an IPv6 address, and the cluster nodes' CNI (Container Network Interface) interface is also assigned both an IPv4 and an IPv6 address. However, any attached Multus interfaces, such as SR-IOV or DPDK, are the responsibility of the application owner and must be configured accordingly.

<!-- Network Address Translation (NAT) is configured to enable pods to access resources within the local network infrastructure. The source IP address of the traffic from the pods (either IPv4 or IPv6) is translated to the node's primary IP address corresponding to the same IP family (IPv4 to IPv4 and IPv6 to IPv6). This setup ensures seamless connectivity and resource access for the pods within the on-premises environment. -->

## Prerequisites

Before proceeding with this how-to guide, it's recommended that you:

* Refer to the Nexus Kubernetes cluster [QuickStart guide](./quickstarts-kubernetes-cluster-deployment-bicep.md) for a comprehensive overview of the steps involved.
* Ensure that you meet the outlined prerequisites for a smooth implementation of this guide.
* Have knowledge of Kubernetes concepts, including deployments and services.

## Limitations

* Single-stack IPv6-only isn't supported for node or pod IP addresses. Workload pods and services can use dual-stack (IPv4/IPv6).
* Kubernetes administration API access to the cluster is IPv4 only. Any kubeconfig must use IPv4, because kube-vip for the Kubernetes API server only sets up an IPv4 address.
* Network Address Translation (NAT) for IPv6 is disabled by default. If you need NAT for IPv6, you must enable it manually.

## Configuration options

Operator Nexus Kubernetes dual-stack networking relies on the pod and service CIDRs to enable both IPv4 and IPv6 communication. Before configuring dual-stack networking, it's important to understand the available configuration options. These options let you define the behavior and parameters of dual-stack networking according to your specific requirements.

### Required parameters

To configure dual-stack networking in your Operator Nexus Kubernetes cluster, you need to define the `Pod` and `Service` CIDRs. These configurations define the IP address ranges for pods and Kubernetes services in the cluster.

* The `podCidrs` parameter takes a list of CIDR notation IP ranges to assign pod IPs from. For example, `["10.244.0.0/16", "fd12:3456:789a::/64"]`.
* The `serviceCidrs` parameter takes a list of CIDR notation IP ranges to assign service IPs from. For example, `["10.96.0.0/16", "fd12:3456:789a:1::/108"]`.
* The IPv6 subnet assigned to `serviceCidrs` can be no larger than a `/108`.
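
These rules can be sanity-checked locally before you deploy. The following Python sketch is an illustrative helper (it isn't part of the Nexus tooling or the Bicep template); it uses the standard `ipaddress` module to confirm that each list contains one IPv4 and one IPv6 range, and that the IPv6 service CIDR is no larger than a `/108`:

```python
import ipaddress

def validate_dual_stack(pod_cidrs, service_cidrs):
    """Check the dual-stack CIDR rules described above."""
    for name, cidrs in (("podCidrs", pod_cidrs), ("serviceCidrs", service_cidrs)):
        # A dual-stack list must contain exactly the two IP families.
        families = {ipaddress.ip_network(c).version for c in cidrs}
        if families != {4, 6}:
            raise ValueError(f"{name} must contain one IPv4 and one IPv6 range")
    for cidr in service_cidrs:
        net = ipaddress.ip_network(cidr)
        # The IPv6 service subnet can be no larger than /108.
        if net.version == 6 and net.prefixlen < 108:
            raise ValueError(f"IPv6 serviceCidr {cidr} is larger than /108")

# Values from the example parameter file in this article.
validate_dual_stack(
    ["10.244.0.0/16", "fd12:3456:789a::/64"],
    ["10.96.0.0/16", "fd12:3456:789a:1::/108"],
)
```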

## Bicep template parameters for dual-stack configuration

The following JSON snippet shows the parameters required for creating a dual-stack cluster in the [QuickStart Bicep template](./quickstarts-kubernetes-cluster-deployment-bicep.md).

```json
"podCidrs": {
  "value": ["10.244.0.0/16", "fd12:3456:789a::/64"]
},
"serviceCidrs": {
  "value": ["10.96.0.0/16", "fd12:3456:789a:1::/108"]
},
```

To create a dual-stack cluster, update the `kubernetes-deploy-parameters.json` file that you created during the [QuickStart](./quickstarts-kubernetes-cluster-deployment-bicep.md). Include the Pod and Service CIDR configuration in this file according to your desired settings, and change the cluster name to ensure that a new cluster is created with the updated configuration.

After adding the Pod and Service CIDR configuration to your parameter file, you can proceed with deploying the Bicep template. This action sets up your new dual-stack cluster with the specified Pod and Service CIDR configuration.

By following these instructions, you can create a dual-stack Nexus Kubernetes cluster with the desired IP pool configuration and take advantage of dual-stack networking in your cluster services.

To enable dual-stack `LoadBalancer` services in your cluster, ensure that the [IP pools are configured](./howto-kubernetes-service-load-balancer.md) with both IPv4 and IPv6 addresses. This configuration allows the `LoadBalancer` service to allocate IP addresses from the specified pools, enabling effective communication between the services and the external network.

### Example parameters

This parameter file is intended to be used with the [QuickStart guide](./quickstarts-kubernetes-cluster-deployment-bicep.md) Bicep template for creating a dual-stack cluster. It contains the configuration settings needed to set up a dual-stack cluster with BGP load balancer functionality.

> [!IMPORTANT]
> These instructions are for creating a new Operator Nexus Kubernetes cluster. Avoid applying the Bicep template to an existing cluster, as Pod and Service CIDR configurations are immutable.

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "kubernetesClusterName": {
      "value": "dual-stack-cluster"
    },
    "adminGroupObjectIds": {
      "value": [
        "00000000-0000-0000-0000-000000000000"
      ]
    },
    "cniNetworkId": {
      "value": "/subscriptions/<subscription_id>/resourceGroups/<resource_group>/providers/Microsoft.NetworkCloud/l3Networks/<l3Network-name>"
    },
    "cloudServicesNetworkId": {
      "value": "/subscriptions/<subscription_id>/resourceGroups/<resource_group>/providers/Microsoft.NetworkCloud/cloudServicesNetworks/<csn-name>"
    },
    "extendedLocation": {
      "value": "/subscriptions/<subscription_id>/resourceGroups/<managed_resource_group>/providers/microsoft.extendedlocation/customlocations/<custom-location-name>"
    },
    "location": {
      "value": "eastus"
    },
    "sshPublicKey": {
      "value": "ssh-rsa AAAAB...."
    },
    "podCidrs": {
      "value": ["10.244.0.0/16", "fd12:3456:789a::/64"]
    },
    "serviceCidrs": {
      "value": ["10.96.0.0/16", "fd12:3456:789a:1::/108"]
    },
    "ipAddressPools": {
      "value": [
        {
          "addresses": ["<IPv4>/<CIDR>", "<IPv6>/<CIDR>"],
          "name": "<pool-name>",
          "autoAssign": "True",
          "onlyUseHostIps": "True"
        }
      ]
    }
  }
}
```
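
With the parameter file updated, the cluster can be deployed the same way as in the [QuickStart guide](./quickstarts-kubernetes-cluster-deployment-bicep.md). As a sketch only, the resource group and file names below are placeholders assumed from the QuickStart, not values defined in this article:

```azurecli
az deployment group create \
  --resource-group "<resource-group-name>" \
  --template-file kubernetes-deploy.bicep \
  --parameters @kubernetes-deploy-parameters.json
```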

## Inspect the nodes to see both IP families

Once the cluster is provisioned, confirm that the nodes are provisioned with dual-stack networking by using the `kubectl get nodes` command.

```bash-interactive
kubectl get nodes -o=custom-columns="NAME:.metadata.name,ADDRESSES:.status.addresses[?(@.type=='InternalIP')].address"
```

The output from the `kubectl get nodes` command shows that the nodes have addresses and pod IP assignment space from both IPv4 and IPv6.

```output
NAME                                              ADDRESSES
dual-stack-cluster-374cc36c-agentpool1-md-6ff45   10.14.34.20,fda0:d59c:da0a:e22:a8bb:ccff:fe6d:9e2a,fda0:d59c:da0a:e22::11,fe80::a8bb:ccff:fe6d:9e2a
dual-stack-cluster-374cc36c-agentpool1-md-dpmqv   10.14.34.22,fda0:d59c:da0a:e22:a8bb:ccff:fe80:f66f,fda0:d59c:da0a:e22::13,fe80::a8bb:ccff:fe80:f66f
dual-stack-cluster-374cc36c-agentpool1-md-tcqpf   10.14.34.21,fda0:d59c:da0a:e22:a8bb:ccff:fed5:a3fb,fda0:d59c:da0a:e22::12,fe80::a8bb:ccff:fed5:a3fb
dual-stack-cluster-374cc36c-control-plane-gdmz8   10.14.34.19,fda0:d59c:da0a:e22:a8bb:ccff:fea8:5a37,fda0:d59c:da0a:e22::10,fe80::a8bb:ccff:fea8:5a37
dual-stack-cluster-374cc36c-control-plane-smrxl   10.14.34.18,fda0:d59c:da0a:e22:a8bb:ccff:fe7b:cfa9,fda0:d59c:da0a:e22::f,fe80::a8bb:ccff:fe7b:cfa9
dual-stack-cluster-374cc36c-control-plane-tjfc8   10.14.34.17,10.14.34.14,fda0:d59c:da0a:e22:a8bb:ccff:feaf:21ec,fda0:d59c:da0a:e22::c,fe80::a8bb:ccff:feaf:21ec
```
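
If you want to check an `ADDRESSES` entry programmatically, the comma-separated list can be split by IP family. This Python snippet is purely illustrative (it isn't part of the guide's tooling) and uses the standard `ipaddress` module:

```python
import ipaddress

def split_by_family(addresses: str):
    """Split a comma-separated ADDRESSES value into IPv4 and IPv6 lists."""
    v4, v6 = [], []
    for addr in addresses.split(","):
        ip = ipaddress.ip_address(addr)
        (v4 if ip.version == 4 else v6).append(addr)
    return v4, v6

# First agent-pool node row from the output above.
v4, v6 = split_by_family(
    "10.14.34.20,fda0:d59c:da0a:e22:a8bb:ccff:fe6d:9e2a,"
    "fda0:d59c:da0a:e22::11,fe80::a8bb:ccff:fe6d:9e2a"
)
print(v4)  # the node's IPv4 address
print(v6)  # the node's IPv6 addresses, including the link-local fe80:: address
```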

## Create an example workload

Once the cluster is created, you can deploy your workloads. This article walks you through an example workload deployment of an NGINX web server.

### Deploy an NGINX web server

1. Create an NGINX web server using the `kubectl create deployment nginx` command.

    ```bash-interactive
    kubectl create deployment nginx --image=mcr.microsoft.com/cbl-mariner/base/nginx:1.22 --replicas=3
    ```

2. View the pod resources using the `kubectl get pods` command.

    ```bash-interactive
    kubectl get pods -o custom-columns="NAME:.metadata.name,IPs:.status.podIPs[*].ip,NODE:.spec.nodeName,READY:.status.conditions[?(@.type=='Ready')].status"
    ```

    The output shows that the pods have both IPv4 and IPv6 addresses. The pods don't show IP addresses until they're ready.

    ```output
    NAME                     IPs                                                  NODE                                              READY
    nginx-7d566f5967-gtqm8   10.244.31.200,fd12:3456:789a:0:9ca3:8a54:6c22:1fc8   dual-stack-cluster-374cc36c-agentpool1-md-6ff45   True
    nginx-7d566f5967-sctn2   10.244.106.73,fd12:3456:789a:0:1195:f83e:f6bd:4809   dual-stack-cluster-374cc36c-agentpool1-md-tcqpf   True
    nginx-7d566f5967-wh2rp   10.244.100.196,fd12:3456:789a:0:c296:3da:b545:aa04   dual-stack-cluster-374cc36c-agentpool1-md-dpmqv   True
    ```

### Expose the workload via a `LoadBalancer` type service

1. Expose the NGINX deployment using the `kubectl expose deployment nginx` command.

    ```bash-interactive
    kubectl expose deployment nginx --name=nginx --port=80 --type=LoadBalancer --overrides='{"spec":{"ipFamilyPolicy": "PreferDualStack", "ipFamilies": ["IPv4", "IPv6"]}}'
    ```

    You receive output that shows the service has been exposed.

    ```output
    service/nginx exposed
    ```

2. Once the deployment is exposed and the `LoadBalancer` service is fully provisioned, get the IP addresses of the service using the `kubectl get services` command.

    ```bash-interactive
    kubectl get services
    ```

    ```output
    NAME    TYPE           CLUSTER-IP     EXTERNAL-IP                                           PORT(S)        AGE
    nginx   LoadBalancer   10.96.119.27   10.14.35.240,fda0:d59c:da0a:e23:ffff:ffff:ffff:fffc   80:30122/TCP   10s
    ```

    ```bash-interactive
    kubectl get services nginx -o jsonpath='{.spec.clusterIPs}'
    ```

    ```output
    ["10.96.119.27","fd12:3456:789a:1::e6bb"]
    ```
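
The `--overrides` JSON used with `kubectl expose` earlier maps onto the Service spec fields `ipFamilyPolicy` and `ipFamilies`. If you prefer a declarative approach, the following manifest is an equivalent sketch (the `app: nginx` selector matches the label that `kubectl create deployment nginx` applies, and the file name is only an example):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: LoadBalancer
  # Request both IP families, preferring dual-stack when available.
  ipFamilyPolicy: PreferDualStack
  ipFamilies:
    - IPv4
    - IPv6
  selector:
    app: nginx
  ports:
    - port: 80
```

Apply it with `kubectl apply -f nginx-service.yaml` instead of running the `kubectl expose` command.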

## Next steps

You can try deploying a network function (NF) within your Nexus Kubernetes cluster by using the dual-stack addresses.
