`articles/aks/use-byo-cni.md` (15 additions & 4 deletions)
````diff
@@ -3,17 +3,19 @@ title: Bring your own Container Network Interface (CNI) plugin
 description: Learn how to utilize Azure Kubernetes Service with your own Container Network Interface (CNI) plugin
 services: container-service
 ms.topic: article
-ms.date: 05/16/2022
+ms.date: 3/30/2022
 ---
 
-# Bring your own Container Network Interface (CNI) plugin with Azure Kubernetes Service (AKS)
+# Bring your own Container Network Interface (CNI) plugin with Azure Kubernetes Service (AKS) (PREVIEW)
 
 Kubernetes does not provide a network interface system by default; this functionality is provided by [network plugins][kubernetes-cni]. Azure Kubernetes Service provides several supported CNI plugins. Documentation for supported plugins can be found from the [networking concepts page][aks-network-concepts].
 
 While the supported plugins meet most networking needs in Kubernetes, advanced users of AKS may desire to utilize the same CNI plugin used in on-premises Kubernetes environments or to make use of specific advanced functionality available in other CNI plugins.
 
 This article shows how to deploy an AKS cluster with no CNI plugin pre-installed, which allows for installation of any third-party CNI plugin that works in Azure.
 
+[!INCLUDE [preview features callout](./includes/preview/preview-callout.md)]
+
 ## Support
 
 BYOCNI has support implications - Microsoft support will not be able to assist with CNI-related issues in clusters deployed with BYOCNI. For example, CNI-related issues would cover most east/west (pod to pod) traffic, along with `kubectl proxy` and similar commands. If CNI-related support is desired, a supported AKS network plugin can be used or support could be procured for the BYOCNI plugin from a third-party vendor.
````
````diff
@@ -22,8 +24,8 @@ Support will still be provided for non-CNI-related issues.
 
 ## Prerequisites
 
-* For ARM/Bicep, use at least template version `2022-01-02-preview`
-* For Azure CLI, use at least version `2.37.0`
+* For ARM/Bicep, use at least template version 2022-01-02-preview
+* For Azure CLI, use at least version 0.5.55 of the `aks-preview` extension
 * The virtual network for the AKS cluster must allow outbound internet connectivity.
 * AKS clusters may not use `169.254.0.0/16`, `172.30.0.0/16`, `172.31.0.0/16`, or `192.0.2.0/24` for the Kubernetes service address range, pod address range, or cluster virtual network address range.
 * The cluster identity used by the AKS cluster must have at least [Network Contributor](../role-based-access-control/built-in-roles.md#network-contributor) permissions on the subnet within your virtual network. If you wish to define a [custom role](../role-based-access-control/custom-roles.md) instead of using the built-in Network Contributor role, the following permissions are required:
````
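The Network Contributor requirement on the subnet can be sketched with the Azure CLI. This is an illustrative sketch, not part of the PR: the resource group, virtual network, subnet, and identity values below are placeholders you would substitute with your own.

```shell
# Look up the resource ID of the node subnet.
# "myResourceGroup", "myVnet", and "mySubnet" are placeholder names.
SUBNET_ID=$(az network vnet subnet show \
  --resource-group myResourceGroup \
  --vnet-name myVnet \
  --name mySubnet \
  --query id --output tsv)

# Grant the cluster identity Network Contributor scoped to that subnet only.
# <identity-principal-id> is a placeholder for your cluster identity's principal ID.
az role assignment create \
  --assignee "<identity-principal-id>" \
  --role "Network Contributor" \
  --scope "$SUBNET_ID"
```

Scoping the assignment to the subnet rather than the whole virtual network keeps the identity's permissions minimal.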
````diff
@@ -34,6 +36,15 @@ Support will still be provided for non-CNI-related issues.
 
 ## Cluster creation steps
 
+### Install the aks-preview CLI extension
+
+```azurecli-interactive
+# Install the aks-preview extension
+az extension add --name aks-preview
+# Update the extension to make sure you have the latest version installed
````
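With the `aks-preview` extension installed, a BYOCNI cluster is created by omitting any CNI plugin. A minimal sketch, assuming placeholder resource names and the `--network-plugin none` flag this feature exposes:

```shell
# Placeholder names for illustration; substitute your own.
az group create --location eastus --name myResourceGroup

# Create an AKS cluster with no CNI plugin pre-installed.
az aks create \
  --location eastus \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --network-plugin none
```

Nodes will report `NotReady` until a third-party CNI plugin is installed into the cluster.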
`articles/aks/use-multiple-node-pools.md` (3 additions & 0 deletions)
````diff
@@ -124,6 +124,9 @@ The following example output shows that *mynodepool* has been successfully created
 
 A workload may require splitting a cluster's nodes into separate pools for logical isolation. This isolation can be supported with separate subnets dedicated to each node pool in the cluster. This can address requirements such as having non-contiguous virtual network address space to split across node pools.
 
+> [!NOTE]
+> Make sure to use Azure CLI version `2.35.0` or later.
+
 #### Limitations
 
 * All subnets assigned to nodepools must belong to the same virtual network.
````
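The per-node-pool subnet setup described in this hunk can be sketched as follows. The cluster, pool, and subnet names are placeholders, not values taken from the PR:

```shell
# Add a node pool bound to its own dedicated subnet (Azure CLI 2.35.0 or later).
# The subscription ID and resource names in the subnet ID are placeholders;
# the subnet must belong to the same virtual network as the cluster's other subnets.
az aks nodepool add \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name mynodepool \
  --node-count 3 \
  --vnet-subnet-id "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Network/virtualNetworks/myVnet/subnets/poolSubnet"
```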