---
title: Bring your own Container Network Interface (CNI) plugin (preview)
description: Learn how to use Azure Kubernetes Service (AKS) with your own Container Network Interface (CNI) plugin.
services: container-service
ms.topic: article
ms.date: 3/30/2022
---
# Bring your own Container Network Interface (CNI) plugin with Azure Kubernetes Service (AKS) (PREVIEW)

Kubernetes does not provide a network interface system by default; this functionality is provided by [network plugins][kubernetes-cni]. Azure Kubernetes Service (AKS) provides several supported CNI plugins. Documentation for the supported plugins can be found on the [networking concepts page][aks-network-concepts].

While the supported plugins meet most networking needs in Kubernetes, advanced AKS users may want to use the same CNI plugin they run in on-premises Kubernetes environments, or to take advantage of advanced functionality available only in other CNI plugins.

This article shows how to deploy an AKS cluster with no CNI plugin pre-installed, which allows you to install any third-party CNI plugin that works in Azure.

[!INCLUDE [preview features callout](./includes/preview/preview-callout.md)]

## Support

BYOCNI has support implications: Microsoft support cannot assist with CNI-related issues in clusters deployed with BYOCNI. For example, CNI-related issues cover most east-west (pod-to-pod) traffic, along with `kubectl proxy` and similar commands. If CNI-related support is needed, use a supported AKS network plugin, or procure support for the BYOCNI plugin from a third-party vendor.

Support will still be provided for issues that aren't CNI-related.

## Prerequisites

* For Azure Resource Manager (ARM) or Bicep, use at least template version 2022-01-02-preview.
* For Azure CLI, use at least version 0.5.55 of the `aks-preview` extension.
* The virtual network for the AKS cluster must allow outbound internet connectivity.
* AKS clusters may not use `169.254.0.0/16`, `172.30.0.0/16`, `172.31.0.0/16`, or `192.0.2.0/24` for the Kubernetes service address range, pod address range, or cluster virtual network address range.
* The cluster identity used by the AKS cluster must have at least [Network Contributor](../role-based-access-control/built-in-roles.md#network-contributor) permissions on the subnet within your virtual network. If you wish to define a [custom role](../role-based-access-control/custom-roles.md) instead of using the built-in Network Contributor role, the following permissions are required:
  * `Microsoft.Network/virtualNetworks/subnets/join/action`
  * `Microsoft.Network/virtualNetworks/subnets/read`
* The subnet assigned to the AKS node pool cannot be a [delegated subnet](../virtual-network/subnet-delegation-overview.md).
* AKS doesn't apply Network Security Groups (NSGs) to its subnet and doesn't modify any of the NSGs associated with that subnet. If you provide your own subnet and add NSGs associated with that subnet, you must ensure the security rules in the NSGs allow traffic within the node CIDR range. For more information, see [Network security groups][aks-network-nsg].

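If you choose the custom role route, the two permissions above can be captured in a role definition like the following sketch. The role name, description, and subscription placeholder are illustrative, not prescribed by AKS:

```json
{
  "Name": "AKS Subnet Joiner",
  "Description": "Illustrative custom role: minimal subnet permissions for an AKS cluster identity",
  "Actions": [
    "Microsoft.Network/virtualNetworks/subnets/join/action",
    "Microsoft.Network/virtualNetworks/subnets/read"
  ],
  "NotActions": [],
  "AssignableScopes": [
    "/subscriptions/<SubscriptionId>"
  ]
}
```

Saved as `role.json`, the role could be created with `az role definition create --role-definition @role.json` and then assigned to the cluster identity at the scope of the subnet.
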
## Cluster creation steps

### Install the aks-preview CLI extension

```azurecli-interactive
# Install the aks-preview extension
az extension add --name aks-preview

# Update the extension to make sure you have the latest version installed
az extension update --name aks-preview
```

### Deploy a cluster

# [Azure CLI](#tab/azure-cli)

Deploying a BYOCNI cluster requires passing the `--network-plugin` parameter with the value `none`.

1. First, create a resource group to create the cluster in:

    ```azurecli-interactive
    az group create -l <Region> -n <ResourceGroupName>
    ```

1. Then create the cluster itself:

    ```azurecli-interactive
    az aks create -l <Region> -g <ResourceGroupName> -n <ClusterName> --network-plugin none
    ```

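To confirm the cluster was created without a managed CNI, you can optionally query the cluster's network profile; for a BYOCNI cluster the result should be `none`:

```azurecli-interactive
az aks show -g <ResourceGroupName> -n <ClusterName> --query networkProfile.networkPlugin -o tsv
```
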
# [Azure Resource Manager](#tab/azure-resource-manager)

When using an Azure Resource Manager (ARM) template to deploy, set the `networkPlugin` property of the `networkProfile` object to `none`. See the [ARM template documentation][deploy-arm-template] for help with deploying this template, if needed.

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "clusterName": {
      "type": "string",
      "defaultValue": "aksbyocni"
    },
    "location": {
      "type": "string",
      "defaultValue": "[resourceGroup().location]"
    },
    "kubernetesVersion": {
      "type": "string",
      "defaultValue": "1.22"
    },
    "nodeCount": {
      "type": "int",
      "defaultValue": 3
    },
    "nodeSize": {
      "type": "string",
      "defaultValue": "Standard_B2ms"
    }
  },
  "resources": [
    {
      "type": "Microsoft.ContainerService/managedClusters",
      "apiVersion": "2022-02-02-preview",
      "name": "[parameters('clusterName')]",
      "location": "[parameters('location')]",
      "identity": {
        "type": "SystemAssigned"
      },
      "properties": {
        "agentPoolProfiles": [
          {
            "name": "nodepool1",
            "count": "[parameters('nodeCount')]",
            "mode": "System",
            "vmSize": "[parameters('nodeSize')]"
          }
        ],
        "dnsPrefix": "[parameters('clusterName')]",
        "kubernetesVersion": "[parameters('kubernetesVersion')]",
        "networkProfile": {
          "networkPlugin": "none"
        }
      }
    }
  ]
}
```
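
For example, assuming the template is saved locally as `aks-byocni.json` (an illustrative filename), it can be deployed into an existing resource group with the Azure CLI:

```azurecli-interactive
az deployment group create -g <ResourceGroupName> --template-file aks-byocni.json
```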

# [Bicep](#tab/bicep)

When using a Bicep template to deploy, set the `networkPlugin` property of the `networkProfile` object to `none`. See the [Bicep template documentation][deploy-bicep-template] for help with deploying this template, if needed.

```bicep
param clusterName string = 'aksbyocni'
param location string = resourceGroup().location
param kubernetesVersion string = '1.22'
param nodeCount int = 3
param nodeSize string = 'Standard_B2ms'

resource aksCluster 'Microsoft.ContainerService/managedClusters@2022-02-02-preview' = {
  name: clusterName
  location: location
  identity: {
    type: 'SystemAssigned'
  }
  properties: {
    agentPoolProfiles: [
      {
        name: 'nodepool1'
        count: nodeCount
        mode: 'System'
        vmSize: nodeSize
      }
    ]
    dnsPrefix: clusterName
    kubernetesVersion: kubernetesVersion
    networkProfile: {
      networkPlugin: 'none'
    }
  }
}
```
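
For example, assuming the file is saved locally as `aks-byocni.bicep` (an illustrative filename), it can be deployed into an existing resource group with the Azure CLI, which compiles the Bicep file automatically:

```azurecli-interactive
az deployment group create -g <ResourceGroupName> --template-file aks-byocni.bicep
```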

---

### Deploy a CNI plugin

When AKS provisioning completes, the cluster will be online, but all of the nodes will be in a `NotReady` state:

```bash
$ kubectl get nodes
NAME                                STATUS     ROLES   AGE    VERSION
aks-nodepool1-23902496-vmss000000   NotReady   agent   6m9s   v1.21.9

$ kubectl get node -o custom-columns='NAME:.metadata.name,STATUS:.status.conditions[?(@.type=="Ready")].message'
NAME                                STATUS
aks-nodepool1-23902496-vmss000000   container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
```

At this point, the cluster is ready for installation of a CNI plugin.

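As one illustration only (AKS doesn't prescribe a particular plugin), an open-source CNI such as Cilium can be installed with its Helm chart. The chart name and values below follow Cilium's published BYOCNI guidance and may change between releases, so check the plugin vendor's documentation:

```bash
# Fetch credentials for the new cluster
az aks get-credentials -g <ResourceGroupName> -n <ClusterName>

# Install the Cilium chart (values per Cilium's AKS BYOCNI documentation)
helm repo add cilium https://helm.cilium.io/
helm repo update
helm install cilium cilium/cilium --namespace kube-system \
  --set aksbyocni.enabled=true \
  --set nodeinit.enabled=true

# Nodes should transition to Ready once the plugin pods are running
kubectl get nodes -w
```
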
## Next steps

Learn more about networking in AKS in the following articles:

* [Use a static IP address with the Azure Kubernetes Service (AKS) load balancer](static-ip.md)
* [Use an internal load balancer with Azure Kubernetes Service (AKS)](internal-lb.md)
* [Create a basic ingress controller with external network connectivity][aks-ingress-basic]
* [Enable the HTTP application routing add-on][aks-http-app-routing]
* [Create an ingress controller that uses an internal, private network and IP address][aks-ingress-internal]
* [Create an ingress controller with a dynamic public IP and configure Let's Encrypt to automatically generate TLS certificates][aks-ingress-tls]
* [Create an ingress controller with a static public IP and configure Let's Encrypt to automatically generate TLS certificates][aks-ingress-static-tls]

<!-- LINKS - External -->
[kubernetes-cni]: https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/
[cni-networking]: https://github.com/Azure/azure-container-networking/blob/master/docs/cni.md
[kubenet]: https://kubernetes.io/docs/concepts/cluster-administration/network-plugins/#kubenet

<!-- LINKS - Internal -->
[az-aks-create]: /cli/azure/aks#az_aks_create
[aks-ssh]: ssh.md
[ManagedClusterAgentPoolProfile]: /azure/templates/microsoft.containerservice/managedclusters#managedclusteragentpoolprofile-object
[aks-network-concepts]: concepts-network.md
[aks-network-nsg]: concepts-network.md#network-security-groups
[aks-ingress-basic]: ingress-basic.md
[aks-ingress-tls]: ingress-tls.md
[aks-ingress-static-tls]: ingress-static-ip.md
[aks-http-app-routing]: http-application-routing.md
[aks-ingress-internal]: ingress-internal-ip.md
[az-extension-add]: /cli/azure/extension#az_extension_add
[az-extension-update]: /cli/azure/extension#az_extension_update
[az-feature-register]: /cli/azure/feature#az_feature_register
[az-feature-list]: /cli/azure/feature#az_feature_list
[az-provider-register]: /cli/azure/provider#az_provider_register
[network-policy]: use-network-policies.md
[nodepool-upgrade]: use-multiple-node-pools.md#upgrade-a-node-pool
[network-comparisons]: concepts-network.md#compare-network-models
[system-node-pools]: use-system-pools.md
[prerequisites]: configure-azure-cni.md#prerequisites
[deploy-arm-template]: ../azure-resource-manager/templates/deploy-cli.md
[deploy-bicep-template]: ../azure-resource-manager/bicep/deploy-cli.md