**File:** articles/aks/use-multiple-node-pools.md (+97 −13 lines)
---
title: Use multiple node pools in Azure Kubernetes Service (AKS)
description: Learn how to create and manage multiple node pools for a cluster in Azure Kubernetes Service (AKS)
services: container-service
ms.topic: article
ms.date: 03/10/2020
---
# Create and manage multiple node pools for a cluster in Azure Kubernetes Service (AKS)
In Azure Kubernetes Service (AKS), nodes of the same configuration are grouped together into *node pools*. These node pools contain the underlying VMs that run your applications. The initial number of nodes and their size (SKU) are defined when you create an AKS cluster, which creates a *default node pool*. To support applications that have different compute or storage demands, you can create additional node pools. For example, use these additional node pools to provide GPUs for compute-intensive applications, or access to high-performance SSD storage.
> [!NOTE]
> This feature enables greater control over how to create and manage multiple node pools. As a result, separate commands are required for create, update, and delete operations. Previously, cluster operations through `az aks create` or `az aks update` used the managedCluster API and were the only option to change your control plane and a single node pool. This feature exposes a separate operation set for agent pools through the agentPool API and requires use of the `az aks nodepool` command set to run operations on an individual node pool.
The following limitations apply when you create and manage AKS clusters that support multiple node pools:
* The AKS cluster must use the Standard SKU load balancer to use multiple node pools; the feature is not supported with Basic SKU load balancers.
* The AKS cluster must use virtual machine scale sets for the nodes.
* The name of a node pool may contain only lowercase alphanumeric characters and must begin with a lowercase letter. For Linux node pools, the length must be between 1 and 12 characters; for Windows node pools, between 1 and 6 characters.
* All node pools must reside in the same virtual network and subnet.
* When creating multiple node pools at cluster create time, all Kubernetes versions used by node pools must match the version set for the control plane. This version can be updated after the cluster has been provisioned by using per-node-pool operations.
## Create an AKS cluster
An AKS cluster has two cluster resource objects with Kubernetes versions associated with them.
A control plane maps to one or many node pools. The behavior of an upgrade operation depends on which Azure CLI command is used.
Upgrading an AKS control plane requires using `az aks upgrade`. This command upgrades the control plane version and all node pools in the cluster.
Issuing the `az aks upgrade` command with the `--control-plane-only` flag upgrades only the cluster control plane. None of the associated node pools in the cluster are changed.
Upgrading individual node pools requires using `az aks nodepool upgrade`. This command upgrades only the target node pool with the specified Kubernetes version.
### Validation rules for upgrades
The valid Kubernetes upgrades for a cluster's control plane and node pools are validated by the following sets of rules.
* Rules for submitting an upgrade operation:
  * You cannot downgrade the control plane or a node pool Kubernetes version.
  * If a node pool Kubernetes version is not specified, behavior depends on the client being used. Declarations in Resource Manager templates fall back to the existing version defined for the node pool; if none is set, the control plane version is used.
  * You can either upgrade or scale a control plane or a node pool at a given time; you cannot submit multiple operations on a single control plane or node pool resource simultaneously.
## Scale a node pool manually
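As a minimal sketch (the resource group, cluster, and pool names below are illustrative placeholders, not taken from this section), a node pool can be scaled to a specific node count with `az aks nodepool scale`:

```azurecli-interactive
# Scale the node pool named mynodepool to 5 nodes.
# Substitute your own resource group, cluster, and pool names.
az aks nodepool scale \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name mynodepool \
    --node-count 5 \
    --no-wait
```

With `--no-wait`, the command returns immediately and the scale operation continues in the background.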
Only pods that have this taint applied can be scheduled on nodes in *gpunodepool*. Any other pod would be scheduled in the *nodepool1* node pool. If you create additional node pools, you can use additional taints and tolerations to limit what pods can be scheduled on those node resources.
## Specify a taint, label, or tag for a node pool
When creating a node pool, you can add taints, labels, or tags to that node pool. When you add a taint, label, or tag, all nodes within that node pool also get that taint, label, or tag.
To create a node pool with a taint, use [az aks nodepool add][az-aks-nodepool-add]. Specify the name *taintnp* and use the `--node-taints` parameter to specify *sku=gpu:NoSchedule* for the taint.
```azurecli-interactive
az aks nodepool add \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name taintnp \
    --node-count 1 \
    --node-taints sku=gpu:NoSchedule \
    --no-wait
```
The following example output from the [az aks nodepool list][az-aks-nodepool-list] command shows that *taintnp* is *Creating* nodes with the specified *nodeTaints*:
```console
$ az aks nodepool list -g myResourceGroup --cluster-name myAKSCluster

[
  {
    ...
    "count": 1,
    ...
    "name": "taintnp",
    "orchestratorVersion": "1.15.7",
    ...
    "provisioningState": "Creating",
    ...
    "nodeTaints": {
      "sku": "gpu:NoSchedule"
    },
    ...
  },
  ...
]
```
The taint information is visible in Kubernetes for handling scheduling rules for nodes.
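For example, a pod that should run on the tainted nodes needs a matching toleration in its spec. The following is a minimal sketch; the pod name and container image are illustrative placeholders:

```yaml
kind: Pod
apiVersion: v1
metadata:
  name: gpu-app                # illustrative name
spec:
  containers:
  - name: app
    image: mcr.microsoft.com/azuredocs/azure-vote-front:v1   # placeholder image
  tolerations:
  # Matches the sku=gpu:NoSchedule taint set on taintnp
  - key: "sku"
    operator: "Equal"
    value: "gpu"
    effect: "NoSchedule"
```

Pods without this toleration are not scheduled on nodes in *taintnp*.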
> [!IMPORTANT]
> To use node pool labels and tags, you need the *aks-preview* CLI extension version 0.4.35 or higher. Install the *aks-preview* Azure CLI extension using the [az extension add][az-extension-add] command, then check for any available updates using the [az extension update][az-extension-update] command:
>
> ```azurecli-interactive
> # Install the aks-preview extension
> az extension add --name aks-preview
>
> # Check for any available updates to the extension
> az extension update --name aks-preview
> ```
You can also add labels to a node pool during node pool creation. Labels set at the node pool are added to each node in the node pool. These [labels are visible in Kubernetes][kubernetes-labels] for handling scheduling rules for nodes.
To create a node pool with a label, use [az aks nodepool add][az-aks-nodepool-add]. Specify the name *labelnp* and use the `--labels` parameter to specify *dept=IT* and *costcenter=9999* for labels.
```azurecli-interactive
az aks nodepool add \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name labelnp \
    --node-count 1 \
    --labels dept=IT costcenter=9999 \
    --no-wait
```
> [!NOTE]
> Labels can only be set for node pools during node pool creation. Labels must also be a key/value pair and have a [valid syntax][kubernetes-label-syntax].
The following example output from the [az aks nodepool list][az-aks-nodepool-list] command shows that *labelnp* is *Creating* nodes with the specified *nodeLabels*:
```console
$ az aks nodepool list -g myResourceGroup --cluster-name myAKSCluster

[
  {
    ...
    "count": 1,
    ...
    "name": "labelnp",
    "orchestratorVersion": "1.15.7",
    ...
    "provisioningState": "Creating",
    ...
    "nodeLabels": {
      "dept": "IT",
      "costcenter": "9999"
    },
    ...
  },
  ...
]
```
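Once the pool is ready, a pod can target the labeled nodes with a `nodeSelector`. The following is a minimal sketch; the pod name and container image are illustrative placeholders:

```yaml
kind: Pod
apiVersion: v1
metadata:
  name: it-dept-app            # illustrative name
spec:
  nodeSelector:
    dept: IT                   # matches the label set on labelnp
  containers:
  - name: app
    image: mcr.microsoft.com/azuredocs/azure-vote-front:v1   # placeholder image
```

The scheduler then places this pod only on nodes carrying the `dept=IT` label.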
You can apply an Azure tag to node pools in your AKS cluster. Tags applied to a node pool are applied to each node within the node pool and are persisted through upgrades. Tags are also applied to new nodes added to a node pool during scale-out operations. Adding a tag can help with tasks such as policy tracking or cost estimation.
Create a node pool using the [az aks nodepool add][az-aks-nodepool-add] command. Specify the name *tagnodepool* and use the `--tags` parameter to specify *dept=IT* and *costcenter=9999* for tags.
```azurecli-interactive
az aks nodepool add \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name tagnodepool \
    --node-count 1 \
    --tags dept=IT costcenter=9999 \
    --no-wait
```
> [!WARNING]
> During the preview of assigning a public IP per node, it cannot be used with the *Standard Load Balancer SKU in AKS* due to possible load balancer rules conflicting with VM provisioning. As a result of this limitation, Windows agent pools are not supported with this preview feature. While in preview you must use the *Basic Load Balancer SKU* if you need to assign a public IP per node.
AKS nodes do not require their own public IP addresses for communication. However, some scenarios may require nodes in a node pool to have their own public IP addresses. An example is gaming, where a console needs to make a direct connection to a cloud virtual machine to minimize hops. This scenario can be achieved by registering for a separate preview feature, Node Public IP (preview).
```azurecli-interactive
az feature register --name NodePublicIPPreview --namespace Microsoft.ContainerService
```
After successful registration, deploy an Azure Resource Manager template following the same instructions as [above](#manage-node-pools-using-a-resource-manager-template) and add the boolean property `enableNodePublicIP` to `agentPoolProfiles`. Set the value to `true`; if not specified, it defaults to `false`. This is a create-time-only property and requires a minimum API version of 2019-06-01. It can be applied to both Linux and Windows node pools.
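A hedged sketch of the relevant fragment of an agent pool profile in such a template follows; surrounding template properties are abbreviated and the name, count, and VM size values are illustrative:

```json
"agentPoolProfiles": [
  {
    "name": "nodepool1",
    "count": 3,
    "vmSize": "Standard_DS2_v2",
    "osType": "Linux",
    "enableNodePublicIP": true
  }
]
```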
## Clean up resources
**File:** articles/key-vault/private-link-service.md (+77 −6 lines)
---
title: Integrate with Azure Private Link Service
description: Learn how to integrate Azure Key Vault with Azure Private Link Service
author: ShaneBala-keyvault
ms.author: sudbalas
ms.date: 03/08/2020
ms.service: key-vault
ms.topic: quickstart
---
# Integrate Key Vault with Azure Private Link
Azure Private Link Service enables you to access Azure Services (for example, Azure Key Vault, Azure Storage, and Azure Cosmos DB) and Azure hosted customer/partner services over a Private Endpoint in your virtual network.
Your private endpoint and virtual network must be in the same region.
Your private endpoint uses a private IP address in your virtual network.
## Establish a private link connection to Key Vault using the Azure portal
First, create a virtual network by following the steps in [Create a virtual network using the Azure portal](../virtual-network/quick-create-portal.md).
You can choose to create a private endpoint for any Azure resource by using this approach.
## Establish a private link connection to Key Vault using CLI
### Login to Azure CLI
```console
az login
```
### Select your Azure Subscription
```console
az account set --subscription {AZURE SUBSCRIPTION ID}
```

### Show the private endpoint

```console
az network private-endpoint show --resource-group {RG} --name {Private Endpoint Name}
```
## Manage private link connection
When you create a private endpoint, the connection must be approved. If the resource for which you are creating a private endpoint is in your directory, you will be able to approve the connection request provided you have sufficient permissions; if you are connecting to an Azure resource in another directory, you must wait for the owner of that resource to approve your connection request.
There are four provisioning states:

| Action | State | Description |
| -- | -- | -- |
| Reject | Rejected | Connection was rejected by the private link resource owner. |
| Remove | Disconnected | Connection was removed by the private link resource owner; the private endpoint becomes informative and should be deleted for cleanup. |
### How to manage a private endpoint connection to Key Vault using the Azure portal
1. Log in to the Azure portal.
1. In the search bar, type in "key vaults".
## How to manage a private endpoint connection to Key Vault using Azure CLI
### Approve a Private Link Connection Request
```console
az keyvault private-endpoint-connection approve --approval-description {"OPTIONAL DESCRIPTION"} --resource-group {RG} --vault-name {KEY VAULT NAME} --name {PRIVATE LINK CONNECTION NAME}
```
### Deny a Private Link Connection Request
```console
az keyvault private-endpoint-connection reject --rejection-description {"OPTIONAL DESCRIPTION"} --resource-group {RG} --vault-name {KEY VAULT NAME} --name {PRIVATE LINK CONNECTION NAME}
```
### Delete a Private Link Connection Request
```console
az keyvault private-endpoint-connection delete --resource-group {RG} --vault-name {KEY VAULT NAME} --name {PRIVATE LINK CONNECTION NAME}
```
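To confirm the resulting state after approving, rejecting, or deleting, the connection can be inspected with the `show` subcommand; a sketch assuming the same placeholder names used above:

```console
az keyvault private-endpoint-connection show --resource-group {RG} --vault-name {KEY VAULT NAME} --name {PRIVATE LINK CONNECTION NAME}
```

The output includes the connection's current provisioning state.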
## Validate that the private link connection works
You should validate that the resources within the same subnet of the private endpoint resource are connecting to your key vault over a private IP address, and that they have the correct private DNS zone integration.
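One way to check the DNS integration from a VM inside the virtual network is to resolve the vault's hostname and verify that the answer is the endpoint's private IP. The vault name below is a placeholder:

```console
# Run from a VM inside the virtual network; myvault is a placeholder.
nslookup myvault.vault.azure.net
# The name should resolve through the privatelink.vaultcore.azure.net zone
# to the private IP address of the endpoint (for example, 10.x.x.x).
```

If the name resolves to a public IP instead, the private DNS zone is not linked correctly to the virtual network.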