Commit b8e67bc

Merge pull request #90409 from seanmck/aks-why2rg
Added better explanation for the node resource group in AKS
2 parents: 3f37bb1 + f00c2cc

1 file changed: 4 additions, 2 deletions

articles/aks/faq.md

@@ -7,7 +7,7 @@ manager: jeconnoc
 
 ms.service: container-service
 ms.topic: article
-ms.date: 07/08/2019
+ms.date: 10/02/2019
 ms.author: mlearned
 ---

@@ -55,7 +55,9 @@ For Windows Server nodes (currently in preview in AKS), Windows Update does not
 
 ## Why are two resource groups created with AKS?
 
-Each AKS deployment spans two resource groups:
+AKS builds upon a number of Azure infrastructure resources, including virtual machine scale sets, virtual networks, and managed disks. This enables you to leverage many of the core capabilities of the Azure platform within the managed Kubernetes environment provided by AKS. For example, most Azure virtual machine types can be used directly with AKS, and Azure Reservations can be used to receive discounts on those resources automatically.
+
+To enable this architecture, each AKS deployment spans two resource groups:
 
 1. You create the first resource group. This group contains only the Kubernetes service resource. The AKS resource provider automatically creates the second resource group during deployment. An example of the second resource group is *MC_myResourceGroup_myAKSCluster_eastus*. For information on how to specify the name of this second resource group, see the next section.
 1. The second resource group, known as the *node resource group*, contains all of the infrastructure resources associated with the cluster. These resources include the Kubernetes node VMs, virtual networking, and storage. By default, the node resource group has a name like *MC_myResourceGroup_myAKSCluster_eastus*. AKS automatically deletes the node resource group whenever the cluster is deleted, so it should only be used for resources that share the cluster's lifecycle.
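
To make the two-group layout described above concrete, the following is a minimal Azure CLI sketch. The resource group name, cluster name, and node count are illustrative placeholders rather than values taken from this change; it assumes the `nodeResourceGroup` property returned by `az aks show` reports the automatically created second group.

```azurecli
# Create a cluster in a resource group you manage; AKS then creates the
# node resource group (for example, MC_myResourceGroup_myAKSCluster_eastus).
az aks create \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --node-count 1 \
    --generate-ssh-keys

# Read back the name of the automatically created node resource group.
az aks show \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --query nodeResourceGroup \
    --output tsv
```

Because AKS deletes the node resource group together with the cluster, only resources that are meant to share the cluster's lifecycle should be placed there.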
