AKS-Hybrid/aks-hci-ip-address-planning.md: 23 additions, 1 deletion
@@ -50,9 +50,32 @@ Continuing with this example, and adding it to the following table, you get:
| AKS Arc VMs, K8s version upgrade and control plane IP | Reserve 16 IP addresses | Make this reservation through IP pools in the Azure Local logical network. |
| Load balancer IPs | 3 IP addresses for Kubernetes services, for Jane's voting application. | These IP addresses are used when you install a load balancer on cluster A. You can use the MetalLB Arc extension, or bring your own third-party load balancer. Ensure that these IPs are in the same subnet as the Arc VM logical network, but outside the IP pool defined in that logical network. |
+ #### Example CLI commands for IP address reservation for Kubernetes clusters and applications
+ This section describes the set of commands Jane runs for her scenario. First, create a logical network with an IP pool that has at least 16 IP addresses. We created the IP pool with 20 IP addresses to provide the option to scale on day N. For detailed information about parameter options in logical networks, see [`az stack-hci-vm network lnet create`](/cli/azure/stack-hci-vm/network/lnet#az-stack-hci-vm-network-lnet-create):
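The exact command isn't visible in this hunk, so here is a minimal sketch of the logical network step, assuming the parameters documented in `aks-hci-network-system-requirements.md` below; names, addresses, and the resource group are hypothetical placeholders, and parameters not listed in that table (`--address-prefixes`, `--dns-servers`, `--custom-location`) are assumed from the linked CLI reference:

```azurecli
# Sketch only: all names and addresses are placeholders, not values from the article.
# The pool reserves 20 addresses (10.220.32.18-10.220.32.37) for AKS Arc VMs,
# Kubernetes version upgrades, and the control plane IP.
az stack-hci-vm network lnet create \
  --name "lnet-aks" \
  --resource-group "jane-rg" \
  --custom-location "jane-custom-location" \
  --vm-switch-name "vm-switch-01" \
  --ip-allocation-method "Static" \
  --address-prefixes "10.220.32.0/24" \
  --gateway "10.220.32.1" \
  --dns-servers "10.220.32.2" \
  --ip-pool-start "10.220.32.18" \
  --ip-pool-end "10.220.32.37"
```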
+ Now you can enable the MetalLB load balancer with an IP pool of 3 IP addresses, in the same subnet as the Arc VM logical network. You can add more IP pools later if your application needs more addresses. For detailed requirements, see the [MetalLB Arc extension overview](load-balancer-overview.md).
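The load balancer commands are likewise not shown here. A hedged sketch, assuming the MetalLB Arc extension is driven through the `az k8s-runtime load-balancer` commands covered in the linked overview; the subscription, resource group, cluster name, load balancer name, and three-address range are hypothetical, and the range sits outside the IP pool created above:

```azurecli
# Sketch only: subscription, resource group, cluster name, and addresses are placeholders.
# The three addresses are in the logical network's subnet but outside its IP pool.
az k8s-runtime load-balancer enable \
  --resource-uri "subscriptions/<subscription-id>/resourceGroups/jane-rg/providers/Microsoft.Kubernetes/connectedClusters/cluster-a"

az k8s-runtime load-balancer create \
  --load-balancer-name "voting-app-lb" \
  --resource-uri "subscriptions/<subscription-id>/resourceGroups/jane-rg/providers/Microsoft.Kubernetes/connectedClusters/cluster-a" \
  --advertise-mode ARP \
  --addresses "10.220.32.47-10.220.32.49"
```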
### LNETs considerations for AKS clusters and Arc VMs
Logical networks on Azure Local are used by both AKS clusters and Arc VMs. You can configure logical networks in one of the following two ways:
- Share a logical network between AKS and Arc VMs.
- Define separate logical networks for AKS clusters and Arc VMs.
@@ -66,7 +89,6 @@ Sharing a logical network between AKS and Arc VMs on Azure Local offers the bene
|**Security considerations**| Increased risk of cross-communication vulnerabilities if not properly segmented. | Better security as each network can be segmented and isolated more strictly. |
|**Impact of network failures**| A failure in the shared network can affect both AKS and Arc VMs simultaneously. | A failure in one network affects only the workloads within that network, reducing overall risk. |
-
## IP address range allocation for pod CIDR and service CIDR
AKS-Hybrid/aks-hci-network-system-requirements.md: 2 additions, 0 deletions
@@ -37,6 +37,8 @@ The following parameters are required in order to use a logical network for AKS
|`--gateway`| Gateway. The gateway IP address must be within the scope of the address prefix. Usage: `--gateway 10.220.32.16`. ||
|`--ip-allocation-method`| The IP address allocation method. The only supported value is "Static". Usage: `--ip-allocation-method "Static"`. ||
|`--vm-switch-name`| The name of the VM switch. Usage: `--vm-switch-name "vm-switch-01"`. ||
+ |`--ip-pool-start`| If you use MetalLB or any other third-party load balancer in L2/ARP mode, we highly recommend using IP pools to separate AKS Arc IP requirements from load balancer IPs. This helps avoid IP address conflicts that can lead to unintended and hard-to-diagnose failures. This value is the start IP address of your IP pool. The address must be in the range of the address prefix. Usage: `--ip-pool-start "10.220.32.18"`. | Optional, but highly recommended. |
+ |`--ip-pool-end`| If you use MetalLB or any other third-party load balancer in L2/ARP mode, we highly recommend using IP pools to separate AKS Arc IP requirements from load balancer IPs. This helps avoid IP address conflicts that can lead to unintended and hard-to-diagnose failures. This value is the end IP address of your IP pool. The address must be in the range of the address prefix. Usage: `--ip-pool-end "10.220.32.38"`. | Optional, but highly recommended. |
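Putting the usage values from this table together, a logical network definition might look like the following sketch; the resource names are placeholders, and `--address-prefixes` and `--custom-location` are assumed from the `az stack-hci-vm network lnet create` reference rather than listed in this table:

```azurecli
# Sketch only: the IP pool (10.220.32.18-10.220.32.38, 21 addresses) is reserved for AKS Arc,
# keeping load balancer IPs elsewhere in the subnet to avoid conflicts.
az stack-hci-vm network lnet create \
  --name "<lnet-name>" \
  --resource-group "<resource-group>" \
  --custom-location "<custom-location>" \
  --vm-switch-name "vm-switch-01" \
  --ip-allocation-method "Static" \
  --address-prefixes "10.220.32.0/24" \
  --gateway "10.220.32.16" \
  --ip-pool-start "10.220.32.18" \
  --ip-pool-end "10.220.32.38"
```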
AKS-Hybrid/disable-windows-nodepool.md: 16 additions, 15 deletions
@@ -22,28 +22,28 @@ This how-to article walks you through how to disable the Windows nodepool featur
Before you begin, make sure you have the following prerequisites in place:
- - **Azure Local deployed**: This article is only applicable if you already deployed Azure Local. You cannot run the commands in this article before you deploy Azure Local. We currently do not support the ability to make this change before the initial Azure Local deployment.
- - **Custom Location ID**: Azure Resource Manager ID of the custom location. The custom location is configured during the Azure Local deployment. If you're in the Azure portal, go to the **Overview > Server** page in the Azure Stack HCI system resource. You should see a custom location for your cluster.
- - **Azure resource group**: The Azure resource group where Azure Local is deployed.
- - Azure RBAC permissions to update Azure Stack HCI configuration. Make sure you have the following roles. For more information, see [required permissions for deployment](/azure/azure-local/deploy/deployment-arc-register-server-permissions?tabs=powershell#assign-required-permissions-for-deployment):
+ - **Azure Local deployed**. This article applies only if you already deployed Azure Local, release 2411. You can't run the commands in this article before you deploy Azure Local release 2411; making this change before the initial deployment isn't currently supported.
+ - **Azure RBAC permissions to update Azure Local configuration**. Make sure you have the following roles. To learn more, see [required permissions for deployment](/hci/deploy/deployment-arc-register-server-permissions?tabs=powershell#assign-required-permissions-for-deployment):
- Azure Stack HCI Administrator
- Reader
+ - **Custom Location**. Name of the custom location. The custom location is configured during the Azure Local deployment. If you're in the Azure portal, go to the **Overview > Server** page in the Azure Local system resource. You should see a custom location for your cluster.
+ - **Azure resource group**. The Azure resource group in which Azure Local is deployed.
- ## Set environment variables
+ ## Recommended option: disable Windows nodepool from an Azure Cloud Shell session
To help simplify configuration, the following steps define environment variables that are referenced in this article. Remember to replace the values shown with your own values.
- Set the custom location and the resource group values in environment variables.\:
+ Set the custom location and the resource group values in environment variables:
```azurecli
- $customlocationID = <The custom location ARM ID for Azure Local>
- $resourceGroup = <The Azure resource group where Azure Local is deployed>
+ $customlocationName = <The custom location name for Azure Local>
+ $resourceGroup = <The Azure resource group in which Azure Local is deployed>
```
Next, run the following command to obtain the `clusterName` parameter. This parameter is the name of the Arc Resource Bridge that you deployed on Azure Local:
```azurecli
- az customlocation show -n $customlocationID -g $resourceGroup --query hostResourceId
+ az customlocation show -n $customlocationName -g $resourceGroup --query hostResourceId
```
Expected output:
@@ -77,22 +77,23 @@ You should have two extensions installed on your custom location: AKS Arc and Ar
$extensionName = <Name of AKS Arc extension you deployed on the custom location>
```
- Once you have the extension name, create variables for the following parameters.
+ After you have the extension name, create variables for the following parameters, and then disable the Windows nodepool feature:
```azurecli
$extensionVersion = "$(az k8s-extension show -n $extensionName -g $resourceGroup -c $clusterName --cluster-type appliances --query version -o tsv)"
- ## Update the AKS Arc extension to disable the Windows nodepool feature
+ ## Alternate option: disable Windows nodepool after connecting to an Azure Local physical node via Remote Desktop
- After you set the environment variables, you can run the following command from an Azure CloudShell session to update the AKS Arc k8s extension. This command disables the Windows nodepool feature and deletes any associated VHDs:
+ If you can't use Azure Cloud Shell or a machine with connectivity to Azure to disable the Windows nodepool, you can disable it after connecting to any one of the Azure Local physical nodes with Remote Desktop. You must first sign in to Azure:
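The sign-in commands fall outside this hunk. A minimal sketch, assuming device-code sign-in from the node and a placeholder subscription ID:

```azurecli
# Sketch only: sign in interactively, then select the subscription that contains Azure Local.
az login --use-device-code
az account set --subscription "<subscription-id>"
```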
- ## Validate if the Windows nodepool feature is disabled
+ ### Validate if the Windows nodepool feature is disabled
You can check if the configuration settings were applied by running `az k8s-extension show`, as follows:
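The full command isn't shown in this hunk; a sketch reusing the parameters from the earlier `az k8s-extension show` call, with the `configurationSettings` query property as an assumption about the output shape:

```azurecli
# Sketch only: list the extension's settings to confirm the Windows nodepool change was applied.
# The configurationSettings property name is an assumption about the show output.
az k8s-extension show -n $extensionName -g $resourceGroup -c $clusterName --cluster-type appliances --query configurationSettings
```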
@@ -111,7 +112,7 @@ Expected output:
Next, check if Windows nodepools were disabled by running the following command:
```azurecli
- az aksarc get-versions --resource-group $resourceGroup --custom-location $customlocationID
+ az aksarc get-versions --resource-group $resourceGroup --custom-location $customlocationName
```
The output for `osType=Windows` should say "Windows nodepool feature is disabled" and the `ready` state should be `false`, for each Kubernetes version option:
@@ -154,5 +155,5 @@ The Windows VHDs that were previously downloaded are automatically deleted if th
## Next steps
- - [What's new in AKS on Azure Stack HCI](aks-overview.md)
+ - [What's new in AKS on Azure Local](aks-overview.md)
- [Create AKS clusters](aks-create-clusters-cli.md)