AKS-Hybrid/aks-arc-diagnostic-checker.md (+3 −3)
@@ -22,14 +22,14 @@ It can be difficult to identify environment-related issues, such as networking c
Before you begin, make sure you have the following prerequisites. If you don't meet the requirements for running the diagnostic checker tool, [file a support request](aks-troubleshoot.md#open-a-support-request):
-- Direct access to the Azure Stack HCI cluster where you created the AKS cluster. This access can be through remote desktop (RDP), or you can also sign in to one of the Azure Stack HCI physical nodes.
+- Direct access to the Azure Local cluster where you created the AKS cluster. This access can be through remote desktop (RDP), or you can also sign in to one of the Azure Local physical nodes.
- Review the [networking concepts for creating an AKS cluster](aks-hci-network-system-requirements.md) and the [AKS cluster architecture](cluster-architecture.md).
- The name of the logical network attached to the AKS cluster.
- An SSH private key for the AKS cluster, used to sign in to the AKS cluster [control plane node](cluster-architecture.md#control-plane-nodes) VM.
## Obtain control plane node VM IP of the AKS cluster
-Run the following command from any one physical node in your Azure Stack HCI cluster. Ensure that you're passing the name, and not the Azure Resource Manager ID of the AKS cluster:
+Run the following command from any one physical node in your Azure Local cluster. Ensure that you're passing the name, and not the Azure Resource Manager ID of the AKS cluster:

-This article describes how to create Kubernetes clusters in Azure Stack HCI using Azure CLI. The workflow is as follows:
+This article describes how to create Kubernetes clusters in Azure Local using Azure CLI. The workflow is as follows:
-1. Create a Kubernetes cluster in Azure Stack HCI 23H2 using Azure CLI. The cluster is Azure Arc-connected by default.
+1. Create a Kubernetes cluster in Azure Local, version 23H2 using Azure CLI. The cluster is Azure Arc-connected by default.
1. While creating the cluster, you provide a Microsoft Entra group that contains the list of Microsoft Entra users with Kubernetes cluster administrator access.
1. Access the cluster using kubectl and your Microsoft Entra ID.
1. Run a sample multi-container application with a web front end and a Redis instance in the cluster.
## Before you begin
- Before you begin, make sure you have the following details from your on-premises infrastructure administrator:
-- **Azure subscription ID** - The Azure subscription ID where Azure Stack HCI is used for deployment and registration.
-- **Custom Location ID** - Azure Resource Manager ID of the custom location. The custom location is configured during the Azure Stack HCI cluster deployment. Your infrastructure admin should give you the Resource Manager ID of the custom location. This parameter is required in order to create Kubernetes clusters. You can also get the Resource Manager ID using `az customlocation show --name "<custom location name>" --resource-group <azure resource group> --query "id" -o tsv`, if the infrastructure admin provides a custom location name and resource group name.
-- **Network ID** - Azure Resource Manager ID of the Azure Stack HCI logical network created following [these steps](aks-networks.md). Your admin should give you the ID of the logical network. This parameter is required in order to create Kubernetes clusters. You can also get the Azure Resource Manager ID using `az stack-hci-vm network lnet show --name "<lnet name>" --resource-group <azure resource group> --query "id" -o tsv` if you know the resource group in which the logical network was created.
-- You can run the steps in this article in a local development machine to create a Kubernetes cluster on your remote Azure Stack HCI deployment. Make sure you have the latest version of [Az CLI](/cli/azure/install-azure-cli) on your development machine. You can also choose to upgrade your Az CLI version using `az upgrade`.
+- **Azure subscription ID** - The Azure subscription ID where Azure Local is used for deployment and registration.
+- **Custom Location ID** - Azure Resource Manager ID of the custom location. The custom location is configured during the Azure Local cluster deployment. Your infrastructure admin should give you the Resource Manager ID of the custom location. This parameter is required in order to create Kubernetes clusters. You can also get the Resource Manager ID using `az customlocation show --name "<custom location name>" --resource-group <azure resource group> --query "id" -o tsv`, if the infrastructure admin provides a custom location name and resource group name.
+- **Network ID** - Azure Resource Manager ID of the Azure Local logical network created following [these steps](aks-networks.md). Your admin should give you the ID of the logical network. This parameter is required in order to create Kubernetes clusters. You can also get the Azure Resource Manager ID using `az stack-hci-vm network lnet show --name "<lnet name>" --resource-group <azure resource group> --query "id" -o tsv` if you know the resource group in which the logical network was created.
+- You can run the steps in this article in a local development machine to create a Kubernetes cluster on your remote Azure Local deployment. Make sure you have the latest version of [Az CLI](/cli/azure/install-azure-cli) on your development machine. You can also choose to upgrade your Az CLI version using `az upgrade`.
- To connect to the Kubernetes cluster from anywhere, create a Microsoft Entra group and add members to it. All the members in the Microsoft Entra group have cluster administrator access to the cluster. Make sure to add yourself as a member to the Microsoft Entra group. If you don't add yourself, you cannot access the Kubernetes cluster using kubectl. For more information about creating Microsoft Entra groups and adding users, see [Manage Microsoft Entra groups and group membership](/entra/fundamentals/how-to-manage-groups).
- [Download and install kubectl](https://kubernetes.io/docs/tasks/tools/#kubectl) on your development machine. The Kubernetes command-line tool, kubectl, enables you to run commands against Kubernetes clusters. You can use kubectl to deploy applications, inspect and manage cluster resources, and view logs.
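The three IDs listed above come together in a single cluster-create command. The following is a minimal sketch, assuming the `az aksarc` CLI extension; every name and ID here (the resource group, cluster name, subscription, custom location, logical network, and Entra group IDs) is a hypothetical placeholder to be replaced with the values your infrastructure admin provides:

```shell
# Hypothetical placeholder values -- substitute the IDs your infrastructure admin provides.
SUBSCRIPTION_ID="00000000-0000-0000-0000-000000000000"
CUSTOM_LOCATION_ID="/subscriptions/$SUBSCRIPTION_ID/resourceGroups/myRg/providers/Microsoft.ExtendedLocation/customLocations/myCustomLocation"
LNET_ID="/subscriptions/$SUBSCRIPTION_ID/resourceGroups/myRg/providers/Microsoft.AzureStackHCI/logicalNetworks/myLnet"
ENTRA_GROUP_ID="11111111-1111-1111-1111-111111111111"   # Microsoft Entra admin group object ID

# Assemble the create command; 'echo' prints it for review -- drop the echo to run it.
echo az aksarc create \
  --name myAksCluster \
  --resource-group myRg \
  --custom-location "$CUSTOM_LOCATION_ID" \
  --vnet-ids "$LNET_ID" \
  --aad-admin-group-object-ids "$ENTRA_GROUP_ID" \
  --generate-ssh-keys
```

Note that the cluster is created by name; the Azure Resource Manager IDs are only used for the custom location and logical network parameters.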

-This article describes how to create Kubernetes clusters in Azure Stack HCI using the Azure portal. The workflow is as follows:
+This article describes how to create Kubernetes clusters in Azure Local using the Azure portal. The workflow is as follows:
- How to create a Kubernetes cluster using the Azure portal. By default, the cluster is Azure Arc-connected.
- While creating the cluster, you provide a Microsoft Entra group that contains the list of Microsoft Entra users with Kubernetes cluster administrator access.
@@ -81,5 +81,5 @@ This article describes how to create Kubernetes clusters in Azure Stack HCI usin
## Next steps
-- [Review AKS on Azure Stack HCI 23H2 prerequisites](aks-hci-network-system-requirements.md)
-- [What's new in AKS on Azure Stack HCI](aks-whats-new-23h2.md)
+- [Review AKS on Azure Local, version 23H2 prerequisites](aks-hci-network-system-requirements.md)
+- [What's new in AKS on Azure Local](aks-whats-new-23h2.md)

AKS-Hybrid/aks-hci-ip-address-planning.md (+5 −5)
@@ -29,7 +29,7 @@ In the following scenario walk-through, you reserve IP addresses from a single n
### Example walkthrough for IP address reservation for Kubernetes clusters and applications
-Jane is an IT administrator just starting with AKS enabled by Azure Arc. Jane wants to deploy two Kubernetes clusters: Kubernetes cluster A and Kubernetes cluster B on the Azure Stack HCI cluster. Jane also wants to run a voting application on top of cluster A. This application has three instances of the front-end UI running across the two clusters and one instance of the backend database. All the AKS clusters and services are running in a single network, with a single subnet.
+Jane is an IT administrator just starting with AKS enabled by Azure Arc. Jane wants to deploy two Kubernetes clusters: Kubernetes cluster A and Kubernetes cluster B on the Azure Local cluster. Jane also wants to run a voting application on top of cluster A. This application has three instances of the front-end UI running across the two clusters and one instance of the backend database. All the AKS clusters and services are running in a single network, with a single subnet.
- Kubernetes cluster A has 3 control plane nodes and 5 worker nodes.
- Kubernetes cluster B has 1 control plane node and 3 worker nodes.
@@ -48,17 +48,17 @@ Continuing with this example, and adding it to the following table, you get:
| Parameter | Number of IP addresses | How and where to make this reservation |
|------------------|---------|---------------|
-| AKS Arc VMs and K8s version upgrade | Reserve 14 IP addresses | Make this reservation through IP pools in the Azure Stack HCI logical network. |
+| AKS Arc VMs and K8s version upgrade | Reserve 14 IP addresses | Make this reservation through IP pools in the Azure Local logical network. |
| Control plane IP | Reserve 2 IP addresses, one for AKS Arc cluster | Use the `controlPlaneIP` parameter to pass the IP address for control plane IP. Ensure that this IP is in the same subnet as the Arc logical network, but outside the IP pool defined in the Arc logical network. |
| Load balancer IPs | 3 IP addresses for Kubernetes services, for Jane's voting application. | These IP addresses are used when you install a load balancer on cluster A. You can use the MetalLB Arc extension, or bring your own 3rd party load balancer. Ensure that these IPs are in the same subnet as the Arc logical network, but outside the IP pool defined in the Arc VM logical network. |
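The 14-address figure in the first row can be reproduced from the walkthrough's node counts. A rough tally follows; the one-spare-IP-per-cluster upgrade buffer is an assumption about how the rolling Kubernetes version upgrade consumes an extra address, not a figure stated in the table:

```shell
# Node VMs from the walkthrough: cluster A (3 control plane + 5 worker),
# cluster B (1 control plane + 3 worker).
cluster_a=$((3 + 5))
cluster_b=$((1 + 3))
upgrade_buffer=2            # assumed: one spare IP per cluster for rolling K8s upgrades
ip_pool=$((cluster_a + cluster_b + upgrade_buffer))

control_plane_ips=2         # one per AKS Arc cluster, outside the IP pool
load_balancer_ips=3         # Jane's voting app front-end services

echo "IP pool reservation: $ip_pool"
echo "Total reserved from the subnet: $((ip_pool + control_plane_ips + load_balancer_ips))"
```

This yields the 14 pooled addresses from the table plus 5 more (control plane and load balancer) that must sit in the same subnet but outside the IP pool.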
### LNETs considerations for AKS clusters and Arc VMs
-Logical networks on Azure Stack HCI are used by both AKS clusters and Arc VMs. You can configure logical networks in one of the following 2 ways:
+Logical networks on Azure Local are used by both AKS clusters and Arc VMs. You can configure logical networks in one of the following 2 ways:
- Share a logical network between AKS and Arc VMs.
- Define separate logical networks for AKS clusters and Arc VMs.
-Sharing a logical network between AKS and Arc VMs on Azure Stack HCI offers the benefit of streamlined communication, cost savings, and simplified network management. However, this approach also introduces potential challenges such as resource contention, security risks, and complexity in troubleshooting.
+Sharing a logical network between AKS and Arc VMs on Azure Local offers the benefit of streamlined communication, cost savings, and simplified network management. However, this approach also introduces potential challenges such as resource contention, security risks, and complexity in troubleshooting.
|**Criteria**|**Sharing a logical network**|**Defining separate logical networks**|