Commit b783dd0

Merge pull request #264508 from MicrosoftDocs/main

1/29/2024 AM Publish

2 parents 251ce6d + 67d2917

68 files changed: +516, -304 lines changed

articles/aks/azure-nfs-volume.md

Lines changed: 13 additions & 17 deletions

@@ -4,29 +4,29 @@ titleSuffix: Azure Kubernetes Service
 description: Learn how to manually create an Ubuntu Linux NFS Server persistent volume for use with pods in Azure Kubernetes Service (AKS)
 author: ozboms
 ms.topic: article
-ms.date: 06/13/2022
+ms.date: 01/24/2024
 ms.author: obboms
 ---
 
 # Manually create and use a Linux NFS (Network File System) Server with Azure Kubernetes Service (AKS)
 
-Sharing data between containers is often a necessary component of container-based services and applications. You usually have various pods that need access to the same information on an external persistent volume. While Azure Files is an option, creating an NFS Server on an Azure VM is another form of persistent shared storage.
+Sharing data between containers is often a necessary component of container-based services and applications. You usually have various pods that need access to the same information on an external persistent volume. While [Azure Files][azure-files-overview] is an option, creating an NFS Server on an Azure VM is another form of persistent shared storage.
 
 This article will show you how to create an NFS Server on an Azure Ubuntu virtual machine, and set up your AKS cluster with access to this shared file system as a persistent volume.
 
 ## Before you begin
 
-This article assumes that you have the following components and configuration to support this configuration:
+This article assumes that you have the following to support this configuration:
 
-* An existing AKS cluster. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli], [using Azure PowerShell][aks-quickstart-powershell], or [using the Azure portal][aks-quickstart-portal].
+* An existing AKS cluster. If you don't have an AKS cluster, for guidance on designing an enterprise-scale implementation of AKS, see [Plan your AKS design][plan-aks-design].
 * Your AKS cluster needs to be on the same or peered Azure virtual network (VNet) as the NFS Server. The cluster must be created on an existing VNet, which can be the same VNet as your NFS Server VM. The steps for configuring with an existing VNet are described in the following articles: [creating AKS Cluster in existing VNET][aks-virtual-network] and [connecting virtual networks with VNET peering][peer-virtual-networks].
 * An Azure Ubuntu [Linux virtual machine][azure-linux-vm] running version 18.04 or later. To deploy a Linux VM on Azure, see [Create and manage Linux VMs][linux-create].
 
-If you deploy your AKS cluster first, Azure automatically populates the virtual network settings when deploying your Azure Ubuntu VM, associating the Ubuntu VM on the same VNet. But if you want to work with peered networks instead, consult the documentation above.
+If you deploy your AKS cluster first, Azure automatically populates the virtual network settings when deploying your Azure Ubuntu VM, associating the Ubuntu VM on the same VNet. If you want to work with peered networks instead, consult the documentation above.
 
 ## Deploying the NFS Server onto a virtual machine
 
-1. To deploy an NFS Server on the Azure Ubuntu virtual machine, copy the following Bash script and save it to your local machine. Replace the value for the variable **AKS_SUBNET** with the correct one from your AKS cluster or else the default value specified opens your NFS Server to all ports and connections. In this article, the file is named `nfs-server-setup.sh`.
+1. To deploy an NFS Server on the Azure Ubuntu virtual machine, copy the following Bash script and save it to your local machine. Replace the value for the variable **AKS_SUBNET** with the correct one from your AKS cluster, otherwise the default value specified opens your NFS Server to all ports and connections. In this article, the file is named `nfs-server-setup.sh`.
 
 ```bash
 #!/bin/bash
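The setup script is truncated in the diff after `#!/bin/bash`. A minimal sketch of the exports entry such a script needs to build might look like the following; the export path, the wide-open subnet default, and the mount options are assumptions for illustration, not the script's exact contents:

```shell
#!/bin/bash

# Hypothetical values -- the real script's defaults may differ.
AKS_SUBNET="*"                    # "*" opens the export to all clients; replace with your AKS subnet CIDR.
NFS_EXPORT_PATH="/export/data"    # Directory to share over NFS.

# Build the /etc/exports entry. rw: read-write, sync: commit writes before replying,
# no_subtree_check: skip subtree checking to avoid issues when files are renamed.
EXPORT_LINE="${NFS_EXPORT_PATH} ${AKS_SUBNET}(rw,sync,no_subtree_check)"
echo "$EXPORT_LINE"

# On the VM itself the script would then append this line to /etc/exports,
# run 'exportfs -ra', and restart nfs-kernel-server (omitted here).
```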
@@ -92,11 +92,11 @@ If you deploy your AKS cluster first, Azure automatically populates the virtual
 
 ## Connecting AKS cluster to NFS Server
 
-You can connect the NFS Server to your AKS cluster by provisioning a persistent volume and persistent volume claim that specifies how to access the volume. Connecting the two resources in the same or peered virtual networks is necessary. To learn how to set up the cluster in the same VNet, see: [Creating AKS Cluster in existing VNet][aks-virtual-network].
+You can connect to the NFS Server from your AKS cluster by provisioning a persistent volume and persistent volume claim that specifies how to access the volume. The two resources must be in the same or peered virtual networks. To learn how to set up the cluster in the same VNet, see [Creating AKS Cluster in existing VNet][aks-virtual-network].
 
-Once both resources are on the same virtual or peered VNet, next provision a persistent volume and a persistent volume claim in your AKS Cluster. The containers can then mount the NFS drive to their local directory.
+Once both resources are on the same or a peered VNet, provision a persistent volume and a persistent volume claim in your AKS Cluster. The containers can then mount the NFS drive to their local directory.
 
-1. Create a *pv-azurefilesnfs.yaml* file with a *PersistentVolume*. For example:
+1. Create a YAML manifest named *pv-azurefilesnfs.yaml* with a *PersistentVolume*. For example:
 
 ```yaml
 apiVersion: v1
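The manifest is truncated in the diff after `apiVersion: v1`. A sketch of a complete *PersistentVolume* for an NFS share, using the article's **NFS_NAME**, **NFS_INTERNAL_IP**, and **NFS_EXPORT_FILE_PATH** placeholders, might look like this (the capacity and labels are assumptions):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: NFS_NAME              # Replace with a lowercase name for your NFS mount, e.g. nfs-share.
  labels:
    type: nfs
spec:
  capacity:
    storage: 1Gi              # Assumed size; match what your NFS Server provides.
  accessModes:
    - ReadWriteMany           # Many pods can mount the same NFS export.
  nfs:
    server: NFS_INTERNAL_IP       # Internal IP of the NFS Server VM.
    path: NFS_EXPORT_FILE_PATH    # Exported directory on the server.
```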
@@ -117,7 +117,7 @@ Once both resources are on the same virtual or peered VNet, next provision a per
 
 Replace the values for **NFS_INTERNAL_IP**, **NFS_NAME** and **NFS_EXPORT_FILE_PATH** with the actual settings from your NFS Server.
 
-2. Create a *pvc-azurefilesnfs.yaml* file with a *PersistentVolumeClaim* that uses the *PersistentVolume*. For example:
+2. Create a YAML manifest named *pvc-azurefilesnfs.yaml* with a *PersistentVolumeClaim* that uses the *PersistentVolume*. For example:
 
 >[!IMPORTANT]
 >**storageClassName** value needs to remain an empty string or the claim won't work.
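A sketch of a claim consistent with the note above might look like the following; the requested size is an assumption and should not exceed the *PersistentVolume*'s capacity:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: NFS_NAME              # Conventionally matches the PersistentVolume name.
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""        # Must remain an empty string or the claim won't bind.
  resources:
    requests:
      storage: 1Gi            # Assumed size; at most the PV's capacity.
```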
@@ -145,7 +145,7 @@ Once both resources are on the same virtual or peered VNet, next provision a per
 
 If you can't connect to the server from your AKS cluster, the issue might be that the exported directory or its parent doesn't have sufficient permissions to access the NFS Server VM.
 
-Check that both your export directory and its parent directory have 777 permissions.
+Check that both your export directory and its parent directory are granted 777 permissions.
 
 You can check permissions by running the following command; the directories should have *'drwxrwxrwx'* permissions:
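The permission check described above can be reproduced end to end. This sketch creates a hypothetical export directory, applies the 777 mode to it and its parent, and verifies the `drwxrwxrwx` string that `ls -l` should show:

```shell
#!/bin/bash
set -e

# Hypothetical export path; on the NFS Server VM this would be your real export directory.
PARENT=$(mktemp -d)
EXPORT_DIR="$PARENT/export"
mkdir -p "$EXPORT_DIR"

# Grant 777 to both the export directory and its parent, as the article requires.
chmod 777 "$EXPORT_DIR" "$PARENT"

# ls -ld prints the mode string; both directories should show drwxrwxrwx.
MODE=$(ls -ld "$EXPORT_DIR" | cut -c1-10)
echo "$MODE"   # drwxrwxrwx
```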
@@ -159,17 +159,13 @@ ls -l
 * To learn more on setting up your NFS Server or to help debug issues, see the following tutorial from the Ubuntu community [NFS Tutorial][nfs-tutorial]
 
 <!-- LINKS - external -->
-[kubernetes-volumes]: https://kubernetes.io/docs/concepts/storage/volumes/
 [nfs-tutorial]: https://help.ubuntu.com/community/SettingUpNFSHowTo#Pre-Installation_Setup
 
 <!-- LINKS - internal -->
+[plan-aks-design]: /azure/architecture/reference-architectures/containers/aks-start-here?toc=/azure/aks/toc.json&bc=/azure/aks/breadcrumb/toc.json
 [aks-virtual-network]: ./configure-kubenet.md#create-an-aks-cluster-in-the-virtual-network
 [peer-virtual-networks]: ../virtual-network/tutorial-connect-virtual-networks-portal.md
-[aks-quickstart-cli]: ./learn/quick-kubernetes-deploy-cli.md
-[aks-quickstart-portal]: ./learn/quick-kubernetes-deploy-portal.md
-[aks-quickstart-powershell]: ./learn/quick-kubernetes-deploy-powershell.md
 [operator-best-practices-storage]: operator-best-practices-storage.md
 [azure-linux-vm]: ../virtual-machines/linux/endorsed-distros.md
-[create-nfs-share-linux-vm]: ../storage/files/storage-files-quick-create-use-linux.md
-[require-secure-transfer]: ../storage/common/storage-require-secure-transfer.md
 [linux-create]: ../virtual-machines/linux/tutorial-manage-vm.md
+[azure-files-overview]: ../storage/files/storage-files-introduction.md

articles/aks/node-pool-snapshot.md

Lines changed: 3 additions & 5 deletions

@@ -3,7 +3,7 @@ title: Snapshot Azure Kubernetes Service (AKS) node pools
 description: Learn how to snapshot AKS cluster node pools and create clusters and node pools from a snapshot.
 ms.topic: how-to
 ms.custom: devx-track-azurecli
-ms.date: 06/05/2023
+ms.date: 01/29/2024
 ms.author: allensu
 author: asudbring
 ---
@@ -18,7 +18,7 @@ The snapshot is an Azure resource that contains the configuration information fr
 
 ## Before you begin
 
-This article assumes that you have an existing AKS cluster. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli], [using Azure PowerShell][aks-quickstart-powershell], or [using the Azure portal][aks-quickstart-portal].
+This article assumes that you have an existing AKS cluster. If you don't have an AKS cluster, for guidance on designing an enterprise-scale implementation of AKS, see [Plan your AKS design][plan-aks-design].
 
 ### Limitations
 
@@ -103,9 +103,7 @@ az aks create --name myAKSCluster2 --resource-group myResourceGroup --snapshot-i
 - Learn more about multiple node pools with [Create multiple node pools][use-multiple-node-pools].
 
 <!-- LINKS - internal -->
-[aks-quickstart-cli]: ./learn/quick-kubernetes-deploy-cli.md
-[aks-quickstart-portal]: ./learn/quick-kubernetes-deploy-portal.md
-[aks-quickstart-powershell]: ./learn/quick-kubernetes-deploy-powershell.md
+[plan-aks-design]: /azure/architecture/reference-architectures/containers/aks-start-here?toc=/azure/aks/toc.json&bc=/azure/aks/breadcrumb/toc.json
 [supported-versions]: supported-kubernetes-versions.md
 [upgrade-cluster]: upgrade-cluster.md
 [node-image-upgrade]: node-image-upgrade.md
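The snapshot workflow visible in the hunk header above (`az aks create ... --snapshot-i...`) has two steps: snapshot an existing node pool, then create a cluster or node pool from that snapshot. A sketch with hypothetical resource names, shown as command strings so the shape is clear (running them requires the Azure CLI and a live subscription):

```shell
#!/bin/bash

# Hypothetical names -- replace with your own.
RG=myResourceGroup
NODEPOOL_ID="<full-resource-id-of-source-node-pool>"

# 1) Take a snapshot of an existing node pool.
SNAPSHOT_CMD="az aks nodepool snapshot create --name MySnapshot --resource-group $RG --nodepool-id $NODEPOOL_ID"

# 2) Create a new cluster whose node pool configuration comes from the snapshot.
CREATE_CMD="az aks create --name myAKSCluster2 --resource-group $RG --snapshot-id <snapshot-resource-id>"

printf '%s\n%s\n' "$SNAPSHOT_CMD" "$CREATE_CMD"
```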

articles/aks/planned-maintenance.md

Lines changed: 11 additions & 9 deletions

@@ -5,36 +5,40 @@ titleSuffix: Azure Kubernetes Service
 description: Learn how to use Planned Maintenance to schedule and control cluster and node image upgrades in Azure Kubernetes Service (AKS).
 ms.topic: article
 ms.custom: devx-track-azurecli
-ms.date: 01/26/2024
+ms.date: 01/29/2024
 ms.author: nickoman
 author: nickomang
 ---
 
 # Use Planned Maintenance to schedule and control upgrades for your Azure Kubernetes Service (AKS) cluster
 
-Your AKS cluster has regular maintenance performed on it automatically. There are two types of regular maintenance - AKS initiated and those that you initiate. Planned Maintenance feature allows you to run both types of maintenance in a cadence of your choice thereby minimizing any workload impact.
+Your AKS cluster has regular maintenance performed on it automatically. There are two types of regular maintenance: AKS initiated and those that you initiate. The Planned Maintenance feature allows you to run both types of maintenance in a cadence of your choice, thereby minimizing any workload impact.
 
 AKS initiated maintenance refers to the AKS releases. These releases are weekly rounds of fixes and feature and component updates that affect your clusters. The type of maintenance that you initiate regularly are [cluster auto-upgrades][aks-upgrade] and [Node OS automatic security updates][node-image-auto-upgrade].
 
-There are currently three available configuration types: `default`, `aksManagedAutoUpgradeSchedule`, `aksManagedNodeOSUpgradeSchedule`:
+This article describes the maintenance options available and how to configure a maintenance schedule for your AKS clusters.
+
+## Overview
+
+There are currently three available maintenance schedule configuration types: `default`, `aksManagedAutoUpgradeSchedule`, and `aksManagedNodeOSUpgradeSchedule`:
 
 - `default` corresponds to a basic configuration that is used to control AKS releases. These releases can take up to two weeks to roll out to all regions from the initial time of shipping due to Azure Safe Deployment Practices (SDP). Choose `default` to schedule these updates in such a way that it's least disruptive for you. You can monitor the status of an ongoing AKS release by region from the [weekly releases tracker][release-tracker].
 
 - `aksManagedAutoUpgradeSchedule` controls when cluster upgrades scheduled by your designated auto-upgrade channel are performed. More finely controlled cadence and recurrence settings are possible than in a `default` configuration. For more information on cluster auto-upgrade, see [Automatically upgrade an Azure Kubernetes Service (AKS) cluster][aks-upgrade].
 
 - `aksManagedNodeOSUpgradeSchedule` controls when the node operating system security patching scheduled by your node OS auto-upgrade channel is performed. More finely controlled cadence and recurrence settings are possible than in a `default` configuration. For more information on the node OS auto-upgrade channel, see [Automatically patch and update AKS cluster node images][node-image-auto-upgrade].
 
-We recommend using `aksManagedAutoUpgradeSchedule` for all cluster upgrade scenarios and `aksManagedNodeOSUpgradeSchedule` for all node OS security patching scenarios, while `default` is meant exclusively for the AKS weekly releases. You can port `default` configurations to the `aksManagedAutoUpgradeSchedule` or `aksManagedNodeOSUpgradeSchedule` configurations via the `az aks maintenanceconfiguration update` command.
+We recommend using `aksManagedAutoUpgradeSchedule` for all cluster upgrade scenarios and `aksManagedNodeOSUpgradeSchedule` for all node OS security patching scenarios. The `default` option is meant exclusively for AKS weekly releases. You can switch the `default` configuration to the `aksManagedAutoUpgradeSchedule` or `aksManagedNodeOSUpgradeSchedule` configurations using the `az aks maintenanceconfiguration update` command.
 
 ## Before you begin
 
-This article assumes that you have an existing AKS cluster. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli], [using Azure PowerShell][aks-quickstart-powershell], or [using the Azure portal][aks-quickstart-portal].
+This article assumes that you have an existing AKS cluster. If you don't have an AKS cluster, for guidance on designing an enterprise-scale implementation of AKS, see [Plan your AKS design][plan-aks-design].
 
 Be sure to upgrade Azure CLI to the latest version using [`az upgrade`](/cli/azure/update-azure-cli#manual-update).
 
 ## Creating a maintenance window
 
-To create a maintenance window, you can use the `az aks maintenanceconfiguration add` command using the `--name` value `default`, `aksManagedAutoUpgradeSchedule`, or `aksManagedNodeOSUpgradeSchedule`. The name value should reflect the desired configuration type. Using any other name causes your maintenance window not to run.
+To create a maintenance window, use the `az aks maintenanceconfiguration add` command with the `--name` value `default`, `aksManagedAutoUpgradeSchedule`, or `aksManagedNodeOSUpgradeSchedule`. The name value should reflect the desired configuration type. Using any other name prevents your maintenance window from running.
 
 > [!NOTE]
 > When using auto-upgrade, to ensure proper functionality, use a maintenance window with a duration of four hours or more.
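The command described above can be sketched as follows; the resource group, cluster name, and weekly cadence are hypothetical, and the command is shown as a string since running it requires the Azure CLI and a live cluster:

```shell
#!/bin/bash

# Hypothetical names -- replace with your own resource group and cluster.
RG=myResourceGroup
CLUSTER=myAKSCluster

# Sketch: a weekly 'default' window for AKS releases (Mondays, starting at 01:00).
# --name must be default, aksManagedAutoUpgradeSchedule, or aksManagedNodeOSUpgradeSchedule;
# any other name prevents the maintenance window from running.
CMD="az aks maintenanceconfiguration add -g $RG --cluster-name $CLUSTER --name default --weekday Monday --start-hour 1"
echo "$CMD"
```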
@@ -328,9 +332,7 @@ az aks maintenanceconfiguration delete -g myResourceGroup --cluster-name myAKSCl
 - To get started with upgrading your AKS cluster, see [Upgrade an AKS cluster][aks-upgrade]
 
 <!-- LINKS - Internal -->
-[aks-quickstart-cli]: ./learn/quick-kubernetes-deploy-cli.md
-[aks-quickstart-portal]: ./learn/quick-kubernetes-deploy-portal.md
-[aks-quickstart-powershell]: ./learn/quick-kubernetes-deploy-powershell.md
+[plan-aks-design]: /azure/architecture/reference-architectures/containers/aks-start-here?toc=/azure/aks/toc.json&bc=/azure/aks/breadcrumb/toc.json
 [aks-support-policies]: support-policies.md
 [aks-faq]: faq.md
 [az-extension-add]: /cli/azure/extension#az_extension_add

articles/aks/scale-down-mode.md

Lines changed: 2 additions & 4 deletions

@@ -21,7 +21,7 @@ When an Azure VM is in the `Stopped` (deallocated) state, you will not be charge
 > In order to preserve any deallocated VMs, you must set Scale-down Mode to Deallocate. That includes VMs that have been deallocated using IaaS APIs (Virtual Machine Scale Set APIs). Setting Scale-down Mode to Delete will remove any deallocated VMs.
 > Once Deallocate mode is applied and a scale-down operation occurs, those nodes remain registered in the API server and appear in a NotReady state.
 
-This article assumes that you have an existing AKS cluster. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli], [using Azure PowerShell][aks-quickstart-powershell], or [using the Azure portal][aks-quickstart-portal].
+This article assumes that you have an existing AKS cluster. If you don't have an AKS cluster, for guidance on designing an enterprise-scale implementation of AKS, see [Plan your AKS design][plan-aks-design].
 
 ### Limitations
 
@@ -79,9 +79,7 @@ az aks nodepool add --enable-cluster-autoscaler --min-count 1 --max-count 10 --m
 - To learn more about the cluster autoscaler, see [Automatically scale a cluster to meet application demands on AKS][cluster-autoscaler]
 
 <!-- LINKS - Internal -->
-[aks-quickstart-cli]: ./learn/quick-kubernetes-deploy-cli.md
-[aks-quickstart-portal]: ./learn/quick-kubernetes-deploy-portal.md
-[aks-quickstart-powershell]: ./learn/quick-kubernetes-deploy-powershell.md
+[plan-aks-design]: /azure/architecture/reference-architectures/containers/aks-start-here?toc=/azure/aks/toc.json&bc=/azure/aks/breadcrumb/toc.json
 [aks-upgrade]: upgrade-cluster.md
 [cluster-autoscaler]: cluster-autoscaler.md
 [ephemeral-os]: concepts-storage.md#ephemeral-os-disk
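The Deallocate behavior described in this section is requested per node pool. A sketch with hypothetical names, shown as a command string since executing it requires the Azure CLI:

```shell
#!/bin/bash

# Hypothetical names -- replace with your own.
RG=myResourceGroup
CLUSTER=myAKSCluster

# Sketch: add a node pool whose scaled-down VMs are deallocated (stopped) rather
# than deleted, so they keep state and restart faster; deallocated VMs aren't
# billed for compute, though their disks still accrue charges.
CMD="az aks nodepool add --resource-group $RG --cluster-name $CLUSTER --name deallocpool --scale-down-mode Deallocate --node-count 3"
echo "$CMD"
```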

articles/azure-arc/resource-bridge/system-requirements.md

Lines changed: 1 addition & 1 deletion

@@ -175,7 +175,7 @@ By default, these files are generated in the current CLI directory when `createc
 
 ### Kubeconfig
 
-The appliance VM hosts a management Kubernetes cluster. The kubeconfig is a low-privilege Kubernetes configuration file that is used to maintain the appliance VM. By default, it's generated in the current CLI directory when the `deploy` command completes. The kubeconfig should be saved in a secure location to the management machine, because it's required for maintaining the appliance VM.
+The appliance VM hosts a management Kubernetes cluster. The kubeconfig is a low-privilege Kubernetes configuration file that is used to maintain the appliance VM. By default, it's generated in the current CLI directory when the `deploy` command completes. The kubeconfig should be saved in a secure location on the management machine, because it's required for maintaining the appliance VM. If the kubeconfig is lost, it can be retrieved by running the `az arcappliance get-credentials` command.
 
 ### HCI login configuration file (Azure Stack HCI only)

articles/azure-arc/servers/ssh-arc-troubleshoot.md

Lines changed: 9 additions & 0 deletions

@@ -120,6 +120,15 @@ Resolution:
 - Confirm success by running ```az provider show -n Microsoft.HybridConnectivity```, verify that `registrationState` is set to `Registered`
 - Restart the hybrid agent on the Arc-enabled server
 
+### Cannot connect after updating CLI tool and Arc agent
+
+This issue occurs when the updated command creates a new service configuration before the Arc agent is updated. It only impacts Azure Arc versions older than 1.31 when updating to version 1.31 or newer. Error:
+
+- Connection closed by UNKNOWN port 65535
+
+Resolution:
+
+- Delete the existing service configuration and allow it to be re-created by the CLI command at the next connection. Run ```az rest --method delete --uri https://management.azure.com/subscriptions/<SUB_ID>/resourceGroups/<RG_NAME>/providers/Microsoft.HybridCompute/machines/<VM_NAME>/providers/Microsoft.HybridConnectivity/endpoints/default/serviceconfigurations/SSH?api-version=2023-03-15```
 
 ## Disable SSH to Arc-enabled servers
 
