Commit 64c1e1c

Merge pull request #230057 from MicrosoftDocs/main
Publish to live, Thursday 4 AM PST, 3/9
2 parents f59b642 + 5ef7f29 commit 64c1e1c

File tree: 101 files changed (+1181, -690 lines)


.openpublishing.redirection.defender-for-cloud.json

Lines changed: 1 addition & 1 deletion
@@ -787,7 +787,7 @@
     },
     {
       "source_path_from_root": "/articles/defender-for-cloud/os-coverage.md",
-      "redirect_url": "/azure/defender-for-cloud/monitoring-components",
+      "redirect_url": "/azure/defender-for-cloud/support-matrix-defender-for-cloud#supported-operating-systems",
       "redirect_document_id": false
     },
     {

articles/active-directory/hybrid/four-steps.md

Lines changed: 35 additions & 39 deletions
Large diffs are not rendered by default.

articles/active-directory/manage-apps/grant-admin-consent.md

Lines changed: 1 addition & 0 deletions
@@ -52,6 +52,7 @@ To grant tenant-wide admin consent to an app listed in **Enterprise applications
 1. Select the application to which you want to grant tenant-wide admin consent, and then select **Permissions**.
 
    :::image type="content" source="media/grant-tenant-wide-admin-consent/grant-tenant-wide-admin-consent.png" alt-text="Screenshot shows how to grant tenant-wide admin consent.":::
 
+1. Add the redirect URI (`https://entra.microsoft.com/TokenAuthorize`) as a permitted redirect URI for the app.
 1. Carefully review the permissions that the application requires. If you agree with the permissions the application requires, select **Grant admin consent**.
 
 ## Grant admin consent in App registrations
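Editor's note: the added step registers a redirect URI on the app. If you script this instead of using the portal, a minimal sketch with Azure CLI 2.37+ (Microsoft Graph-based `az ad` commands) follows; the app ID and the existing URI are placeholders, and `--web-redirect-uris` replaces the whole list, so re-supply any URIs you want to keep.

```azurecli
# Inspect the app's current web redirect URIs first (placeholder app ID).
az ad app show --id 00000000-0000-0000-0000-000000000000 --query web.redirectUris

# Re-apply the full list, adding the Entra token-authorize URI.
az ad app update --id 00000000-0000-0000-0000-000000000000 \
    --web-redirect-uris "https://existing-redirect.contoso.com/auth" "https://entra.microsoft.com/TokenAuthorize"
```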

articles/aks/operator-best-practices-multi-region.md

Lines changed: 20 additions & 17 deletions
@@ -1,9 +1,8 @@
 ---
-title: Best practices for AKS business continuity and disaster recovery
-description: Learn a cluster operator's best practices to achieve maximum uptime for your applications, providing high availability and preparing for disaster recovery in Azure Kubernetes Service (AKS).
+title: Best practices for business continuity and disaster recovery in Azure Kubernetes Service (AKS)
+description: Best practices for a cluster operator to achieve maximum uptime for your applications and to provide high availability and prepare for disaster recovery in Azure Kubernetes Service (AKS).
 ms.topic: conceptual
-ms.date: 03/11/2021
-ms.author: thfalgou
+ms.date: 03/08/2023
 ms.custom: fasttrack-edit
 #Customer intent: As an AKS cluster operator, I want to plan for business continuity or disaster recovery to help protect my cluster from region problems.
 ---
@@ -15,8 +14,9 @@ As you manage clusters in Azure Kubernetes Service (AKS), application uptime bec
 This article focuses on how to plan for business continuity and disaster recovery in AKS. You learn how to:
 
 > [!div class="checklist"]
+
 > * Plan for AKS clusters in multiple regions.
-> * Route traffic across multiple clusters by using Azure Traffic Manager.
+> * Route traffic across multiple clusters using Azure Traffic Manager.
 > * Use geo-replication for your container image registries.
 > * Plan for application state across multiple clusters.
 > * Replicate storage across multiple regions.
@@ -30,15 +30,17 @@ This article focuses on how to plan for business continuity and disaster recover
 An AKS cluster is deployed into a single region. To protect your system from region failure, deploy your application into multiple AKS clusters across different regions. When planning where to deploy your AKS cluster, consider:
 
 * [**AKS region availability**](./quotas-skus-regions.md#region-availability)
-  * Choose regions close to your users.
+  * Choose regions close to your users.
   * AKS continually expands into new regions.
+
 * [**Azure paired regions**](../availability-zones/cross-region-replication-azure.md)
   * For your geographic area, choose two regions paired together.
-  * AKS platform updates (planned maintenance) are serialized with a delay of at least 24 hours between paired regions.
-  * Recovery efforts for paired regions are prioritized where needed.
+  * AKS platform updates (planned maintenance) are serialized with a delay of at least 24 hours between paired regions.
+  * Recovery efforts for paired regions are prioritized where needed.
+
 * **Service availability**
   * Decide whether your paired regions should be hot/hot, hot/warm, or hot/cold.
-  * Do you want to run both regions at the same time, with one region *ready* to start serving traffic? Or,
+  * Do you want to run both regions at the same time, with one region *ready* to start serving traffic? *or*
   * Do you want to give one region time to get ready to serve traffic?
 
 AKS region availability and paired regions are a joint consideration. Deploy your AKS clusters into paired regions designed to manage region disaster recovery together. For example, AKS is available in East US and West US. These regions are paired. Choose these two regions when you're creating an AKS BC/DR strategy.
@@ -66,11 +68,12 @@ For information on how to set up endpoints and routing, see [Configure priority
 ### Application routing with Azure Front Door Service
 
 Using split TCP-based anycast protocol, [Azure Front Door Service](../frontdoor/front-door-overview.md) promptly connects your end users to the nearest Front Door POP (Point of Presence). More features of Azure Front Door Service:
+
 * TLS termination
 * Custom domain
 * Web application firewall
 * URL Rewrite
-* Session affinity
+* Session affinity
 
 Review the needs of your application traffic to understand which solution is the most suitable.
 
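Editor's note: the hunk header above references the article's Traffic Manager priority-routing setup. A rough Azure CLI sketch of that pattern (profile, DNS label, endpoint name, and target are placeholders, not taken from the article):

```azurecli
# Traffic Manager profile that fails over by priority across regional AKS deployments.
az network traffic-manager profile create \
    --name aks-bcdr-tm --resource-group myResourceGroup \
    --routing-method Priority --unique-dns-name aks-bcdr-demo

# One external endpoint per regional ingress; priority 1 is preferred while healthy.
az network traffic-manager endpoint create \
    --name eastus-endpoint --type externalEndpoints \
    --profile-name aks-bcdr-tm --resource-group myResourceGroup \
    --target myapp-eastus.contoso.com --priority 1
```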
@@ -83,18 +86,16 @@ Before peering virtual networks with running AKS clusters, use the standard Load
 ## Enable geo-replication for container images
 
 > **Best practice**
->
+>
 > Store your container images in Azure Container Registry and geo-replicate the registry to each AKS region.
 
-To deploy and run your applications in AKS, you need a way to store and pull the container images. Container Registry integrates with AKS, so it can securely store your container images or Helm charts. Container Registry supports multimaster geo-replication to automatically replicate your images to Azure regions around the world.
+To deploy and run your applications in AKS, you need a way to store and pull the container images. Container Registry integrates with AKS, so it can securely store your container images or Helm charts. Container Registry supports multimaster geo-replication to automatically replicate your images to Azure regions around the world.
 
-To improve performance and availability:
-1. Use Container Registry geo-replication to create a registry in each region where you have an AKS cluster.
-1. Each AKS cluster then pulls container images from the local container registry in the same region:
+To improve performance and availability, use Container Registry geo-replication to create a registry in each region where you have an AKS cluster. Each AKS cluster will then pull container images from the local container registry in the same region.
 
 ![Container Registry geo-replication for container images](media/operator-best-practices-bc-dr/acr-geo-replication.png)
 
-When you use Container Registry geo-replication to pull images from the same region, the results are:
+Using Container Registry geo-replication to pull images from the same region has the following benefits:
 
 * **Faster**: Pull images from high-speed, low-latency network connections within the same Azure region.
 * **More reliable**: If a region is unavailable, your AKS cluster pulls the images from an available container registry.
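Editor's note: geo-replication requires the Premium registry SKU (as the next hunk notes). A minimal Azure CLI sketch of the setup described above (registry, resource group, and region names are placeholders):

```azurecli
# Create a Premium registry; geo-replication is a Premium SKU feature.
az acr create --name myContainerRegistry --resource-group myResourceGroup \
    --location eastus --sku Premium

# Add a replica in each additional region that hosts an AKS cluster.
az acr replication create --registry myContainerRegistry --location westus
```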
@@ -105,14 +106,15 @@ Geo-replication is a *Premium* SKU container registry feature. For information o
 ## Remove service state from inside containers
 
 > **Best practice**
->
+>
 > Avoid storing service state inside the container. Instead, use an Azure platform as a service (PaaS) that supports multi-region replication.
 
 *Service state* refers to the in-memory or on-disk data required by a service to function. State includes the data structures and member variables that the service reads and writes. Depending on how the service is architected, the state might also include files or other resources stored on the disk. For example, the state might include the files a database uses to store data and transaction logs.
 
 State can be either externalized or co-located with the code that manipulates the state. Typically, you externalize state by using a database or other data store that runs on different machines over the network or that runs out of process on the same machine.
 
 Containers and microservices are most resilient when the processes that run inside them don't retain state. Since applications almost always contain some state, use a PaaS solution, such as:
+
 * Azure Cosmos DB
 * Azure Database for PostgreSQL
 * Azure Database for MySQL
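Editor's note: for the PaaS options listed above, multi-region replication is a property of the resource itself. As a hedged Azure CLI sketch, an Azure Cosmos DB account replicated to two regions (account, group, and region names are placeholders) could be created like this:

```azurecli
# Cosmos DB account with a primary write region and a secondary region for failover.
az cosmosdb create --name my-cosmos-account --resource-group myResourceGroup \
    --locations regionName=eastus failoverPriority=0 isZoneRedundant=False \
    --locations regionName=westus failoverPriority=1 isZoneRedundant=False
```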
@@ -139,6 +141,7 @@ Your applications might use Azure Storage for their data. If so, your applicatio
 Your applications might require persistent storage even after a pod is deleted. In Kubernetes, you can use persistent volumes to persist data storage. Persistent volumes are mounted to a node VM and then exposed to the pods. Persistent volumes follow pods even if the pods are moved to a different node inside the same cluster.
 
 The replication strategy you use depends on your storage solution. The following common storage solutions provide their own guidance about disaster recovery and replication:
+
 * [Gluster](https://docs.gluster.org/en/latest/Administrator-Guide/Geo-Replication/)
 * [Ceph](https://docs.ceph.com/docs/master/cephfs/disaster-recovery/)
 * [Rook](https://rook.io/docs/rook/v1.2/ceph-disaster-recovery.html)
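Editor's note: for the Azure Storage case mentioned in the hunk header, cross-region replication is selected through the storage account SKU. A minimal sketch (account and group names are placeholders):

```azurecli
# Read-access geo-redundant storage (RA-GRS) replicates data to the paired region
# and exposes a readable secondary endpoint for disaster recovery.
az storage account create --name mybcdrstorage --resource-group myResourceGroup \
    --location eastus --sku Standard_RAGRS --kind StorageV2
```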

articles/azure-arc/kubernetes/conceptual-extensions.md

Lines changed: 4 additions & 1 deletion
@@ -1,6 +1,6 @@
 ---
 title: "Cluster extensions - Azure Arc-enabled Kubernetes"
-ms.date: 01/23/2023
+ms.date: 03/08/2023
 ms.topic: conceptual
 description: "This article provides a conceptual overview of the Azure Arc-enabled Kubernetes cluster extensions capability."
 ---
@@ -34,6 +34,9 @@ Both the `config-agent` and `extensions-manager` components running in the clust
 >
 > Protected configuration settings for an extension instance are stored for up to 48 hours in the Azure Arc-enabled Kubernetes services. As a result, if the cluster remains disconnected during the 48 hours after the extension resource was created on Azure, the extension changes from a `Pending` state to `Failed` state. To prevent this, we recommend bringing clusters online regularly.
 
+> [!IMPORTANT]
+> Currently, Azure Arc-enabled Kubernetes cluster extensions aren't supported on ARM64-based clusters. To [install and use cluster extensions](extensions.md), the cluster must have at least one node of operating system and architecture type `linux/amd64`.
+
 ## Extension scope
 
 Each extension type defines the scope at which they operate on the cluster. Extension installations on Arc-enabled Kubernetes clusters are either *cluster-scoped* or *namespace-scoped*.
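Editor's note: a quick way to check whether a cluster satisfies the `linux/amd64` requirement called out in the new note is to list node architectures with kubectl (standard node fields; assumes kubectl is already pointed at the cluster):

```console
kubectl get nodes -o custom-columns=NAME:.metadata.name,OS:.status.nodeInfo.operatingSystem,ARCH:.status.nodeInfo.architecture
```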

articles/azure-arc/kubernetes/extensions.md

Lines changed: 2 additions & 2 deletions
@@ -1,7 +1,7 @@
 ---
 title: "Azure Arc-enabled Kubernetes cluster extensions"
 ms.custom: event-tier1-build-2022, ignite-2022
-ms.date: 01/23/2023
+ms.date: 03/08/2023
 ms.topic: how-to
 description: "Deploy and manage lifecycle of extensions on Azure Arc-enabled Kubernetes clusters."
 ---
@@ -39,7 +39,7 @@ Before you begin, read the [conceptual overview of Arc-enabled Kubernetes cluste
   az extension update --name k8s-extension
   ```
 
-* An existing Azure Arc-enabled Kubernetes connected cluster.
+* An existing Azure Arc-enabled Kubernetes connected cluster, with at least one node of operating system and architecture type `linux/amd64`.
   * If you haven't connected a cluster yet, use our [quickstart](quickstart-connect-cluster.md).
 * [Upgrade your agents](agent-upgrade.md#manually-upgrade-agents) to the latest version.
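Editor's note: once the prerequisites above are met, extensions are installed with the `k8s-extension` CLI extension that this diff references. A hedged sketch (cluster name, resource group, and the Azure Monitor extension choice are placeholders, not from the diff):

```azurecli
# Install the Azure Monitor for containers extension on a connected cluster.
az k8s-extension create --name azuremonitor-containers \
    --extension-type Microsoft.AzureMonitor.Containers \
    --cluster-type connectedClusters \
    --cluster-name myArcCluster --resource-group myResourceGroup

# Check the installation state of the extension.
az k8s-extension show --name azuremonitor-containers \
    --cluster-type connectedClusters \
    --cluster-name myArcCluster --resource-group myResourceGroup
```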

articles/azure-arc/kubernetes/network-requirements.md

Lines changed: 1 addition & 0 deletions
@@ -27,5 +27,6 @@ For a complete list of network requirements for Azure Arc features and Azure Arc
 
 ## Next steps
 
+- Learn about other [requirements for Arc-enabled Kubernetes](system-requirements.md).
 - Use our [quickstart](quickstart-connect-cluster.md) to connect your cluster.
 - Review [frequently asked questions](faq.md) about Arc-enabled Kubernetes.

articles/azure-arc/kubernetes/quickstart-connect-cluster.md

Lines changed: 10 additions & 34 deletions
@@ -2,7 +2,7 @@
 title: "Quickstart: Connect an existing Kubernetes cluster to Azure Arc"
 description: In this quickstart, you learn how to connect an Azure Arc-enabled Kubernetes cluster.
 ms.topic: quickstart
-ms.date: 03/07/2023
+ms.date: 03/08/2023
 ms.custom: template-quickstart, mode-other, devx-track-azurecli, devx-track-azurepowershell
 ms.devlang: azurecli
 ---
@@ -20,20 +20,10 @@ In addition to the prerequisites below, be sure to meet all [network requirement
 ### [Azure CLI](#tab/azure-cli)
 
 * An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-
 * A basic understanding of [Kubernetes core concepts](../../aks/concepts-clusters-workloads.md).
-
-* An identity (user or service principal) which can be used to [log in to Azure CLI](/cli/azure/authenticate-azure-cli) and connect your cluster to Azure Arc.
-
-  > [!IMPORTANT]
-  >
-  > * The identity must have 'Read' and 'Write' permissions on the Azure Arc-enabled Kubernetes resource type (`Microsoft.Kubernetes/connectedClusters`).
-  > * If connecting the cluster to an existing resource group (rather than a new one created by this identity), the identity must have 'Read' permission for that resource group.
-  > * The [Kubernetes Cluster - Azure Arc Onboarding built-in role](../../role-based-access-control/built-in-roles.md#kubernetes-cluster---azure-arc-onboarding) can be used for this identity. This role is useful for at-scale onboarding, as it has only the granular permissions required to connect clusters to Azure Arc, and doesn't have permission to update, delete, or modify any other clusters or other Azure resources.
-
-* [Install or upgrade Azure CLI](/cli/azure/install-azure-cli) to the latest version.
-
-* Install the latest version of **connectedk8s** Azure CLI extension:
+* An [identity (user or service principal)](system-requirements.md#azure-ad-identity-requirements) which can be used to [log in to Azure CLI](/cli/azure/authenticate-azure-cli) and connect your cluster to Azure Arc.
+* The latest version of [Azure CLI](/cli/azure/install-azure-cli).
+* The latest version of **connectedk8s** Azure CLI extension, installed by running the following command:
 
   ```azurecli
   az extension add --name connectedk8s
@@ -45,48 +35,34 @@ In addition to the prerequisites below, be sure to meet all [network requirement
   * Self-managed Kubernetes cluster using [Cluster API](https://cluster-api.sigs.k8s.io/user/quick-start.html)
 
 >[!NOTE]
-> The cluster needs to have at least one node of operating system and architecture type `linux/amd64`. Clusters with only `linux/arm64` nodes aren't yet supported.
-
-* At least 850 MB free for the Arc agents that will be deployed on the cluster, and capacity to use approximately 7% of a single CPU. For a multi-node Kubernetes cluster environment, pods can get scheduled on different nodes.
+> The cluster needs to have at least one node of operating system and architecture type `linux/amd64` and/or `linux/arm64`. See [Cluster requirements](system-requirements.md#cluster-requirements) for more about ARM64 scenarios.
 
+* At least 850 MB free for the Arc agents that will be deployed on the cluster, and capacity to use approximately 7% of a single CPU.
 * A [kubeconfig file](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/) and context pointing to your cluster.
-
 * Install [Helm 3](https://helm.sh/docs/intro/install). Ensure that the Helm 3 version is < 3.7.0.
 
 ### [Azure PowerShell](#tab/azure-powershell)
 
 * An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-
 * A basic understanding of [Kubernetes core concepts](../../aks/concepts-clusters-workloads.md).
-
+* An [identity (user or service principal)](system-requirements.md#azure-ad-identity-requirements) which can be used to [log in to Azure PowerShell](/powershell/azure/authenticate-azureps) and connect your cluster to Azure Arc.
 * [Azure PowerShell version 6.6.0 or later](/powershell/azure/install-az-ps)
-
-* Install the **Az.ConnectedKubernetes** PowerShell module:
+* The **Az.ConnectedKubernetes** PowerShell module, installed by running the following command:
 
   ```azurepowershell-interactive
   Install-Module -Name Az.ConnectedKubernetes
   ```
 
-* An identity (user or service principal) which can be used to [log in to Azure PowerShell](/powershell/azure/authenticate-azureps) and connect your cluster to Azure Arc.
-
-  > [!IMPORTANT]
-  >
-  > * The identity must have 'Read' and 'Write' permissions on the Azure Arc-enabled Kubernetes resource type (`Microsoft.Kubernetes/connectedClusters`).
-  > * If connecting the cluster to an existing resource group (rather than a new one created by this identity), the identity must have 'Read' permission for that resource group.
-  > * The [Kubernetes Cluster - Azure Arc Onboarding built-in role](../../role-based-access-control/built-in-roles.md#kubernetes-cluster---azure-arc-onboarding) is useful for at-scale onboarding as it has the granular permissions required to only connect clusters to Azure Arc. This role doesn't have the permissions to update, delete, or modify any other clusters or other Azure resources.
-
 * An up-and-running Kubernetes cluster. If you don't have one, you can create a cluster using one of these options:
   * [Kubernetes in Docker (KIND)](https://kind.sigs.k8s.io/)
   * Create a Kubernetes cluster using Docker for [Mac](https://docs.docker.com/docker-for-mac/#kubernetes) or [Windows](https://docs.docker.com/docker-for-windows/#kubernetes)
   * Self-managed Kubernetes cluster using [Cluster API](https://cluster-api.sigs.k8s.io/user/quick-start.html)
 
 >[!NOTE]
-> The cluster needs to have at least one node of operating system and architecture type `linux/amd64`. Clusters with only `linux/arm64` nodes aren't yet supported.
-
-* At least 850 MB free for the Arc agents that will be deployed on the cluster, and capacity to use approximately 7% of a single CPU. For a multi-node Kubernetes cluster environment, pods can get scheduled on different nodes.
+> The cluster needs to have at least one node of operating system and architecture type `linux/amd64` and/or `linux/arm64`. See [Cluster requirements](system-requirements.md#cluster-requirements) for more about ARM64 scenarios.
 
+* At least 850 MB free for the Arc agents that will be deployed on the cluster, and capacity to use approximately 7% of a single CPU.
 * A [kubeconfig file](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/) and context pointing to your cluster.
-
 * Install [Helm 3](https://helm.sh/docs/intro/install). Ensure that the Helm 3 version is < 3.7.0.
 
 ---
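Editor's note: with the prerequisites in either tab satisfied, the next step in this quickstart is connecting the cluster. A minimal Azure CLI sketch (resource group, cluster name, and region are placeholders):

```azurecli
# Create a resource group to hold the Azure Arc-enabled Kubernetes resource.
az group create --name AzureArcTest --location EastUS

# Connect the cluster that the current kubeconfig context points to.
az connectedk8s connect --name AzureArcTest1 --resource-group AzureArcTest
```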
