
Commit e1f10de

Merge pull request #191752 from MicrosoftDocs/main
3/15 AM Publish
2 parents 9a9741c + 2067cdc

164 files changed: +1755 -1002 lines


articles/active-directory/app-provisioning/skip-out-of-scope-deletions.md

Lines changed: 1 addition & 1 deletion

@@ -76,7 +76,7 @@ Copy the updated text from Step 3 into the "Request Body".
 Click on “Run Query”.
 
-You should get the output as "Success – Status Code 204".
+You should get the output as "Success – Status Code 204". If you receive an error, you may need to check that your account has Read/Write permissions for ServicePrincipalEndpoint. You can find this permission by clicking on the *Modify permissions* tab in Graph Explorer.
 
 ![PUT response](./media/skip-out-of-scope-deletions/skip-06.png)

articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-product-data-sources.md

Lines changed: 1 addition & 1 deletion

@@ -62,7 +62,7 @@ You can use the **Data Collectors** dashboard in CloudKnox Permissions Managemen
 1. Select the ellipses **(...)** at the end of the row in the table.
 1. Select **Edit Configuration**.
 
-   The **M-CIEM Onboarding - Summary** box displays.
+   The **CloudKnox Onboarding - Summary** box displays.
 
 1. Select **Edit** (the pencil icon) for each field you want to change.
 1. Select **Verify now & save**.

articles/active-directory/hybrid/how-to-connect-install-prerequisites.md

Lines changed: 0 additions & 1 deletion

@@ -36,7 +36,6 @@ Before you install Azure AD Connect, there are a few things that you need.
 ### On-premises Active Directory
 * The Active Directory schema version and forest functional level must be Windows Server 2003 or later. The domain controllers can run any version as long as the schema version and forest-level requirements are met.
-* If you plan to use the feature *password writeback*, the domain controllers must be on Windows Server 2016 or later.
 * The domain controller used by Azure AD must be writable. Using a read-only domain controller (RODC) *isn't supported*, and Azure AD Connect doesn't follow any write redirects.
 * Using on-premises forests or domains by using "dotted" (name contains a period ".") NetBIOS names *isn't supported*.
 * We recommend that you [enable the Active Directory recycle bin](how-to-connect-sync-recycle-bin.md).

articles/active-directory/standards/memo-22-09-meet-identity-requirements.md

Lines changed: 1 addition & 1 deletion

@@ -17,7 +17,7 @@ ms.collection: M365-identity-device-management
 # Meeting identity requirements of Memorandum 22-09 with Azure Active Directory
 
-This series of articles offer guidance for employing Azure Active Directory (Azure AD) as a centralized identity management system for implementing Zero Trust principles as described by the US Federal Government’s Office of Management and Budget (OMB) [Memorandum M-22-09](https://www.whitehouse.gov/wp-content/uploads/2022/01/M-22-09.pdf). Throughout this document wee refer to it as "The memo."
+This series of articles offers guidance for employing Azure Active Directory (Azure AD) as a centralized identity management system for implementing Zero Trust principles as described by the US Federal Government’s Office of Management and Budget (OMB) [Memorandum M-22-09](https://www.whitehouse.gov/wp-content/uploads/2022/01/M-22-09.pdf). Throughout this document we refer to it as "The memo."
 
 The release of Memorandum 22-09 is designed to support Zero trust initiatives within federal agencies; it also provides regulatory guidance in supporting Federal Cybersecurity and Data Privacy Laws. The Memo cites the [Department of Defense (DoD) Zero Trust Reference Architecture](https://dodcio.defense.gov/Portals/0/Documents/Library/(U)ZT_RA_v1.1(U)_Mar21.pdf),

articles/aks/csi-storage-drivers.md

Lines changed: 19 additions & 26 deletions

@@ -3,7 +3,7 @@ title: Enable Container Storage Interface (CSI) drivers on Azure Kubernetes Serv
 description: Learn how to enable the Container Storage Interface (CSI) drivers for Azure disks and Azure Files in an Azure Kubernetes Service (AKS) cluster.
 services: container-service
 ms.topic: article
-ms.date: 03/10/2022
+ms.date: 03/11/2022
 author: palma21
 
 ---
@@ -17,18 +17,12 @@ The CSI storage driver support on AKS allows you to natively use:
 - [*Azure Files*](azure-files-csi.md), which can be used to mount an SMB 3.0/3.1 share backed by an Azure Storage account to pods. With Azure Files, you can share data across multiple nodes and pods. Azure Files can use Azure Standard Storage backed by regular HDDs or Azure Premium Storage backed by high-performance SSDs.
 
 > [!IMPORTANT]
-> Starting in Kubernetes version 1.21, Kubernetes will use CSI drivers only and by default. These drivers are the future of storage support in Kubernetes.
+> Starting in Kubernetes version 1.21, AKS will use CSI drivers only and by default. CSI migration is also turned on starting from AKS 1.21. Existing in-tree persistent volumes continue to function as they always have; however, behind the scenes, Kubernetes hands control of all storage management operations (previously targeting in-tree drivers) to CSI drivers.
 >
 > Please remove manual installed open source Azure Disk and Azure File CSI drivers before upgrading to AKS 1.21.
 >
 > *In-tree drivers* refers to the current storage drivers that are part of the core Kubernetes code versus the new CSI drivers, which are plug-ins.
 
-## Limitations
-
-- This feature can only be set at cluster creation time.
-- The minimum Kubernetes minor version that supports CSI drivers is v1.17.
-- The default storage class will be the `managed-csi` storage class.
-
 ## Install CSI storage drivers on a new cluster with version < 1.21
 
 Create a new cluster that can use CSI storage drivers for Azure disks and Azure Files by using the following CLI commands. Use the `--aks-custom-headers` flag to set the `EnableAzureDiskFileCSIDriver` feature.
@@ -67,14 +61,9 @@ $ echo $(kubectl get CSINode <NODE NAME> -o jsonpath="{.spec.drivers[1].allocata
 - [Set up Azure File CSI driver on AKS cluster](https://github.com/kubernetes-sigs/azurefile-csi-driver/blob/master/docs/install-driver-on-aks.md)
 
 ## Migrating custom in-tree storage classes to CSI
-If you have created custom storage classes based on the in-tree storage drivers, these will need to be migrated when you have upgraded your cluster to 1.21.x.
-
-Whilst explicit migration to the CSI provider is not needed for your storage classes to still be valid, to be able to use CSI features (snapshotting etc.) you will need to carry out the migration.
-
-Migration of these storage classes will involve deleting the existing storage classes, and re-provisioning them with the provisioner set to **disk.csi.azure.com** if using Azure Disks, and **files.csi.azure.com** if using Azure Files.
-
-Whilst this will update the mapping of the storage classes, the binding of the Persistent Volume to the CSI provisioner will only take place at provisioning time. This could be during a cordon & drain operation (cluster update) or by detaching and reattaching the Volume.
+If you have created in-tree driver storage classes, those storage classes will continue to work after upgrading your cluster to 1.21.x, since CSI migration is turned on; however, if you want to use CSI features (snapshotting, etc.), you will need to carry out the migration.
 
+Migration of these storage classes involves deleting the existing storage classes and re-creating them with the provisioner set to **disk.csi.azure.com** if using Azure Disks, and **file.csi.azure.com** if using Azure Files.
### Migrating Storage Class provisioner

@@ -86,12 +75,11 @@ As an example for Azure disks:
 kind: StorageClass
 apiVersion: storage.k8s.io/v1
 metadata:
-  name: managed-premium-retain
+  name: custom-managed-premium
 provisioner: kubernetes.io/azure-disk
-reclaimPolicy: Retain
+reclaimPolicy: Delete
 parameters:
-  storageaccounttype: Premium_LRS
-  kind: Managed
+  storageAccountType: Premium_LRS
 ```
#### CSI storage class definition

@@ -100,26 +88,30 @@ parameters:
 kind: StorageClass
 apiVersion: storage.k8s.io/v1
 metadata:
-  name: managed-premium-retain
+  name: custom-managed-premium
 provisioner: disk.csi.azure.com
-reclaimPolicy: Retain
+reclaimPolicy: Delete
 parameters:
-  storageaccounttype: Premium_LRS
-  kind: Managed
+  storageAccountType: Premium_LRS
 ```
 
 The CSI storage system supports the same features as the In-tree drivers, so the only change needed would be the provisioner.
-### Migrating in-tree disk persistent volumes
+## Migrating in-tree persistent volumes
 
 > [!IMPORTANT]
 > If your in-tree Persistent Volume reclaimPolicy is set to Delete you will need to change the Persistent Volume to Retain to persist your data. This can be achieved via a [patch operation on the PV](https://kubernetes.io/docs/tasks/administer-cluster/change-pv-reclaim-policy/). For example:
 > ```console
 > $ kubectl patch pv pv-azuredisk --type merge --patch '{"spec": {"persistentVolumeReclaimPolicy": "Retain"}}'
 > ```
 
-If you have in-tree persistent volumes, get disk ID from `azureDisk.diskURI` and then follow this [guide][azure-disk-static-mount] to set up CSI driver persistent volumes
+### Migrating in-tree Azure Disk persistent volumes
+
+If you have in-tree Azure Disk persistent volumes, get `diskURI` from the in-tree persistent volumes and then follow this [guide][azure-disk-static-mount] to set up CSI driver persistent volumes.
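As context for that migration step, the target of the linked guide can be sketched roughly as below. This is a hypothetical manifest, not the article's own example: the PV name, capacity, and disk URI are placeholders, and the linked guide remains authoritative.

```yaml
# Sketch of a statically provisioned PV using the Azure Disk CSI driver.
# volumeHandle takes the value previously held in azureDisk.diskURI.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-azuredisk-csi        # placeholder name
spec:
  capacity:
    storage: 10Gi               # match the original disk size
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: disk.csi.azure.com
    volumeHandle: /subscriptions/<subscription-id>/resourceGroups/<rg>/providers/Microsoft.Compute/disks/<disk-name>
    volumeAttributes:
      fsType: ext4
```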
+### Migrating in-tree Azure File persistent volumes
+
+If you have in-tree Azure File persistent volumes, get `secretName` and `shareName` from the in-tree persistent volumes and then follow this [guide][azure-file-static-mount] to set up CSI driver persistent volumes.
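The Azure Files counterpart can be sketched similarly. Again a hypothetical manifest with placeholder names; the `shareName` and `secretName` values are the ones read from the in-tree PV, and the linked guide is authoritative.

```yaml
# Sketch of a statically provisioned PV using the Azure Files CSI driver.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-azurefile-csi        # placeholder name
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: file.csi.azure.com
    volumeHandle: unique-volume-id   # any cluster-unique ID for this volume
    volumeAttributes:
      shareName: <share-name>        # from the in-tree PV's azureFile.shareName
    nodeStageSecretRef:
      name: <secret-name>            # from the in-tree PV's azureFile.secretName
      namespace: default
```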
## Next steps

@@ -140,6 +132,7 @@ If you have in-tree persistent volumes, get disk ID from `azureDisk.diskURI` and
 <!-- LINKS - internal -->
 [azure-disk-volume]: azure-disk-volume.md
 [azure-disk-static-mount]: azure-disk-volume.md#mount-disk-as-volume
+[azure-file-static-mount]: azure-files-volume.md#mount-file-share-as-a-persistent-volume
 [azure-files-pvc]: azure-files-dynamic-pv.md
 [premium-storage]: ../virtual-machines/disks-types.md
 [az-disk-list]: /cli/azure/disk#az_disk_list

articles/app-service/environment/migrate.md

Lines changed: 15 additions & 6 deletions

@@ -3,7 +3,7 @@ title: Migrate to App Service Environment v3 by using the migration feature
 description: Overview of the migration feature for migration to App Service Environment v3
 author: seligj95
 ms.topic: article
-ms.date: 2/10/2022
+ms.date: 3/14/2022
 ms.author: jordanselig
 ms.custom: references_regions
 ---
@@ -19,13 +19,22 @@ App Service can now automate migration of your App Service Environment v2 to an
 
 At this time, App Service Environment migrations to v3 using the migration feature support both [Internal Load Balancer (ILB)](create-ilb-ase.md) and [external (internet facing with public IP)](create-external-ase.md) App Service Environment v2 in the following regions:
 
-- West Central US
-- Canada Central
-- UK South
-- Germany West Central
-- East Asia
 - Australia East
+- Australia Central
 - Australia Southeast
+- Canada Central
+- Central India
+- East Asia
+- East US
+- East US 2
+- France Central
+- Germany West Central
+- Korea Central
+- Norway East
+- Switzerland North
+- UAE North
+- UK South
+- West Central US
 
 You can find the version of your App Service Environment by navigating to your App Service Environment in the [Azure portal](https://portal.azure.com) and selecting **Configuration** under **Settings** on the left-hand side. You can also use [Azure Resource Explorer](https://resources.azure.com/) and review the value of the `kind` property for your App Service Environment.

articles/applied-ai-services/form-recognizer/includes/get-started/java.md

Lines changed: 1 addition & 1 deletion

@@ -92,7 +92,7 @@ You will create the following directory structure:
 :::image type="content" source="../../media/quickstarts/java-directories.png" alt-text="Screenshot: Java directory structure":::
 
-Navigate to the java directory and create a file called *FormRecognizer.java*. Open it in your preferred editor or IDE and add the following package declaration and `import` statements:
+Navigate to the Java directory and create a file called *FormRecognizer.java*. Open it in your preferred editor or IDE and add the following package declaration and `import` statements:
 
 ```java
 import com.azure.ai.formrecognizer.*;

articles/azure-functions/bring-dependency-to-functions.md

Lines changed: 1 addition & 1 deletion

@@ -84,7 +84,7 @@ One of the simplest ways to bring in dependencies is to put the files/artifact t
 | - local.settings.json
 | - pom.xml
 ```
-For java specifically, you need to specifically include the artifacts into the build/target folder when copying resources. Here's an example on how to do it in Maven:
+For Java specifically, you need to include the artifacts in the build/target folder when copying resources. Here's an example of how to do it in Maven:
 
 ```xml
 ...

articles/azure-functions/functions-how-to-github-actions.md

Lines changed: 1 addition & 1 deletion

@@ -330,7 +330,7 @@ env:
   AZURE_FUNCTIONAPP_NAME: your-app-name # set this to your function app name on Azure
   POM_XML_DIRECTORY: '.' # set this to the directory which contains pom.xml file
   POM_FUNCTIONAPP_NAME: your-app-name # set this to the function app name in your local development environment
-  JAVA_VERSION: '1.8.x' # set this to the java version to use
+  JAVA_VERSION: '1.8.x' # set this to the Java version to use
 
 jobs:
   build-and-deploy:
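For context, these environment variables are consumed by later steps of the workflow. A minimal sketch of such steps follows; the step names and Maven invocation are assumptions for illustration, and the article's full workflow file is authoritative.

```yaml
# Hypothetical build steps consuming the env block above.
steps:
  - name: Checkout repository
    uses: actions/checkout@v2
  - name: Set up Java
    uses: actions/setup-java@v1
    with:
      java-version: ${{ env.JAVA_VERSION }}  # '1.8.x' from the env block
  - name: Build with Maven
    working-directory: ${{ env.POM_XML_DIRECTORY }}
    run: mvn clean package
```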

articles/azure-functions/functions-versions.md

Lines changed: 2 additions & 0 deletions

@@ -149,6 +149,8 @@ The following are some changes to be aware of before upgrading a 3.x app to 4.x.
 - Default and maximum timeouts are now enforced in 4.x Linux consumption function apps. ([#1915](https://github.com/Azure/Azure-Functions/issues/1915))
 
+- Azure Functions 4.x uses Azure.Identity and Azure.Security.KeyVault.Secrets for the Key Vault provider and has deprecated the use of Microsoft.Azure.KeyVault. See the Key Vault option in [Secret Repositories](security-concepts.md#secret-repositories) for more information on how to configure function app settings. ([#2048](https://github.com/Azure/Azure-Functions/issues/2048))
+
 - Function apps that share storage accounts will fail to start if their computed hostnames are the same. Use a separate storage account for each function app. ([#2049](https://github.com/Azure/Azure-Functions/issues/2049))
 
 ::: zone pivot="programming-language-csharp"
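The Key Vault secret-repository option mentioned above is driven by function app settings. A hedged sketch of those settings follows (rendered as key/value pairs for readability; the setting names are my assumption of the 4.x host's Key Vault configuration, so verify them against the linked Secret Repositories section):

```yaml
# Hypothetical app settings for Key Vault-backed secret storage in Functions 4.x.
# With Azure.Identity the host can authenticate via managed identity, so no
# client-secret setting is shown here.
AzureWebJobsSecretStorageType: keyvault
AzureWebJobsSecretStorageKeyVaultUri: https://<your-vault-name>.vault.azure.net
```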
