articles/active-directory/app-provisioning/skip-out-of-scope-deletions.md (1 addition, 1 deletion)
@@ -76,7 +76,7 @@ Copy the updated text from Step 3 into the "Request Body".
 Click on “Run Query”.
 
-You should get the output as "Success – Status Code 204".
+You should get the output as "Success – Status Code 204". If you receive an error, you may need to check that your account has Read/Write permissions for ServicePrincipalEndpoint. You can find this permission by selecting the *Modify permissions* tab in Graph Explorer.
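The same request can be scripted outside Graph Explorer. The sketch below, using only the Python standard library, builds the PUT request and maps the status codes mentioned above to a next step. The synchronization-schema endpoint path, IDs, and token are illustrative assumptions, not values taken from this article.

```python
# Hedged sketch: build the Graph request whose success case is "Status Code 204".
# The endpoint path, IDs, and token below are placeholders (assumptions).
import json
import urllib.request


def build_schema_request(sp_id: str, job_id: str, token: str, schema: dict):
    """Construct the PUT request for the synchronization job schema."""
    url = (f"https://graph.microsoft.com/beta/servicePrincipals/{sp_id}"
           f"/synchronization/jobs/{job_id}/schema")
    return urllib.request.Request(
        url,
        data=json.dumps(schema).encode("utf-8"),
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="PUT",  # Graph returns 204 No Content on success
    )


def interpret_status(code: int) -> str:
    """Map the status codes discussed above to a suggested next step."""
    if code == 204:
        return "Success - schema updated"
    if code in (401, 403):
        return ("Check Read/Write ServicePrincipalEndpoint permissions "
                "under the 'Modify permissions' tab in Graph Explorer")
    return f"Unexpected status {code}"
```

Note that nothing here sends the request; it only shows the shape of the call, so it can be inspected before running against a real tenant.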
articles/active-directory/hybrid/how-to-connect-install-prerequisites.md (0 additions, 1 deletion)
@@ -36,7 +36,6 @@ Before you install Azure AD Connect, there are a few things that you need.
 ### On-premises Active Directory
 
 * The Active Directory schema version and forest functional level must be Windows Server 2003 or later. The domain controllers can run any version as long as the schema version and forest-level requirements are met.
-* If you plan to use the feature *password writeback*, the domain controllers must be on Windows Server 2016 or later.
 * The domain controller used by Azure AD must be writable. Using a read-only domain controller (RODC) *isn't supported*, and Azure AD Connect doesn't follow any write redirects.
 * Using on-premises forests or domains with "dotted" (name contains a period ".") NetBIOS names *isn't supported*.
 * We recommend that you [enable the Active Directory recycle bin](how-to-connect-sync-recycle-bin.md).
# Meeting identity requirements of Memorandum 22-09 with Azure Active Directory

-This series of articles offer guidance for employing Azure Active Directory (Azure AD) as a centralized identity management system for implementing Zero Trust principles as described by the US Federal Government’s Office of Management and Budget (OMB) [Memorandum M-22-09](https://www.whitehouse.gov/wp-content/uploads/2022/01/M-22-09.pdf). Throughout this document wee refer to it as "The memo."
+This series of articles offers guidance for employing Azure Active Directory (Azure AD) as a centralized identity management system for implementing Zero Trust principles as described by the US Federal Government’s Office of Management and Budget (OMB) [Memorandum M-22-09](https://www.whitehouse.gov/wp-content/uploads/2022/01/M-22-09.pdf). Throughout this document we refer to it as "The memo."

 The release of Memorandum 22-09 is designed to support Zero Trust initiatives within federal agencies; it also provides regulatory guidance in supporting federal cybersecurity and data privacy laws. The memo cites the [Department of Defense (DoD) Zero Trust Reference Architecture](https://dodcio.defense.gov/Portals/0/Documents/Library/(U)ZT_RA_v1.1(U)_Mar21.pdf),
 description: Learn how to enable the Container Storage Interface (CSI) drivers for Azure disks and Azure Files in an Azure Kubernetes Service (AKS) cluster.
 services: container-service
 ms.topic: article
-ms.date: 03/10/2022
+ms.date: 03/11/2022
 author: palma21
 ---
@@ -17,18 +17,12 @@ The CSI storage driver support on AKS allows you to natively use:
 - [*Azure Files*](azure-files-csi.md), which can be used to mount an SMB 3.0/3.1 share backed by an Azure Storage account to pods. With Azure Files, you can share data across multiple nodes and pods. Azure Files can use Azure Standard Storage backed by regular HDDs or Azure Premium Storage backed by high-performance SSDs.
 
 > [!IMPORTANT]
-> Starting in Kubernetes version 1.21, Kubernetes will use CSI drivers only and by default. These drivers are the future of storage support in Kubernetes.
+> Starting in Kubernetes version 1.21, AKS uses CSI drivers only and by default. CSI migration is also turned on starting with AKS 1.21. Existing in-tree persistent volumes continue to function as they always have; however, behind the scenes, Kubernetes hands control of all storage management operations (previously targeting in-tree drivers) to CSI drivers.
 >
 > Remove manually installed open-source Azure Disk and Azure File CSI drivers before upgrading to AKS 1.21.
 >
 > *In-tree drivers* refers to the current storage drivers that are part of the core Kubernetes code, versus the new CSI drivers, which are plug-ins.
 
-## Limitations
-
-- This feature can only be set at cluster creation time.
-- The minimum Kubernetes minor version that supports CSI drivers is v1.17.
-- The default storage class will be the `managed-csi` storage class.
-
 ## Install CSI storage drivers on a new cluster with version < 1.21
 
 Create a new cluster that can use CSI storage drivers for Azure disks and Azure Files by using the following CLI commands. Use the `--aks-custom-headers` flag to set the `EnableAzureDiskFileCSIDriver` feature.
 - [Set up Azure File CSI driver on AKS cluster](https://github.com/kubernetes-sigs/azurefile-csi-driver/blob/master/docs/install-driver-on-aks.md)
 
 ## Migrating custom in-tree storage classes to CSI
 
-If you have created custom storage classes based on the in-tree storage drivers, these will need to be migrated when you have upgraded your cluster to 1.21.x.
-
-Whilst explicit migration to the CSI provider is not needed for your storage classes to still be valid, to be able to use CSI features (snapshotting etc.) you will need to carry out the migration.
-
-Migration of these storage classes will involve deleting the existing storage classes, and re-provisioning them with the provisioner set to **disk.csi.azure.com** if using Azure Disks, and **files.csi.azure.com** if using Azure Files.
-
-Whilst this will update the mapping of the storage classes, the binding of the Persistent Volume to the CSI provisioner will only take place at provisioning time. This could be during a cordon & drain operation (cluster update) or by detaching and reattaching the Volume.
+If you have created in-tree driver storage classes, those storage classes continue to work after your cluster is upgraded to 1.21.x because CSI migration is turned on. However, if you want to use CSI features (snapshotting and so on), you will need to carry out the migration.
+
+Migrating these storage classes involves deleting the existing storage classes and re-creating them with the provisioner set to **disk.csi.azure.com** if using Azure Disks, and **file.csi.azure.com** if using Azure Files.
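The rewrite described here amounts to swapping the provisioner for its CSI counterpart (and, per the YAML examples later in this section, renaming the disk parameter key). A minimal sketch in Python, assuming the storage class is represented as a plain dict of its YAML fields — this is an illustration, not an official AKS tool:

```python
# Hedged sketch: rewrite an in-tree StorageClass definition for CSI.
# The provisioner names follow the migration steps described in this article.
IN_TREE_TO_CSI = {
    "kubernetes.io/azure-disk": "disk.csi.azure.com",
    "kubernetes.io/azure-file": "file.csi.azure.com",
}


def to_csi_storage_class(sc: dict) -> dict:
    """Return a copy of `sc` with the provisioner swapped to its CSI driver."""
    out = dict(sc)
    out["provisioner"] = IN_TREE_TO_CSI[sc["provisioner"]]
    # The in-tree disk examples use the lowercase `storageaccounttype` key;
    # the CSI examples in this article use `storageAccountType`.
    params = dict(sc.get("parameters", {}))
    if "storageaccounttype" in params:
        params["storageAccountType"] = params.pop("storageaccounttype")
    out["parameters"] = params
    return out
```

In practice you would render the returned dict back to YAML, delete the old class, and `kubectl apply` the new one, since StorageClass objects are immutable in the fields being changed.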
 ### Migrating Storage Class provisioner
@@ -86,12 +75,11 @@ As an example for Azure disks:
 kind: StorageClass
 apiVersion: storage.k8s.io/v1
 metadata:
-  name: managed-premium-retain
+  name: custom-managed-premium
 provisioner: kubernetes.io/azure-disk
-reclaimPolicy: Retain
+reclaimPolicy: Delete
 parameters:
-  storageaccounttype: Premium_LRS
-  kind: Managed
+  storageAccountType: Premium_LRS
 ```
 
#### CSI storage class definition
@@ -100,26 +88,30 @@ parameters:
 kind: StorageClass
 apiVersion: storage.k8s.io/v1
 metadata:
-  name: managed-premium-retain
+  name: custom-managed-premium
 provisioner: disk.csi.azure.com
-reclaimPolicy: Retain
+reclaimPolicy: Delete
 parameters:
-  storageaccounttype: Premium_LRS
-  kind: Managed
+  storageAccountType: Premium_LRS
 ```
 
 The CSI storage system supports the same features as the in-tree drivers, so the only change needed would be the provisioner.
-### Migrating in-tree disk persistent volumes
+## Migrating in-tree persistent volumes
 > [!IMPORTANT]
 > If your in-tree Persistent Volume reclaimPolicy is set to Delete, you will need to change the Persistent Volume to Retain to persist your data. This can be achieved via a [patch operation on the PV](https://kubernetes.io/docs/tasks/administer-cluster/change-pv-reclaim-policy/). For example:
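The linked patch operation can also be scripted. A small sketch that builds the `kubectl patch` invocation for the reclaim-policy change (the PV name is a placeholder):

```python
# Hedged sketch: construct the `kubectl patch` command that flips a PV's
# reclaim policy to Retain before migration. The PV name is a placeholder.
import json


def retain_patch_command(pv_name: str) -> list:
    """Return the argv for `kubectl patch pv <name> -p <json>`."""
    patch = {"spec": {"persistentVolumeReclaimPolicy": "Retain"}}
    return ["kubectl", "patch", "pv", pv_name, "-p", json.dumps(patch)]
```

Passing the returned list to `subprocess.run` avoids shell-quoting problems with the embedded JSON.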
-If you have in-tree persistent volumes, get the disk ID from `azureDisk.diskURI` and then follow this [guide][azure-disk-static-mount] to set up CSI driver persistent volumes.
+### Migrating in-tree Azure Disk persistent volumes
+
+If you have in-tree Azure Disk persistent volumes, get `diskURI` from the in-tree persistent volumes and then follow this [guide][azure-disk-static-mount] to set up CSI driver persistent volumes.
 
 If you have in-tree Azure File persistent volumes, get `secretName` and `shareName` from the in-tree persistent volumes and then follow this [guide][azure-file-static-mount] to set up CSI driver persistent volumes.
## Next steps
@@ -140,6 +132,7 @@ If you have in-tree persistent volumes, get disk ID from `azureDisk.diskURI` and
articles/app-service/environment/migrate.md (15 additions, 6 deletions)
@@ -3,7 +3,7 @@ title: Migrate to App Service Environment v3 by using the migration feature
 description: Overview of the migration feature for migration to App Service Environment v3
 author: seligj95
 ms.topic: article
-ms.date: 2/10/2022
+ms.date: 3/14/2022
 ms.author: jordanselig
 ms.custom: references_regions
 ---
@@ -19,13 +19,22 @@ App Service can now automate migration of your App Service Environment v2 to an
 At this time, App Service Environment migrations to v3 using the migration feature support both [Internal Load Balancer (ILB)](create-ilb-ase.md) and [external (internet facing with public IP)](create-external-ase.md) App Service Environment v2 in the following regions:
 
-- West Central US
-- Canada Central
-- UK South
-- Germany West Central
-- East Asia
 - Australia East
+- Australia Central
 - Australia Southeast
+- Canada Central
+- Central India
+- East Asia
+- East US
+- East US 2
+- France Central
+- Germany West Central
+- Korea Central
+- Norway East
+- Switzerland North
+- UAE North
+- UK South
+- West Central US
 
 You can find the version of your App Service Environment by navigating to your App Service Environment in the [Azure portal](https://portal.azure.com) and selecting **Configuration** under **Settings** on the left-hand side. You can also use [Azure Resource Explorer](https://resources.azure.com/) and review the value of the `kind` property for your App Service Environment.
-Navigate to the java directory and create a file called *FormRecognizer.java*. Open it in your preferred editor or IDE and add the following package declaration and`import` statements:
+Navigate to the Java directory and create a file called *FormRecognizer.java*. Open it in your preferred editor or IDE and add the following package declaration and `import` statements:
articles/azure-functions/bring-dependency-to-functions.md (1 addition, 1 deletion)
@@ -84,7 +84,7 @@ One of the simplest ways to bring in dependencies is to put the files/artifact t
 | - local.settings.json
 | - pom.xml
 ```
-For java specifically, you need to specifically include the artifacts into the build/target folder when copying resources. Here's an example on how to do it in Maven:
+For Java specifically, you need to include the artifacts in the build/target folder when copying resources. Here's an example of how to do it in Maven:
articles/azure-functions/functions-versions.md (2 additions, 0 deletions)
@@ -149,6 +149,8 @@ The following are some changes to be aware of before upgrading a 3.x app to 4.x.
 - Default and maximum timeouts are now enforced in 4.x Linux consumption function apps. ([#1915](https://github.com/Azure/Azure-Functions/issues/1915))
 
+- Azure Functions 4.x uses Azure.Identity and Azure.Security.KeyVault.Secrets for the Key Vault provider and has deprecated the use of Microsoft.Azure.KeyVault. See the Key Vault option in [Secret Repositories](security-concepts.md#secret-repositories) for more information on how to configure function app settings. ([#2048](https://github.com/Azure/Azure-Functions/issues/2048))
+
 - Function apps that share storage accounts will fail to start if their computed hostnames are the same. Use a separate storage account for each function app. ([#2049](https://github.com/Azure/Azure-Functions/issues/2049))