articles/active-directory/saas-apps/atlassian-cloud-provisioning-tutorial.md (+2 −3)
@@ -151,8 +151,7 @@ Once you've configured provisioning, use the following resources to monitor your
3. If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
## Connector Limitations
-* Atlassian Cloud allows provisioning of users only from [verified domains](https://confluence.atlassian.com/cloud/organization-administration-938859734.html).
+* Atlassian Cloud only supports provisioning updates for users from verified domains. Changes made to users from a non-verified domain won't be pushed to Atlassian Cloud. Learn more about Atlassian verified domains [here](https://support.atlassian.com/provisioning-users/docs/understand-user-provisioning/).
* Atlassian Cloud does not support group renames today. This means that any changes to the displayName of a group in Azure AD will not be updated and reflected in Atlassian Cloud.
* The value of the **mail** user attribute in Azure AD is only populated if the user has a Microsoft Exchange Mailbox. If the user does not have one, it is recommended to map a different desired attribute to the **emails** attribute in Atlassian Cloud.
@@ -172,4 +171,4 @@ Once you've configured provisioning, use the following resources to monitor your
articles/aks/certificate-rotation.md (+17 −12)
@@ -1,16 +1,19 @@
---
-title: Rotate certificates in Azure Kubernetes Service (AKS)
-description: Learn how to rotate your certificates in an Azure Kubernetes Service (AKS) cluster.
+title: Certificate Rotation in Azure Kubernetes Service (AKS)
+description: Learn about certificate rotation in an Azure Kubernetes Service (AKS) cluster.
services: container-service
ms.topic: article
-ms.date: 3/4/2022
+ms.date: 3/29/2022
---
-# Rotate certificates in Azure Kubernetes Service (AKS)
+# Certificate rotation in Azure Kubernetes Service (AKS)
-Azure Kubernetes Service (AKS) uses certificates for authentication with many of its components. Periodically, you may need to rotate those certificates for security or policy reasons. For example, you may have a policy to rotate all your certificates every 90 days.
+Azure Kubernetes Service (AKS) uses certificates for authentication with many of its components. If you have an RBAC-enabled cluster built after March 2022, it is enabled with certificate auto-rotation. Periodically, you may need to rotate those certificates for security or policy reasons. For example, you may have a policy to rotate all your certificates every 90 days.

-This article shows you how to rotate the certificates in your AKS cluster.
+> [!NOTE]
+> Certificate auto-rotation is not enabled by default for non-RBAC AKS clusters.
+
+This article shows you how certificate rotation works in your AKS cluster.
## Before you begin
@@ -28,7 +31,7 @@ AKS generates and uses the following certificates, Certificate Authorities, and
* The `kubectl` client has a certificate for communicating with the AKS cluster.
> [!NOTE]
-> AKS clusters created prior to May 2019 have certificates that expire after two years. Any cluster created after May 2019 or any cluster that has its certificates rotated have Cluster CA certificates that expire after 30 years. All other AKS certificates, which use the Cluster CA to for signing, will expire after two years and are automatically rotated during AKS version upgrade happened after 8/1/2021. To verify when your cluster was created, use `kubectl get nodes` to see the *Age* of your node pools.
+> AKS clusters created prior to May 2019 have certificates that expire after two years. Any cluster created after May 2019, or any cluster that has its certificates rotated, has Cluster CA certificates that expire after 30 years. All other AKS certificates, which use the Cluster CA for signing, expire after two years and are automatically rotated during any AKS version upgrade that happens after 8/1/2021. To verify when your cluster was created, use `kubectl get nodes` to see the *Age* of your node pools.
>
33
36
> Additionally, you can check the expiration date of your cluster's certificate. For example, the following bash command displays the client certificate details for the *myAKSCluster* cluster in resource group *rg*.
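The command referenced above is truncated from this diff. As a hedged sketch of one way to do it, the helper below decodes a base64-encoded client certificate (as stored in a kubeconfig's `client-certificate-data` field) and prints its expiration date; the `clusterUser_rg_myAKSCluster` user name in the usage comment follows the `clusterUser_<resourceGroup>_<clusterName>` convention and is an assumption.

```shell
# Decode a base64-encoded PEM client certificate and print its notAfter date.
cert_enddate() {
  # $1: base64-encoded certificate data, e.g. kubeconfig's client-certificate-data
  printf '%s' "$1" | base64 -d | openssl x509 -noout -enddate
}

# Example usage against an AKS kubeconfig (user name is a hypothetical example):
#   cert_enddate "$(kubectl config view --raw \
#     -o jsonpath="{.users[?(@.name == 'clusterUser_rg_myAKSCluster')].user.client-certificate-data}")"
```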
Azure Kubernetes Service will automatically rotate non-CA certificates on both the control plane and agent nodes before they expire, with no downtime for the cluster.

For AKS to automatically rotate non-CA certificates, the cluster must have [TLS Bootstrapping](https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/), which is enabled by default in all Azure regions.

+> [!NOTE]
+> If you have an existing cluster, you have to upgrade that cluster to enable certificate auto-rotation.
+
+For any AKS cluster created or upgraded after March 2022, Azure Kubernetes Service automatically rotates non-CA certificates on both the control plane and agent nodes within 80% of the client certificate's valid time, before they expire, with no downtime for the cluster.
#### How to check whether the current agent node pool has TLS Bootstrapping enabled
To verify whether TLS Bootstrapping is enabled on your cluster, browse to the following paths. On a Linux node: `/var/lib/kubelet/bootstrap-kubeconfig`; on a Windows node: `c:\k\bootstrap-config`.
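As a small hedged sketch (the helper name is ours, not from the article), the check above amounts to testing for the bootstrap kubeconfig file on the node, for example after opening a shell on it:

```shell
# Report whether the kubelet bootstrap kubeconfig exists at the given path.
tls_bootstrap_enabled() {
  # $1: path to check; defaults to the Linux kubelet path from the article
  path="${1:-/var/lib/kubelet/bootstrap-kubeconfig}"
  if [ -f "$path" ]; then
    echo "TLS Bootstrapping appears enabled ($path exists)"
  else
    echo "TLS Bootstrapping file not found at $path"
  fi
}
```

On a Windows node, check `c:\k\bootstrap-config` instead.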
@@ -69,8 +75,7 @@ To verify if TLS Bootstrapping is enabled on your cluster browse to the followin
Auto certificate rotation won't be enabled on a non-RBAC cluster.
-## Rotate your cluster certificates
+## Manually rotate your cluster certificates
> [!WARNING]
> Rotating your certificates using `az aks rotate-certs` will recreate all of your nodes and their OS Disks and can cause up to 30 minutes of downtime for your AKS cluster.
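A hedged sketch of the manual rotation call mentioned in the warning; the resource group and cluster names are placeholders. The command is echoed rather than executed so it can be reviewed first (drop the `echo` to run it for real, and expect the downtime described above):

```shell
# Placeholder names; substitute your own resource group and cluster.
RESOURCE_GROUP="rg"
CLUSTER_NAME="myAKSCluster"

# Build the rotation command shown in the article's warning.
ROTATE_CMD="az aks rotate-certs --resource-group $RESOURCE_GROUP --name $CLUSTER_NAME"

# Dry run: print the command instead of executing it.
echo "$ROTATE_CMD"
```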
description: Learn about storage in Azure Kubernetes Service (AKS), including volumes, persistent volumes, storage classes, and claims
services: container-service
ms.topic: conceptual
-ms.date: 03/11/2021
+ms.date: 03/30/2022
---
@@ -30,23 +30,33 @@ This article introduces the core concepts that provide storage to your applicati
Kubernetes typically treats individual pods as ephemeral, disposable resources. Applications have different approaches available to them for using and persisting data. A *volume* represents a way to store, retrieve, and persist data across pods and through the application lifecycle.
-Traditional volumes are created as Kubernetes resources backed by Azure Storage. You can manually create data volumes to be assigned to pods directly, or have Kubernetes automatically create them. Data volumes can use Azure Disks or Azure Files.
+Traditional volumes are created as Kubernetes resources backed by Azure Storage. You can manually create data volumes to be assigned to pods directly, or have Kubernetes automatically create them. Data volumes can use: [Azure Disks][disks-types], [Azure Files][storage-files-planning], [Azure NetApp Files][azure-netapp-files-service-levels], or [Azure Blobs][storage-account-overview].
### Azure Disks
-Use *Azure Disks* to create a Kubernetes *DataDisk* resource. Disks can use:
-* Azure Premium storage, backed by high-performance SSDs, or
-* Azure Standard storage, backed by regular HDDs.
+Use *Azure Disks* to create a Kubernetes *DataDisk* resource. Disk types include:
+* Ultra Disks
+* Premium SSDs
+* Standard SSDs
+* Standard HDDs
> [!TIP]
-> For most production and development workloads, use Premium storage.
+> For most production and development workloads, use Premium SSD.
Since Azure Disks are mounted as *ReadWriteOnce*, they're only available to a single pod. For storage volumes that can be accessed by multiple pods simultaneously, use Azure Files.
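To make the access-mode distinction concrete, here is a hedged sketch of a claim that multiple pods can mount simultaneously, using the `azurefile` storage class; the claim name is a placeholder:

```yaml
# Hypothetical example: ReadWriteMany is possible with Azure Files,
# unlike Azure Disks, which mount as ReadWriteOnce.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-azurefile-pvc   # placeholder name
spec:
  accessModes:
    - ReadWriteMany            # accessible from multiple pods at once
  storageClassName: azurefile
  resources:
    requests:
      storage: 5Gi
```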
### Azure Files
-Use *Azure Files* to mount an SMB 3.0 share backed by an Azure Storage account to pods. Files let you share data across multiple nodes and pods and can use:
-* Azure Premium storage, backed by high-performance SSDs, or
-* Azure Standard storage backed by regular HDDs.
+Use *Azure Files* to mount an SMB 3.1.1 share or NFS 4.1 share backed by an Azure storage account to pods. Files let you share data across multiple nodes and pods and can use:
+* Azure Premium storage backed by high-performance SSDs
+* Azure Standard storage backed by regular HDDs
+
+### Azure NetApp Files
+* Ultra Storage
+* Premium Storage
+* Standard Storage
+
+### Azure Blob Storage
+* Block Blobs
### Volume types
Kubernetes volumes represent more than just a traditional disk for storing and retrieving information. Kubernetes volumes can also be used as a way to inject data into a pod for use by the containers.
@@ -92,15 +102,6 @@ To define different tiers of storage, such as Premium and Standard, you can crea
The StorageClass also defines the *reclaimPolicy*. When you delete the pod and the persistent volume is no longer required, the reclaimPolicy controls the behavior of the underlying Azure storage resource. The underlying storage resource can either be deleted or kept for use with a future pod.
-In AKS, four initial `StorageClasses` are created for cluster using the in-tree storage plugins:
-
-| Permission | Reason |
-|---|---|
-|`default`| Uses Azure StandardSSD storage to create a Managed Disk. The reclaim policy ensures that the underlying Azure Disk is deleted when the persistent volume that used it is deleted. |
-|`managed-premium`| Uses Azure Premium storage to create a Managed Disk. The reclaim policy again ensures that the underlying Azure Disk is deleted when the persistent volume that used it is deleted. |
-|`azurefile`| Uses Azure Standard storage to create an Azure File Share. The reclaim policy ensures that the underlying Azure File Share is deleted when the persistent volume that used it is deleted. |
-|`azurefile-premium`| Uses Azure Premium storage to create an Azure File Share. The reclaim policy ensures that the underlying Azure File Share is deleted when the persistent volume that used it is deleted.|
For clusters using the [Container Storage Interface (CSI) drivers][csi-storage-drivers] the following extra `StorageClasses` are created:
| Permission | Reason |
@@ -118,23 +119,24 @@ Unless you specify a StorageClass for a persistent volume, the default StorageCl
You can create a StorageClass for additional needs using `kubectl`. The following example uses Premium Managed Disks and specifies that the underlying Azure Disk should be *retained* when you delete the pod:
```yaml
-kind: StorageClass
apiVersion: storage.k8s.io/v1
+kind: StorageClass
metadata:
  name: managed-premium-retain
-provisioner: kubernetes.io/azure-disk
-reclaimPolicy: Retain
+provisioner: disk.csi.azure.com
parameters:
-  storageaccounttype: Premium_LRS
-  kind: Managed
+  skuName: Premium_LRS
+reclaimPolicy: Retain
+volumeBindingMode: WaitForFirstConsumer
+allowVolumeExpansion: true
```
> [!NOTE]
> AKS reconciles the default storage classes and will overwrite any changes you make to those storage classes.
## Persistent volume claims
-A PersistentVolumeClaim requests either Disk or File storage of a particular StorageClass, access mode, and size. The Kubernetes API server can dynamically provision the underlying Azure storage resource if no existing resource can fulfill the claim based on the defined StorageClass.
+A PersistentVolumeClaim requests storage of a particular StorageClass, access mode, and size. The Kubernetes API server can dynamically provision the underlying Azure storage resource if no existing resource can fulfill the claim based on the defined StorageClass.
The pod definition includes the volume mount once the volume has been connected to the pod.
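As a hedged illustration of that statement (the pod, image, and claim names are placeholders, not from the article), a pod consumes a claim by naming it under `volumes` and mounting it under `volumeMounts`:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod                          # placeholder name
spec:
  containers:
    - name: app
      image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine
      volumeMounts:
        - name: data
          mountPath: /mnt/azure           # where the volume appears in the container
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: azure-managed-disk     # placeholder claim name
```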
@@ -152,7 +154,7 @@ metadata:
spec:
  accessModes:
    - ReadWriteOnce
-  storageClassName: managed-premium
+  storageClassName: managed-premium-retain
  resources:
    requests:
      storage: 5Gi
@@ -198,12 +200,12 @@ For mounting a volume in a Windows container, specify the drive letter and path.
For associated best practices, see [Best practices for storage and backups in AKS][operator-best-practices-storage].
-To see how to create dynamic and static volumes that use Azure Disks or Azure Files, see the following how-to articles:
+To see how to use CSI drivers, see the following how-to articles:

-- [Create a static volume using Azure Disks][aks-static-disks]
-- [Create a static volume using Azure Files][aks-static-files]
-- [Create a dynamic volume using Azure Disks][aks-dynamic-disks]
-- [Create a dynamic volume using Azure Files][aks-dynamic-files]
+- [Enable Container Storage Interface (CSI) drivers for Azure Disks and Azure Files on Azure Kubernetes Service (AKS)][csi-storage-drivers]
+- [Use Azure Disk Container Storage Interface (CSI) drivers in Azure Kubernetes Service (AKS)][azure-disk-csi]
articles/aks/kubernetes-walkthrough.md (+3 −0)
@@ -293,6 +293,8 @@ To learn more about AKS, and walk through a complete code to deployment example,
> [!div class="nextstepaction"]
> [AKS tutorial][aks-tutorial]
+
+This quickstart is for introductory purposes. For guidance on creating full solutions with AKS for production, see [AKS solution guidance][aks-solution-guidance].
articles/aks/managed-aad.md (+1 −1)
@@ -190,7 +190,7 @@ When deploying an AKS Cluster, local accounts are enabled by default. Even when
> On clusters with Azure AD integration enabled, users belonging to a group specified by `aad-admin-group-object-ids` will still be able to gain access via non-admin credentials. On clusters without Azure AD integration enabled and `properties.disableLocalAccounts` set to true, obtaining both user and admin credentials will fail.
> [!NOTE]
-> After disabling local accounts users on an already existing AKS cluster where users might have used local account/s, admin must [rotate the cluster certificates](certificate-rotation.md#rotate-your-cluster-certificates), in order to revoke the certificates those users might have access to. If this is a new cluster than no action is required.
+> After disabling local accounts on an existing AKS cluster where users might have used local accounts, the admin must [rotate the cluster certificates](certificate-rotation.md) to revoke any certificates those users might have access to. If this is a new cluster, then no action is required.