articles/aks/faq.md
AKS initiates auto-upgrades for unsupported clusters. When a cluster running an n-3 version (where n is the latest supported AKS GA minor version) is about to drop to n-4, AKS automatically upgrades the cluster to n-2 to remain within the AKS support [policy][supported-kubernetes-versions]. Automatic upgrade of a platform-supported cluster to a supported version is enabled by default.
For example, Kubernetes v1.25 upgrades to v1.26 during the v1.29 GA release. To minimize disruptions, set up [maintenance windows][planned-maintenance]. See [auto-upgrade][auto-upgrade-cluster] for details on automatic upgrade channels.
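As an illustration of configuring an automatic upgrade channel, the Azure CLI can set it on an existing cluster. This is a minimal sketch; the resource group and cluster names are placeholders:

```shell
# Enable the "patch" auto-upgrade channel so AKS automatically applies
# patch releases within the cluster's current minor version.
# (myResourceGroup and myAKSCluster are placeholder names.)
az aks update \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --auto-upgrade-channel patch
```

Other channel values include `stable`, `rapid`, `node-image`, and `none`; see the auto-upgrade documentation linked above for their semantics.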
### Can I run Windows Server containers on AKS?
### Are security updates applied to AKS agent nodes?
AKS patches CVEs that have a vendor fix every week. CVEs without a fix wait for a vendor fix before they can be remediated. The AKS images are automatically updated within 30 days. We recommend applying an updated node image on a regular cadence to ensure the latest patched images and OS patches are applied and current. You can do this using one of the following methods:
- Manually, through the Azure portal or the Azure CLI.
- By upgrading your AKS cluster. The upgrade automatically [cordons and drains nodes][cordon-drain] and then brings a new node online with the latest Ubuntu image and a new patch version or minor Kubernetes version. For more information, see [Upgrade an AKS cluster][aks-upgrade].
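For the manual route, a node-image-only upgrade can be triggered per node pool with the Azure CLI. This is a sketch with placeholder resource names:

```shell
# Upgrade only the node image of a node pool, leaving the cluster's
# Kubernetes version unchanged.
# (myResourceGroup, myAKSCluster, and mynodepool are placeholder names.)
az aks nodepool upgrade \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name mynodepool \
  --node-image-only
```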
No, all data is stored in the cluster's region.
### How can I avoid slow permission and ownership changes when a volume has numerous files
Traditionally, if your pod runs as a nonroot user (which it should), you must specify an `fsGroup` in the pod's security context so the volume is readable and writable by the pod. This requirement is covered in more detail [here](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/).
A side effect of setting `fsGroup` is that each time a volume is mounted, Kubernetes must recursively `chown()` and `chmod()` all the files and directories inside the volume (with a few exceptions noted below). This happens even if the group ownership of the volume already matches the requested `fsGroup`. It can be expensive for larger volumes with many small files, causing pod startup to take a long time. This was a known problem before v1.20, and the workaround is to run the pod as root:
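A minimal sketch of the workaround, assuming a hypothetical pod, image, and PVC name; with `runAsUser: 0` the pod runs as root, so no `fsGroup` (and therefore no recursive ownership change on mount) is needed:

```yaml
# Hypothetical example: run the pod as root so Kubernetes does not
# need to recursively chown/chmod the volume on mount.
apiVersion: v1
kind: Pod
metadata:
  name: example-pod          # placeholder name
spec:
  securityContext:
    runAsUser: 0             # run as root; fsGroup is omitted
  containers:
  - name: app
    image: nginx             # placeholder image
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: example-pvc # placeholder PVC name
```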
Most commonly, this error arises if you have one or more Network Security Groups (NSGs) still in use that are associated with the cluster. Remove them and attempt the delete again.
### I ran an upgrade, but now my pods are in crash loops, and readiness probes fail
Confirm your service principal hasn't expired. See: [AKS service principal](./kubernetes-service-principal.md) and [AKS update credentials](./update-credentials.md).
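One way to check the expiry, sketched with the Azure CLI and placeholder resource names (the exact credential-listing command and output fields may vary by CLI version):

```shell
# Look up the cluster's service principal client ID, then list that
# application's credentials to inspect their expiry dates.
# (myResourceGroup and myAKSCluster are placeholder names.)
SP_ID=$(az aks show \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --query servicePrincipalProfile.clientId -o tsv)

az ad app credential list --id "$SP_ID" \
  --query "[].endDateTime" -o tsv
```

If the credential has expired, the linked update-credentials article describes how to reset it.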
### My cluster was working, but suddenly can't provision LoadBalancers, mount PVCs, etc.
Confirm your service principal hasn't expired. See: [AKS service principal](./kubernetes-service-principal.md) and [AKS update credentials](./update-credentials.md).