articles/aks/concepts-clusters-workloads.md (1 addition & 3 deletions)

@@ -363,9 +363,7 @@ Two Kubernetes resources, however, let you manage these types of applications: *
Modern application development often aims for stateless applications. For stateful applications, like those that include database components, you can use *StatefulSets*. Like deployments, a StatefulSet creates and manages at least one identical pod. Replicas in a StatefulSet follow a graceful, sequential approach to deployment, scale, upgrade, and termination operations. The naming convention, network names, and storage persist as replicas are rescheduled with a StatefulSet.

- You can define the application in YAML format using `kind: StatefulSet`. From there, the StatefulSet Controller handles the deployment and management of the required replicas. Data writes to persistent storage, provided by Azure Managed Disks or Azure Files. With StatefulSets, the underlying persistent storage remains, even when the StatefulSet is deleted.
-
- For more information, see [Kubernetes StatefulSets][kubernetes-statefulsets].
+ You can define the application in YAML format using `kind: StatefulSet`. From there, the StatefulSet Controller handles the deployment and management of the required replicas. Data writes to persistent storage, provided by Azure Managed Disks or Azure Files. The underlying persistent storage remains even when the StatefulSet is deleted, unless the `spec.persistentVolumeClaimRetentionPolicy` is set to `Delete`. For more information, see [Kubernetes StatefulSets][kubernetes-statefulsets].

> [!IMPORTANT]
> Replicas in a StatefulSet are scheduled and run across any available node in an AKS cluster. To ensure at least one pod in your set runs on a node, you should use a DaemonSet instead.
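
For illustration (this block isn't part of the change above), a minimal sketch of a StatefulSet manifest that uses the retention policy mentioned in the updated paragraph. The name, image, storage class, and sizes are placeholders, a matching headless Service is assumed to exist, and `persistentVolumeClaimRetentionPolicy` requires a Kubernetes version where that feature is enabled:

```bash
# Hypothetical example, not from the article: a StatefulSet whose PVCs are
# deleted along with the StatefulSet, per spec.persistentVolumeClaimRetentionPolicy.
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: example-web
spec:
  serviceName: example-web          # assumes a matching headless Service exists
  replicas: 2
  persistentVolumeClaimRetentionPolicy:
    whenDeleted: Delete             # remove PVCs when the StatefulSet is deleted
    whenScaled: Retain              # keep PVCs when replicas are scaled down
  selector:
    matchLabels:
      app: example-web
  template:
    metadata:
      labels:
        app: example-web
    spec:
      containers:
        - name: web
          image: nginx:1.25         # placeholder workload
          volumeMounts:
            - name: data
              mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: managed-csi   # AKS managed disk storage class
        resources:
          requests:
            storage: 8Gi
EOF
```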

articles/aks/node-autoprovision.md (66 additions & 52 deletions)

@@ -4,7 +4,8 @@ description: Learn about Azure Kubernetes Service (AKS) node autoprovisioning (p
ms.topic: article
ms.custom: devx-track-azurecli
ms.date: 01/18/2024
- ms.author: juda
+ ms.author: schaffererin
+ author: schaffererin
#Customer intent: As a cluster operator or developer, how to scale my cluster based on workload requirements and right size my nodes automatically
---

@@ -68,7 +69,7 @@ NAP is based on the Open Source [Karpenter](https://karpenter.sh) project, and t
- The only network configuration allowed is Cilium + Overlay + Azure
- You can't enable in a cluster where node pools have cluster autoscaler enabled

- ### Unsupported features:
+ ### Unsupported features

- Windows node pools
- Applying custom configuration to the node kubelet

@@ -84,69 +85,82 @@ NAP is based on the Open Source [Karpenter](https://karpenter.sh) project, and t
## Enable node autoprovisioning

- To enable node autoprovisioning, create a new cluster using the az aks create command and set --node-provisioning-mode to "Auto". You'll also need to use overlay networking and the cilium network policy.
+ ### Enable node autoprovisioning on a new cluster

### [Azure CLI](#tab/azure-cli)

- ```azurecli-interactive
- az aks create --name karpuktest --resource-group karpuk --node-provisioning-mode Auto --network-plugin azure --network-plugin-mode overlay --network-dataplane cilium
+ - Enable node autoprovisioning on a new cluster using the `az aks create` command and set `--node-provisioning-mode` to `Auto`. You also need to set the `--network-plugin` to `azure`, `--network-plugin-mode` to `overlay`, and `--network-dataplane` to `cilium`.

- ```
+ ```azurecli-interactive
+ az aks create --name $CLUSTER_NAME --resource-group $RESOURCE_GROUP_NAME --node-provisioning-mode Auto --network-plugin azure --network-plugin-mode overlay --network-dataplane cilium
+ ```

- ### [Azure ARM](#tab/azure-arm)
+ ### [ARM template](#tab/arm)

- ```azurecli-interactive
- az deployment group create --resource-group napcluster --template-file ./nap.json
- ```
+ - Enable node autoprovisioning on a new cluster using the `az deployment group create` command and specify the `--template-file` parameter with the path to the ARM template file.
+
+ ```azurecli-interactive
+ az deployment group create --resource-group $RESOURCE_GROUP_NAME --template-file ./nap.json

…

+ ### Enable node autoprovisioning on an existing cluster
+
+ - Enable node autoprovisioning on an existing cluster using the `az aks update` command and set `--node-provisioning-mode` to `Auto`. You also need to set the `--network-plugin` to `azure`, `--network-plugin-mode` to `overlay`, and `--network-dataplane` to `cilium`.
+
+ ```azurecli-interactive
+ az aks update --name $CLUSTER_NAME --resource-group $RESOURCE_GROUP_NAME --node-provisioning-mode Auto --network-plugin azure --network-plugin-mode overlay --network-dataplane cilium
+ ```
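
Not part of the diff: after either command completes, one way to confirm the cluster picked up the new mode might be to query the node provisioning profile. The `nodeProvisioningProfile` property name is an assumption and depends on your Azure CLI and AKS API versions.

```azurecli-interactive
# Hypothetical check (assumes the nodeProvisioningProfile property is exposed
# by your CLI/API version); expected output is "Auto".
az aks show \
    --name $CLUSTER_NAME \
    --resource-group $RESOURCE_GROUP_NAME \
    --query nodeProvisioningProfile.mode \
    --output tsv
```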

## Node pools

Node autoprovision uses a list of VM SKUs as a starting point to decide which is best suited for the workloads that are in a pending state. Having control over what SKU you want in the initial pool allows you to specify specific SKU families, or VM types and the maximum amount of resources a provisioner uses.
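
Not part of this diff: node autoprovisioning expresses these SKU constraints through Karpenter-style `NodePool` resources. A rough sketch of the idea follows; the API version, requirement keys (`karpenter.azure.com/sku-family`, `karpenter.sh/capacity-type`), node class name, and limits are assumptions drawn from the upstream Karpenter project and may differ from what AKS ships.

```bash
# Hedged sketch only: a Karpenter-style NodePool that restricts provisioning to
# the D SKU family and caps total provisioned CPU. Schema details (apiVersion,
# keys, nodeClassRef) are assumptions and may differ in AKS node autoprovisioning.
kubectl apply -f - <<'EOF'
apiVersion: karpenter.sh/v1beta1
kind: NodePool
metadata:
  name: default
spec:
  template:
    spec:
      nodeClassRef:
        name: default              # assumes a default AKS node class exists
      requirements:
        - key: karpenter.azure.com/sku-family
          operator: In
          values: ["D"]
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["on-demand"]
  limits:
    cpu: "100"                     # cap the total CPU this node pool may provision
EOF
```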

articles/aks/static-ip.md (50 additions & 8 deletions)

@@ -7,7 +7,7 @@ ms.author: allensu
ms.subservice: aks-networking
ms.custom: devx-track-azurecli
ms.topic: how-to
- ms.date: 09/22/2023
+ ms.date: 06/03/2024
#Customer intent: As a cluster operator or developer, I want to create and manage static IP address resources in Azure that I can use beyond the lifecycle of an individual Kubernetes service deployed in an AKS cluster.
---

@@ -65,11 +65,53 @@ This article shows you how to create a static public IP address and assign it to
## Create a service using the static IP address

- 1. Ensure the cluster identity used by the AKS cluster has delegated permissions to the public IP's resource group using the [`az role assignment create`][az-role-assignment-create] command.
+ 1. First, determine which type of managed identity your AKS cluster is using, system-assigned or user-assigned. If you're not certain, call the [az aks show][az-aks-show] command and query for the identity's *type* property.
+
+ ```azurecli
+ az aks show \
+     --name myAKSCluster \
+     --resource-group myResourceGroup \
+     --query identity.type \
+     --output tsv
+ ```
+
+ If the cluster is using a managed identity, the value of the *type* property will be either **SystemAssigned** or **UserAssigned**.
+
+ If the cluster is using a service principal, the value of the *type* property will be null. Consider upgrading your cluster to use a managed identity.
+
+ 1. If your AKS cluster uses a system-assigned managed identity, then query for the managed identity's principal ID as follows:
+
+ ```azurecli-interactive
+ # Get the principal ID for a system-assigned managed identity.
+ CLIENT_ID=$(az aks show \
+     --name myAKSCluster \
+     --resource-group myNetworkResourceGroup \
+     --query identity.principalId \
+     --output tsv)
+ ```
+
+ If your AKS cluster uses a user-assigned managed identity, then the principal ID will be null. Query for the user-assigned managed identity's client ID instead:
+
+ ```azurecli-interactive
+ # Get the client ID for a user-assigned managed identity.

…

+ 1. Assign delegated permissions for the managed identity used by the AKS cluster for the public IP's resource group by calling the [`az role assignment create`][az-role-assignment-create] command.

```azurecli-interactive
- CLIENT_ID=$(az aks show --name myAKSCluster --resource-group myNetworkResourceGroup --query identity.principalId -o tsv)
- RG_SCOPE=$(az group show --name <node resource group> --query id -o tsv)
+ # Get the resource ID for the node resource group.
+ RG_SCOPE=$(az group show \
+     --name <node resource group> \
+     --query id \
+     --output tsv)
+
+ # Assign the Network Contributor role to the managed identity,
+ # scoped to the node resource group.
az role assignment create \
    --assignee ${CLIENT_ID} \
    --role "Network Contributor" \

@@ -79,7 +121,7 @@ This article shows you how to create a static public IP address and assign it to
> [!IMPORTANT]
> If you customized your outbound IP, make sure your cluster identity has permissions to both the outbound public IP and the inbound public IP.

- 2. Create a file named `load-balancer-service.yaml` and copy in the contents of the following YAML file, providing your own public IP address created in the previous step and the node resource group name.
+ 1. Create a file named `load-balancer-service.yaml` and copy in the contents of the following YAML file, providing your own public IP address created in the previous step and the node resource group name.

> [!IMPORTANT]
> Adding the `loadBalancerIP` property to the load balancer YAML manifest is deprecating following [upstream Kubernetes](https://github.com/kubernetes/kubernetes/pull/107235). While current usage remains the same and existing services are expected to work without modification, we **highly recommend setting service annotations** instead. To set service annotations, you can either use `service.beta.kubernetes.io/azure-pip-name` for public IP name, or use `service.beta.kubernetes.io/azure-load-balancer-ipv4` for an IPv4 address and `service.beta.kubernetes.io/azure-load-balancer-ipv6` for an IPv6 address, as shown in the example YAML.

@@ -103,7 +145,7 @@ This article shows you how to create a static public IP address and assign it to
> [!NOTE]
> Adding the `service.beta.kubernetes.io/azure-pip-name` annotation ensures the most efficient LoadBalancer creation and is highly recommended to avoid potential throttling.

- 3. Set a public-facing DNS label to the service using the `service.beta.kubernetes.io/azure-dns-label-name` service annotation. This publishes a fully qualified domain name (FQDN) for your service using Azure's public DNS servers and top-level domain. The annotation value must be unique within the Azure location, so we recommend you use a sufficiently qualified label. Azure automatically appends a default suffix in the location you selected, such as `<location>.cloudapp.azure.com`, to the name you provide, creating the FQDN.
+ 1. Set a public-facing DNS label to the service using the `service.beta.kubernetes.io/azure-dns-label-name` service annotation. This publishes a fully qualified domain name (FQDN) for your service using Azure's public DNS servers and top-level domain. The annotation value must be unique within the Azure location, so we recommend you use a sufficiently qualified label. Azure automatically appends a default suffix in the location you selected, such as `<location>.cloudapp.azure.com`, to the name you provide, creating the FQDN.

> [!NOTE]
> If you want to publish the service on your own domain, see [Azure DNS][azure-dns-zone] and the [external-dns][external-dns] project.

@@ -125,13 +167,13 @@ This article shows you how to create a static public IP address and assign it to
app: azure-load-balancer
```

- 4. Create the service and deployment using the `kubectl apply` command.
+ 1. Create the service and deployment using the `kubectl apply` command.

```console
kubectl apply -f load-balancer-service.yaml
```

- 5. To see the DNS label for your load balancer, use the `kubectl describe service` command.
+ 1. To see the DNS label for your load balancer, use the `kubectl describe service` command.
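
By way of illustration (this block isn't part of the diff), a minimal sketch of what `load-balancer-service.yaml` could look like when using the annotations discussed above; the public IP name, DNS label, and node resource group value are placeholders, not values from the article:

```bash
# Sketch of a possible load-balancer-service.yaml using the recommended
# annotations; the public IP name, DNS label, and resource group below are
# placeholders you would replace with your own values.
cat <<'EOF' > load-balancer-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: azure-load-balancer
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-resource-group: MC_myResourceGroup_myAKSCluster_eastus
    service.beta.kubernetes.io/azure-pip-name: myAKSPublicIP
    service.beta.kubernetes.io/azure-dns-label-name: my-aks-dns-label
spec:
  type: LoadBalancer
  ports:
    - port: 80
  selector:
    app: azure-load-balancer
EOF

kubectl apply -f load-balancer-service.yaml
```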

articles/application-gateway/for-containers/alb-controller-backend-health-metrics.md (33 additions & 15 deletions)

@@ -6,21 +6,21 @@ author: greglin
ms.service: application-gateway
ms.subservice: appgw-for-containers
ms.topic: article
- ms.date: 02/27/2024
+ ms.date: 06/03/2024
ms.author: greglin
---

# ALB Controller - Backend Health and Metrics

Understanding backend health of your Kubernetes services and pods is crucial in identifying issues and assistance in troubleshooting. To help facilitate visibility into backend health, ALB Controller exposes backend health and metrics endpoints in all ALB Controller deployments.

ALB Controller's backend health exposes three different experiences:

1. Summarized backend health by Application Gateway for Containers resource
2. Summarized backend health by Kubernetes service
3. Detailed backend health for a specified Kubernetes service

ALB Controller's metric endpoint exposes both metrics and summary of backend health. This endpoint enables exposure to Prometheus.

Access to these endpoints can be reached via the following URLs:

@@ -35,27 +35,45 @@ Any clients or pods that have connectivity to this pod and port may access these
### Discovering backend health

- Run the following kubectl command to identify your ALB Controller pod and its corresponding IP address.
+ The ALB Controller exposes backend health on the ALB controller pod that is acting as primary.
+
+ To find the primary pod, run the following command:
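
The command that lists the controller pods, and the `kubectl port-forward` step the next paragraph refers to, sit in a collapsed part of this diff. A minimal sketch of that flow, assuming the controller namespace is stored in `$CONTROLLER_NAMESPACE` and the backend health endpoint listens on port 8000:

```bash
# Sketch only; the exact commands are in the collapsed portion of the diff.
# List the ALB Controller pods, then forward local port 8000 to the primary pod.
kubectl get pods -n $CONTROLLER_NAMESPACE -o wide
kubectl port-forward <alb-controller-primary-pod-name> -n $CONTROLLER_NAMESPACE 8000:8000
```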

…

Once the kubectl command is listening, open another terminal (or cloud shell session) and execute curl to 127.0.0.1 to be redirected to the pod.

```bash
- curl http://10.1.0.247:8000
+ curl http://127.0.0.1:8000
```

+ # [Access backend health via controller pod directly](#tab/backend-health-direct-access)
+
+ Run the following kubectl command to identify the IP address of the primary ALB Controller pod.
+
+ ```bash
+ kubectl get pod <alb controller pod name from previous step> -n $CONTROLLER_NAMESPACE -o jsonpath="{.status.podIP}"
+ ```
+
+ Once you have the IP address of your alb-controller pod, you may validate the backend health service is running by browsing to http://\<pod-ip\>:8000.
+
+ ```bash
+ curl http://<your-pod-ip>:8000
+ ```
+
+ ---

Example response:

```text
@@ -188,9 +206,9 @@ Example output:
## Metrics

- ALB Controller currently surfaces metrics following [text based format](https://prometheus.io/docs/instrumenting/exposition_formats/#text-based-format) to be exposed to Prometheus.
+ ALB Controller currently surfaces metrics following the [text-based format](https://prometheus.io/docs/instrumenting/exposition_formats/#text-based-format) to be exposed to Prometheus. Access to these metrics is available on port 8001 of the primary ALB Controller pod at `http://<alb-controller-pod-ip>:8001/metrics`.

The following Application Gateway for Containers specific metrics are currently available today:
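
Not part of the diff: as a quick illustration of the endpoint described above, scraping the metrics could look like the following, where the pod IP is a placeholder obtained from the backend health steps earlier:

```bash
# Sketch: fetch Prometheus-format metrics from the primary ALB Controller pod.
# <alb-controller-pod-ip> is a placeholder for the pod IP found earlier.
curl http://<alb-controller-pod-ip>:8001/metrics
```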