
Commit 98de50d

Merge pull request #277087 from MicrosoftDocs/main
6/4 11:00 AM IST Publish
2 parents 91237bd + c87b64b commit 98de50d

File tree

40 files changed: +424 -262 lines changed


articles/ai-services/openai/how-to/file-search.md

Lines changed: 1 addition & 1 deletion
@@ -80,7 +80,7 @@ client = AzureOpenAI(
 
 assistant = client.beta.assistants.create(
   name="Financial Analyst Assistant",
-  instructions="You are an expert financial analyst. Use you knowledge base to answer questions about audited financial statements.",
+  instructions="You are an expert financial analyst. Use your knowledge base to answer questions about audited financial statements.",
   model="gpt-4-turbo",
   tools=[{"type": "file_search"}],
 )

articles/aks/concepts-clusters-workloads.md

Lines changed: 1 addition & 3 deletions
@@ -363,9 +363,7 @@ Two Kubernetes resources, however, let you manage these types of applications: *
 
 Modern application development often aims for stateless applications. For stateful applications, like those that include database components, you can use *StatefulSets*. Like deployments, a StatefulSet creates and manages at least one identical pod. Replicas in a StatefulSet follow a graceful, sequential approach to deployment, scale, upgrade, and termination operations. The naming convention, network names, and storage persist as replicas are rescheduled with a StatefulSet.
 
-You can define the application in YAML format using `kind: StatefulSet`. From there, the StatefulSet Controller handles the deployment and management of the required replicas. Data writes to persistent storage, provided by Azure Managed Disks or Azure Files. With StatefulSets, the underlying persistent storage remains, even when the StatefulSet is deleted.
-
-For more information, see [Kubernetes StatefulSets][kubernetes-statefulsets].
+You can define the application in YAML format using `kind: StatefulSet`. From there, the StatefulSet Controller handles the deployment and management of the required replicas. Data writes to persistent storage, provided by Azure Managed Disks or Azure Files. The underlying persistent storage remains even when the StatefulSet is deleted, unless the `spec.persistentVolumeClaimRetentionPolicy` is set to `Delete`. For more information, see [Kubernetes StatefulSets][kubernetes-statefulsets].
 
 > [!IMPORTANT]
 > Replicas in a StatefulSet are scheduled and run across any available node in an AKS cluster. To ensure at least one pod in your set runs on a node, you should use a DaemonSet instead.
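
The new wording above turns on `spec.persistentVolumeClaimRetentionPolicy`. As a minimal sketch of how that field can be checked on a running cluster (the StatefulSet name `my-statefulset` is a hypothetical placeholder; an empty result means the default retain behavior applies):

```bash
# Hypothetical StatefulSet name; prints the configured PVC retention policy, if any.
# No output means the default applies and the PVCs outlive the StatefulSet.
kubectl get statefulset my-statefulset \
  -o jsonpath='{.spec.persistentVolumeClaimRetentionPolicy}'
```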

articles/aks/node-autoprovision.md

Lines changed: 66 additions & 52 deletions
@@ -4,7 +4,8 @@ description: Learn about Azure Kubernetes Service (AKS) node autoprovisioning (p
 ms.topic: article
 ms.custom: devx-track-azurecli
 ms.date: 01/18/2024
-ms.author: juda
+ms.author: schaffererin
+author: schaffererin
 #Customer intent: As a cluster operator or developer, how to scale my cluster based on workload requirements and right size my nodes automatically
 ---

@@ -68,7 +69,7 @@ NAP is based on the Open Source [Karpenter](https://karpenter.sh) project, and t
 - The only network configuration allowed is Cilium + Overlay + Azure
 - You can't enable in a cluster where node pools have cluster autoscaler enabled
 
-### Unsupported features:
+### Unsupported features
 
 - Windows node pools
 - Applying custom configuration to the node kubelet
@@ -84,69 +85,82 @@ NAP is based on the Open Source [Karpenter](https://karpenter.sh) project, and t
 
 ## Enable node autoprovisioning
 
-To enable node autoprovisioning, create a new cluster using the az aks create command and set --node-provisioning-mode to "Auto". You'll also need to use overlay networking and the cilium network policy.
+### Enable node autoprovisioning on a new cluster
 
 ### [Azure CLI](#tab/azure-cli)
 
-```azurecli-interactive
-az aks create --name karpuktest --resource-group karpuk --node-provisioning-mode Auto --network-plugin azure --network-plugin-mode overlay --network-dataplane cilium
+- Enable node autoprovisioning on a new cluster using the `az aks create` command and set `--node-provisioning-mode` to `Auto`. You also need to set the `--network-plugin` to `azure`, `--network-plugin-mode` to `overlay`, and `--network-dataplane` to `cilium`.
 
-```
+    ```azurecli-interactive
+    az aks create --name $CLUSTER_NAME --resource-group $RESOURCE_GROUP_NAME --node-provisioning-mode Auto --network-plugin azure --network-plugin-mode overlay --network-dataplane cilium
+    ```
 
-### [Azure ARM](#tab/azure-arm)
+### [ARM template](#tab/arm)
 
-```azurecli-interactive
-az deployment group create --resource-group napcluster --template-file ./nap.json
-```
+- Enable node autoprovisioning on a new cluster using the `az deployment group create` command and specify the `--template-file` parameter with the path to the ARM template file.
+
+    ```azurecli-interactive
+    az deployment group create --resource-group $RESOURCE_GROUP_NAME --template-file ./nap.json
+    ```
 
-```arm
-{
-  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
-  "contentVersion": "1.0.0.0",
-  "metadata": {},
-  "parameters": {},
-  "resources": [
+    The `nap.json` file should contain the following ARM template:
+
+    ```JSON
     {
-      "type": "Microsoft.ContainerService/managedClusters",
-      "apiVersion": "2023-09-02-preview",
-      "sku": {
-        "name": "Base",
-        "tier": "Standard"
-      },
-      "name": "napcluster",
-      "location": "uksouth",
-      "identity": {
-        "type": "SystemAssigned"
-      },
-      "properties": {
-        "networkProfile": {
-          "networkPlugin": "azure",
-          "networkPluginMode": "overlay",
-          "networkPolicy": "cilium",
-          "networkDataplane":"cilium",
-          "loadBalancerSku": "Standard"
-        },
-        "dnsPrefix": "napcluster",
-        "agentPoolProfiles": [
-          {
-            "name": "agentpool",
-            "count": 3,
-            "vmSize": "standard_d2s_v3",
-            "osType": "Linux",
-            "mode": "System"
+      "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+      "contentVersion": "1.0.0.0",
+      "metadata": {},
+      "parameters": {},
+      "resources": [
+        {
+          "type": "Microsoft.ContainerService/managedClusters",
+          "apiVersion": "2023-09-02-preview",
+          "sku": {
+            "name": "Base",
+            "tier": "Standard"
+          },
+          "name": "napcluster",
+          "location": "uksouth",
+          "identity": {
+            "type": "SystemAssigned"
+          },
+          "properties": {
+            "networkProfile": {
+              "networkPlugin": "azure",
+              "networkPluginMode": "overlay",
+              "networkPolicy": "cilium",
+              "networkDataplane":"cilium",
+              "loadBalancerSku": "Standard"
+            },
+            "dnsPrefix": "napcluster",
+            "agentPoolProfiles": [
+              {
+                "name": "agentpool",
+                "count": 3,
+                "vmSize": "standard_d2s_v3",
+                "osType": "Linux",
+                "mode": "System"
+              }
+            ],
+            "nodeProvisioningProfile": {
+              "mode": "Auto"
+            },
           }
-        ],
-        "nodeProvisioningProfile": {
-          "mode": "Auto"
-        },
-      }
+        }
+      ]
     }
-  ]
-}
-```
+    ```
 
 ---
 
+### Enable node autoprovisioning on an existing cluster
+
+- Enable node autoprovisioning on an existing cluster using the `az aks update` command and set `--node-provisioning-mode` to `Auto`. You also need to set the `--network-plugin` to `azure`, `--network-plugin-mode` to `overlay`, and `--network-dataplane` to `cilium`.
+
+    ```azurecli-interactive
+    az aks update --name $CLUSTER_NAME --resource-group $RESOURCE_GROUP_NAME --node-provisioning-mode Auto --network-plugin azure --network-plugin-mode overlay --network-dataplane cilium
+    ```
+
 ## Node pools
 
 Node autoprovision uses a list of VM SKUs as a starting point to decide which is best suited for the workloads that are in a pending state. Having control over what SKU you want in the initial pool allows you to specify specific SKU families, or VM types and the maximum amount of resources a provisioner uses.
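
Both tabs in the hunk above end up setting the cluster's node provisioning mode to `Auto`, either through the CLI flag or through the `nodeProvisioningProfile` block in the ARM template. A minimal sketch of how the result might be verified afterwards, assuming the property can be read back through `az aks show` and that the same `$CLUSTER_NAME` and `$RESOURCE_GROUP_NAME` variables are still set:

```bash
# Sketch: read back the node provisioning mode the commands above configured.
# Assumes the nodeProvisioningProfile property is surfaced on the managed cluster.
# Expected output: Auto
az aks show \
    --name $CLUSTER_NAME \
    --resource-group $RESOURCE_GROUP_NAME \
    --query nodeProvisioningProfile.mode \
    --output tsv
```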

articles/aks/static-ip.md

Lines changed: 50 additions & 8 deletions
@@ -7,7 +7,7 @@ ms.author: allensu
 ms.subservice: aks-networking
 ms.custom: devx-track-azurecli
 ms.topic: how-to
-ms.date: 09/22/2023
+ms.date: 06/03/2024
 #Customer intent: As a cluster operator or developer, I want to create and manage static IP address resources in Azure that I can use beyond the lifecycle of an individual Kubernetes service deployed in an AKS cluster.
 ---

@@ -65,11 +65,53 @@ This article shows you how to create a static public IP address and assign it to
 
 ## Create a service using the static IP address
 
-1. Ensure the cluster identity used by the AKS cluster has delegated permissions to the public IP's resource group using the [`az role assignment create`][az-role-assignment-create] command.
+1. First, determine which type of managed identity your AKS cluster is using, system-assigned or user-assigned. If you're not certain, call the [az aks show][az-aks-show] command and query for the identity's *type* property.
+
+    ```azurecli
+    az aks show \
+        --name myAKSCluster \
+        --resource-group myResourceGroup \
+        --query identity.type \
+        --output tsv
+    ```
+
+    If the cluster is using a managed identity, the value of the *type* property will be either **SystemAssigned** or **UserAssigned**.
+
+    If the cluster is using a service principal, the value of the *type* property will be null. Consider upgrading your cluster to use a managed identity.
+
+1. If your AKS cluster uses a system-assigned managed identity, then query for the managed identity's principal ID as follows:
+
+    ```azurecli-interactive
+    # Get the principal ID for a system-assigned managed identity.
+    CLIENT_ID=$(az aks show \
+        --name myAKSCluster \
+        --resource-group myNetworkResourceGroup \
+        --query identity.principalId \
+        --output tsv)
+    ```
+
+    If your AKS cluster uses a user-assigned managed identity, then the principal ID will be null. Query for the user-assigned managed identity's client ID instead:
+
+    ```azurecli-interactive
+    # Get the client ID for a user-assigned managed identity.
+    CLIENT_ID=$(az aks show \
+        --name myAKSCluster \
+        --resource-group myNetworkResourceGroup \
+        --query identity.userAssignedIdentities.*.clientId \
+        --output tsv)
+    ```
+
+1. Assign delegated permissions for the managed identity used by the AKS cluster for the public IP's resource group by calling the [`az role assignment create`][az-role-assignment-create] command.
 
     ```azurecli-interactive
-    CLIENT_ID=$(az aks show --name myAKSCluster --resource-group myNetworkResourceGroup --query identity.principalId -o tsv)
-    RG_SCOPE=$(az group show --name <node resource group> --query id -o tsv)
+    # Get the resource ID for the node resource group.
+    RG_SCOPE=$(az group show \
+        --name <node resource group> \
+        --query id \
+        --output tsv)
+
+    # Assign the Network Contributor role to the managed identity,
+    # scoped to the node resource group.
     az role assignment create \
         --assignee ${CLIENT_ID} \
         --role "Network Contributor" \
@@ -79,7 +121,7 @@ This article shows you how to create a static public IP address and assign it to
     > [!IMPORTANT]
     > If you customized your outbound IP, make sure your cluster identity has permissions to both the outbound public IP and the inbound public IP.
 
-2. Create a file named `load-balancer-service.yaml` and copy in the contents of the following YAML file, providing your own public IP address created in the previous step and the node resource group name.
+1. Create a file named `load-balancer-service.yaml` and copy in the contents of the following YAML file, providing your own public IP address created in the previous step and the node resource group name.
 
     > [!IMPORTANT]
     > Adding the `loadBalancerIP` property to the load balancer YAML manifest is deprecating following [upstream Kubernetes](https://github.com/kubernetes/kubernetes/pull/107235). While current usage remains the same and existing services are expected to work without modification, we **highly recommend setting service annotations** instead. To set service annotations, you can either use `service.beta.kubernetes.io/azure-pip-name` for public IP name, or use `service.beta.kubernetes.io/azure-load-balancer-ipv4` for an IPv4 address and `service.beta.kubernetes.io/azure-load-balancer-ipv6` for an IPv6 address, as shown in the example YAML.
@@ -103,7 +145,7 @@ This article shows you how to create a static public IP address and assign it to
     > [!NOTE]
     > Adding the `service.beta.kubernetes.io/azure-pip-name` annotation ensures the most efficient LoadBalancer creation and is highly recommended to avoid potential throttling.
 
-3. Set a public-facing DNS label to the service using the `service.beta.kubernetes.io/azure-dns-label-name` service annotation. This publishes a fully qualified domain name (FQDN) for your service using Azure's public DNS servers and top-level domain. The annotation value must be unique within the Azure location, so we recommend you use a sufficiently qualified label. Azure automatically appends a default suffix in the location you selected, such as `<location>.cloudapp.azure.com`, to the name you provide, creating the FQDN.
+1. Set a public-facing DNS label to the service using the `service.beta.kubernetes.io/azure-dns-label-name` service annotation. This publishes a fully qualified domain name (FQDN) for your service using Azure's public DNS servers and top-level domain. The annotation value must be unique within the Azure location, so we recommend you use a sufficiently qualified label. Azure automatically appends a default suffix in the location you selected, such as `<location>.cloudapp.azure.com`, to the name you provide, creating the FQDN.
 
     > [!NOTE]
     > If you want to publish the service on your own domain, see [Azure DNS][azure-dns-zone] and the [external-dns][external-dns] project.
@@ -125,13 +167,13 @@ This article shows you how to create a static public IP address and assign it to
         app: azure-load-balancer
     ```
 
-4. Create the service and deployment using the `kubectl apply` command.
+1. Create the service and deployment using the `kubectl apply` command.
 
    ```console
    kubectl apply -f load-balancer-service.yaml
    ```
 
-5. To see the DNS label for your load balancer, use the `kubectl describe service` command.
+1. To see the DNS label for your load balancer, use the `kubectl describe service` command.
 
    ```console
    kubectl describe service azure-load-balancer
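
The revised steps above split what used to be one command block into identity lookup, scope lookup, and role assignment. A condensed sketch of that flow for the system-assigned case, reusing the article's own placeholders (`myAKSCluster`, `myNetworkResourceGroup`, `<node resource group>`); the closing `--scope` argument is implied by the truncated hunk rather than shown in it:

```bash
# Cluster identity (system-assigned case), as queried in the updated steps.
CLIENT_ID=$(az aks show \
    --name myAKSCluster \
    --resource-group myNetworkResourceGroup \
    --query identity.principalId \
    --output tsv)

# Resource ID of the resource group that owns the static public IP.
RG_SCOPE=$(az group show \
    --name <node resource group> \
    --query id \
    --output tsv)

# Grant the identity Network Contributor on that scope.
az role assignment create \
    --assignee ${CLIENT_ID} \
    --role "Network Contributor" \
    --scope ${RG_SCOPE}
```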

articles/application-gateway/for-containers/alb-controller-backend-health-metrics.md

Lines changed: 33 additions & 15 deletions
@@ -6,21 +6,21 @@ author: greglin
 ms.service: application-gateway
 ms.subservice: appgw-for-containers
 ms.topic: article
-ms.date: 02/27/2024
+ms.date: 06/03/2024
 ms.author: greglin
 ---
 
 # ALB Controller - Backend Health and Metrics
 
-Understanding backend health of your Kubernetes services and pods is crucial in identifying issues and assistance in troubleshooting. To help facilitate visibility into backend health, ALB Controller exposes backend health and metrics endpoints in all ALB Controller deployments. 
+Understanding backend health of your Kubernetes services and pods is crucial in identifying issues and assistance in troubleshooting. To help facilitate visibility into backend health, ALB Controller exposes backend health and metrics endpoints in all ALB Controller deployments.
 
 ALB Controller's backend health exposes three different experiences:
 
 1. Summarized backend health by Application Gateway for Containers resource
 2. Summarized backend health by Kubernetes service
 3. Detailed backend health for a specified Kubernetes service
 
-ALB Controller's metric endpoint exposes both metrics and summary of backend health. This endpoint enables exposure to Prometheus. 
+ALB Controller's metric endpoint exposes both metrics and summary of backend health. This endpoint enables exposure to Prometheus.
 
 Access to these endpoints can be reached via the following URLs:

@@ -35,27 +35,45 @@ Any clients or pods that have connectivity to this pod and port may access these
 
 ### Discovering backend health
 
-Run the following kubectl command to identify your ALB Controller pod and its corresponding IP address.
+The ALB Controller exposes backend health on the ALB controller pod that is acting as primary.
+
+To find the primary pod, run the following command:
 
 ```bash
-kubectl get pods -n azure-alb-system -o wide
+CONTROLLER_NAMESPACE='azure-alb-system'
+kubectl get lease -n $CONTROLLER_NAMESPACE alb-controller-leader-election -o jsonpath='{.spec.holderIdentity}' | awk -F'_' '{print $1}'
 ```
 
-Example output:
+# [Access backend health via Kubectl command](#tab/backend-health-kubectl-access)
 
-| NAME | READY | STATUS | RESTARTS | AGE | IP | NODE | NOMINATED NODE | READINESS GATES |
-| ------------------------------------------ | ----- | ------- | -------- | ---- | ---------- | -------------------------------- | -------------- | --------------- |
-| alb-controller-74df7896b-gfzfc | 1/1 | Running | 0 | 60m | 10.1.0.247 | aks-userpool-21921599-vmss000000 | \<none\> | \<none\> |
-| alb-controller-bootstrap-5f7f8f5d4f-gbstq | 1/1 | Running | 0 | 60m | 10.1.1.183 | aks-userpool-21921599-vmss000001 | \<none\> | \<none\> |
+For indirect access via kubectl utility, you can create a listener that proxies traffic to the pod.
 
-Once you have the IP address of your alb-controller pod, you may validate the backend health service is running by browsing to http://\<pod-ip\>:8000.
+```bash
+kubectl port-forward <pod-name> -n $CONTROLLER_NAMESPACE 8000 8001
+```
 
-For example, the following command may be run:
+Once the kubectl command is listening, open another terminal (or cloud shell session) and execute curl to 127.0.0.1 to be redirected to the pod.
 
 ```bash
-curl http://10.1.0.247:8000
+curl http://127.0.0.1:8000
 ```
 
+# [Access backend health via controller pod directly](#tab/backend-health-direct-access)
+
+Run the following kubectl command to identify the IP address of the primary ALB Controller pod.
+
+```bash
+kubectl get pod <alb controller pod name from previous step> -n $CONTROLLER_NAMESPACE -o jsonpath="{.status.podIP}"
+```
+
+Once you have the IP address of your alb-controller pod, you may validate the backend health service is running by browsing to http://\<pod-ip\>:8000.
+
+```bash
+curl http://<your-pod-ip>:8000
+```
+
+---
+
 Example response:
 
 ```text
@@ -188,9 +206,9 @@ Example output:
 
 ## Metrics
 
-ALB Controller currently surfaces metrics following [text based format](https://prometheus.io/docs/instrumenting/exposition_formats/#text-based-format) to be exposed to Prometheus.
+ALB Controller currently surfaces metrics following [text based format](https://prometheus.io/docs/instrumenting/exposition_formats/#text-based-format) to be exposed to Prometheus. Access to these metrics is available on port 8001 of the primary alb controller pod: `http://\<alb-controller-pod-ip\>:8001/metrics`.
 
-The following Application Gateway for Containers specific metrics are currently available today:
+The following metrics are exposed today:
 
 | Metric Name | Description |
 | ----------- | ------------------------------------------------------------------------------------- |
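
Taken together, the backend-health and metrics hunks describe two HTTP endpoints on the primary ALB Controller pod: backend health on port 8000 and Prometheus-format metrics on port 8001. A condensed sketch that chains the diff's own commands, finding the primary pod from the leader-election lease and port-forwarding to both ports:

```bash
CONTROLLER_NAMESPACE='azure-alb-system'

# Primary pod name, taken from the leader-election lease as in the hunk above.
PRIMARY_POD=$(kubectl get lease -n $CONTROLLER_NAMESPACE alb-controller-leader-election \
    -o jsonpath='{.spec.holderIdentity}' | awk -F'_' '{print $1}')

# Forward both ports locally, then query backend health and metrics.
kubectl port-forward "$PRIMARY_POD" -n $CONTROLLER_NAMESPACE 8000:8000 8001:8001 &
sleep 2
curl http://127.0.0.1:8000           # backend health summary
curl http://127.0.0.1:8001/metrics   # Prometheus text-format metrics
```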
