This article describes how to deploy an AI model on AKS Arc with the Kubernetes AI toolchain operator (KAITO). KAITO is an add-on for AKS Arc that simplifies the experience of running open-source (OSS) AI models on your AKS Arc clusters. To enable this feature, follow this workflow:
1. Deploy KAITO on an existing cluster.
1. Add a GPU node pool.
1. Deploy the AI model.
1. Validate the model deployment.
> [!IMPORTANT]
> These preview features are available on a self-service, opt-in basis. Previews are provided "as is" and "as available," and they're excluded from the service-level agreements and limited warranty. Azure Kubernetes Service, enabled by Azure Arc, previews are partially covered by customer support on a best-effort basis.
## Prerequisites
Before you begin, make sure you have the following prerequisites:
1. Get the following details from your infrastructure administrator:
   - An AKS Arc cluster that's up and running. For more information, see [Create Kubernetes clusters using Azure CLI](aks-create-clusters-cli.md).
   - Make sure that the AKS Arc cluster runs on an Azure Local cluster with a supported GPU model. Before you create the node pool, you must also identify the correct GPU VM SKU based on the model. For more information, see [Use GPUs for compute-intensive workloads](deploy-gpu-node-pool.md).
   - We recommend using a computer running Linux for this feature.
   - Use `az connectedk8s proxy` to connect to your AKS Arc cluster (see the example after this list).
1. Make sure that **helm** and **kubectl** are installed on your local machine.
   - If you need to install or upgrade Helm, see [Install Helm](https://helm.sh/docs/intro/install/).
   - If you need to install **kubectl**, see [Install kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/).
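For example, a minimal sketch of connecting through the Arc proxy; the cluster and resource group names are placeholders:

```azurecli
az connectedk8s proxy --name <Cluster_Name> --resource-group <Resource_Group_name>
```

The proxy command keeps running in the foreground, so run the remaining **kubectl** commands in this article from a separate terminal.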
## Deploy KAITO from GitHub
You must have a running AKS Arc cluster with a default node pool. To deploy the KAITO operator, follow these steps:
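1. Install the KAITO operator with Helm. The following is a minimal sketch that assumes you clone the KAITO repository and install its workspace Helm chart into the `kaito` namespace; the chart path and release name can change between KAITO versions, so check the [KAITO repository](https://github.com/kaito-project/kaito) for the current installation steps.

   ```bash
   # Get the KAITO source, which includes the workspace Helm chart (the chart path may vary by release).
   git clone https://github.com/kaito-project/kaito.git
   cd kaito

   # Install the KAITO workspace operator into the kaito namespace.
   helm install kaito-workspace ./charts/kaito/workspace --namespace kaito --create-namespace
   ```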
1. After the command succeeds, make sure that the KAITO operator is installed correctly and that it's running under the `kaito` namespace.
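   For example, assuming the operator was installed into the `kaito` namespace:

   ```bash
   kubectl get pods -n kaito
   ```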
## Add a GPU node pool
Before you add a GPU node, make sure that Azure Local is enabled with a supported GPU model, and that the GPU drivers are installed on all the host nodes. To create a GPU node pool using the Azure portal or Azure CLI, follow these steps:
### [Azure portal](#tab/portal)
To create a GPU node pool using the Azure portal, follow these steps:
1. Sign in to the Azure portal and find your AKS Arc cluster.
1. Under **Settings** > **Node pools**, select **Add**. During the preview, only Linux nodes are supported. Fill in the other required fields, and then create the node pool.
:::image type="content" source="media/deploy-ai-model/nodepools-portal.png" alt-text="Screenshot of node pools portal page." lightbox="media/deploy-ai-model/nodepools-portal.png":::
### [Azure CLI](#tab/azurecli)
To create a GPU node pool using Azure CLI, run the following command. The GPU VM SKU used in the following example is for the **A16** model; for the full list of VM SKUs, see [Supported VM sizes](deploy-gpu-node-pool.md#supported-gpu-vm-sizes).
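The following is a minimal sketch; the node pool name is an example, and you should replace `<GPU_VM_SKU>` with the A16 VM SKU that you identified from the supported VM sizes list:

```azurecli
az aksarc nodepool add --name "gpunodepool" --cluster-name <Cluster_Name> --resource-group <Resource_Group_name> --node-count 1 --node-vm-size <GPU_VM_SKU>
```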
After the node pool creation succeeds, you can confirm whether the GPU node is provisioned using `kubectl get nodes`. In the following example, the GPU node is **moc-l1i9uh0ksne**. The other node is from the default node pool that was created during the cluster creation:
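```bash
kubectl get nodes
```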
You should also ensure that the node has allocatable GPU cores:
```bash
kubectl get node moc-l1i9uh0ksne -o yaml | grep -A 10 "allocatable:"
```

## Deploy the AI model

1. Create a YAML file using the following template. KAITO supports popular OSS models such as Falcon, Phi3, Llama2, and Mistral. This list might increase over time.
   - The **PresetName** is used to specify which model to deploy, and you can find its value in the [supported model file](https://github.com/kaito-project/kaito/blob/main/presets/workspace/models/supported_models.yaml) in the GitHub repo. In the following example, `falcon-7b-instruct` is used for the model deployment.
   - We recommend using `labelSelector` and `preferredNodes` to explicitly select the GPU node by name. In the following example, `apps: llm-inference` is used for the GPU node `moc-le4aoguwyd9`. You can choose any node label you want, as long as the labels match. The next step shows how to label the node.
   ```yaml
   apiVersion: kaito.sh/v1alpha1
   kind: Workspace
   metadata:
     name: workspace-falcon-7b
   resource:
     labelSelector:
       matchLabels:
         apps: llm-inference
     preferredNodes:
       - moc-le4aoguwyd9
   inference:
     preset:
       name: "falcon-7b-instruct"
   ```
1. Label the GPU node using **kubectl**, so that the YAML file knows which node can be used for deployment:
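   For example, using the node name and label from the sample YAML in the previous step:

   ```bash
   kubectl label node moc-le4aoguwyd9 apps=llm-inference
   ```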
1. Apply the YAML file and wait until the workspace deployment completes:
   ```bash
   kubectl apply -f sampleyamlfile.yaml
   ```
## Validate the model deployment
To validate the model deployment, follow these steps:
1. Validate the workspace using the `kubectl get workspace` command. Also make sure that both the `ResourceReady` and `InferenceReady` fields are set to **True** before testing with the prompt:
   ```bash
   kubectl get workspace
   ```
   Expected output:
   ```output
   NAME                  INSTANCE   RESOURCEREADY   INFERENCEREADY   JOBSTARTED   WORKSPACESUCCEEDED   AGE
   ```
1. After the resource and inference are ready, the **workspace-falcon-7b** inference service is exposed internally and can be accessed with a cluster IP. You can test the model with the following prompt. For more information about the features of KAITO inference, see the [instructions in the KAITO repo](https://github.com/kaito-project/kaito/blob/main/docs/inference/README.md#inference-workload).
   ```bash
   export CLUSTERIP=$(kubectl get svc workspace-falcon-7b -o jsonpath="{.spec.clusterIPs[0]}")

   kubectl run -it --rm --restart=Never curl --image=curlimages/curl -- curl -X POST http://$CLUSTERIP/chat -H "accept: application/json" -H "Content-Type: application/json" -d "{\"prompt\":\"<sample_prompt>\"}"
   ```
   Expected output:
   ```output
   usera@quke-desktop: $ kubectl run -it --rm --restart=Never curl --image=curlimages/curl -- curl -X POST http://$CLUSTERIP/chat -H "accept: application/json" -H "Content-Type: application/json" -d "{\"prompt\":\"Write a short story about a person who discovers a hidden room in their house .? \"}"
   If you don't see a command prompt, try pressing enter.
   {"Result": "Write a short story about a person who discovers a hidden room in their house .? ?\nThe door is locked from both the inside and outside, and there appears not to be any other entrance. The walls of the room seem to be made of stone, although there are no visible seams, or any other indication of where the walls end and the floor begins. The only furniture in the room is a single wooden chair, a small candle, and what appears to be a bed. (The bed is covered entirely with a sheet, and is not visible from the doorway.)\nThe only light in the room comes from a single candle on the floor of the room. The door is solid and does not appear to have hinges or a knob. The walls seem to go on forever into the darkness, and there is a chill, wet feeling in the air that makes the hair stand up on the back of your neck.\nThe chair sits on the floor directly across from the door. The chair"}pod "curl" deleted
   ```
## Troubleshooting
If the pod is not deployed properly or the **ResourceReady** field shows empty or **false**, it's usually because the preferred GPU node isn't labeled correctly. Check the node label with `kubectl get node <yourNodeName> --show-labels`. For example, in the YAML file, the following code specifies that the node must have the label `apps=llm-inference`:
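```yaml
labelSelector:
  matchLabels:
    apps: llm-inference
```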