**articles/machine-learning/how-to-deploy-kubernetes-extension.md** (5 additions, 3 deletions)
@@ -43,7 +43,9 @@ In this article, you can learn:
- [Disabling local accounts](../aks/managed-aad.md#disable-local-accounts) for AKS is **not supported** by Azure Machine Learning. When the AKS cluster is deployed, local accounts are enabled by default.
- If your AKS cluster has an [authorized IP range enabled to access the API server](../aks/api-server-authorized-ip-ranges.md), enable the AzureML control plane IP ranges for the AKS cluster. The AzureML control plane is deployed across paired regions. Without access to the API server, the machine learning pods can't be deployed. Use the [IP ranges](https://www.microsoft.com/download/confirmation.aspx?id=56519) for both [paired regions](../availability-zones/cross-region-replication-azure.md) when enabling the IP ranges in the AKS cluster.
- Azure Machine Learning doesn't guarantee support for all preview-stage features in AKS. For example, [Azure AD pod identity](../aks/use-azure-ad-pod-identity.md) isn't supported.
- If you've previously followed the steps in the [AzureML AKS v1 document](./v1/how-to-create-attach-kubernetes.md) to create or attach your AKS cluster as an inference cluster, use the following link to [clean up the legacy azureml-fe related resources](./v1/how-to-create-attach-kubernetes.md#delete-azureml-fe-related-resources) before you continue to the next step.
- Attaching your AKS cluster across subscriptions isn't currently supported; your AKS cluster must be in the same subscription as your workspace.
- To meet a cross-subscription requirement, first connect the AKS cluster to Azure Arc and then attach that Arc-enabled Kubernetes resource, as sketched after this list.
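A minimal sketch of that cross-subscription workaround with the Azure CLI, assuming the `connectedk8s` and `ml` CLI extensions are installed and your kubeconfig points at the AKS cluster; every resource name and ID below is a placeholder:

```bash
# Connect the AKS cluster to Azure Arc (creates a connectedClusters resource).
az connectedk8s connect --name my-aks-cluster --resource-group my-arc-rg

# Attach the Arc-enabled Kubernetes resource to the AzureML workspace,
# which can live in a different subscription than the AKS cluster.
az ml compute attach \
  --resource-group my-ws-rg \
  --workspace-name my-workspace \
  --type Kubernetes \
  --name k8s-compute \
  --resource-id "/subscriptions/<arc-subscription-id>/resourceGroups/my-arc-rg/providers/Microsoft.Kubernetes/connectedClusters/my-aks-cluster" \
  --identity-type SystemAssigned
```

Before the attach step, the AzureML extension still needs to be deployed to the cluster, as described in the rest of this article.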
@@ -60,7 +62,7 @@ You can use AzureML CLI command `k8s-extension create` to deploy AzureML extension
|`sslSecret`| The name of the Kubernetes secret in the `azureml` namespace. This config is used to store `cert.pem` (PEM-encoded TLS/SSL cert) and `key.pem` (PEM-encoded TLS/SSL key), which are required for inference HTTPS endpoint support when ``allowInsecureConnections`` is set to `False`. For a sample YAML definition of `sslSecret`, see [Configure sslSecret](./how-to-secure-kubernetes-online-endpoint.md#configure-sslsecret). Use this config or a combination of `sslCertPemFile` and `sslKeyPemFile` protected config settings. |N/A| Optional | Optional |
|`sslCname`|A TLS/SSL CNAME used by the inference HTTPS endpoint. **Required** if `allowInsecureConnections=False`.| N/A | Optional | Optional|
|`inferenceRouterHA`|`True` or `False`, default `True`. By default, the AzureML extension deploys three inference router replicas for high availability, which requires at least three worker nodes in the cluster. Set it to `False` if your cluster has fewer than three worker nodes; in that case, only one inference router service is deployed. | N/A| Optional | Optional |
|`nodeSelector`| By default, the deployed Kubernetes resources and your machine learning workloads are randomly deployed to one or more nodes of the cluster, and DaemonSet resources are deployed to ALL nodes. If you want to restrict the extension deployment and your training/inference workloads to specific nodes with labels `key1=value1` and `key2=value2`, use `nodeSelector.key1=value1` and `nodeSelector.key2=value2` accordingly. | Optional| Optional | Optional |
|`installNvidiaDevicePlugin`|`True` or `False`, default `False`. The [NVIDIA Device Plugin](https://github.com/NVIDIA/k8s-device-plugin#nvidia-device-plugin-for-kubernetes) is required for ML workloads on NVIDIA GPU hardware. By default, the AzureML extension deployment won't install the NVIDIA Device Plugin, regardless of whether the Kubernetes cluster has GPU hardware. You can set this setting to `True` to install it, but make sure to fulfill the [prerequisites](https://github.com/NVIDIA/k8s-device-plugin#prerequisites). | Optional |Optional |Optional |
|`installPromOp`|`True` or `False`, default `True`. The AzureML extension needs the Prometheus operator to manage Prometheus. Set it to `False` to reuse an existing Prometheus operator. For more information, see [reusing the Prometheus operator](./how-to-troubleshoot-kubernetes-extension.md#prometheus-operator).| Optional| Optional | Optional |
|`installVolcano`|`True` or `False`, default `True`. The AzureML extension needs the Volcano scheduler to schedule jobs. Set it to `False` to reuse an existing Volcano scheduler. For more information, see [reusing the Volcano scheduler](./how-to-troubleshoot-kubernetes-extension.md#volcano-scheduler).| Optional| N/A | Optional |
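As an illustration of how several of these settings combine, the following is a sketch of an extension deployment for HTTPS inference. The extension type string, setting casing, and all resource names come from memory or are placeholders, so verify them against the table and your environment:

```bash
az k8s-extension create \
  --name azureml-extension \
  --extension-type Microsoft.AzureML.Kubernetes \
  --cluster-type managedClusters \
  --cluster-name my-aks-cluster \
  --resource-group my-rg \
  --scope cluster \
  --config enableInference=True inferenceRouterServiceType=loadBalancer \
           inferenceRouterHA=False sslCname=ml.example.com \
           nodeSelector.agentpool=mlpool \
  --config-protected sslCertPemFile=./cert.pem sslKeyPemFile=./key.pem
```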
@@ -83,7 +85,7 @@ If you plan to deploy AzureML extension for real-time inference workload and wan
* Type `LoadBalancer`. Exposes `azureml-fe` externally using a cloud provider's load balancer. To specify this value, ensure that your cluster supports load balancer provisioning. Note that most on-premises Kubernetes clusters don't support an external load balancer.
* Type `NodePort`. Exposes `azureml-fe` on each node's IP at a static port. You can then contact `azureml-fe` from outside the cluster by requesting `<NodeIP>:<NodePort>`. Using `NodePort` also allows you to set up your own load-balancing solution and TLS/SSL termination for `azureml-fe`.
* Type `ClusterIP`. Exposes `azureml-fe` on a cluster-internal IP, which makes `azureml-fe` reachable only from within the cluster. For `azureml-fe` to serve inference requests coming from outside the cluster, you need to set up your own load-balancing solution and TLS/SSL termination for `azureml-fe`.
* To ensure high availability of the `azureml-fe` routing service, the AzureML extension deployment by default creates three replicas of `azureml-fe` for clusters with three or more nodes. If your cluster has **fewer than three nodes**, set `inferenceRouterHA=False`.
* Also consider using **HTTPS** to restrict access to model endpoints and secure the data that clients submit. To do so, specify either the `sslSecret` config setting or the combination of the `sslKeyPemFile` and `sslCertPemFile` config-protected settings.
* By default, AzureML extension deployment expects config settings for **HTTPS** support. For development or testing purposes, **HTTP** support is conveniently provided through config setting `allowInsecureConnections=True`.
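For a quick development or test setup over HTTP, the deployment command can be reduced to something like the following sketch; the names are placeholders, and the setting names should be checked against the table above:

```bash
az k8s-extension create \
  --name azureml-extension \
  --extension-type Microsoft.AzureML.Kubernetes \
  --cluster-type connectedClusters \
  --cluster-name my-arc-cluster \
  --resource-group my-rg \
  --scope cluster \
  --config enableInference=True allowInsecureConnections=True inferenceRouterServiceType=nodePort
```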
**articles/machine-learning/how-to-manage-kubernetes-instance-types.md** (8 additions, 0 deletions)
@@ -24,6 +24,14 @@ and [resources](https://kubernetes.io/docs/concepts/configuration/manage-resourc
In short, a `nodeSelector` lets you specify which node a pod should run on. The node must have a corresponding label. In the `resources` section, you can set the compute resources (CPU, memory and NVIDIA GPU) for the pod.
> [!IMPORTANT]
>
> If you have [specified a nodeSelector when deploying the AzureML extension](./how-to-deploy-kubernetes-extension.md#review-azureml-extension-configuration-settings), that nodeSelector is applied to all instance types. This means that:
> - For each instance type you create, the specified nodeSelector should be a subset of the extension-specified nodeSelector.
> - If you use an instance type **with a nodeSelector**, the workload runs on any node matching both the extension-specified nodeSelector and the instance type-specified nodeSelector (see the sketch after this note).
> - If you use an instance type **without a nodeSelector**, the workload runs on any node matching the extension-specified nodeSelector.
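A sketch of an instance type whose nodeSelector narrows the extension-level selector. The API group/version (`amlarc.azureml.com/v1alpha1`), label keys, and resource values here are assumptions for illustration, so adjust them to your cluster:

```bash
# Create a custom instance type in the cluster; its nodeSelector should be a
# subset of (or compatible with) the nodeSelector given to the AzureML extension.
cat <<'EOF' | kubectl apply -f -
apiVersion: amlarc.azureml.com/v1alpha1
kind: InstanceType
metadata:
  name: smallcpu
spec:
  nodeSelector:
    agentpool: mlpool
  resources:
    requests:
      cpu: "1"
      memory: "2Gi"
    limits:
      cpu: "2"
      memory: "4Gi"
EOF
```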
## Default instance type
By default, a `defaultinstancetype` with the following definition is created when you attach a Kubernetes cluster to an AzureML workspace:
**articles/machine-learning/how-to-secure-kubernetes-online-endpoint.md** (2 additions, 2 deletions)
@@ -169,7 +169,7 @@ TLS/SSL certificates expire and must be renewed. Typically, this happens every year.
If you directly configured the PEM files in the extension deployment command before, run the extension update command and specify the new PEM files' paths, as sketched below:
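The command itself is cut off in this diff; a sketch of what the update might look like, with placeholder names and paths (verify the flags and setting names against the extension configuration article):

```bash
az k8s-extension update \
  --name azureml-extension \
  --cluster-type connectedClusters \
  --cluster-name my-arc-cluster \
  --resource-group my-rg \
  --config-protected sslCertPemFile=./renewed-cert.pem sslKeyPemFile=./renewed-key.pem
```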
* The system can't find the compute when creating or updating a new online endpoint or deployment.
* The compute of existing online endpoints/deployments has been removed.
You can check the following items to troubleshoot the issue:
* Try to recreate the endpoint and deployment.
@@ -87,12 +89,40 @@ The Kubernetes compute is not accessible.
This error typically occurs when the workspace MSI (managed identity) doesn't have access to the AKS cluster. Check whether the workspace MSI has access to the AKS cluster; if not, follow this [document](how-to-identity-based-service-authentication.md) to manage access and identity.
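One way to check that access from the CLI, as a sketch; the principal ID of the workspace managed identity can be found on the workspace's Identity page in the Azure portal, and the names below are placeholders:

```bash
# Resource ID of the AKS cluster the compute is attached to.
AKS_ID=$(az aks show --name my-aks-cluster --resource-group my-aks-rg --query id -o tsv)

# List the role assignments the workspace managed identity has on that cluster.
az role assignment list --assignee <workspace-msi-principal-id> --scope "$AKS_ID" -o table
```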
#### ERROR: InvalidComputeInformation
The error message is as follows:
```bash
The compute information is invalid.
```
There's a compute target validation process when deploying models to your Kubernetes cluster. This error occurs when the compute information is invalid at validation time, for example when the compute target isn't found or the configuration of the Azure Machine Learning extension has been updated in your Kubernetes cluster.
You can check the following items to troubleshoot the issue:
* Check whether the compute target you used is correct and exists in your workspace.
* Try to detach and reattach the compute to the workspace. Note the additional guidance on [reattaching](#error-genericcomputeerror).
This error occurs when the system fails to find any configuration to connect to the cluster, for example:
* For an Arc-enabled Kubernetes cluster, no Azure Relay configuration can be found.
* For an AKS cluster, no AKS configuration can be found.
To rebuild the compute connection configuration in your cluster, try to detach and reattach the compute to the workspace, as sketched below. Note the additional guidance on [reattaching](#error-genericcomputeerror).
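A sketch of the detach-and-reattach round trip with the Azure CLI; the compute, workspace, and cluster names are placeholders, the resource ID must match the cluster that was originally attached, and the exact `az ml compute` subcommand names should be verified against your CLI extension version:

```bash
# Detach the Kubernetes compute from the workspace.
az ml compute detach --name k8s-compute \
  --resource-group my-ws-rg --workspace-name my-workspace

# Reattach the same cluster under the same compute name.
az ml compute attach --type Kubernetes --name k8s-compute \
  --resource-group my-ws-rg --workspace-name my-workspace \
  --resource-id "/subscriptions/<sub-id>/resourceGroups/my-aks-rg/providers/Microsoft.ContainerService/managedClusters/my-aks-cluster" \
  --identity-type SystemAssigned
```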
### Kubernetes cluster error
The following is a list of error types in **cluster scope** that you might encounter when using Kubernetes compute to create online endpoints and online deployments for real-time model inference, which you can troubleshoot by following these guidelines:
For an AKS cluster or an Azure Arc enabled Kubernetes cluster:
* Check if the Kubernetes API server is accessible by running a `kubectl` command in the cluster, as sketched below.
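A quick reachability check from any machine that has credentials for the cluster, as a sketch:

```bash
# Both commands require a round trip to the Kubernetes API server; failures
# usually point to network, authorized IP range, or credential problems.
kubectl cluster-info
kubectl get nodes -o wide
```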
#### ERROR: ClusterNotReachable
@@ -132,6 +162,23 @@ For AKS clusters:
For an AKS cluster or an Azure Arc enabled Kubernetes cluster:
* Check if the Kubernetes API server is accessible by running a `kubectl` command in the cluster.
#### ERROR: ClusterNotFound
The error message is as follows:
```bash
Cannot found Kubernetes cluster.
```
This error occurs when the system can't find the AKS or Arc-enabled Kubernetes cluster.
You can check the following items to troubleshoot the issue:
* First, check the cluster resource ID in the Azure portal to verify whether the Kubernetes cluster resource still exists and is running normally (see the sketch after this list).
* If the cluster exists and is running, try to detach and reattach the compute to the workspace. Note the additional guidance on [reattaching](#error-genericcomputeerror).
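The existence check can also be done from the CLI, as a sketch; the resource ID below is a placeholder and should be the ID recorded on the attached compute:

```bash
# Returns the cluster resource if it still exists; an error usually means the
# cluster was deleted or the attached resource ID is stale.
az resource show \
  --ids "/subscriptions/<sub-id>/resourceGroups/my-aks-rg/providers/Microsoft.ContainerService/managedClusters/my-aks-cluster" \
  --output table
```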
> [!TIP]
> For more troubleshooting guidance on common errors when creating or updating Kubernetes online endpoints and deployments, see [How to troubleshoot online endpoints](how-to-troubleshoot-online-endpoints.md).
**articles/machine-learning/how-to-troubleshoot-kubernetes-extension.md** (42 additions, 1 deletion)
@@ -225,6 +225,47 @@ volcano-scheduler.conf: |
      - name: nodeorder
      - name: binpack
```
You need to use the same config settings as above, and you need to disable the `job/validate` webhook in the Volcano admission if your **Volcano version is lower than 1.6**, so that AzureML training workloads can perform properly.
As discussed in this [thread](https://github.com/volcano-sh/volcano/issues/2558), the **gang plugin** doesn't work well with the cluster autoscaler (CA) and the node autoscaler in AKS.
If you use the Volcano that comes with the AzureML extension by setting `installVolcano=true`, the extension has a scheduler config by default, which configures the **gang** plugin to prevent job deadlock. Therefore, the cluster autoscaler (CA) in the AKS cluster won't be supported with the Volcano installed by the extension.
For the case above, if you prefer that the AKS cluster autoscaler work normally, you can configure the `volcanoScheduler.schedulerConfigMap` parameter through an extension update and specify a custom **no gang** Volcano scheduler config for it, for example:
```yaml
volcano-scheduler.conf: |
    actions: "enqueue, allocate, backfill"
    tiers:
    - plugins:
      - name: sla
        arguments:
          sla-waiting-time: 1m
    - plugins:
      - name: conformance
    - plugins:
      - name: overcommit
      - name: drf
      - name: predicates
      - name: proportion
      - name: nodeorder
      - name: binpack
```
To use this config in your AKS cluster, follow these steps:
1. Create a ConfigMap with the above config in the `azureml` namespace. This namespace is generally created when you install the AzureML extension.
1. Set `volcanoScheduler.schedulerConfigMap=<configmap name>` in the extension config to apply this ConfigMap. You also need to skip the resource validation when installing the extension by configuring `amloperator.skipResourceValidation=true`. For example:
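The example itself is truncated in this diff; a minimal sketch of the two steps, assuming the no-gang config above is saved locally as `volcano-scheduler.conf` and using placeholder names (verify the exact `az k8s-extension` flags for your setup):

```bash
# Step 1: create the ConfigMap in the azureml namespace from the no-gang scheduler config.
kubectl create configmap volcano-scheduler-nogang \
  --from-file=volcano-scheduler.conf=./volcano-scheduler.conf \
  --namespace azureml

# Step 2: point the extension at that ConfigMap and skip resource validation.
az k8s-extension update \
  --name azureml-extension \
  --cluster-type managedClusters \
  --cluster-name my-aks-cluster \
  --resource-group my-rg \
  --config volcanoScheduler.schedulerConfigMap=volcano-scheduler-nogang \
           amloperator.skipResourceValidation=true
```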