
Commit 6249231

Merge pull request #78780 from Blackmist/dockerimage
custom Docker image
2 parents 735caf5 + 00042a2 commit 6249231

File tree

4 files changed: +258 −49 lines changed

articles/machine-learning/service/concept-azure-machine-learning-architecture.md

Lines changed: 4 additions & 0 deletions
@@ -176,10 +176,14 @@ Azure Machine Learning can create two types of images:
  The Azure Machine Learning service provides a base image, which is used by default. You can also provide your own custom images.

+ ### Image registry

  Images are cataloged in the **image registry** in your workspace. You can provide additional metadata tags when you create the image, so that you can query them to find your image later.

  For an example of creating an image, see [Deploy an image classification model in Azure Container Instances](tutorial-deploy-models-with-aml.md).

+ For an example of deploying a model using a custom image, see [How to deploy a model using a custom Docker image](how-to-deploy-custom-docker-image.md).

  ### Deployment

  A deployment is an instantiation of your model into either a web service that can be hosted in the cloud or an IoT module for integrated device deployments.

articles/machine-learning/service/how-to-deploy-and-where.md

Lines changed: 4 additions & 49 deletions
@@ -254,7 +254,9 @@ In this example, the configuration contains the following items:
  * The [entry script](#script), which is used to handle web requests sent to the deployed service
  * The conda file that describes the Python packages needed to inference

- For information on InferenceConfig functionality, see the [Advanced configuration](#advanced-config) section.
+ For information on InferenceConfig functionality, see the [InferenceConfig](https://docs.microsoft.com/python/api/azureml-core/azureml.core.model.inferenceconfig?view=azure-ml-py) class reference.
+
+ For information on using a custom Docker image with inference configuration, see [How to deploy a model using a custom Docker image](how-to-deploy-custom-docker-image.md).

  ### 3. Define your Deployment configuration

@@ -510,61 +512,14 @@ print(service.state)
  print(service.get_logs())
  ```

- <a id="advanced-config"></a>
-
- ## Advanced settings
-
- **<a id="customimage"></a> Use a custom base image**
-
- Internally, InferenceConfig creates a Docker image that contains the model and other assets needed by the service. If not specified, a default base image is used.
-
- When creating an image to use with your inference configuration, the image must meet the following requirements:
-
- * Ubuntu 16.04 or greater.
- * Conda 4.5.# or greater.
- * Python 3.5.# or 3.6.#.
-
- To use a custom image, set the `base_image` property of the inference configuration to the address of the image. The following example demonstrates how to use an image from both a public and private Azure Container Registry:
-
- ```python
- # use an image available in public Container Registry without authentication
- inference_config.base_image = "mcr.microsoft.com/azureml/o16n-sample-user-base/ubuntu-miniconda"
-
- # or, use an image available in a private Container Registry
- inference_config.base_image = "myregistry.azurecr.io/mycustomimage:1.0"
- inference_config.base_image_registry.address = "myregistry.azurecr.io"
- inference_config.base_image_registry.username = "username"
- inference_config.base_image_registry.password = "password"
- ```
-
- The following image URIs are for images provided by Microsoft, and can be used without providing a user name or password value:
-
- * `mcr.microsoft.com/azureml/o16n-sample-user-base/ubuntu-miniconda`
- * `mcr.microsoft.com/azureml/onnxruntime:v0.4.0`
- * `mcr.microsoft.com/azureml/onnxruntime:v0.4.0-cuda10.0-cudnn7`
- * `mcr.microsoft.com/azureml/onnxruntime:v0.4.0-tensorrt19.03`
-
- To use these images, set the `base_image` to the URI from the list above. Set `base_image_registry.address` to `mcr.microsoft.com`.
-
- > [!IMPORTANT]
- > Microsoft images that use CUDA or TensorRT must be used on Microsoft Azure Services only.
-
- For more information on uploading your own images to an Azure Container Registry, see [Push your first image to a private Docker container registry](https://docs.microsoft.com/azure/container-registry/container-registry-get-started-docker-cli).
-
- If your model is trained on Azure Machine Learning Compute, using __version 1.0.22 or greater__ of the Azure Machine Learning SDK, an image is created during training. The following example demonstrates how to use this image:
-
- ```python
- # Use an image built during training with SDK 1.0.22 or greater
- image_config.base_image = run.properties["AzureML.DerivedImageName"]
- ```

  ## Clean up resources
  To delete a deployed web service, use `service.delete()`.
  To delete a registered model, use `model.delete()`.

  For more information, see the reference documentation for [WebService.delete()](https://docs.microsoft.com/python/api/azureml-core/azureml.core.webservice(class)?view=azure-ml-py#delete--), and [Model.delete()](https://docs.microsoft.com/python/api/azureml-core/azureml.core.model.model?view=azure-ml-py#delete--).

  ## Next steps
+ * [How to deploy a model using a custom Docker image](how-to-deploy-custom-docker-image.md)
  * [Deployment troubleshooting](how-to-troubleshoot-deployment.md)
  * [Secure Azure Machine Learning web services with SSL](how-to-secure-web-service.md)
  * [Consume a ML Model deployed as a web service](how-to-consume-web-service.md)
articles/machine-learning/service/how-to-deploy-custom-docker-image.md

Lines changed: 248 additions & 0 deletions
@@ -0,0 +1,248 @@
---
title: How to deploy a model using a custom Docker image
titleSuffix: Azure Machine Learning service
description: 'Learn how to use a custom Docker image when deploying your Azure Machine Learning service models. When deploying a trained model, a Docker image is created to host the model, web server, and other components needed to run the service. While Azure Machine Learning service provides a default image for you, you can also use your own image.'
services: machine-learning
ms.service: machine-learning
ms.subservice: core
ms.topic: conceptual
ms.author: jordane
author: jpe316
ms.reviewer: larryfr
ms.date: 06/05/2019
---

# Deploy a model using a custom Docker image

Learn how to use a custom Docker image when deploying trained models with the Azure Machine Learning service.

When you deploy a trained model to a web service or IoT Edge device, a Docker image is created. This image contains the model, conda environment, and assets needed to use the model. It also contains a web server to handle incoming requests when deployed as a web service, and components needed to work with Azure IoT Hub.

Azure Machine Learning service provides a default Docker image so you don't have to worry about creating one. You can also use a custom image that you create as a _base image_. A base image is used as the starting point when an image is created for a deployment. It provides the underlying operating system and components. The deployment process then adds additional components, such as your model, conda environment, and other assets, to the image before deploying it.

Typically, you create a custom image when you want to control component versions or save time during deployment. For example, you might want to standardize on a specific version of Python, Conda, or another component. You might also want to install software required by your model, where the installation process takes a long time. Installing the software when creating the base image means that you don't have to install it for each deployment.

> [!IMPORTANT]
> When deploying a model, you cannot override core components such as the web server or IoT Edge components. These components provide a known working environment that is tested and supported by Microsoft.

> [!WARNING]
> Microsoft may not be able to help troubleshoot problems caused by a custom image. If you encounter problems, you may be asked to use the default image or one of the images Microsoft provides to see if the problem is specific to your image.

This document is broken into two sections:

* Create a custom image: Provides information to admins and DevOps on creating a custom image and configuring authentication to an Azure Container Registry using the Azure CLI and Machine Learning CLI.
* Use a custom image: Provides information to Data Scientists and DevOps/MLOps on using custom images when deploying a trained model from the Python SDK or ML CLI.
## Prerequisites

* An Azure Machine Learning service workspace. For more information, see the [Create a workspace](setup-create-workspace.md) article.
* The Azure Machine Learning SDK. For more information, see the Python SDK section of the [Create a workspace](setup-create-workspace.md#sdk) article.
* The [Azure CLI](https://docs.microsoft.com/cli/azure/install-azure-cli?view=azure-cli-latest).
* An [Azure Container Registry](/azure/container-registry) or other Docker registry that is accessible on the internet.
* The steps in this document assume that you are familiar with creating and using an __inference configuration__ object as part of model deployment. For more information, see the "prepare to deploy" section of [Where to deploy and how](how-to-deploy-and-where.md#prepare-to-deploy). A minimal sketch of creating this object appears after this list.
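For orientation, here is a minimal sketch of connecting to a workspace and creating the inference configuration object that the rest of this article assumes. The file names `score.py` and `myenv.yml` are placeholders for your own entry script and conda environment file.

```python
# Minimal sketch: connect to the workspace and create an inference configuration.
# score.py and myenv.yml are placeholders for your own entry script and conda file.
from azureml.core import Workspace
from azureml.core.model import InferenceConfig

ws = Workspace.from_config()  # reads the config.json downloaded from the Azure portal

inference_config = InferenceConfig(entry_script="score.py",
                                   runtime="python",
                                   conda_file="myenv.yml")
```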
## Create a custom image

The information in this section assumes that you are using an Azure Container Registry to store Docker images. Use the following checklist when planning to create custom images for Azure Machine Learning service:

* Will you use the Azure Container Registry created for the Azure Machine Learning service workspace, or a standalone Azure Container Registry?

    When using images stored in the __container registry for the workspace__, you do not need to authenticate to the registry. Authentication is handled by the workspace.

    > [!TIP]
    > The container registry for your workspace is created the first time you train or deploy a model using the workspace. If you've created a new workspace, but not trained or created a model, no Azure Container Registry will exist for the workspace.

    For information on retrieving the name of the Azure Container Registry for your workspace, see the [Get container registry name](#getname) section of this article.

    When using images stored in a __standalone container registry__, you will need to configure a service principal that has at least read access. You then provide the service principal ID (username) and password to anyone that uses images from the registry. The exception is if you make the container registry publicly accessible.

    For information on creating a private Azure Container Registry, see [Create a private container registry](/azure/container-registry/container-registry-get-started-azure-cli).

    For information on using service principals with Azure Container Registry, see [Azure Container Registry authentication with service principals](/azure/container-registry/container-registry-auth-service-principal).

* Azure Container Registry and image information: Provide the image name to anyone that needs to use it. For example, an image named `myimage`, stored in a registry named `myregistry`, is referenced as `myregistry.azurecr.io/myimage` when using the image for model deployment.

* Image requirements: Azure Machine Learning service only supports Docker images that provide the following software:

    * Ubuntu 16.04 or greater.
    * Conda 4.5.# or greater.
    * Python 3.5.# or 3.6.#.
<a id="getname"></a>

### Get container registry information

In this section, learn how to get the name of the Azure Container Registry for your Azure Machine Learning service workspace.

> [!TIP]
> The container registry for your workspace is created the first time you train or deploy a model using the workspace. If you've created a new workspace, but not trained or created a model, no Azure Container Registry will exist for the workspace.

If you've already trained or deployed models using the Azure Machine Learning service, a container registry was created for your workspace. To find the name of this container registry, use the following steps:

1. Open a new shell or command-prompt and use the following command to authenticate to your Azure subscription:

    ```azurecli-interactive
    az login
    ```

    Follow the prompts to authenticate to the subscription.

2. Use the following command to list the container registry for the workspace. Replace `<myworkspace>` with your Azure Machine Learning service workspace name. Replace `<resourcegroup>` with the Azure resource group that contains your workspace:

    ```azurecli-interactive
    az ml workspace show -w <myworkspace> -g <resourcegroup> --query containerRegistry
    ```

    The information returned is similar to the following text:

    ```text
    /subscriptions/<subscription_id>/resourceGroups/<resource_group>/providers/Microsoft.ContainerRegistry/registries/<registry_name>
    ```

    The `<registry_name>` value is the name of the Azure Container Registry for your workspace.
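If you prefer to stay in Python, the same resource ID can also be read from the workspace details returned by the SDK. This is a sketch; the exact key in the details dictionary is an assumption and may vary by SDK version.

```python
# Sketch: read the container registry resource ID from the workspace details.
# The "containerRegistry" key is assumed here; verify it for your SDK version.
from azureml.core import Workspace

ws = Workspace.from_config()
acr_resource_id = ws.get_details().get("containerRegistry")
print(acr_resource_id)  # ends with /registries/<registry_name>
```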
### Build a custom image

The steps in this section walk through creating a custom Docker image in your Azure Container Registry.

1. Create a new text file named `Dockerfile`, and use the following text as the contents:

    ```text
    FROM ubuntu:16.04

    ENV LANG=C.UTF-8 LC_ALL=C.UTF-8
    ENV PATH /opt/miniconda/bin:$PATH

    RUN apt-get update --fix-missing && \
        apt-get install -y wget bzip2 && \
        apt-get clean && \
        rm -rf /var/lib/apt/lists/*

    RUN wget --quiet https://repo.anaconda.com/miniconda/Miniconda3-4.5.12-Linux-x86_64.sh -O ~/miniconda.sh && \
        /bin/bash ~/miniconda.sh -b -p /opt/miniconda && \
        rm ~/miniconda.sh && \
        /opt/miniconda/bin/conda clean -tipsy

    RUN conda install -y python=3.6 && \
        conda clean -aqy && \
        rm -rf /opt/miniconda/pkgs && \
        find / -type d -name __pycache__ -prune -exec rm -rf {} \;
    ```

2. From a shell or command-prompt, use the following to authenticate to the Azure Container Registry. Replace the `<registry_name>` with the name of the container registry you want to store the image in:

    ```azurecli-interactive
    az acr login --name <registry_name>
    ```

3. To upload the Dockerfile, and build it, use the following command. Replace `<registry_name>` with the name of the container registry you want to store the image in:

    ```azurecli-interactive
    az acr build --image myimage:v1 --registry <registry_name> --file Dockerfile .
    ```

    During the build process, information is streamed back to the command line. If the build is successful, you receive a message similar to the following text:

    ```text
    Run ID: cda was successful after 2m56s
    ```

For more information on building images with an Azure Container Registry, see [Build and run a container image using Azure Container Registry Tasks](https://docs.microsoft.com/azure/container-registry/container-registry-quickstart-task-cli).

For more information on uploading existing images to an Azure Container Registry, see [Push your first image to a private Docker container registry](/azure/container-registry/container-registry-get-started-docker-cli.md).
## Use a custom image

To use a custom image, you need the following information:

* The __image name__. For example, `mcr.microsoft.com/azureml/o16n-sample-user-base/ubuntu-miniconda` is the path to a basic Docker image provided by Microsoft.
* If the image is in a __private repository__, you need the following information:

    * The registry __address__. For example, `myregistry.azurecr.io`.
    * A service principal __username__ and __password__ that has read access to the registry.

If you do not have this information, speak to the administrator for the Azure Container Registry that contains your image.

### Publicly available images

Microsoft provides several Docker images on a publicly accessible repository, which can be used with the steps in this section:

| Image | Description |
| ----- | ----- |
| `mcr.microsoft.com/azureml/o16n-sample-user-base/ubuntu-miniconda` | Basic image for Azure Machine Learning service |
| `mcr.microsoft.com/azureml/onnxruntime:v0.4.0` | Contains the ONNX runtime. |
| `mcr.microsoft.com/azureml/onnxruntime:v0.4.0-cuda10.0-cudnn7` | Contains the ONNX runtime and CUDA components. |
| `mcr.microsoft.com/azureml/onnxruntime:v0.4.0-tensorrt19.03` | Contains the ONNX runtime and TensorRT. |

> [!TIP]
> Since these images are publicly available, you do not need to provide an address, username, or password when using them.

> [!IMPORTANT]
> Microsoft images that use CUDA or TensorRT must be used on Microsoft Azure Services only.

> [!TIP]
> __If your model is trained on Azure Machine Learning Compute__, using __version 1.0.22 or greater__ of the Azure Machine Learning SDK, an image is created during training. To discover the name of this image, use `run.properties["AzureML.DerivedImageName"]`. The following example demonstrates how to use this image:
>
> ```python
> # Use an image built during training with SDK 1.0.22 or greater
> image_config.base_image = run.properties["AzureML.DerivedImageName"]
> ```
### Use an image with the Azure Machine Learning SDK

To use a custom image, set the `base_image` property of the [inference configuration object](https://docs.microsoft.com/python/api/azureml-core/azureml.core.model.inferenceconfig?view=azure-ml-py) to the address of the image:

```python
# use an image from a registry named 'myregistry'
inference_config.base_image = "myregistry.azurecr.io/myimage:v1"
```

This format works for both images stored in the Azure Container Registry for your workspace and container registries that are publicly accessible. For example, the following code uses a default image provided by Microsoft:

```python
# use an image available in public Container Registry without authentication
inference_config.base_image = "mcr.microsoft.com/azureml/o16n-sample-user-base/ubuntu-miniconda"
```
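The Microsoft-provided images in the table above can be used the same way. The guidance previously in the deployment article also set the registry address to `mcr.microsoft.com` for these images; the following sketch carries that over, using one of the listed ONNX runtime images as an example:

```python
# Sketch: use a public Microsoft-provided image; no username or password is needed.
# Setting the registry address follows the earlier "custom base image" guidance.
inference_config.base_image = "mcr.microsoft.com/azureml/onnxruntime:v0.4.0"
inference_config.base_image_registry.address = "mcr.microsoft.com"
```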
To use an image from a __private container registry__ that is not in your workspace, you must specify the address of the registry and a user name and password:

```python
# Use an image available in a private Container Registry
inference_config.base_image = "myregistry.azurecr.io/mycustomimage:1.0"
inference_config.base_image_registry.address = "myregistry.azurecr.io"
inference_config.base_image_registry.username = "username"
inference_config.base_image_registry.password = "password"
```
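Hard-coding registry credentials in source is easy to leak. One option, sketched here, is to read them from environment variables; the variable names are placeholders you define yourself, not Azure Machine Learning settings:

```python
import os

# Sketch: pull the service principal credentials from environment variables.
# REGISTRY_USERNAME and REGISTRY_PASSWORD are placeholder names, not AML settings.
inference_config.base_image_registry.username = os.environ["REGISTRY_USERNAME"]
inference_config.base_image_registry.password = os.environ["REGISTRY_PASSWORD"]
```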
### Use an image with the Machine Learning CLI

> [!IMPORTANT]
> Currently the Machine Learning CLI can use images from the Azure Container Registry for your workspace or publicly accessible repositories. It cannot use images from standalone private registries.

When deploying a model using the Machine Learning CLI, you provide an inference configuration file that references the custom image. The following JSON document demonstrates how to reference an image in a public container registry:

```json
{
    "entryScript": "score.py",
    "runtime": "python",
    "condaFile": "infenv.yml",
    "extraDockerfileSteps": null,
    "sourceDirectory": null,
    "enableGpu": false,
    "baseImage": "mcr.microsoft.com/azureml/o16n-sample-user-base/ubuntu-miniconda",
    "baseImageRegistry": "mcr.microsoft.com"
}
```

This file is used with the `az ml model deploy` command. The `--ic` parameter is used to specify the inference configuration file.

```azurecli
az ml model deploy -n myservice -m mymodel:1 --ic inferenceconfig.json --dc deploymentconfig.json --ct akscomputetarget
```

For more information on deploying a model using the ML CLI, see the "model registration, profiling, and deployment" section of the [CLI extension for Azure Machine Learning service](reference-azure-machine-learning-cli.md#model-registration-profiling-deployment) article.

## Next steps

* Learn more about [Where to deploy and how](how-to-deploy-and-where.md).
* Learn how to [Train and deploy machine learning models using Azure Pipelines](/azure/devops/pipelines/targets/azure-machine-learning?view=azure-devops).

articles/machine-learning/service/toc.yml

Lines changed: 2 additions & 0 deletions
@@ -222,6 +222,8 @@
    href: how-to-deploy-fpga-web-service.md
  - name: IoT Edge
    href: /azure/iot-edge/tutorial-deploy-machine-learning?context=azure/machine-learning/service/context/ml-context
+ - name: Custom Docker image
+   href: how-to-deploy-custom-docker-image.md
  - name: Troubleshoot
    href: how-to-troubleshoot-deployment.md
  - name: Consume web services
