
Commit d50853e

Merge pull request #103633 from Blackmist/deploy-updates
Deploy updates
2 parents 5825206 + fb3670b commit d50853e

3 files changed: +9 -44 lines changed


articles/machine-learning/how-to-deploy-and-where.md

Lines changed: 7 additions & 42 deletions
@@ -168,24 +168,24 @@ Multi-model endpoints use a shared container to host multiple models. This helps
 
 For an E2E example which shows how to use multiple models behind a single containerized endpoint, see [this example](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/deployment/deploy-multi-model)
 
-## Prepare to deploy
+## Prepare deployment artifacts
 
-To deploy the model, you need the following items:
+To deploy the model, you need the following:
 
-* **An entry script**. This script accepts requests, scores the requests by using the model, and returns the results.
+* **Entry script & source code dependencies**. This script accepts requests, scores the requests by using the model, and returns the results.
 
 > [!IMPORTANT]
 > * The entry script is specific to your model. It must understand the format of the incoming request data, the format of the data expected by your model, and the format of the data returned to clients.
 >
 > If the request data is in a format that's not usable by your model, the script can transform it into an acceptable format. It can also transform the response before returning it to the client.
 >
-> * The Azure Machine Learning SDK doesn't provide a way for web services or IoT Edge deployments to access your data store or datasets. If your deployed model needs to access data stored outside the deployment, like data in an Azure storage account, you must develop a custom code solution by using the relevant SDK. For example, the [Azure Storage SDK for Python](https://github.com/Azure/azure-storage-python).
+> * Web services and IoT Edge deployments are not able to access workspace datastores or datasets. If your deployed service needs to access data stored outside the deployment, like data in an Azure storage account, you must develop a custom code solution by using the relevant SDK. For example, the [Azure Storage SDK for Python](https://github.com/Azure/azure-storage-python).
 >
 > An alternative that might work for your scenario is [batch prediction](how-to-use-parallel-run-step.md), which does provide access to data stores during scoring.
 
-* **Dependencies**, like helper scripts or Python/Conda packages required to run the entry script or model.
+* **Inference environment**. The base image with installed package dependencies required to run the model.
 
-* **The deployment configuration** for the compute target that hosts the deployed model. This configuration describes things like memory and CPU requirements needed to run the model.
+* **Deployment configuration** for the compute target that hosts the deployed model. This configuration describes things like memory and CPU requirements needed to run the model.
 
 These items are encapsulated into an *inference configuration* and a *deployment configuration*. The inference configuration references the entry script and other dependencies. You define these configurations programmatically when you use the SDK to perform the deployment. You define them in JSON files when you use the CLI.
 
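For context on the entry script this hunk renames, Azure Machine Learning entry scripts follow a fixed contract: `init()` runs once when the service starts and loads the model, and `run()` handles each scoring request. A minimal sketch follows; the model file name, the `joblib` loader, and the JSON input layout are assumptions for illustration, not part of this commit.

```python
# score.py: a minimal entry-script sketch. The model file name and the JSON
# input layout are assumptions for illustration only.
import json
import os

import joblib
import numpy as np


def init():
    # Runs once when the service container starts; load the model into a global.
    global model
    model_path = os.path.join(os.getenv('AZUREML_MODEL_DIR'), 'sklearn_model.pkl')
    model = joblib.load(model_path)


def run(request_data):
    # Runs for each request; parse the JSON payload, score it, and return
    # JSON-serializable results.
    data = np.array(json.loads(request_data)['data'])
    return model.predict(data).tolist()
```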
@@ -481,7 +481,7 @@ def run(request):
 > pip install azureml-contrib-services
 > ```
 
-### 2. Define your InferenceConfig
+### 2. Define your inference environment
 
 The inference configuration describes how to configure the model to make predictions. This configuration isn't part of your entry script. It references your entry script and is used to locate all the resources required by the deployment. It's used later, when you deploy the model.
 
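As a sketch of what the renamed section covers, an inference configuration pairs the entry script with an environment. The file names `score.py` and `myenv.yml` below are placeholders, not files from this commit:

```python
# A sketch of building an inference configuration; 'score.py' and 'myenv.yml'
# are placeholders for your own entry script and conda specification.
from azureml.core import Environment
from azureml.core.model import InferenceConfig

env = Environment.from_conda_specification(name='inference-env', file_path='myenv.yml')
inference_config = InferenceConfig(entry_script='score.py', environment=env)
```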
@@ -544,41 +544,6 @@ The classes for local, Azure Container Instances, and AKS web services can be im
 from azureml.core.webservice import AciWebservice, AksWebservice, LocalWebservice
 ```
 
-#### Profiling
-
-Before you deploy your model as a service, you might want to profile it to determine optimal CPU and memory requirements. You can use either the SDK or the CLI to profile your model. The following examples show how to profile a model by using the SDK.
-
-> [!IMPORTANT]
-> When you use profiling, the inference configuration that you provide can't reference an Azure Machine Learning environment. Instead, define the software dependencies by using the `conda_file` parameter of the `InferenceConfig` object.
-
-```python
-import json
-test_data = json.dumps({'data': [
-    [1,2,3,4,5,6,7,8,9,10]
-]})
-
-profile = Model.profile(ws, "profilemymodel", [model], inference_config, test_data)
-profile.wait_for_profiling(True)
-profiling_results = profile.get_results()
-print(profiling_results)
-```
-
-This code displays a result similar to the following output:
-
-```python
-{'cpu': 1.0, 'memoryInGB': 0.5}
-```
-
-Model profiling results are emitted as a `Run` object.
-
-For information on using profiling from the CLI, see [az ml model profile](https://docs.microsoft.com/cli/azure/ext/azure-cli-ml/ml/model?view=azure-cli-latest#ext-azure-cli-ml-az-ml-model-profile).
-
-For more information, see these documents:
-
-* [ModelProfile](https://docs.microsoft.com/python/api/azureml-core/azureml.core.profile.modelprofile?view=azure-ml-py)
-* [profile()](https://docs.microsoft.com/python/api/azureml-core/azureml.core.model.model?view=azure-ml-py#profile-workspace--profile-name--models--inference-config--input-data-)
-* [Inference configuration file schema](reference-azure-machine-learning-cli.md#inference-configuration-schema)
-
 ## Deploy to target
 
 Deployment uses the inference configuration and deployment configuration to deploy the models. The deployment process is similar regardless of the compute target. Deploying to AKS is slightly different because you must provide a reference to the AKS cluster.
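With the profiling section above removed, CPU and memory for the service are set directly in the deployment configuration. The following sketch shows the ACI path this section describes, assuming a workspace object `ws`, a registered `model`, and the `inference_config` from the previous sketch:

```python
# A sketch of deploying with an explicit deployment configuration; 'ws', 'model',
# and 'inference_config' are assumed to already exist.
from azureml.core.model import Model
from azureml.core.webservice import AciWebservice

deployment_config = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=1)
service = Model.deploy(ws, 'myservice', [model], inference_config, deployment_config)
service.wait_for_deployment(show_output=True)
print(service.scoring_uri)
```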

articles/machine-learning/how-to-deploy-custom-docker-image.md

Lines changed: 1 addition & 1 deletion
@@ -43,7 +43,7 @@ This document is broken into two sections:
 * The [Azure CLI](https://docs.microsoft.com/cli/azure/install-azure-cli?view=azure-cli-latest).
 * The [CLI extension for Azure Machine Learning](reference-azure-machine-learning-cli.md).
 * An [Azure Container Registry](/azure/container-registry) or other Docker registry that is accessible on the internet.
-* The steps in this document assume that you are familiar with creating and using an __inference configuration__ object as part of model deployment. For more information, see the "prepare to deploy" section of [Where to deploy and how](how-to-deploy-and-where.md#prepare-to-deploy).
+* The steps in this document assume that you are familiar with creating and using an __inference configuration__ object as part of model deployment. For more information, see the "prepare deployment artifacts" section of [Where to deploy and how](how-to-deploy-and-where.md#prepare-deployment-artifacts).
 
 ## Create a custom base image
 
articles/machine-learning/tutorial-train-deploy-model-cli.md

Lines changed: 1 addition & 1 deletion
@@ -376,7 +376,7 @@ This command deploys a new service named `myservice`, using version 1 of the model
 
 The `inferenceConfig.yml` file provides information on how to use the model for inference. For example, it references the entry script (`score.py`) and software dependencies.
 
-For more information on the structure of this file, see the [Inference configuration schema](reference-azure-machine-learning-cli.md#inference-configuration-schema). For more information on entry scripts, see [Deploy models with the Azure Machine Learning](how-to-deploy-and-where.md#prepare-to-deploy).
+For more information on the structure of this file, see the [Inference configuration schema](reference-azure-machine-learning-cli.md#inference-configuration-schema). For more information on entry scripts, see [Deploy models with Azure Machine Learning](how-to-deploy-and-where.md#prepare-deployment-artifacts).
 
 The `aciDeploymentConfig.yml` describes the deployment environment used to host the service. The deployment configuration is specific to the compute type that you use for the deployment. In this case, an Azure Container Instance is used. For more information, see the [Deployment configuration schema](reference-azure-machine-learning-cli.md#deployment-configuration-schema).
 