articles/machine-learning/how-to-deploy-and-where.md (7 additions, 42 deletions)
@@ -168,24 +168,24 @@ Multi-model endpoints use a shared container to host multiple models. This helps
For an end-to-end example that shows how to use multiple models behind a single containerized endpoint, see [this example](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/deployment/deploy-multi-model).

-## Prepare to deploy
+## Prepare deployment artifacts

-To deploy the model, you need the following items:
+To deploy the model, you need the following:

-* **An entry script**. This script accepts requests, scores the requests by using the model, and returns the results.
+* **Entry script & source code dependencies**. This script accepts requests, scores the requests by using the model, and returns the results. (A minimal sketch of an entry script appears after this list.)

> [!IMPORTANT]
> * The entry script is specific to your model. It must understand the format of the incoming request data, the format of the data expected by your model, and the format of the data returned to clients.
>
> If the request data is in a format that's not usable by your model, the script can transform it into an acceptable format. It can also transform the response before returning it to the client.
>
-> * The Azure Machine Learning SDK doesn't provide a way for web services or IoT Edge deployments to access your data store or datasets. If your deployed model needs to access data stored outside the deployment, like data in an Azure storage account, you must develop a custom code solution by using the relevant SDK. For example, the [Azure Storage SDK for Python](https://github.com/Azure/azure-storage-python).
+> * Web services and IoT Edge deployments are not able to access workspace datastores or datasets. If your deployed service needs to access data stored outside the deployment, like data in an Azure storage account, you must develop a custom code solution by using the relevant SDK. For example, the [Azure Storage SDK for Python](https://github.com/Azure/azure-storage-python).
>
> An alternative that might work for your scenario is [batch prediction](how-to-use-parallel-run-step.md), which does provide access to data stores during scoring.

-* **Dependencies**, like helper scripts or Python/Conda packages required to run the entry script or model.
+* **Inference environment**. The base image with installed package dependencies required to run the model.

-* **The deployment configuration** for the compute target that hosts the deployed model. This configuration describes things like memory and CPU requirements needed to run the model.
+* **Deployment configuration** for the compute target that hosts the deployed model. This configuration describes things like memory and CPU requirements needed to run the model.

These items are encapsulated into an *inference configuration* and a *deployment configuration*. The inference configuration references the entry script and other dependencies. You define these configurations programmatically when you use the SDK to perform the deployment. You define them in JSON files when you use the CLI.
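For illustration only, here's a minimal entry script matching the contract described in the entry-script bullet above. This sketch is not part of the commit; the model file name `mymodel.pkl` and the joblib/NumPy handling are assumptions:

```python
import json
import os

import joblib
import numpy as np


def init():
    # Runs once when the container starts. AZUREML_MODEL_DIR points to the
    # directory where the registered model files are mounted.
    global model
    model_path = os.path.join(os.getenv("AZUREML_MODEL_DIR"), "mymodel.pkl")
    model = joblib.load(model_path)


def run(raw_data):
    # Runs per request: parse the incoming JSON, score it with the model,
    # and return a JSON-serializable response.
    data = np.array(json.loads(raw_data)["data"])
    predictions = model.predict(data)
    return {"predictions": predictions.tolist()}
```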
@@ -481,7 +481,7 @@ def run(request):
> pip install azureml-contrib-services
>```

-### 2. Define your InferenceConfig
+### 2. Define your inference environment

The inference configuration describes how to configure the model to make predictions. This configuration isn't part of your entry script. It references your entry script and is used to locate all the resources required by the deployment. It's used later, when you deploy the model.
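As a sketch of what this looks like in the v1 Python SDK (the file names `score.py` and `myenv.yml` are placeholders, not taken from this commit):

```python
from azureml.core import Environment
from azureml.core.model import InferenceConfig

# Build the inference environment from a Conda specification file.
env = Environment.from_conda_specification(name="inference-env",
                                           file_path="myenv.yml")

# The inference configuration ties the entry script to its environment.
inference_config = InferenceConfig(entry_script="score.py", environment=env)
```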
@@ -544,41 +544,6 @@ The classes for local, Azure Container Instances, and AKS web services can be im

```python
from azureml.core.webservice import AciWebservice, AksWebservice, LocalWebservice
```
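Each of these classes exposes a `deploy_configuration` factory method. A hedged sketch follows; the resource values are illustrative, not recommendations:

```python
from azureml.core.webservice import AciWebservice, AksWebservice, LocalWebservice

# Local Docker deployment for debugging; serves on the given port.
local_config = LocalWebservice.deploy_configuration(port=8890)

# Azure Container Instances, suited to dev/test workloads.
aci_config = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=1)

# Azure Kubernetes Service for production-scale serving.
aks_config = AksWebservice.deploy_configuration(cpu_cores=1, memory_gb=1)
```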
-#### Profiling

-Before you deploy your model as a service, you might want to profile it to determine optimal CPU and memory requirements. You can use either the SDK or the CLI to profile your model. The following examples show how to profile a model by using the SDK.

-> [!IMPORTANT]
-> When you use profiling, the inference configuration that you provide can't reference an Azure Machine Learning environment. Instead, define the software dependencies by using the `conda_file` parameter of the `InferenceConfig` object.
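The profiling call itself is collapsed out of this diff view. As a hedged reconstruction only — the `Model.profile` signature has varied across SDK versions, and `ws`, `model`, and `inference_config` are assumed to already exist from the earlier steps:

```python
import json

from azureml.core.model import Model

# A sample request payload in the same format the entry script expects.
test_sample = json.dumps({"data": [[1, 2, 3, 4, 5, 6, 7, 8, 9, 10]]})

# wait_for_profiling/get_results reflect the SDK version this doc targeted
# (an assumption); newer releases renamed parts of this API.
profile = Model.profile(ws, "myprofile", [model], inference_config, test_sample)
profile.wait_for_profiling(True)
print(profile.get_results())
```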
-This code displays a result similar to the following output:

-```python
-{'cpu': 1.0, 'memoryInGB': 0.5}
-```

-Model profiling results are emitted as a `Run` object.

-For information on using profiling from the CLI, see [az ml model profile](https://docs.microsoft.com/cli/azure/ext/azure-cli-ml/ml/model?view=azure-cli-latest#ext-azure-cli-ml-az-ml-model-profile).
Deployment uses the inference configuration and deployment configuration to deploy the models. The deployment process is similar regardless of the compute target. Deploying to AKS is slightly different because you must provide a reference to the AKS cluster.
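To make the AKS difference concrete, a sketch reusing the configurations from the earlier snippets (`ws`, `model`, `inference_config`, `aci_config`, and `aks_config` are assumed; the cluster name is hypothetical):

```python
from azureml.core.compute import AksCompute
from azureml.core.model import Model

# ACI (and local) deployments need no deployment target.
service = Model.deploy(ws, "myservice", [model], inference_config, aci_config)
service.wait_for_deployment(show_output=True)

# AKS deployments additionally reference the target cluster.
aks_target = AksCompute(ws, "my-aks-cluster")
aks_service = Model.deploy(ws, "myservice-aks", [model], inference_config,
                           aks_config, deployment_target=aks_target)
aks_service.wait_for_deployment(show_output=True)
```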
articles/machine-learning/how-to-deploy-custom-docker-image.md (1 addition, 1 deletion)
@@ -43,7 +43,7 @@ This document is broken into two sections:
* The [Azure CLI](https://docs.microsoft.com/cli/azure/install-azure-cli?view=azure-cli-latest).
* The [CLI extension for Azure Machine Learning](reference-azure-machine-learning-cli.md).
* An [Azure Container Registry](/azure/container-registry) or other Docker registry that is accessible on the internet.
-* The steps in this document assume that you are familiar with creating and using an __inference configuration__ object as part of model deployment. For more information, see the "prepare to deploy" section of [Where to deploy and how](how-to-deploy-and-where.md#prepare-to-deploy).
+* The steps in this document assume that you are familiar with creating and using an __inference configuration__ object as part of model deployment. For more information, see the "prepare to deploy" section of [Where to deploy and how](how-to-deploy-and-where.md#prepare-deployment-artifacts).
articles/machine-learning/tutorial-train-deploy-model-cli.md (1 addition, 1 deletion)
@@ -376,7 +376,7 @@ This command deploys a new service named `myservice`, using version 1 of the mod
The `inferenceConfig.yml` file provides information on how to use the model for inference. For example, it references the entry script (`score.py`) and software dependencies.
-For more information on the structure of this file, see the [Inference configuration schema](reference-azure-machine-learning-cli.md#inference-configuration-schema). For more information on entry scripts, see [Deploy models with Azure Machine Learning](how-to-deploy-and-where.md#prepare-to-deploy).
+For more information on the structure of this file, see the [Inference configuration schema](reference-azure-machine-learning-cli.md#inference-configuration-schema). For more information on entry scripts, see [Deploy models with Azure Machine Learning](how-to-deploy-and-where.md#prepare-deployment-artifacts).
The `aciDeploymentConfig.yml` file describes the deployment environment used to host the service. The deployment configuration is specific to the compute type that you use for the deployment. In this case, an Azure Container Instance is used. For more information, see the [Deployment configuration schema](reference-azure-machine-learning-cli.md#deployment-configuration-schema).