articles/machine-learning/how-to-batch-scoring-script.md
Batch endpoints allow you to deploy models to perform long-running inference at scale. To indicate how batch endpoints should use your model over the input data to create predictions, you need to create and specify a scoring script (also known as batch driver script). In this article, you will learn how to use scoring scripts in different scenarios and their best practices.
> [!TIP]
> MLflow models don't require a scoring script, because one is autogenerated for you. For more details about how batch endpoints work with MLflow models, see the dedicated tutorial [Using MLflow models in batch deployments](how-to-mlflow-batch.md).
> [!WARNING]
> If you are deploying an Automated ML model under a batch endpoint, notice that the scoring script that Automated ML provides only works for online endpoints and is not designed for batch execution. Follow this guideline to learn how to create one, depending on what your model does.
## Understanding the scoring script
The scoring script is a Python file (`.py`) that contains the logic about how to run the model and read the input data submitted by the batch deployment executor. Each model deployment provides the scoring script (along with any other dependencies required) at creation time. It is usually indicated as follows:
# [Azure CLI](#tab/cli)
__deployment.yml__
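The surrounding YAML is elided in the diff above, but the fields of `deployment.yml` that point at the scoring script look roughly like the following fragment (a sketch only; the folder and file names mirror the SDK example in this article):

```yaml
# Sketch of the relevant part of deployment.yml for a batch deployment.
code_configuration:
  code: code                       # folder that contains the script
  scoring_script: batch_driver.py  # entry file with init() and run()
```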
```python
deployment = BatchDeployment(
    ...
    code_path="code",
    scoring_script="batch_driver.py",
    ...
)
```
# [Studio](#tab/azure-studio)
When creating a new deployment, you will be prompted for a scoring script and dependencies as follows:
:::image type="content" source="./media/how-to-batch-scoring-script/configure-scoring-script.png" alt-text="Screenshot of the step where you can configure the scoring script in a new deployment.":::
For MLflow models, scoring scripts are automatically generated, but you can indicate one by checking the following option:
:::image type="content" source="./media/how-to-batch-scoring-script/configure-scoring-script-mlflow.png" alt-text="Screenshot of the step where you can configure the scoring script in a new deployment when the model has MLflow format.":::
---
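However the script is supplied, a batch driver script defines two functions: `init()`, run once per worker to load the model, and `run(mini_batch)`, run once per mini-batch of input file paths, returning one result per processed input. A minimal sketch, with a placeholder lambda standing in for a real model loader (such as `joblib`, `torch`, or `mlflow`):

```python
import os
from typing import List

model = None


def init():
    """Called once per worker process, before any mini-batch is scored."""
    global model
    # AZUREML_MODEL_DIR points at the folder holding the deployed model's
    # files; a real script would pass it to an actual loader here.
    model_dir = os.environ.get("AZUREML_MODEL_DIR", ".")
    # Placeholder "model": returns a dummy prediction per file.
    model = lambda path: {"file": os.path.basename(path), "prediction": 0}


def run(mini_batch: List[str]) -> List[dict]:
    """Called per mini-batch; mini_batch is a list of input file paths.

    Return one element per input so the executor can track progress.
    """
    return [model(file_path) for file_path in mini_batch]
```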
### Using models that are folders
The environment variable `AZUREML_MODEL_DIR` contains the path to where the selected model is located, and it is typically used in the `init()` function to load the model into memory. However, some models may contain their files inside a folder. When reading the files in this variable, you may need to account for that. You can identify the folder where your MLflow model is placed as follows:
1. Go to [Azure Machine Learning portal](https://ml.azure.com).
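The same check can also be done programmatically inside the scoring script. The following is a sketch under the assumption that a model registered as a folder unpacks as a single subfolder under `AZUREML_MODEL_DIR`; the helper name `resolve_model_path` is hypothetical, not part of any Azure Machine Learning API:

```python
import os


def resolve_model_path(model_dir: str) -> str:
    """Return the folder that actually contains the model files.

    model_dir (normally AZUREML_MODEL_DIR) may hold the files directly,
    or a single subfolder when the model was registered as a folder.
    """
    entries = [os.path.join(model_dir, e) for e in os.listdir(model_dir)]
    subdirs = [e for e in entries if os.path.isdir(e)]
    files = [e for e in entries if os.path.isfile(e)]
    # Exactly one subfolder and no loose files: the model lives inside it.
    if len(subdirs) == 1 and not files:
        return subdirs[0]
    return model_dir
```

In `init()`, you would then call `resolve_model_path(os.environ["AZUREML_MODEL_DIR"])` before loading the model.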