Commit 47c7bbb

Update how-to-batch-scoring-script.md
1 parent 32cea55 commit 47c7bbb

File tree

1 file changed: +8 −8 lines


articles/machine-learning/how-to-batch-scoring-script.md

Lines changed: 8 additions & 8 deletions
@@ -20,14 +20,14 @@ ms.custom: how-to
  Batch endpoints allow you to deploy models to perform long-running inference at scale. To indicate how batch endpoints should use your model over the input data to create predictions, you need to create and specify a scoring script (also known as batch driver script). In this article, you will learn how to use scoring scripts in different scenarios and their best practices.

  > [!TIP]
- > MLflow models don't require a scoring script as it is autogenerated for you. For more details about how batch endpoints work with MLflow models, see the dedicated tutorial [Using MLflow models in batch deployments](how-to-mlflow-batch.md). If you want to change the default inference routine, write an scoring script for your MLflow models as explained at [Using MLflow models with a scoring script](how-to-mlflow-batch.md#customizing-mlflow-models-deployments-with-a-scoring-script).
+ > MLflow models don't require a scoring script as it is autogenerated for you. For more details about how batch endpoints work with MLflow models, see the dedicated tutorial [Using MLflow models in batch deployments](how-to-mlflow-batch.md).

  > [!WARNING]
  > If you are deploying an Automated ML model under a batch endpoint, notice that the scoring script that Automated ML provides only works for Online Endpoints and it is not designed for batch execution. Please follow this guideline to learn how to create one depending on what your model does.

  ## Understanding the scoring script

- The scoring script is a Python file (`.py`) that contains the logic about how to run the model and read the input data submitted by the batch deployment executor. Each model deployment provides the scoring script (allow with any other depenency required) at creation time. It is usually indicated as follows:
+ The scoring script is a Python file (`.py`) that contains the logic about how to run the model and read the input data submitted by the batch deployment executor. Each model deployment provides the scoring script (allow with any other dependency required) at creation time. It is usually indicated as follows:

  # [Azure CLI](#tab/cli)
@@ -40,21 +40,21 @@ __deployment.yml__

  ```python
  deployment = BatchDeployment(
      ...
-     code_path="deployment-torch/code",
+     code_path="code",
      scoring_script="batch_driver.py",
      ...
  )
  ```

  # [Studio](#tab/azure-studio)

- On [Azure Machine Learning studio portal](https://ml.azure.com), when creating a new deployment, you will be prompted for an scoring script and dependencies as follows:
+ When creating a new deployment, you will be prompted for a scoring script and dependencies as follows:

- :::image type="content" source="./media/how-to-batch-scoring-script/configure-scoring-script.png" alt-text="Screenshot of the step where you can configure the scroing script in a new deployment.":::
+ :::image type="content" source="./media/how-to-batch-scoring-script/configure-scoring-script.png" alt-text="Screenshot of the step where you can configure the scoring script in a new deployment.":::

- MLflow models don't require an scoring script as Azure Machine Learning can automatically generate it for you. However, if you want to customize how inference is executed you can still indicate it:
+ For MLflow models, scoring scripts are automatically generated but you can indicate one by checking the following option:

- :::image type="content" source="./media/how-to-batch-scoring-script/configure-scoring-script-mlflow.png" alt-text="Screenshot of the step where you can configure the scroing script in a new deployment when the model has MLflow format.":::
+ :::image type="content" source="./media/how-to-batch-scoring-script/configure-scoring-script-mlflow.png" alt-text="Screenshot of the step where you can configure the scoring script in a new deployment when the model has MLflow format.":::

  ---
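The hunk header references a `__deployment.yml__` whose body is not shown in this diff. A hedged sketch of how the same `code`/`scoring_script` pair typically looks in the Azure CLI (v2) batch deployment YAML; the deployment, endpoint, and model names here are hypothetical, not from the commit:

```yaml
$schema: https://azuremlschemas.azureedge.net/latest/batchDeployment.schema.json
name: batch-deployment           # hypothetical deployment name
endpoint_name: my-batch-endpoint # hypothetical endpoint name
model: azureml:my-model@latest   # hypothetical model reference
code_configuration:
  code: code                     # same folder the Python tab passes as code_path
  scoring_script: batch_driver.py
```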

@@ -190,7 +190,7 @@ For an example about how to achieve it see [Text processing with batch deploymen

  ### Using models that are folders

- When authoring scoring scripts, the environment variable `AZUREML_MODEL_DIR` is typically used in the `init()` function to load the model. However, some models may contain its files inside of a folder. When reading the files in this variable, you may need to account for that. You can identify the folder where your MLflow model is placed as follows:
+ The environment variable `AZUREML_MODEL_DIR` contains the path to where the selected model is located and it is typically used in the `init()` function to load the model into memory. However, some models may contain its files inside of a folder. When reading the files in this variable, you may need to account for that. You can identify the folder where your MLflow model is placed as follows:

  1. Go to [Azure Machine Learning portal](https://ml.azure.com).
