articles/machine-learning/how-to-secure-batch-endpoint.md (1 addition, 1 deletion)
@@ -73,7 +73,7 @@ Consider the following limitations when working on batch endpoints deployed rega
- If you change the networking configuration of the workspace from public to private, or from private to public, the change doesn't affect the networking configuration of existing batch endpoints. Batch endpoints rely on the configuration of the workspace at the time of their creation. Recreate your endpoints if you want them to reflect changes you made in the workspace.
- - When working on a private link-enabled workspace, batch endpoints can be created and managed using Azure Machine Learning studio. However, they can't be invoked from the UI in studio. Use the Azure ML CLI v2 instead for job creation. For more details about how to use it see [Invoke the batch endpoint to start a batch scoring job](how-to-use-batch-endpoint.md#invoke-the-batch-endpoint-to-start-a-batch-job).
+ - When working on a private link-enabled workspace, batch endpoints can be created and managed using Azure Machine Learning studio. However, they can't be invoked from the UI in studio. Use the Azure ML CLI v2 instead for job creation. For more details about how to use it see [Run batch endpoint to start a batch scoring job](how-to-use-batch-endpoint.md#run-endpoint-and-configure-inputs-and-outputs).
articles/machine-learning/how-to-use-batch-endpoint.md (14 additions, 19 deletions)
@@ -96,7 +96,7 @@ Open the [Azure ML studio portal](https://ml.azure.com) and sign in using your c
Batch endpoints run on compute clusters. They support both [Azure Machine Learning compute clusters (AmlCompute)](./how-to-create-attach-compute-cluster.md) and [Kubernetes clusters](./how-to-attach-kubernetes-anywhere.md). Clusters are a shared resource, so one cluster can host one or many batch deployments (along with other workloads if desired).
- Run the following code to create an Azure Machine Learning compute cluster. The following examples in this article use the compute created here named `batch-cluster`. Adjust as needed and reference your compute using `azureml:<your-compute-name>`.
+ This article uses a compute cluster named `batch-cluster`. Adjust the name as needed and reference your compute using `azureml:<your-compute-name>`, or create one as shown.
# [Azure CLI](#tab/azure-cli)
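For orientation, a compute cluster like the one described above could be defined in a CLI v2 YAML file; this is a minimal sketch, and the file name, VM size, and instance counts are illustrative assumptions, not values prescribed by the article:

```yaml
# batch-cluster.yml: a minimal AmlCompute cluster definition (illustrative).
# Create it with: az ml compute create --file batch-cluster.yml
$schema: https://azuremlschemas.azureedge.net/latest/amlCompute.schema.json
name: batch-cluster
type: amlcompute
size: STANDARD_DS3_v2   # pick a VM size available in your region and quota
min_instances: 0        # scale to zero when idle to avoid cost
max_instances: 5
```

With `min_instances: 0`, the cluster costs nothing while no batch job is running and scales out only when jobs are submitted.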
@@ -220,7 +220,6 @@ A batch endpoint is an HTTPS endpoint that clients can call to trigger a batch s
| --- | ----------- |
| `name` | The name of the batch endpoint. Needs to be unique at the Azure region level.|
| `description` | The description of the batch endpoint. This property is optional. |
- | `auth_mode` | The authentication method for the batch endpoint. Currently only Azure Active Directory token-based authentication (`aad_token`) is supported. |
| `defaults.deployment_name` | The name of the deployment that will serve as the default deployment for the endpoint. |
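As a sketch, the properties in the table above could be combined into a minimal batch endpoint YAML; the endpoint and deployment names here are hypothetical, not taken from the article:

```yaml
# endpoint.yml: a minimal batch endpoint definition (names are illustrative).
$schema: https://azuremlschemas.azureedge.net/latest/batchEndpoint.schema.json
name: mnist-batch          # must be unique at the Azure region level
description: A batch endpoint to score images of handwritten digits.
defaults:
  deployment_name: mnist-deployment   # default deployment for this endpoint
```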
# [Studio](#tab/azure-studio)
@@ -245,22 +244,6 @@ A batch endpoint is an HTTPS endpoint that clients can call to trigger a batch s
*You'll create the endpoint in the same step you are creating the deployment later.*
- ## Create a scoring script
-
- Batch deployments require a scoring script that indicates how the given model should be executed and how input data must be processed.
-
- > [!NOTE]
- > For MLflow models, Azure Machine Learning automatically generates the scoring script, so you're not required to provide one. If your model is an MLflow model, you can skip this step. For more information about how batch endpoints work with MLflow models, see the dedicated tutorial [Using MLflow models in batch deployments](how-to-mlflow-batch.md).
-
- > [!WARNING]
- > If you're deploying an Automated ML model under a batch endpoint, notice that the scoring script that Automated ML provides only works for online endpoints and is not designed for batch execution. Please see [Author scoring scripts for batch deployments](how-to-batch-scoring-script.md) to learn how to create one depending on what your model does.
-
- In this case, we're deploying a model that reads image files representing digits and outputs the corresponding digit. The scoring script is as follows:
A deployment is a set of resources required for hosting the model that does the actual inferencing. To create a batch deployment, you need all the following items:
@@ -270,6 +253,18 @@ A deployment is a set of resources required for hosting the model that does the
* The environment in which the model runs.
* The pre-created compute and resource settings.
+ 1. Batch deployments require a scoring script that indicates how a given model should be executed and how input data must be processed. Batch Endpoints support scripts created in Python. In this case, we're deploying a model that reads image files representing digits and outputs the corresponding digit. The scoring script is as follows:
+
+ > [!NOTE]
+ > For MLflow models, Azure Machine Learning automatically generates the scoring script, so you're not required to provide one. If your model is an MLflow model, you can skip this step. For more information about how batch endpoints work with MLflow models, see the dedicated tutorial [Using MLflow models in batch deployments](how-to-mlflow-batch.md).
+
+ > [!WARNING]
+ > If you're deploying an Automated ML model under a batch endpoint, notice that the scoring script that Automated ML provides only works for online endpoints and is not designed for batch execution. Please see [Author scoring scripts for batch deployments](how-to-batch-scoring-script.md) to learn how to create one depending on what your model does.
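A minimal sketch of the scoring-script shape this step describes, assuming the usual batch contract of an `init()` function called once per worker and a `run(mini_batch)` function that receives a list of input file paths; the model class below is a hypothetical stand-in, not the real MNIST model:

```python
import os

class _FakeDigitModel:
    """Hypothetical stand-in for the real digit model. A real scoring
    script would load the registered model, typically from the path in
    the AZUREML_MODEL_DIR environment variable."""
    def predict(self, image_path):
        return 0  # this sketch always "predicts" the digit 0

model = None

def init():
    """Called once per worker process before any mini-batch is scored."""
    global model
    model = _FakeDigitModel()

def run(mini_batch):
    """Called once per mini-batch with a list of input file paths.
    Returns one result row per successfully processed file."""
    results = []
    for file_path in mini_batch:
        digit = model.predict(file_path)
        results.append(f"{os.path.basename(file_path)}: {digit}")
    return results
```

Returning one row per input file matters: batch deployments use the count of returned rows to verify that every item in the mini-batch was processed.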
1. Create an environment where your batch deployment will run. The environment needs to include the packages `azureml-core` and `azureml-dataset-runtime[fuse]`, which are required by batch endpoints, plus any dependency your code requires for running. In this case, the dependencies have been captured in a `conda.yml`:
__mnist/environment/conda.yml__
@@ -480,7 +475,7 @@ A deployment is a set of resources required for hosting the model that does the
:::image type="content" source="./media/how-to-use-batch-endpoints-studio/batch-endpoint-details.png" alt-text="Screenshot of the check batch endpoints and deployment details.":::
- ## Invoke the batch endpoint to start a batch job
+ ## Run endpoint and configure inputs and outputs
Invoking a batch endpoint triggers a batch scoring job. The invoke response returns a job `name` that can be used to track the batch scoring progress. The batch scoring job runs for some time: it splits the entire input into multiple `mini_batch` sets and processes them in parallel on the compute cluster. The batch scoring job outputs are stored in cloud storage, either in the workspace's default blob storage or in the storage you specified.
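The mini-batch partitioning described above can be illustrated with a small sketch; this is not Azure Machine Learning's actual implementation, just the chunking idea, with illustrative file names:

```python
def split_into_mini_batches(input_files, mini_batch_size):
    """Return consecutive chunks of at most mini_batch_size items,
    mirroring how a batch job partitions its inputs before dispatching
    them in parallel to workers on the compute cluster."""
    return [
        input_files[i:i + mini_batch_size]
        for i in range(0, len(input_files), mini_batch_size)
    ]

files = [f"digit_{n}.png" for n in range(10)]
mini_batches = split_into_mini_batches(files, 4)
# 10 files with a mini-batch size of 4 yields chunks of 4, 4, and 2 files
```

Each chunk is scored independently, which is why a larger mini-batch size reduces scheduling overhead while a smaller one spreads work across more parallel workers.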