Commit c529513: "edits"

1 parent b6a7f75

4 files changed: +9 -19 lines

articles/machine-learning/how-to-mlflow-batch.md

Lines changed: 5 additions & 15 deletions
```diff
@@ -111,8 +111,6 @@ To create an endpoint, you need a name and description. The endpoint name appear
 
 [!notebook-python[] (~/azureml-examples-main/sdk/python/endpoints/batch/deploy-models/heart-classifier-mlflow/mlflow-for-batch-tabular.ipynb?name=name_endpoint)]
 
----
-
 1. Create the endpoint:
 
 # [Azure CLI](#tab/cli)
```
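For orientation alongside the SDK notebook cell this hunk references, here's a minimal sketch of creating a batch endpoint with the Azure ML Python SDK v2; the workspace values and endpoint name are illustrative placeholders, not values from this commit:

```python
from azure.identity import DefaultAzureCredential
from azure.ai.ml import MLClient
from azure.ai.ml.entities import BatchEndpoint

# Connect to the workspace (placeholder identifiers).
ml_client = MLClient(
    credential=DefaultAzureCredential(),
    subscription_id="<SUBSCRIPTION_ID>",
    resource_group_name="<RESOURCE_GROUP>",
    workspace_name="<WORKSPACE>",
)

# Define and create the batch endpoint; the name must be unique in the region.
endpoint = BatchEndpoint(
    name="heart-classifier-batch",  # illustrative name
    description="A heart condition classifier for batch inference",
)
ml_client.batch_endpoints.begin_create_or_update(endpoint).result()
```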
```diff
@@ -219,10 +217,11 @@ To test your endpoint, you use a sample of unlabeled data located in this reposi
 1. To see the changes, refresh the object:
 
 [!notebook-python[] (~/azureml-examples-main/sdk/python/endpoints/batch/deploy-models/heart-classifier-mlflow/mlflow-for-batch-tabular.ipynb?name=get_data_asset)]
-
----
 
-1. After you upload the data, invoke the endpoint:
+1. After you upload the data, invoke the endpoint.
+
+> [!TIP]
+> In the following commands, notice that the deployment name isn't indicated in the `invoke` operation. The endpoint automatically routes the job to the default deployment because the endpoint has one deployment only. You can target a specific deployment by indicating the argument/parameter `deployment_name`.
 
 # [Azure CLI](#tab/cli)
 
```
```diff
@@ -238,11 +237,6 @@ To test your endpoint, you use a sample of unlabeled data located in this reposi
 [!INCLUDE [batch-endpoint-invoke-inputs-sdk](includes/batch-endpoint-invoke-inputs-sdk.md)]
 
 [!notebook-python[] (~/azureml-examples-main/sdk/python/endpoints/batch/deploy-models/heart-classifier-mlflow/mlflow-for-batch-tabular.ipynb?name=start_batch_scoring_job)]
-
----
-
-> [!TIP]
-> Notice that the deployment name isn't indicated in the `invoke` operation. The endpoint automatically routes the job to the default deployment because the endpoint has one deployment only. You can target a specific deployment by indicating the argument/parameter `deployment_name`.
 
 1. A batch job is started as soon as the command returns. You can monitor the status of the job until it finishes:
 
```
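The TIP this hunk relocates is easy to see in SDK terms. Here's a hedged sketch of both invocation forms, plus job monitoring; the endpoint, deployment, and data-asset names are assumptions for illustration:

```python
from azure.identity import DefaultAzureCredential
from azure.ai.ml import MLClient, Input
from azure.ai.ml.constants import AssetTypes

ml_client = MLClient.from_config(credential=DefaultAzureCredential())

# Invoke without a deployment name: the endpoint routes the job to its
# default (and, here, only) deployment.
job = ml_client.batch_endpoints.invoke(
    endpoint_name="heart-classifier-batch",  # illustrative name
    input=Input(type=AssetTypes.URI_FOLDER, path="azureml:heart-dataset-unlabeled@latest"),
)

# Or target a specific deployment explicitly with deployment_name.
job = ml_client.batch_endpoints.invoke(
    endpoint_name="heart-classifier-batch",
    deployment_name="classifier-xgboost",  # illustrative name
    input=Input(type=AssetTypes.URI_FOLDER, path="azureml:heart-dataset-unlabeled@latest"),
)

# Monitor the job until it finishes.
ml_client.jobs.stream(job.name)
```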
```diff
@@ -307,7 +301,7 @@ The output displays a table:
 | 307 | 0 | heart-unlabeled-3.csv |
 
 > [!TIP]
-> Notice that in this example the input data was tabular data in `CSV` format and there were 4 different input files (heart-unlabeled-0.csv, heart-unlabeled-1.csv, heart-unlabeled-2.csv and heart-unlabeled-3.csv).
+> Notice that in this example, the input data contains tabular data in CSV format. There are four different input files: _heart-unlabeled-0.csv_, _heart-unlabeled-1.csv_, _heart-unlabeled-2.csv_, and _heart-unlabeled-3.csv_.
 
 ## Review considerations for batch inference
 
```
```diff
@@ -404,8 +398,6 @@ Use the following steps to deploy an MLflow model with a custom scoring script:
 
 [!notebook-python[] (~/azureml-examples-main/sdk/python/endpoints/batch/deploy-models/heart-classifier-mlflow/mlflow-for-batch-tabular.ipynb?name=configure_environment_custom)]
 
----
-
 1. Configure the deployment:
 
 # [Azure CLI](#tab/cli)
```
```diff
@@ -422,8 +414,6 @@ Use the following steps to deploy an MLflow model with a custom scoring script:
 
 [!notebook-python[] (~/azureml-examples-main/sdk/python/endpoints/batch/deploy-models/heart-classifier-mlflow/mlflow-for-batch-tabular.ipynb?name=configure_deployment_custom)]
 
----
-
 1. Create the deployment:
 
 # [Azure CLI](#tab/cli)
```
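The last two hunks configure and create a deployment that uses a custom scoring script. As a rough companion to the notebook cells they reference, here's a sketch of what that configuration could look like in the SDK v2; every name and setting is an assumed example, not the tutorial's actual values:

```python
from azure.identity import DefaultAzureCredential
from azure.ai.ml import MLClient
from azure.ai.ml.entities import BatchDeployment, BatchRetrySettings, CodeConfiguration
from azure.ai.ml.constants import BatchDeploymentOutputAction

ml_client = MLClient.from_config(credential=DefaultAzureCredential())

deployment = BatchDeployment(
    name="classifier-xgboost-custom",         # illustrative name
    endpoint_name="heart-classifier-batch",
    model="azureml:heart-classifier@latest",  # illustrative model reference
    environment="azureml:batch-mlflow-xgboost@latest",
    # Assumed folder layout: the scoring script lives under ./code.
    code_configuration=CodeConfiguration(code="code", scoring_script="batch_driver.py"),
    compute="batch-cluster",
    instance_count=2,
    max_concurrency_per_instance=2,
    mini_batch_size=10,
    output_action=BatchDeploymentOutputAction.APPEND_ROW,
    output_file_name="predictions.csv",
    retry_settings=BatchRetrySettings(max_retries=3, timeout=300),
)
ml_client.batch_deployments.begin_create_or_update(deployment).result()
```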

articles/machine-learning/how-to-nlp-processing-batch.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -350,7 +350,7 @@ As mentioned in some of the notes along this tutorial, processing text may have
 
 The same considerations mentioned above apply to MLflow models. However, since you are not required to provide a scoring script for your MLflow model deployment, some of the recommendations mentioned may require a different approach.
 
-* MLflow models in Batch Endpoints support reading tabular data as input data, which may contain long sequences of text. See [File's types support](how-to-mlflow-batch.md#files-types-support) for details about which file types are supported.
+* MLflow models in Batch Endpoints support reading tabular data as input data, which may contain long sequences of text. See [File's types support](how-to-mlflow-batch.md#review-support-for-file-types) for details about which file types are supported.
 * Batch deployments calls your MLflow model's predict function with the content of an entire file in as Pandas dataframe. If your input data contains many rows, chances are that running a complex model (like the one presented in this tutorial) results in an out-of-memory exception. If this is your case, you can consider:
 * Customize how your model runs predictions and implement batching. To learn how to customize MLflow model's inference, see [Logging custom models](how-to-log-mlflow-models.md?#logging-custom-models).
 * Author a scoring script and load your model using `mlflow.<flavor>.load_model()`. See [Using MLflow models with a scoring script](how-to-mlflow-batch.md#customize-model-deployment-with-scoring-script) for details.
```
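The bullets above suggest authoring a scoring script that loads the MLflow model itself and batches predictions to avoid out-of-memory errors. Here's a hedged sketch of such a script, assuming the standard batch-deployment `init()`/`run(mini_batch)` contract; the chunk size and model-folder lookup are illustrative:

```python
import glob
import os

import mlflow
import pandas as pd

def init():
    """Load the MLflow model once per worker."""
    global model
    # AZUREML_MODEL_DIR points at the registered model's folder in batch deployments.
    model_dir = os.environ["AZUREML_MODEL_DIR"]
    model_path = glob.glob(os.path.join(model_dir, "*"))[0]  # assumed layout
    model = mlflow.pyfunc.load_model(model_path)

def run(mini_batch):
    """Score each file in the mini-batch in fixed-size chunks to bound memory."""
    chunk_size = 256  # illustrative; tune to your model and instance size
    results = []
    for file_path in mini_batch:
        data = pd.read_csv(file_path)
        for start in range(0, len(data), chunk_size):
            chunk = data.iloc[start : start + chunk_size]
            preds = model.predict(chunk)
            results.append(
                pd.DataFrame({"file": os.path.basename(file_path), "prediction": preds})
            )
    return pd.concat(results)
```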

articles/machine-learning/how-to-troubleshoot-batch-endpoints.md

Lines changed: 2 additions & 2 deletions
```diff
@@ -208,7 +208,7 @@ During script execution for batch deployment, if the `init()` or `run()` functio
 
 ### ValueError: No objects to concatenate
 
-For batch deployment to succeed, each file in a mini-batch must be valid and implement a supported file type. Keep in mind that MLflow models support only a subset of file types. For more information, see [Considerations when deploying to batch inference](how-to-mlflow-batch.md?#considerations-when-deploying-to-batch-inference).
+For batch deployment to succeed, each file in a mini-batch must be valid and implement a supported file type. Keep in mind that MLflow models support only a subset of file types. For more information, see [Considerations when deploying to batch inference](how-to-mlflow-batch.md#review-considerations-for-batch-inference).
 
 **Message logged**: "ValueError: No objects to concatenate."
 
```
```diff
@@ -222,7 +222,7 @@ For batch deployment to succeed, each file in a mini-batch must be valid and imp
 
 1. Look for entries that describe the file input failure, such as "ERROR:azureml:Error processing input file."
 
-If the file type isn't supported, review the list of supported files. You might need to change the file type of the input data, or customize the deployment by providing a scoring script. For more information, see [Using MLflow models with a scoring script](how-to-mlflow-batch.md?#customizing-mlflow-models-deployments-with-a-scoring-script).
+If the file type isn't supported, review the list of supported files. You might need to change the file type of the input data, or customize the deployment by providing a scoring script. For more information, see [Using MLflow models with a scoring script](how-to-mlflow-batch.md#customize-model-deployment-with-scoring-script).
 
 ### No succeeded mini-batch
 
```
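As a hypothetical aid for the ValueError described above, here's a small pre-flight check that flags input files with extensions a deployment can't read; the extension set is an example only, so consult the linked support table for the authoritative list:

```python
from pathlib import Path

# Example extension set only; check the linked file-types support table
# for what your deployment actually accepts.
SUPPORTED_EXTENSIONS = {".csv", ".parquet"}

def find_unsupported_files(input_folder: str) -> list[Path]:
    """Return input files likely to fail with 'No objects to concatenate'."""
    return [
        path
        for path in Path(input_folder).rglob("*")
        if path.is_file() and path.suffix.lower() not in SUPPORTED_EXTENSIONS
    ]

for path in find_unsupported_files("data/unlabeled"):  # illustrative folder
    print(f"Unsupported file type: {path}")
```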
articles/machine-learning/includes/batch-endpoint-invoke-inputs-sdk.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -10,4 +10,4 @@ __What's the difference between the `inputs` and `input` parameter when you invo
 
 In general, you can use a dictionary `inputs = {}` parameter with the `invoke` method to provide an arbitrary number of required inputs to a batch endpoint that contains a _model deployment_ or a _pipeline deployment_.
 
-For a _model deployment_, you can use the `input` parameter as a shorter way to specify the input data location for the deployment. This approach works because a model deployment always takes only one [data input](../how-to-access-data-batch-endpoints-jobs.md#explore-data-inputs).
+For a _model deployment_, you can use the `input` parameter as a shorter way to specify the input data location for the deployment. This approach works because a model deployment always takes only one [data input](../how-to-access-data-batch-endpoints-jobs.md#data-inputs).
```
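To illustrate the distinction this include draws, here's a sketch of both invocation forms in the SDK v2; the endpoint name, input name, and data path are placeholders:

```python
from azure.identity import DefaultAzureCredential
from azure.ai.ml import MLClient, Input
from azure.ai.ml.constants import AssetTypes

ml_client = MLClient.from_config(credential=DefaultAzureCredential())

# Shorthand for a model deployment: a single data input via `input`.
job = ml_client.batch_endpoints.invoke(
    endpoint_name="heart-classifier-batch",  # placeholder
    input=Input(type=AssetTypes.URI_FOLDER, path="azureml:heart-dataset-unlabeled@latest"),
)

# General form: named inputs via an `inputs` dictionary, which also covers
# pipeline deployments that expect multiple inputs.
job = ml_client.batch_endpoints.invoke(
    endpoint_name="heart-classifier-batch",
    inputs={
        "input_data": Input(  # the key must match the deployment's input name
            type=AssetTypes.URI_FOLDER,
            path="azureml:heart-dataset-unlabeled@latest",
        )
    },
)
```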
