-1. After you upload the data, invoke the endpoint:
+1. After you upload the data, invoke the endpoint.
+
+   > [!TIP]
+   > In the following commands, notice that the deployment name isn't indicated in the `invoke` operation. The endpoint automatically routes the job to the default deployment because the endpoint has one deployment only. You can target a specific deployment by indicating the argument/parameter `deployment_name`.

 # [Azure CLI](#tab/cli)
@@ -238,11 +237,6 @@ To test your endpoint, you use a sample of unlabeled data located in this reposi
-   > Notice that the deployment name isn't indicated in the `invoke` operation. The endpoint automatically routes the job to the default deployment because the endpoint has one deployment only. You can target a specific deployment by indicating the argument/parameter `deployment_name`.

 1. A batch job is started as soon as the command returns. You can monitor the status of the job until it finishes:
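As an aside, the routing behavior that the tip describes can be modeled with a tiny lookup, purely as an illustration. This is not Azure Machine Learning code, and the endpoint and deployment names are made up:

```python
def route_job(deployments, default_deployment, deployment_name=None):
    """Toy model of batch endpoint routing: when no deployment is named
    in the invoke call, the job goes to the endpoint's default deployment."""
    target = deployment_name or default_deployment
    if target not in deployments:
        raise KeyError(f"deployment '{target}' not found on this endpoint")
    return target

# A hypothetical endpoint with a single deployment, which is therefore the default.
deployments = {"classifier-xgboost": "compute-cluster-1"}
```

Because the endpoint in this tutorial has exactly one deployment, omitting `deployment_name` and passing it explicitly resolve to the same place.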
@@ -307,7 +301,7 @@ The output displays a table:
 | 307 | 0 | heart-unlabeled-3.csv |

 > [!TIP]
-> Notice that in this example the input data was tabular data in `CSV` format and there were 4 different input files (heart-unlabeled-0.csv, heart-unlabeled-1.csv, heart-unlabeled-2.csv and heart-unlabeled-3.csv).
+> Notice that in this example, the input data contains tabular data in CSV format. There are four different input files: _heart-unlabeled-0.csv_, _heart-unlabeled-1.csv_, _heart-unlabeled-2.csv_, and _heart-unlabeled-3.csv_.
## Review considerations for batch inference
@@ -404,8 +398,6 @@ Use the following steps to deploy an MLflow model with a custom scoring script:
**articles/machine-learning/how-to-nlp-processing-batch.md** (+1 −1)
@@ -350,7 +350,7 @@ As mentioned in some of the notes along this tutorial, processing text may have
 The same considerations mentioned above apply to MLflow models. However, since you are not required to provide a scoring script for your MLflow model deployment, some of the recommendations mentioned may require a different approach.

-* MLflow models in Batch Endpoints support reading tabular data as input data, which may contain long sequences of text. See [File's types support](how-to-mlflow-batch.md#files-types-support) for details about which file types are supported.
+* MLflow models in Batch Endpoints support reading tabular data as input data, which may contain long sequences of text. See [File's types support](how-to-mlflow-batch.md#review-support-for-file-types) for details about which file types are supported.
 * Batch deployments calls your MLflow model's predict function with the content of an entire file in as Pandas dataframe. If your input data contains many rows, chances are that running a complex model (like the one presented in this tutorial) results in an out-of-memory exception. If this is your case, you can consider:
   * Customize how your model runs predictions and implement batching. To learn how to customize MLflow model's inference, see [Logging custom models](how-to-log-mlflow-models.md?#logging-custom-models).
   * Author a scoring script and load your model using `mlflow.<flavor>.load_model()`. See [Using MLflow models with a scoring script](how-to-mlflow-batch.md#customize-model-deployment-with-scoring-script) for details.
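The batching recommendation in the bullets above can be sketched with plain pandas. This is only an illustration of the idea, not Azure Machine Learning code: `predict` stands in for whatever prediction callable your model exposes, and the default chunk size is an arbitrary assumption.

```python
import pandas as pd

def predict_in_batches(df, predict, batch_size=1000):
    """Run `predict` over fixed-size chunks of `df` instead of the whole
    frame at once, then stitch the partial results back together."""
    parts = [
        predict(df.iloc[start:start + batch_size])
        for start in range(0, len(df), batch_size)
    ]
    return pd.concat(parts, ignore_index=True)
```

Inside a custom scoring script, the batch-deployment `run()` function could delegate each mini-batch's dataframe to a helper like this to keep peak memory bounded.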
**articles/machine-learning/how-to-troubleshoot-batch-endpoints.md** (+2 −2)
@@ -208,7 +208,7 @@ During script execution for batch deployment, if the `init()` or `run()` functio
 ### ValueError: No objects to concatenate

-For batch deployment to succeed, each file in a mini-batch must be valid and implement a supported file type. Keep in mind that MLflow models support only a subset of file types. For more information, see [Considerations when deploying to batch inference](how-to-mlflow-batch.md?#considerations-when-deploying-to-batch-inference).
+For batch deployment to succeed, each file in a mini-batch must be valid and implement a supported file type. Keep in mind that MLflow models support only a subset of file types. For more information, see [Considerations when deploying to batch inference](how-to-mlflow-batch.md#review-considerations-for-batch-inference).

 **Message logged**: "ValueError: No objects to concatenate."
@@ -222,7 +222,7 @@ For batch deployment to succeed, each file in a mini-batch must be valid and imp
 1. Look for entries that describe the file input failure, such as "ERROR:azureml:Error processing input file."

-   If the file type isn't supported, review the list of supported files. You might need to change the file type of the input data, or customize the deployment by providing a scoring script. For more information, see [Using MLflow models with a scoring script](how-to-mlflow-batch.md?#customizing-mlflow-models-deployments-with-a-scoring-script).
+   If the file type isn't supported, review the list of supported files. You might need to change the file type of the input data, or customize the deployment by providing a scoring script. For more information, see [Using MLflow models with a scoring script](how-to-mlflow-batch.md#customize-model-deployment-with-scoring-script).
**articles/machine-learning/includes/batch-endpoint-invoke-inputs-sdk.md** (+1 −1)
@@ -10,4 +10,4 @@ __What's the difference between the `inputs` and `input` parameter when you invo
 In general, you can use a dictionary `inputs = {}` parameter with the `invoke` method to provide an arbitrary number of required inputs to a batch endpoint that contains a _model deployment_ or a _pipeline deployment_.

-For a _model deployment_, you can use the `input` parameter as a shorter way to specify the input data location for the deployment. This approach works because a model deployment always takes only one [data input](../how-to-access-data-batch-endpoints-jobs.md#explore-data-inputs).
+For a _model deployment_, you can use the `input` parameter as a shorter way to specify the input data location for the deployment. This approach works because a model deployment always takes only one [data input](../how-to-access-data-batch-endpoints-jobs.md#data-inputs).
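The relationship between the two parameters can be illustrated with a toy wrapper. This is not the actual SDK signature, and the default input name used here is hypothetical; it only shows why `input` can be shorthand when exactly one data input exists:

```python
def invoke(endpoint, inputs=None, input=None):
    """Toy illustration: `input` behaves like a single-entry `inputs`
    dictionary, which is why it only fits model deployments that take
    exactly one data input."""
    if input is not None:
        if inputs is not None:
            raise ValueError("pass either `input` or `inputs`, not both")
        inputs = {"input_data": input}  # hypothetical default input name
    return {"endpoint": endpoint, "inputs": inputs or {}}
```

A pipeline deployment with several required inputs has no such single default slot, so only the `inputs` dictionary form applies there.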