Commit 6cfbd4f

Update how-to-mlflow-batch.md
1 parent 25c22d2 commit 6cfbd4f

1 file changed: +3 -3 lines changed

articles/machine-learning/how-to-mlflow-batch.md

Lines changed: 3 additions & 3 deletions
@@ -265,11 +265,11 @@ The output looks as follows:
 
 ## Considerations when deploying to batch inference
 
-Azure Machine Learning supports deploying MLflow models to batch endpoints without indicating an scoring script. This represents a convenient way to deploy models that require processing of big amounts of data in a batch-fashion. Azure Machine Learning uses information in the MLflow model specification to orchestrate the inference process.
+Azure Machine Learning supports deploying MLflow models to batch endpoints without indicating a scoring script. This represents a convenient way to deploy models that require processing of big amounts of data in a batch-fashion. Azure Machine Learning uses information in the MLflow model specification to orchestrate the inference process.
 
 ### How work is distributed on workers
 
-Batch Endpoints distribute work at the file level, for both structured and unstructured data. As a consequence, only [URI file](reference-yaml-data.md) and [URI folders](reference-yaml-data.md) are supported for this feature. Each worker processes batches of `Mini batch size` files at a time. For tabular data, batch endpoints doesn't take into account the number of rows inside of each file when distributing the work.
+Batch Endpoints distribute work at the file level, for both structured and unstructured data. As a consequence, only [URI file](reference-yaml-data.md) and [URI folders](reference-yaml-data.md) are supported for this feature. Each worker processes batches of `Mini batch size` files at a time. For tabular data, batch endpoints don't take into account the number of rows inside of each file when distributing the work.
 
 > [!WARNING]
 > Nested folder structures are not explored during inference. If you are partitioning your data using folders, make sure to flatten the structure beforehand.
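The paragraphs touched by this hunk describe deploying an MLflow model to a batch endpoint without a scoring script and how work is distributed as mini batches of files. As a rough illustration only (not part of this commit), a minimal batch deployment YAML might look like the sketch below; the endpoint, model, and compute names and the specific property values are placeholder assumptions.

```yaml
# deployment.yml -- illustrative sketch of an MLflow batch deployment with no scoring script.
# All names and values below are placeholders; adjust them to your workspace.
$schema: https://azuremlschemas.azureedge.net/latest/batchDeployment.schema.json
name: mlflow-dpl
endpoint_name: my-batch-endpoint
model: azureml:my-mlflow-model@latest   # MLflow model; no code_configuration or environment given
compute: azureml:cpu-cluster
resources:
  instance_count: 2                     # worker nodes available for the scoring job
max_concurrency_per_instance: 2
mini_batch_size: 10                     # each worker receives 10 files per mini batch
output_action: append_row
output_file_name: predictions.csv
retry_settings:
  max_retries: 3
  timeout: 300
```

A deployment like this could be created with `az ml batch-deployment create --file deployment.yml`. Because work is distributed per file, `mini_batch_size` counts files, not rows.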
@@ -314,7 +314,7 @@ You will typically select this workflow when:
 > * You model can't process each file at once because of memory constrains and it needs to read it in chunks.
 
 > [!IMPORTANT]
-> If you choose to indicate an scoring script for an MLflow model deployment, you will also have to specify the environment where the deployment will run.
+> If you choose to indicate a scoring script for an MLflow model deployment, you will also have to specify the environment where the deployment will run.
 
 
 ### Steps
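The changed IMPORTANT note in this hunk states that supplying a scoring script also requires specifying the environment the deployment runs in. Again as an illustrative sketch only, with a hypothetical code folder, script name, and environment reference:

```yaml
# Sketch: the same deployment once a custom scoring script is indicated.
# ./src, batch_driver.py, and the environment name are hypothetical placeholders.
name: mlflow-dpl-custom
endpoint_name: my-batch-endpoint
model: azureml:my-mlflow-model@latest
code_configuration:
  code: ./src                             # folder that contains the scoring script
  scoring_script: batch_driver.py         # script implementing init() and run(mini_batch)
environment: azureml:mlflow-batch-env@latest   # now required, since a scoring script is set
compute: azureml:cpu-cluster
resources:
  instance_count: 2
mini_batch_size: 10
output_action: append_row
```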
