articles/machine-learning/how-to-use-batch-model-deployments.md (9 additions, 9 deletions)
@@ -303,15 +303,15 @@ A model deployment is a set of resources required for hosting the model that does the actual inferencing.
 |`environment`| The environment to score the model. The example defines an environment inline using `conda_file` and `image`. The `conda_file` dependencies will be installed on top of the `image`. The environment will be automatically registered with an autogenerated name and version. See the [Environment schema](./reference-yaml-environment.md#yaml-syntax) for more options. As a best practice for production scenarios, you should create the environment separately and reference it here. To reference an existing environment, use the `azureml:<environment-name>:<environment-version>` syntax. |
 |`compute`| The compute to run batch scoring. The example uses the `batch-cluster` created at the beginning and references it using the `azureml:<compute-name>` syntax. |
 |`resources.instance_count`| The number of instances to be used for each batch scoring job. |
-|`settings.max_concurrency_per_instance`|[Optional] The maximum number of parallel `scoring_script` runs per instance. |
-|`settings.mini_batch_size`|[Optional] The number of files the `scoring_script` can process in one `run()` call. |
-|`settings.output_action`|[Optional] How the output should be organized in the output file. `append_row` will merge all `run()` returned output results into one single file named `output_file_name`. `summary_only` won't merge the output results and will only calculate `error_threshold`. |
-|`settings.output_file_name`|[Optional] The name of the batch scoring output file for `append_row` `output_action`. |
-|`settings.retry_settings.max_retries`|[Optional] The number of max tries for a failed `scoring_script` `run()`. |
-|`settings.retry_settings.timeout`|[Optional] The timeout in seconds for a `scoring_script` `run()` for scoring a mini batch. |
-|`settings.error_threshold`|[Optional] The number of input file scoring failures that should be ignored. If the error count for the entire input goes above this value, the batch scoring job will be terminated. The example uses `-1`, which indicates that any number of failures is allowed without terminating the batch scoring job. |
-|`settings.environment_variables`|[Optional] Dictionary of environment variable name-value pairs to set for each batch scoring job. |
+|`settings.max_concurrency_per_instance`| The maximum number of parallel `scoring_script` runs per instance. |
+|`settings.mini_batch_size`| The number of files the `scoring_script` can process in one `run()` call. |
+|`settings.output_action`| How the output should be organized in the output file. `append_row` will merge all `run()` returned output results into one single file named `output_file_name`. `summary_only` won't merge the output results and will only calculate `error_threshold`. |
+|`settings.output_file_name`| The name of the batch scoring output file for `append_row` `output_action`. |
+|`settings.retry_settings.max_retries`| The number of max tries for a failed `scoring_script` `run()`. |
+|`settings.retry_settings.timeout`| The timeout in seconds for a `scoring_script` `run()` for scoring a mini batch. |
+|`settings.error_threshold`| The number of input file scoring failures that should be ignored. If the error count for the entire input goes above this value, the batch scoring job will be terminated. The example uses `-1`, which indicates that any number of failures is allowed without terminating the batch scoring job. |
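
For orientation, here's a minimal sketch of a batch model deployment YAML that wires these properties together. It follows the `settings`-nested layout shown in the table above; the schema URL, resource names, file paths, image, and concrete values are illustrative placeholders, not values taken from this change:

```yaml
# Sketch only: names, paths, image, and values below are placeholders.
$schema: https://azuremlschemas.azureedge.net/latest/modelBatchDeployment.schema.json
name: batch-deployment-example
endpoint_name: my-batch-endpoint            # hypothetical endpoint name
type: model
model: azureml:my-model:1                   # hypothetical registered model
code_configuration:
  code: src
  scoring_script: batch_driver.py           # hypothetical scoring script
environment:                                # inline environment: conda_file installed on top of image
  conda_file: environment/conda.yaml
  image: mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu20.04:latest
compute: azureml:batch-cluster              # compute referenced with the azureml:<compute-name> syntax
resources:
  instance_count: 2                         # instances used for each batch scoring job
settings:
  max_concurrency_per_instance: 2           # parallel scoring_script runs per instance
  mini_batch_size: 10                       # files processed per run() call
  output_action: append_row                 # merge run() outputs into one file
  output_file_name: predictions.csv
  retry_settings:
    max_retries: 3                          # max tries for a failed run()
    timeout: 30                             # seconds allowed per mini batch
  error_threshold: -1                       # -1 = any number of file failures is tolerated
  environment_variables:
    EXAMPLE_VARIABLE: example-value         # hypothetical environment variable
```

The `error_threshold: -1` value mirrors the example described in the table, allowing any number of input file failures without terminating the batch scoring job.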