articles/machine-learning/reference-yaml-deployment-batch.md
6 additions & 6 deletions
@@ -30,10 +30,10 @@ The source JSON schema can be found at https://azuremlschemas.azureedge.net/late
|`description`| string | Description of the deployment. |||
|`tags`| object | Dictionary of tags for the deployment. |||
|`endpoint_name`| string |**Required.** Name of the endpoint to create the deployment under. |||
-|`type`| string |**Required.** Type of the bath deployment. Use `model` for [model deployments](concept-endpoints-batch.md#model-deployments) and `pipeline` for [pipeline component deployments](concept-endpoints-batch.md#pipeline-component-deployment-preview). Introduced since verison 1.7 and above. |`model`, `pipeline`|`model`|
-|`settings`| object |**Required if type is indicated.** Specific configuration of the deployment. See specific YAML reference for model and pipeline component for allowed values. Introduced since verison 1.7 and above. |||
+|`type`| string |**Required.** Type of the batch deployment. Use `model` for [model deployments](concept-endpoints-batch.md#model-deployments) and `pipeline` for [pipeline component deployments](concept-endpoints-batch.md#pipeline-component-deployment-preview). Introduced in version 1.7. |`model`, `pipeline`|`model`|
+|`settings`| object |**Required if type is indicated.** Specific configuration of the deployment. See the model or pipeline component YAML reference for allowed values. Introduced in version 1.7. |||

-> [!NOTE]
+> [!TIP]
> The key `type` was introduced in version 1.7 of the CLI extension. To fully support backward compatibility, this property defaults to `model`. However, if `type` is not explicitly indicated, the key `settings` is not enforced and all the properties for the model deployment settings should be indicated in the root of the YAML specification.

### YAML syntax for model deployments
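To make the `type` and `settings` layout described above concrete, the following is a minimal sketch of a batch model deployment YAML that sets `type: model` explicitly and groups job configuration under `settings`, as the 1.7+ schema describes. All names are placeholders, and the `name` key is assumed from the broader schema rather than taken from the table excerpt above.

```yml
# Minimal sketch of the 1.7+ layout: `type` is explicit and job settings
# live under `settings`. All names below are placeholders.
name: my-batch-deployment          # assumed key; not listed in the excerpt above
description: Example batch model deployment
endpoint_name: my-batch-endpoint
type: model                        # defaults to `model` when omitted
model: azureml:my-model:1          # reference to a registered model
compute: azureml:my-cluster        # existing compute target in the workspace
resources:
  instance_count: 1
settings:
  mini_batch_size: 10
  max_concurrency_per_instance: 1
# Pre-1.7 layout (no `type` key): the properties under `settings` would
# instead be indicated at the root of the YAML specification.
```

Because `type` defaults to `model`, an older specification that omits it keeps working, but its deployment settings must then stay at the root rather than under `settings`.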
@@ -45,12 +45,12 @@ When `type: model`, the following syntax is enforced:
|`model`| string or object |**Required.** The model to use for the deployment. This value can be either a reference to an existing versioned model in the workspace or an inline model specification. <br><br> To reference an existing model, use the `azureml:<model-name>:<version>` syntax. <br><br> To define a model inline, follow the [Model schema](reference-yaml-model.md#yaml-syntax). <br><br> As a best practice for production scenarios, you should create the model separately and reference it here. |||
|`code_configuration`| object | Configuration for the scoring code logic. <br><br> This property is not required if your model is in MLflow format. |||
|`code_configuration.code`| string | The local directory that contains all the Python source code to score the model. |||
-|`code_configuration.scoring_script`| string | The Python file in the above directory. This file must have an `init()` function and a `run()` function. Use the `init()` function for any costly or common preparation (for example, load the model in memory). `init()`will be called only once at beginning of process. Use `run(mini_batch)` to score each entry; the value of `mini_batch` is a list of file paths. The `run()` function should return a pandas DataFrame or an array. Each returned element indicates one successful run of input element in the `mini_batch`. For more information on how to author scoring script, see [Understanding the scoring script](batch-inference/how-to-batch-scoring-script.md#understanding-the-scoring-script).|||
+|`code_configuration.scoring_script`| string | The Python file in the above directory. This file must have an `init()` function and a `run()` function. Use the `init()` function for any costly or common preparation (for example, loading the model in memory). `init()` is called only once at the beginning of the process. Use `run(mini_batch)` to score each entry; the value of `mini_batch` is a list of file paths. The `run()` function should return a pandas DataFrame or an array. Each returned element indicates one successful run of an input element in the `mini_batch`. For more information on how to author the scoring script, see [Understanding the scoring script](batch-inference/how-to-batch-scoring-script.md#understanding-the-scoring-script).|||
|`environment`| string or object | The environment to use for the deployment. This value can be either a reference to an existing versioned environment in the workspace or an inline environment specification. <br><br> This property is not required if your model is in MLflow format. <br><br> To reference an existing environment, use the `azureml:<environment-name>:<environment-version>` syntax. <br><br> To define an environment inline, follow the [Environment schema](reference-yaml-environment.md#yaml-syntax). <br><br> As a best practice for production scenarios, you should create the environment separately and reference it here. |||
|`compute`| string |**Required.** Name of the compute target to execute the batch scoring jobs on. This value should be a reference to an existing compute in the workspace using the `azureml:<compute-name>` syntax. |||
|`resources.instance_count`| integer | The number of nodes to use for each batch scoring job. ||`1`|
|`settings.max_concurrency_per_instance`| integer | The maximum number of parallel `scoring_script` runs per instance. ||`1`|
-|`settings.error_threshold`| integer | The number of file failures that should be ignored. If the error count for the entire input goes above this value, the batch scoring job will be terminated. `error_threshold` is for the entire input and not for individual mini batches. If omitted, any number of file failures will be allowed without terminating the job. ||`-1`|
+|`settings.error_threshold`| integer | The number of file failures that should be ignored. If the error count for the entire input goes above this value, the batch scoring job is terminated. `error_threshold` applies to the entire input, not to individual mini batches. If omitted, any number of file failures is allowed without terminating the job. ||`-1`|
|`settings.logging_level`| string | The log verbosity level. |`warning`, `info`, `debug`|`info`|
|`settings.mini_batch_size`| integer | The number of files the `code_configuration.scoring_script` can process in one `run()` call. ||`10`|
|`settings.retry_settings`| object | Retry settings for scoring each mini batch. |||
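As a rough illustration of how the `code_configuration` and `settings` rows above fit together in a `type: model` deployment, consider the following fragment. The folder, script, and environment names are placeholders, and the `retry_settings` sub-keys (`max_retries`, `timeout`) are an assumption; they aren't listed in the table excerpt above.

```yml
# Illustrative fragment of a model deployment; names are placeholders.
type: model
model: azureml:my-model:1
code_configuration:
  code: src/                        # local folder with the Python scoring code
  scoring_script: batch_driver.py   # must define init() and run(mini_batch)
environment: azureml:my-environment:1   # not required for MLflow models
compute: azureml:my-cluster
resources:
  instance_count: 2                 # nodes per batch scoring job
settings:
  max_concurrency_per_instance: 1   # parallel scoring_script runs per node
  mini_batch_size: 10               # files handed to each run() call
  error_threshold: -1               # -1 = tolerate any number of file failures
  logging_level: info
  retry_settings:                   # assumed sub-keys; not shown in this excerpt
    max_retries: 3
    timeout: 30
```

Each `run(mini_batch)` call receives up to `mini_batch_size` file paths and should return one DataFrame row or array element per successfully processed file; file-level failures are counted against `error_threshold` for the whole input.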
@@ -76,7 +76,7 @@ The `az ml batch-deployment` commands can be used for managing Azure Machine Lea

## Examples

-Examples are available in the [examples GitHub repository](https://github.com/Azure/azureml-examples/tree/main/cli/endpoints/batch). Several are shown below.
+Examples are available in the [examples GitHub repository](https://github.com/Azure/azureml-examples/tree/main/cli/endpoints/batch).