articles/machine-learning/how-to-deploy-and-where.md
+6 −6 lines changed (6 additions, 6 deletions)
@@ -30,7 +30,7 @@ The workflow is similar no matter where you deploy your model:
 1. Prepare an inference configuration.
 1. Deploy the model locally to ensure everything works.
 1. Choose a compute target.
-1. Re-deploy the model to the cloud.
+1. Deploy the model to the cloud.
 1. Test the resulting web service.
 
 For more information on the concepts involved in the machine learning deployment workflow, see [Manage, deploy, and monitor models with Azure Machine Learning](concept-model-management-and-deployment.md).
@@ -110,7 +110,7 @@ az ml model register -n bidaf_onnx \
 
 Set `-p` to the path of a folder or a file that you want to register.
 
-For more information on `az ml model register`, consult the [reference documentation](/cli/azure/ml(v1)/model).
+For more information on `az ml model register`, see the [reference documentation](/cli/azure/ml(v1)/model).
 
 ### Register a model from an Azure ML training run
 
@@ -122,7 +122,7 @@ az ml model register -n bidaf_onnx --asset-path outputs/model.onnx --experiment-
 
 The `--asset-path` parameter refers to the cloud location of the model. In this example, the path of a single file is used. To include multiple files in the model registration, set `--asset-path` to the path of a folder that contains the files.
 
-For more information on `az ml model register`, consult the [reference documentation](/cli/azure/ml(v1)/model).
+For more information on `az ml model register`, see the [reference documentation](/cli/azure/ml(v1)/model).
 
 # [Python](#tab/python)
 
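For reference, the Python-tab equivalent of the CLI registration above can be sketched with the v1 `azureml-core` SDK. This is a minimal sketch, not the article's own snippet: it assumes a `config.json` workspace file in the working directory, and the local model path is hypothetical.

```python
from azureml.core import Workspace
from azureml.core.model import Model

# Connect to the workspace described by a local config.json (assumed present).
ws = Workspace.from_config()

# Register a local ONNX file under the name used in the CLI examples.
# Pass a folder as model_path to register multiple files together,
# mirroring the -p / --asset-path behavior described above.
model = Model.register(
    workspace=ws,
    model_name="bidaf_onnx",
    model_path="./model.onnx",  # hypothetical local path
)
print(model.name, model.version)
```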
@@ -213,7 +213,7 @@ For more information on inference configuration, see the [InferenceConfig](/pyth
 
 ## Define a deployment configuration
 
-A deployment configuration specifies the amount of memory and cores to reserve for your webservice will require in order to run, as well as configuration details of the underlying webservice. For example, a deployment configuration lets you specify that your service needs 2 gigabytes of memory, 2 CPU cores, 1 GPU core, and that you want to enable autoscaling.
+A deployment configuration specifies the amount of memory and cores your webservice needs in order to run. It also provides configuration details of the underlying webservice. For example, a deployment configuration lets you specify that your service needs 2 gigabytes of memory, 2 CPU cores, 1 GPU core, and that you want to enable autoscaling.
 
 The options available for a deployment configuration differ depending on the compute target you choose. In a local deployment, all you can specify is which port your webservice will be served on.
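The example in the revised sentence (2 gigabytes of memory, 2 CPU cores, 1 GPU core, autoscaling) maps onto the v1 SDK roughly as follows. The choice of `AksWebservice` is an assumption here, since GPU cores and autoscaling are settings for an AKS compute target rather than a local one:

```python
from azureml.core.webservice import AksWebservice

# Sketch of a deployment configuration matching the text:
# 2 GB of memory, 2 CPU cores, 1 GPU core, autoscaling enabled.
deployment_config = AksWebservice.deploy_configuration(
    memory_gb=2,
    cpu_cores=2,
    gpu_cores=1,
    autoscale_enabled=True,
)
```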
@@ -225,7 +225,7 @@ For more information, see the [deployment schema](./reference-azure-machine-lear
 
 # [Python](#tab/python)
 
-To create a local deployment configuration, do the following:
+The following Python code demonstrates how to create a local deployment configuration:
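Consistent with the text above, a local deployment configuration in the v1 SDK reduces to choosing a port. A minimal sketch, where the port number is a placeholder rather than a value taken from the article:

```python
from azureml.core.webservice import LocalWebservice

# In a local deployment, the serving port is the only setting to choose.
deployment_config = LocalWebservice.deploy_configuration(port=8890)  # placeholder port
```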