
Commit cf14112

Author: Larry Franks
Commit message: acrolinx
1 parent: db4c367

File tree

1 file changed: +6 -6 lines changed

articles/machine-learning/how-to-deploy-and-where.md

Lines changed: 6 additions & 6 deletions
@@ -30,7 +30,7 @@ The workflow is similar no matter where you deploy your model:
 1. Prepare an inference configuration.
 1. Deploy the model locally to ensure everything works.
 1. Choose a compute target.
-1. Re-deploy the model to the cloud.
+1. Deploy the model to the cloud.
 1. Test the resulting web service.

 For more information on the concepts involved in the machine learning deployment workflow, see [Manage, deploy, and monitor models with Azure Machine Learning](concept-model-management-and-deployment.md).
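The list in this hunk maps one-to-one onto the v1 `azureml-core` SDK used in the article's Python tabs. A minimal sketch of the local half of the workflow, assuming that SDK; `score.py`, `environment.yml`, the service name, and the port are illustrative placeholders, while `bidaf_onnx` is the model name the article uses:

```python
# Minimal sketch of the workflow above, assuming the v1 azureml-core SDK.
# score.py, environment.yml, the port, and service names are placeholders.
from azureml.core import Environment, Workspace
from azureml.core.model import InferenceConfig, Model
from azureml.core.webservice import LocalWebservice

ws = Workspace.from_config()  # reads config.json for an existing workspace

# Register the model (CLI equivalents appear in the hunks below)
model = Model.register(workspace=ws, model_name="bidaf_onnx", model_path="./model")

# Prepare an inference configuration: an entry script plus its environment
env = Environment.from_conda_specification(name="inference-env", file_path="environment.yml")
inference_config = InferenceConfig(entry_script="score.py", environment=env)

# Deploy locally first to ensure everything works
deployment_config = LocalWebservice.deploy_configuration(port=6789)
service = Model.deploy(ws, "bidaf-local", [model], inference_config, deployment_config)
service.wait_for_deployment(show_output=True)

# Test the resulting web service; deploying to the cloud repeats the last
# three calls with a cloud deployment configuration (see the final hunk)
print(service.scoring_uri)
```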
@@ -110,7 +110,7 @@ az ml model register -n bidaf_onnx \

 Set `-p` to the path of a folder or a file that you want to register.

-For more information on `az ml model register`, consult the [reference documentation](/cli/azure/ml(v1)/model).
+For more information on `az ml model register`, see the [reference documentation](/cli/azure/ml(v1)/model).

 ### Register a model from an Azure ML training run

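In the Python tab, the role of `-p` from the hunk above is played by the `model_path` parameter of `Model.register`. A hedged sketch; the paths are illustrative, and each call registers a new version under the same name:

```python
from azureml.core import Workspace
from azureml.core.model import Model

ws = Workspace.from_config()

# model_path is the SDK counterpart of the CLI's -p: a single file...
Model.register(workspace=ws, model_name="bidaf_onnx", model_path="model/model.onnx")

# ...or a folder, which registers every file the folder contains as one model
Model.register(workspace=ws, model_name="bidaf_onnx", model_path="model")
```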
@@ -122,7 +122,7 @@ az ml model register -n bidaf_onnx --asset-path outputs/model.onnx --experiment-

 The `--asset-path` parameter refers to the cloud location of the model. In this example, the path of a single file is used. To include multiple files in the model registration, set `--asset-path` to the path of a folder that contains the files.

-For more information on `az ml model register`, consult the [reference documentation](/cli/azure/ml(v1)/model).
+For more information on `az ml model register`, see the [reference documentation](/cli/azure/ml(v1)/model).

 # [Python](#tab/python)

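For the training-run case in the hunk above, the SDK counterpart of `--asset-path` is the `model_path` argument of `Run.register_model`, which is likewise a cloud location relative to the run. A sketch; the run ID is a placeholder:

```python
from azureml.core import Run, Workspace

ws = Workspace.from_config()

# Placeholder run ID; fetch the completed training run to register from
run = Run.get(ws, run_id="bidaf_1_0000000000_00000000")

# model_path mirrors --asset-path: a single file here, or a folder path
# to include multiple files in the registration
model = run.register_model(model_name="bidaf_onnx", model_path="outputs/model.onnx")
```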
@@ -213,7 +213,7 @@ For more information on inference configuration, see the [InferenceConfig](/pyth

 ## Define a deployment configuration

-A deployment configuration specifies the amount of memory and cores to reserve for your webservice will require in order to run, as well as configuration details of the underlying webservice. For example, a deployment configuration lets you specify that your service needs 2 gigabytes of memory, 2 CPU cores, 1 GPU core, and that you want to enable autoscaling.
+A deployment configuration specifies the amount of memory and cores your webservice needs in order to run. It also provides configuration details of the underlying webservice. For example, a deployment configuration lets you specify that your service needs 2 gigabytes of memory, 2 CPU cores, 1 GPU core, and that you want to enable autoscaling.

 The options available for a deployment configuration differ depending on the compute target you choose. In a local deployment, all you can specify is which port your webservice will be served on.

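The reworded sentence maps directly onto SDK parameters. A sketch using `AksWebservice.deploy_configuration`; AKS is an assumption here, picked because GPU cores and autoscaling call for a cluster-backed target:

```python
from azureml.core.webservice import AksWebservice

# 2 GB of memory, 2 CPU cores, 1 GPU core, autoscaling on, as in the prose
deployment_config = AksWebservice.deploy_configuration(
    memory_gb=2,
    cpu_cores=2,
    gpu_cores=1,
    autoscale_enabled=True,
)
```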
@@ -225,7 +225,7 @@ For more information, see the [deployment schema](./reference-azure-machine-lear

 # [Python](#tab/python)

-To create a local deployment configuration, do the following:
+The following Python demonstrates how to create a local deployment configuration:

 [!Notebook-python[] (~/azureml-examples-main/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=deployment-configuration-code)]

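The notebook cell referenced in the hunk above isn't rendered in the diff; in substance, a local deployment configuration reduces to one call in which the port is the only meaningful option, consistent with the earlier sentence about local deployments (the port number is illustrative):

```python
from azureml.core.webservice import LocalWebservice

# Locally, the serving port is the only option you can set
deployment_config = LocalWebservice.deploy_configuration(port=6789)
```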
@@ -358,7 +358,7 @@ curl -v -X POST -H "content-type:application/json" \

 [!INCLUDE [aml-deploy-target](../../includes/aml-compute-target-deploy.md)]

-## Re-deploy to cloud
+## Deploy to cloud

 Once you've confirmed your service works locally and chosen a remote compute target, you are ready to deploy to the cloud.

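A hedged sketch of the renamed step, assuming Azure Container Instances as the remote target; `score.py` and `environment.yml` are the same placeholders as above, and only the deployment configuration differs from the local deployment:

```python
from azureml.core import Environment, Workspace
from azureml.core.model import InferenceConfig, Model
from azureml.core.webservice import AciWebservice

ws = Workspace.from_config()
model = Model(ws, name="bidaf_onnx")  # the model registered earlier

# Same entry script and environment as the local deployment
env = Environment.from_conda_specification(name="inference-env", file_path="environment.yml")
inference_config = InferenceConfig(entry_script="score.py", environment=env)

# Only this configuration changes when moving from local to cloud
deployment_config = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=1)
service = Model.deploy(ws, "bidaf-onnx-service", [model], inference_config, deployment_config)
service.wait_for_deployment(show_output=True)
print(service.scoring_uri)  # test the resulting web service against this URI
```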