
Commit 3448e23

Merge pull request #220520 from santiagxf/santiagxf-patch-1
Update how-to-deploy-model-custom-output.md
2 parents a8e72cf + bb61e77 commit 3448e23

File tree

1 file changed (+25 additions, -13 deletions)

articles/machine-learning/how-to-deploy-model-custom-output.md

Lines changed: 25 additions & 13 deletions
@@ -60,14 +60,14 @@ In this example, we are going to create a deployment that can write directly to
 
 Batch Endpoint can only deploy registered models. In this case, we already have a local copy of the model in the repository, so we only need to publish the model to the registry in the workspace. You can skip this step if the model you are trying to deploy is already registered.
 
-# [Azure ML CLI](#tab/cli)
+# [Azure CLI](#tab/cli)
 
 ```azurecli
 MODEL_NAME='heart-classifier'
 az ml model create --name $MODEL_NAME --type "mlflow_model" --path "heart-classifier-mlflow/model"
 ```
 
-# [Azure ML SDK for Python](#tab/sdk)
+# [Python](#tab/sdk)
 
 ```python
 model_name = 'heart-classifier'
@@ -136,11 +136,11 @@ Follow the next steps to create a deployment using the previous scoring script:
 
 1. First, let's create an environment where the scoring script can be executed:
 
-# [Azure ML CLI](#tab/cli)
+# [Azure CLI](#tab/cli)
 
 No extra step is required for the Azure ML CLI. The environment definition will be included in the deployment file.
 
-# [Azure ML SDK for Python](#tab/sdk)
+# [Python](#tab/sdk)
 
 Let's get a reference to the environment:
 
@@ -156,7 +156,7 @@ Follow the next steps to create a deployment using the previous scoring script:
 > [!NOTE]
 > This example assumes you have an endpoint created with the name `heart-classifier-batch` and a compute cluster with name `cpu-cluster`. If you don't, please follow the steps in the doc [Use batch endpoints for batch scoring](how-to-use-batch-endpoint.md).
 
-# [Azure ML CLI](#tab/cli)
+# [Azure CLI](#tab/cli)
 
 To create a new deployment under the created endpoint, create a `YAML` configuration like the following:
 
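For readers without the surrounding file, a batch-deployment `YAML` of the kind this hunk refers to generally takes the following shape. Every value below is illustrative, not taken from this diff: the deployment name, file names, and environment reference are assumptions, and the exact fields should be checked against the Azure ML batch-deployment schema reference.

```yaml
# Hypothetical sketch of a batch deployment definition; values are placeholders.
$schema: https://azuremlschemas.azureedge.net/latest/batchDeployment.schema.json
name: classifier-custom-output          # assumed deployment name
endpoint_name: heart-classifier-batch   # endpoint name used elsewhere in this article
model: azureml:heart-classifier@latest
code_configuration:
  code: code                            # assumed folder holding the scoring script
  scoring_script: batch_driver.py       # assumed script file name
compute: azureml:cpu-cluster
resources:
  instance_count: 2
output_action: summary_only             # the custom script writes its own outputs
```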
@@ -192,7 +192,7 @@ Follow the next steps to create a deployment using the previous scoring script:
 az ml batch-deployment create -f endpoint.yml
 ```
 
-# [Azure ML SDK for Python](#tab/sdk)
+# [Python](#tab/sdk)
 
 To create a new deployment under the created endpoint, use the following script:
 
@@ -235,11 +235,12 @@ For testing our endpoint, we are going to use a sample of unlabeled data located
 
 1. Let's create the data asset first. This data asset consists of a folder with multiple CSV files that we want to process in parallel using batch endpoints. You can skip this step if your data is already registered as a data asset or you want to use a different input type.
 
-# [Azure ML CLI](#tab/cli)
+# [Azure CLI](#tab/cli)
 
 Create a data asset definition in `YAML`:
 
 __heart-dataset-unlabeled.yml__
+
 ```yaml
 $schema: https://azuremlschemas.azureedge.net/latest/data.schema.json
 name: heart-dataset-unlabeled
@@ -266,12 +267,23 @@ For testing our endpoint, we are going to use a sample of unlabeled data located
     description="An unlabeled dataset for heart classification",
     name=dataset_name,
 )
+```
+
+Then, create the data asset:
+
+```python
 ml_client.data.create_or_update(heart_dataset_unlabeled)
 ```
 
+To get the newly created data asset, use:
+
+```python
+heart_dataset_unlabeled = ml_client.data.get(name=dataset_name, label="latest")
+```
+
 1. Now that the data is uploaded and ready to be used, let's invoke the endpoint:
 
-# [Azure ML CLI](#tab/cli)
+# [Azure CLI](#tab/cli)
 
 ```azurecli
 JOB_NAME=$(az ml batch-endpoint invoke --name $ENDPOINT_NAME --deployment-name $DEPLOYMENT_NAME --input azureml:heart-dataset-unlabeled@latest | jq -r '.name')
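The CLI line above pipes the invoke response through `jq` to pull out the job name. Where `jq` is not available, the same extraction can be sketched with Python's standard-library `json` module; the sample JSON string below is an illustrative stand-in, not actual service output:

```python
import json

# Illustrative stand-in for the JSON that `az ml batch-endpoint invoke` prints;
# the real response carries many more fields than these two.
invoke_response = '{"name": "batchjob-5f2e9c", "status": "Running"}'

# Equivalent of piping through `jq -r '.name'`: parse the JSON, read "name".
job_name = json.loads(invoke_response)["name"]
print(job_name)
```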
@@ -280,7 +292,7 @@ For testing our endpoint, we are going to use a sample of unlabeled data located
 > [!NOTE]
 > The utility `jq` may not be installed on every installation. You can get instructions in [this link](https://stedolan.github.io/jq/download/).
 
-# [Azure ML SDK for Python](#tab/sdk)
+# [Python](#tab/sdk)
 
 ```python
 input = Input(type=AssetTypes.URI_FOLDER, path=heart_dataset_unlabeled.id)
@@ -293,13 +305,13 @@ For testing our endpoint, we are going to use a sample of unlabeled data located
 
 1. A batch job is started as soon as the command returns. You can monitor the status of the job until it finishes:
 
-# [Azure ML CLI](#tab/cli)
+# [Azure CLI](#tab/cli)
 
 ```azurecli
 az ml job show --name $JOB_NAME
 ```
 
-# [Azure ML SDK for Python](#tab/sdk)
+# [Python](#tab/sdk)
 
 ```python
 ml_client.jobs.get(job.name)
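The monitoring step in this hunk boils down to polling the job status until it reaches a terminal state. A minimal, service-free sketch of that loop follows; the `get_status` callable and the simulated status sequence are stand-ins for repeated `az ml job show` / `ml_client.jobs.get` calls, and the terminal-state names are assumptions for illustration:

```python
import time

# Assumed terminal states for illustration.
TERMINAL_STATES = {"Completed", "Failed", "Canceled"}

def wait_for_job(get_status, poll_seconds=0.0):
    """Poll get_status() until it reports a terminal state; return that state."""
    while True:
        status = get_status()
        if status in TERMINAL_STATES:
            return status
        time.sleep(poll_seconds)

# Simulated status sequence standing in for repeated service calls.
statuses = iter(["NotStarted", "Running", "Running", "Completed"])
final = wait_for_job(lambda: next(statuses))
print(final)  # Completed
```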
@@ -314,15 +326,15 @@ The job generates a named output called `score` where all the generated files ar
 
 You can download the results of the job by using the job name:
 
-# [Azure ML CLI](#tab/cli)
+# [Azure CLI](#tab/cli)
 
 To download the predictions, use the following command:
 
 ```azurecli
 az ml job download --name $JOB_NAME --output-name score --download-path ./
 ```
 
-# [Azure ML SDK for Python](#tab/sdk)
+# [Python](#tab/sdk)
 
 ```python
 ml_client.jobs.download(name=job.name, output_name='score', download_path='./')
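After the download step above, the `score` output lands under the chosen download path as ordinary files. A standard-library sketch of reading back whatever CSVs were downloaded follows; the folder is simulated here, and the file name `predictions.csv` and its columns are hypothetical, not taken from this article:

```python
import csv
import os
import tempfile

# Simulated stand-in for the folder `--download-path ./` would leave behind;
# the file name and columns below are hypothetical.
score_dir = tempfile.mkdtemp()
with open(os.path.join(score_dir, "predictions.csv"), "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["file", "prediction"])
    writer.writerow(["heart-unlabeled-0.csv", "1"])

# Read every CSV the download produced and collect the rows as dicts.
rows = []
for name in sorted(os.listdir(score_dir)):
    if name.endswith(".csv"):
        with open(os.path.join(score_dir, name), newline="") as f:
            rows.extend(csv.DictReader(f))

print(rows[0]["prediction"])  # 1
```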
