Commit a4413c7

Merge pull request #207205 from MadhuM02/main
Vision Register and deploy docs
2 parents bf99791 + 9e5fe21 commit a4413c7

5 files changed: +327 −76 lines
articles/machine-learning/how-to-auto-train-image-models.md

Lines changed: 177 additions & 10 deletions
@@ -81,7 +81,7 @@ task: image_object_detection
```

# [Python SDK v2 (preview)](#tab/SDK-v2)

Based on the task type, you can create AutoML image jobs using task-specific `automl` functions.

For example:
@@ -274,7 +274,7 @@ In addition to controlling the model algorithm, you can also tune hyperparameter

### Data augmentation

In general, deep learning model performance can often improve with more data. Data augmentation is a practical technique to amplify the size and variability of a dataset, which helps prevent overfitting and improves the model's generalization ability on unseen data. Automated ML applies different data augmentation techniques based on the computer vision task, before feeding input images to the model. Currently, there's no exposed hyperparameter to control data augmentations.

|Task | Impacted dataset | Data augmentation technique(s) applied |
|-------|----------|---------|
@@ -304,7 +304,7 @@ If you wish to use the default hyperparameter values for a given algorithm (say
image_object_detection_job.set_image_model(model_name="yolov5")
```
---

Once you've built a baseline model, you might want to optimize model performance by sweeping over the model algorithm and hyperparameter space. You can use the following sample config to sweep over the hyperparameters for each algorithm, choosing from a range of values for `learning_rate`, `optimizer`, `lr_scheduler`, etc., to generate a model with the optimal primary metric. If hyperparameter values aren't specified, then default values are used for the specified algorithm.

### Primary metric
@@ -524,20 +524,187 @@ The automated ML training runs generate output model files, evaluation metrics,
> [!TIP]
> Check how to navigate to the run results from the [View run results](how-to-understand-automated-ml.md#view-job-results) section.

For definitions and examples of the performance charts and metrics provided for each run, see [Evaluate automated machine learning experiment results](how-to-understand-automated-ml.md#metrics-for-image-models-preview).

## Register and deploy model

Once the run completes, you can register the model that was created from the best run (the configuration that resulted in the best primary metric). You can register the model either after downloading it or by specifying the `azureml` path with the corresponding job ID. Note: if you want to change the inference settings that are described below, you need to download the model, change `settings.json`, and register the model using the updated model folder.
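As an illustration, editing `settings.json` before re-registering might look like the following sketch. The folder layout (`mlflow-model/artifacts/settings.json`) is the one described in the "Update inference settings" section of this article; the helper name and `box_score_thresh` override are only examples (the parameter appears in the inference settings table for object detection).

```python
import json
from pathlib import Path

def update_inference_settings(model_dir: str, overrides: dict) -> dict:
    """Load settings.json from a downloaded MLflow model folder, apply the
    given inference-setting overrides, and write the file back in place."""
    settings_path = Path(model_dir) / "mlflow-model" / "artifacts" / "settings.json"
    settings = json.loads(settings_path.read_text())
    settings.update(overrides)
    settings_path.write_text(json.dumps(settings, indent=2))
    return settings
```

For example, `update_inference_settings("./downloaded-model", {"box_score_thresh": 0.4})` would tighten the detection score threshold before you register the updated folder.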
### Get the best run

# [CLI v2](#tab/CLI-v2)

[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)]

```yaml

```

# [Python SDK v2 (preview)](#tab/SDK-v2)

[!Notebook-python[] (~/azureml-examples-main/sdk/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=best_run)]

[!Notebook-python[] (~/azureml-examples-main/sdk/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=create_local_dir)]

[!Notebook-python[] (~/azureml-examples-main/sdk/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=download_model)]

---
### Register the model

Register the model either using the `azureml` path or your locally downloaded path.

# [CLI v2](#tab/CLI-v2)

[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)]

```azurecli
az ml model create --name od-fridge-items-mlflow-model --version 1 --path azureml://jobs/$best_run/outputs/artifacts/outputs/mlflow-model/ --type mlflow_model --workspace-name [YOUR_AZURE_WORKSPACE] --resource-group [YOUR_AZURE_RESOURCE_GROUP] --subscription [YOUR_AZURE_SUBSCRIPTION]
```

# [Python SDK v2 (preview)](#tab/SDK-v2)

[!Notebook-python[] (~/azureml-examples-main/sdk/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=register_model)]

---
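For reference, the `azureml://` model path in the CLI command above follows a fixed format that can be assembled from the job name. This tiny helper only illustrates that format; the job name used in the example is hypothetical.

```python
def mlflow_model_path(job_name: str) -> str:
    """Build the azureml:// path to the MLflow model artifacts of an AutoML
    image run, matching the path used by `az ml model create` above."""
    return f"azureml://jobs/{job_name}/outputs/artifacts/outputs/mlflow-model/"

# Hypothetical job name, e.g. as returned when you retrieved the best run:
print(mlflow_model_path("witty_apple_123"))
# azureml://jobs/witty_apple_123/outputs/artifacts/outputs/mlflow-model/
```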
After you register the model you want to use, you can deploy it using a managed online endpoint. For details, see [deploy-managed-online-endpoint](how-to-deploy-managed-online-endpoint-sdk-v2.md).

### Configure online endpoint

# [CLI v2](#tab/CLI-v2)

[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)]

```yaml
$schema: https://azuremlschemas.azureedge.net/latest/managedOnlineEndpoint.schema.json
name: od-fridge-items-endpoint
auth_mode: key
```

# [Python SDK v2 (preview)](#tab/SDK-v2)

[!Notebook-python[] (~/azureml-examples-main/sdk/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=endpoint)]

---
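If you're scripting the CLI flow, the endpoint definition above can be written out as the `create_endpoint.yml` file that the create-endpoint command in this article expects. A minimal stdlib sketch, assuming the script runs in the same directory where you later invoke the CLI:

```python
from pathlib import Path

# Endpoint definition copied verbatim from the CLI v2 YAML example above.
ENDPOINT_YAML = """\
$schema: https://azuremlschemas.azureedge.net/latest/managedOnlineEndpoint.schema.json
name: od-fridge-items-endpoint
auth_mode: key
"""

# Write the file that `az ml online-endpoint create --file` will consume.
Path("create_endpoint.yml").write_text(ENDPOINT_YAML)
```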
### Create the endpoint

Using the `MLClient` created earlier, we'll now create the endpoint in the workspace. This command starts the endpoint creation and returns a confirmation response while the endpoint creation continues.

# [CLI v2](#tab/CLI-v2)

[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)]

```azurecli
az ml online-endpoint create --file .\create_endpoint.yml --workspace-name [YOUR_AZURE_WORKSPACE] --resource-group [YOUR_AZURE_RESOURCE_GROUP] --subscription [YOUR_AZURE_SUBSCRIPTION]
```

# [Python SDK v2 (preview)](#tab/SDK-v2)

[!Notebook-python[] (~/azureml-examples-main/sdk/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=create_endpoint)]

---
### Configure online deployment

A deployment is a set of resources required for hosting the model that does the actual inferencing. We'll create a deployment for our endpoint using the `ManagedOnlineDeployment` class. You can use either GPU or CPU VM SKUs for your deployment cluster.

# [CLI v2](#tab/CLI-v2)

[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)]

```yaml
name: od-fridge-items-mlflow-deploy
endpoint_name: od-fridge-items-endpoint
model: azureml:od-fridge-items-mlflow-model@latest
instance_type: Standard_DS3_v2
instance_count: 1
liveness_probe:
  failure_threshold: 30
  success_threshold: 1
  timeout: 2
  period: 10
  initial_delay: 2000
readiness_probe:
  failure_threshold: 10
  success_threshold: 1
  timeout: 10
  period: 10
  initial_delay: 2000
```

# [Python SDK v2 (preview)](#tab/SDK-v2)

[!Notebook-python[] (~/azureml-examples-main/sdk/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=deploy)]

---
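As a sanity check on the probe settings above: assuming the usual probe semantics (wait `initial_delay` seconds, then check every `period` seconds, and declare failure after `failure_threshold` consecutive failed checks), the deployment gets roughly `initial_delay + failure_threshold * period` seconds before a liveness failure is declared. The helper below is only an illustration of that arithmetic:

```python
def worst_case_probe_seconds(initial_delay: int, period: int, failure_threshold: int) -> int:
    """Approximate seconds before a probe declares failure: an initial wait,
    then failure_threshold consecutive failed checks spaced period seconds apart."""
    return initial_delay + failure_threshold * period

# Values from the liveness_probe section of the deployment YAML above:
print(worst_case_probe_seconds(initial_delay=2000, period=10, failure_threshold=30))  # 2300
```

With the long `initial_delay` of 2000 seconds in the sample config, the model container has ample time to load the model before the first probe runs.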
### Create the deployment

Using the `MLClient` created earlier, we'll now create the deployment in the workspace. This command starts the deployment creation and returns a confirmation response while the deployment creation continues.

# [CLI v2](#tab/CLI-v2)

[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)]

```azurecli
az ml online-deployment create --file .\create_deployment.yml --workspace-name [YOUR_AZURE_WORKSPACE] --resource-group [YOUR_AZURE_RESOURCE_GROUP] --subscription [YOUR_AZURE_SUBSCRIPTION]
```

# [Python SDK v2 (preview)](#tab/SDK-v2)

[!Notebook-python[] (~/azureml-examples-main/sdk/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=create_deploy)]

---
### Update traffic

By default, the current deployment is set to receive 0% traffic. You can set the traffic percentage the current deployment should receive. The sum of the traffic percentages of all the deployments with one endpoint shouldn't exceed 100%.

# [CLI v2](#tab/CLI-v2)

[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)]

```azurecli
az ml online-endpoint update --name 'od-fridge-items-endpoint' --traffic 'od-fridge-items-mlflow-deploy=100' --workspace-name [YOUR_AZURE_WORKSPACE] --resource-group [YOUR_AZURE_RESOURCE_GROUP] --subscription [YOUR_AZURE_SUBSCRIPTION]
```

# [Python SDK v2 (preview)](#tab/SDK-v2)

[!Notebook-python[] (~/azureml-examples-main/sdk/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=update_traffic)]

---
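The 100% constraint can be checked before issuing the update. A small sketch (the deployment name is the one used throughout this article; the validation helper itself is illustrative, not part of the SDK):

```python
def validate_traffic(traffic: dict) -> int:
    """Ensure the traffic percentages across an endpoint's deployments
    don't exceed 100%, and return the total that is currently allocated."""
    total = sum(traffic.values())
    if total > 100:
        raise ValueError(f"Traffic adds up to {total}%, which exceeds 100%")
    return total

# Routing all traffic to the single deployment created above:
print(validate_traffic({"od-fridge-items-mlflow-deploy": 100}))  # 100
```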
Alternatively, you can deploy the model from the [Azure Machine Learning studio UI](https://ml.azure.com/).
Navigate to the model you wish to deploy in the **Models** tab of the automated ML run, select **Deploy**, and then select **Deploy to real-time endpoint**.

![Screenshot of how the Deployment page looks after selecting the Deploy option.](./media/how-to-auto-train-image-models/deploy-end-point.png)

This is how your review page looks. You can select the instance type and instance count, and set the traffic percentage for the current deployment.

![Screenshot of how the top of the review page looks after selecting the options to deploy.](./media/how-to-auto-train-image-models/review-deploy-1.png)
![Screenshot of how the bottom of the review page looks after selecting the options to deploy.](./media/how-to-auto-train-image-models/review-deploy-2.png)
### Update inference settings

In the previous step, we downloaded the file `mlflow-model/artifacts/settings.json` from the best model. It can be used to update the inference settings before registering the model, although it's recommended to use the same parameters as training for the best performance.

Each of the tasks (and some models) has a set of parameters. By default, we use the same values for the parameters that were used during training and validation. Depending on the behavior we need when using the model for inference, we can change these parameters. Below is a list of parameters for each task type and model.

| Task | Parameter name | Default |
|--------- |------------- | --------- |
|Image classification (multi-class and multi-label) | `valid_resize_size`<br>`valid_crop_size` | 256<br>224 |
|Object detection | `min_size`<br>`max_size`<br>`box_score_thresh`<br>`nms_iou_thresh`<br>`box_detections_per_img` | 600<br>1333<br>0.3<br>0.5<br>100 |
|Object detection using `yolov5`| `img_size`<br>`model_size`<br>`box_score_thresh`<br>`nms_iou_thresh` | 640<br>medium<br>0.1<br>0.5 |
|Instance segmentation| `min_size`<br>`max_size`<br>`box_score_thresh`<br>`nms_iou_thresh`<br>`box_detections_per_img`<br>`mask_pixel_score_threshold`<br>`max_number_of_polygon_points`<br>`export_as_image`<br>`image_type` | 600<br>1333<br>0.3<br>0.5<br>100<br>0.5<br>100<br>False<br>JPG|

For a detailed description of task-specific hyperparameters, see [Hyperparameters for computer vision tasks in automated machine learning](./reference-automl-images-hyperparameters.md).
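The defaults in the table above can be captured as a small lookup, for example to pre-fill overrides before editing `settings.json`. The task keys and helper below are illustrative; the values are copied straight from the table:

```python
# Default inference settings per task, as listed in the table above.
DEFAULT_INFERENCE_SETTINGS = {
    "image_classification": {"valid_resize_size": 256, "valid_crop_size": 224},
    "object_detection": {
        "min_size": 600, "max_size": 1333, "box_score_thresh": 0.3,
        "nms_iou_thresh": 0.5, "box_detections_per_img": 100,
    },
    "object_detection_yolov5": {
        "img_size": 640, "model_size": "medium",
        "box_score_thresh": 0.1, "nms_iou_thresh": 0.5,
    },
    "instance_segmentation": {
        "min_size": 600, "max_size": 1333, "box_score_thresh": 0.3,
        "nms_iou_thresh": 0.5, "box_detections_per_img": 100,
        "mask_pixel_score_threshold": 0.5, "max_number_of_polygon_points": 100,
        "export_as_image": False, "image_type": "JPG",
    },
}

def inference_settings(task: str, **overrides) -> dict:
    """Return the default settings for a task with any overrides applied."""
    return {**DEFAULT_INFERENCE_SETTINGS[task], **overrides}

# For example, raise the detection score threshold while keeping other defaults:
print(inference_settings("object_detection", box_score_thresh=0.4)["box_score_thresh"])  # 0.4
```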
If you want to use tiling and want to control the tiling behavior, the following parameters are available: `tile_grid_size`, `tile_overlap_ratio`, and `tile_predictions_nms_thresh`. For more details on these parameters, see [Train a small object detection model using AutoML](./how-to-use-automl-small-object-detect.md).

## Example notebooks

Review detailed code examples and use cases in the [GitHub notebook repository for automated machine learning samples](https://github.com/Azure/azureml-examples/tree/main/sdk/jobs/automl-standalone-jobs). Check the folders with the 'automl-image-' prefix for samples specific to building computer vision models.
## Code examples
# [CLI v2](#tab/CLI-v2)