Based on the task type, you can create AutoML image jobs using task specific `automl` functions.
For example:
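As a sketch with the Python SDK v2 (assuming an existing `MLClient` named `ml_client`, MLTable training and validation data folders, and a compute target called `gpu-cluster`), an object detection job could be created like this:

```python
from azure.ai.ml import Input, automl
from azure.ai.ml.constants import AssetTypes

# Task-specific factory function; image_classification, image_classification_multilabel,
# image_object_detection, and image_instance_segmentation all follow the same pattern.
image_object_detection_job = automl.image_object_detection(
    compute="gpu-cluster",                   # assumed compute target name
    experiment_name="automl-image-example",  # assumed experiment name
    training_data=Input(type=AssetTypes.MLTABLE, path="./data/training-mltable-folder"),
    validation_data=Input(type=AssetTypes.MLTABLE, path="./data/validation-mltable-folder"),
    target_column_name="label",
    primary_metric="mean_average_precision",
)

# Submit the job to the workspace.
returned_job = ml_client.jobs.create_or_update(image_object_detection_job)
```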
### Data augmentation
In general, deep learning model performance can often improve with more data. Data augmentation is a practical technique to amplify the data size and variability of a dataset which helps to prevent overfitting and improve the model’s generalization ability on unseen data. Automated ML applies different data augmentation techniques based on the computer vision task, before feeding input images to the model. Currently, there's no exposed hyperparameter to control data augmentations.
|Task | Impacted dataset | Data augmentation technique(s) applied |
|-------|----------|---------|
Once you've built a baseline model, you might want to optimize model performance in order to sweep over the model algorithm and hyperparameter space. You can use the following sample config to sweep over the hyperparameters for each algorithm, choosing from a range of values for learning_rate, optimizer, lr_scheduler, etc., to generate a model with the optimal primary metric. If hyperparameter values aren't specified, then default values are used for the specified algorithm.
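The following sketch shows one way this could look with the Python SDK v2, reusing the `image_object_detection_job` object from the earlier example; the model names, hyperparameter ranges, and sweep settings here are illustrative assumptions rather than recommended values:

```python
from azure.ai.ml.automl import SearchSpace
from azure.ai.ml.sweep import BanditPolicy, Choice, Uniform

# Cap the total number of trials and how many run concurrently.
image_object_detection_job.set_limits(max_trials=10, max_concurrent_trials=2)

# Search over two model algorithms, each with its own hyperparameter ranges.
image_object_detection_job.extend_search_space(
    [
        SearchSpace(
            model_name=Choice(["yolov5"]),
            learning_rate=Uniform(0.0001, 0.01),
            model_size=Choice(["small", "medium"]),
        ),
        SearchSpace(
            model_name=Choice(["fasterrcnn_resnet50_fpn"]),
            learning_rate=Uniform(0.0001, 0.001),
            optimizer=Choice(["sgd", "adam", "adamw"]),
        ),
    ]
)

# Random sampling with early termination of poorly performing trials.
image_object_detection_job.set_sweep(
    sampling_algorithm="random",
    early_termination=BanditPolicy(
        evaluation_interval=2, slack_factor=0.2, delay_evaluation=6
    ),
)
```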
### Primary metric
> [!TIP]
> Check how to navigate to the run results from the [View run results](how-to-understand-automated-ml.md#view-job-results) section.
For definitions and examples of the performance charts and metrics provided for each run, see [Evaluate automated machine learning experiment results](how-to-understand-automated-ml.md#metrics-for-image-models-preview).
## Register and deploy model
Once the run completes, you can register the model that was created from the best run (the configuration that resulted in the best primary metric). You can register the model either after downloading it locally or by specifying the `azureml` path with the corresponding job ID. Note: if you want to change the inference settings that are described below, you need to download the model, change `settings.json`, and register the model using the updated model folder.
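For example, a minimal sketch of registering the downloaded MLflow model folder with the Python SDK v2 (the local download path and model name are assumptions; you could instead pass an `azureml://jobs/...` path that points at the best run's outputs):

```python
from azure.ai.ml.constants import AssetTypes
from azure.ai.ml.entities import Model

# Assumes the best run's outputs were downloaded to ./artifact_downloads beforehand.
model = Model(
    path="./artifact_downloads/outputs/mlflow-model",  # assumed local download location
    name="automl-image-example-model",                  # assumed model name
    type=AssetTypes.MLFLOW_MODEL,
    description="Best model from the AutoML image run",
)
registered_model = ml_client.models.create_or_update(model)
```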
### Get the best run
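As a sketch, you can locate the best child run through MLflow, assuming the parent AutoML job object (`returned_job`) from submission and that the best-run tag name matches what AutoML image runs currently emit:

```python
import mlflow
from mlflow.tracking.client import MlflowClient

# Point MLflow at the workspace tracking store (assumes an existing MLClient).
mlflow.set_tracking_uri(
    ml_client.workspaces.get(name=ml_client.workspace_name).mlflow_tracking_uri
)

mlflow_client = MlflowClient()

# The parent AutoML run records its best child run in a tag (tag name is an assumption).
parent_run = mlflow_client.get_run(returned_job.name)
best_child_run_id = parent_run.data.tags["automl_best_child_run_id"]
best_run = mlflow_client.get_run(best_child_run_id)
print(best_run.data.metrics)
```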
After you register the model you want to use, you can deploy it by using a [managed online endpoint](how-to-deploy-managed-online-endpoint-sdk-v2.md).
Using the `MLClient` created earlier, we'll now create the endpoint in the workspace. This command starts the endpoint creation and returns a confirmation response while the endpoint creation continues.
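A minimal sketch, assuming an endpoint name of your choosing (it must be unique within the Azure region):

```python
from azure.ai.ml.entities import ManagedOnlineEndpoint

online_endpoint_name = "automl-image-example-endpoint"  # assumed endpoint name

endpoint = ManagedOnlineEndpoint(
    name=online_endpoint_name,
    description="Online endpoint for the AutoML image model",
    auth_mode="key",
)
ml_client.online_endpoints.begin_create_or_update(endpoint).result()
```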
A deployment is a set of resources required for hosting the model that does the actual inferencing. We'll create a deployment for our endpoint using the `ManagedOnlineDeployment` class. You can use either GPU or CPU VM SKUs for your deployment cluster.
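For instance, a sketch of a GPU deployment definition (the deployment name and the `Standard_NC6s_v3` instance type are assumptions; `registered_model` comes from the registration step above):

```python
from azure.ai.ml.entities import ManagedOnlineDeployment

deployment = ManagedOnlineDeployment(
    name="automl-image-example-deploy",  # assumed deployment name
    endpoint_name=online_endpoint_name,
    model=registered_model.id,
    instance_type="Standard_NC6s_v3",    # pick a GPU or CPU SKU available to your workspace
    instance_count=1,
)
```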
Using the `MLClient` created earlier, we'll now create the deployment in the workspace. This command will start the deployment creation and return a confirmation response while the deployment creation continues.
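Continuing the sketch above:

```python
# Start the deployment creation; .result() waits for it to finish.
ml_client.online_deployments.begin_create_or_update(deployment).result()
```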
By default, the current deployment is set to receive 0% traffic. You can set the traffic percentage the current deployment should receive. The sum of the traffic percentages of all deployments with one endpoint shouldn't exceed 100%.
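For example, to route all traffic to the deployment created above:

```python
# Send 100% of endpoint traffic to the new deployment, then update the endpoint.
endpoint.traffic = {"automl-image-example-deploy": 100}
ml_client.online_endpoints.begin_create_or_update(endpoint).result()
```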
Alternatively, you can deploy the model from the [Azure Machine Learning studio UI](https://ml.azure.com/).
Navigate to the model you wish to deploy in the **Models** tab of the automated ML run, select **Deploy**, and then select **Deploy to real-time endpoint**.
This is how your review page looks. You can select the instance type and instance count, and set the traffic percentage for the current deployment.
### Update inference settings
In the previous step, we downloaded the `mlflow-model/artifacts/settings.json` file from the best model. It can be used to update the inference settings before registering the model, although it's recommended to use the same parameters as training for the best performance.
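A minimal sketch of editing that file before registration (the download location and the parameter shown are assumptions for illustration; see the reference linked below for the parameters your task actually supports):

```python
import json

# Assumed path to the downloaded settings file.
settings_path = "./artifact_downloads/outputs/mlflow-model/artifacts/settings.json"

with open(settings_path) as f:
    inference_settings = json.load(f)

# Illustrative change: adjust an object detection inference parameter.
inference_settings["box_score_threshold"] = 0.4

with open(settings_path, "w") as f:
    json.dump(inference_settings, f, indent=2)
```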
Each of the tasks (and some models) has a set of parameters. By default, we use the same values for the parameters that were used during training and validation. Depending on the behavior that we need when using the model for inference, we can change these parameters. Below you can find a list of parameters for each task type and model.
For a detailed description of task-specific hyperparameters, see [Hyperparameters for computer vision tasks in automated machine learning](./reference-automl-images-hyperparameters.md).
If you want to use tiling and want to control tiling behavior, the following parameters are available: `tile_grid_size`, `tile_overlap_ratio`, and `tile_predictions_nms_thresh`. For more details on these parameters, see [Train a small object detection model using AutoML](./how-to-use-automl-small-object-detect.md).
## Example notebooks
Review detailed code examples and use cases in the [GitHub notebook repository for automated machine learning samples](https://github.com/Azure/azureml-examples/tree/main/sdk/jobs/automl-standalone-jobs). Check the folders with the 'automl-image-' prefix for samples specific to building computer vision models.