You can create data inputs from training and validation MLTable from your local directory or cloud storage with the following code:
In individual trials, you directly control the model architecture and hyperparameters.
#### Supported model architectures
The following table summarizes the supported legacy models for each computer vision task. Using only these legacy models triggers runs on the legacy runtime, where each individual run or trial is submitted as a command job. See below for HuggingFace and MMDetection support.
Task | model architectures | String literal syntax<br>***`default_model`\**** denoted with \*
#### Supported model architectures - HuggingFace and MMDetection (preview)
With the new backend that runs on [Azure Machine Learning pipelines](concept-ml-pipelines.md), you can additionally use any image classification model from the [HuggingFace Hub](https://huggingface.co/models?pipeline_tag=image-classification&library=transformers) that is part of the transformers library (such as microsoft/beit-base-patch16-224), as well as any object detection or instance segmentation model from the [MMDetection Version 2.28.2 Model Zoo](https://mmdetection.readthedocs.io/en/v2.28.2/model_zoo.html) (such as atss_r50_fpn_1x_coco).
In addition to supporting any model from HuggingFace Transformers and MMDetection 2.28.2, we also offer a list of curated models from these libraries in the azureml-staging registry. These curated models have been tested thoroughly and use default hyperparameters selected from extensive benchmarking to ensure effective training. The table below summarizes these curated models.
Task | model architectures | String literal syntax
```python
classification_models = []
# registry_ml_client is assumed to be an MLClient scoped to the model registry
for model in registry_ml_client.models.list():
    if model.tags['task'] == 'image-classification':  # choose an image task
        classification_models.append(model.name)

classification_models
```
Output:
```
['google-vit-base-patch16-224',
 'microsoft-swinv2-base-patch4-window12-192-22k',
 'facebook-deit-base-patch16-224',
 'microsoft-beit-base-patch16-224-pt22k-ft22k']
```
Using any HuggingFace or MMDetection model triggers runs that use pipeline components. If both legacy and HuggingFace/MMDetection models are used, all runs/trials are triggered using components.
In addition to controlling the model architecture, you can also tune hyperparameters used for model training. While many of the hyperparameters exposed are model-agnostic, there are instances where hyperparameters are task-specific or model-specific. [Learn more about the available hyperparameters for these instances](reference-automl-images-hyperparameters.md).