# Usage Guide for Models Provided by ExecuTorch

This guide provides examples and instructions for open source models. Some models under this folder might also have their own customized runner.

## Model categories
The following models are categorized by their primary use case.

1. Language Model:
   - albert
   - bert
   - distilbert
   - eurobert
   - llama
   - roberta

2. Vision Model:
   - conv_former
   - cvt
   - deit
   - dino_v2
   - dit
   - efficientnet
   - efficientSAM
   - esrgan
   - fastvit
   - fbnet
   - focalnet
   - gMLP_image_classification
   - mobilevit1
   - mobilevit_v2
   - pvt
   - regnet
   - retinanet
   - squeezenet
   - ssd300_vgg16
   - swin_transformer

## Prerequisite
Please follow the [README](../README.md) first to set up the environment.

## Model running
Some models require specific datasets. Please download them in advance and place them in the appropriate folders.

Detailed instructions for each model are provided below.
If you want to export a model without running it, add `--compile_only` to the command.

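For example, a sketch using the `dit` command from item 3 below; the other flags stay unchanged (whether `-s` is still needed when only compiling depends on the script):

```bash
# Export-only run: compiles the model to an artifact without executing it on device.
python dit.py -m ${SOC_MODEL} -b path/to/build-android/ -s ${DEVICE_SERIAL} --compile_only
```
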
1. `albert`, `bert`, `distilbert`, `eurobert`, `roberta`:
   - Required Dataset: wikisent2

   Download the [dataset](https://www.kaggle.com/datasets/mikeortman/wikipedia-sentences) first, and place it in a valid folder.
   ```bash
   python albert.py -m ${SOC_MODEL} -b path/to/build-android/ -s ${DEVICE_SERIAL} -d path/to/wikisent2
   ```
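
   The same invocation should apply to the other models in this group, assuming each has a script named after the model, e.g. for `bert`:
   ```bash
   # Assumed: bert.py takes the same flags as albert.py above.
   python bert.py -m ${SOC_MODEL} -b path/to/build-android/ -s ${DEVICE_SERIAL} -d path/to/wikisent2
   ```
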
2. `conv_former`, `cvt`, `deit`, `dino_v2`, `efficientnet`, `fbnet`, `focalnet`, `gMLP_image_classification`, `mobilevit1`, `mobilevit_v2`, `pvt`, `squeezenet`, `swin_transformer`:
   - Required Dataset: ImageNet

   Download the [dataset](https://www.kaggle.com/datasets/ifigotin/imagenetmini-1000) first, and place it in a valid folder.
   ```bash
   python SCRIPT_NAME.py -m ${SOC_MODEL} -b path/to/build-android/ -s ${DEVICE_SERIAL} -d path/to/ImageNet
   ```
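
   For example, substituting `SCRIPT_NAME` with the `deit` script:
   ```bash
   # Concrete instantiation of the generic command above.
   python deit.py -m ${SOC_MODEL} -b path/to/build-android/ -s ${DEVICE_SERIAL} -d path/to/ImageNet
   ```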
|
3. `dit`:

   ```bash
   python dit.py -m ${SOC_MODEL} -b path/to/build-android/ -s ${DEVICE_SERIAL}
   ```

4. `esrgan`:
   - Required Dataset: B100

   The dataset will be downloaded automatically if `-d` is specified. Alternatively, you can provide your own dataset using `--hr_ref_dir` and `--lr_dir`.

   - Required OSS Repo: Real-ESRGAN

   Clone the [OSS Repo](https://github.com/ai-forever/Real-ESRGAN) first, and place it in a valid folder.

   ```bash
   python esrgan.py -m ${SOC_MODEL} -b path/to/build-android/ -s ${DEVICE_SERIAL} --oss_repo path/to/Real-ESRGAN
   ```
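
   To evaluate with your own image pairs instead of the auto-downloaded B100 set, a sketch (the directory names are placeholders):
   ```bash
   # Supply local high-resolution references and low-resolution inputs.
   python esrgan.py -m ${SOC_MODEL} -b path/to/build-android/ -s ${DEVICE_SERIAL} --oss_repo path/to/Real-ESRGAN --hr_ref_dir path/to/hr_images --lr_dir path/to/lr_images
   ```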
|
5. `fastvit`:
   - Required Dataset: ImageNet

   Download the [dataset](https://www.kaggle.com/datasets/ifigotin/imagenetmini-1000) first, and place it in a valid folder.

   - Required OSS Repo: ml-fastvit

   Clone the [OSS Repo](https://github.com/apple/ml-fastvit) first, and place it in a valid folder.

   - Pretrained weight:

   Download the [pretrained weight](https://docs-assets.developer.apple.com/ml-research/models/fastvit/image_classification_distilled_models/fastvit_s12_reparam.pth.tar) first, and place it in a valid folder (the file should be `fastvit_s12_reparam.pth.tar`).
   ```bash
   python fastvit.py -m ${SOC_MODEL} -b path/to/build-android/ -s ${DEVICE_SERIAL} --oss_repo path/to/ml-fastvit -p path/to/pretrained_weight -d path/to/ImageNet
   ```

6. `regnet`:
   - Required Dataset: ImageNet

   Download the [dataset](https://www.kaggle.com/datasets/ifigotin/imagenetmini-1000) first, and place it in a valid folder.
   - Weights: regnet_y_400mf, regnet_x_400mf

   Use `--weights` to specify which regnet weights/model to execute.
   ```bash
   python regnet.py -m ${SOC_MODEL} -b path/to/build-android/ -s ${DEVICE_SERIAL} -d path/to/ImageNet --weights <WEIGHTS>
   ```
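
   For example, to run the `regnet_y_400mf` variant:
   ```bash
   # <WEIGHTS> replaced with one of the two variants listed above.
   python regnet.py -m ${SOC_MODEL} -b path/to/build-android/ -s ${DEVICE_SERIAL} -d path/to/ImageNet --weights regnet_y_400mf
   ```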
|
7. `retinanet`:
   - Required Dataset: COCO

   Download [val2017](http://images.cocodataset.org/zips/val2017.zip) and [annotations](http://images.cocodataset.org/annotations/annotations_trainval2017.zip) first, and place them in a valid folder.

   ```bash
   python retinanet.py -m ${SOC_MODEL} -b path/to/build-android/ -s ${DEVICE_SERIAL} -d path/to/COCO # the COCO folder contains 'val_2017' & 'annotations'
   ```
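
   A sketch of preparing that folder (note the zip extracts to `val2017`, while the comment above says `val_2017`; rename if the script expects the underscore):
   ```bash
   # Assemble the COCO folder layout described in the comment above.
   mkdir -p COCO
   unzip val2017.zip -d COCO                   # extracts to COCO/val2017/
   unzip annotations_trainval2017.zip -d COCO  # extracts to COCO/annotations/
   ```
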
8. `ssd300_vgg16`:
   - Required OSS Repo:

   Clone the [OSS Repo](https://github.com/sgrvinod/a-PyTorch-Tutorial-to-Object-Detection) first, and place it in a valid folder.

   - Pretrained weight:

   Download the [pretrained weight](https://github.com/sgrvinod/a-PyTorch-Tutorial-to-Object-Detection) first, and place it in a valid folder (the file should be `checkpoint_ssd300.pth.tar`).

   - Required Dataset: VOCSegmentation

   Download [VOC 2007](https://github.com/sgrvinod/a-PyTorch-Tutorial-to-Object-Detection?tab=readme-ov-file#download) first, and place it in a valid folder.
   ```bash
   python ssd300_vgg16.py -m ${SOC_MODEL} -b path/to/build-android/ -s ${DEVICE_SERIAL} --oss_repo path/to/a-PyTorch-Tutorial-to-Object-Detection -p path/to/pretrained_weight
   ```
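
   The command above does not show a dataset flag; if `ssd300_vgg16.py` follows the convention of the other scripts here, the VOC folder would presumably be passed via `-d` (an assumption, not confirmed by this guide):
   ```bash
   # Hypothetical: -d assumed to follow the other scripts' convention.
   python ssd300_vgg16.py -m ${SOC_MODEL} -b path/to/build-android/ -s ${DEVICE_SERIAL} --oss_repo path/to/a-PyTorch-Tutorial-to-Object-Detection -p path/to/pretrained_weight -d path/to/VOC2007
   ```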
|
9. `llama`:
   For llama, please check the [README](llama/README.md) under the llama folder for more details.

10. `efficientSAM`:
    For efficientSAM, please refer to the efficientSAM folder.
    - Pretrained weight:

    Download [EfficientSAM-S](https://github.com/yformer/EfficientSAM/blob/main/weights/efficient_sam_vits.pt.zip) or [EfficientSAM-Ti](https://github.com/yformer/EfficientSAM/blob/main/weights/efficient_sam_vitt.pt) first, and place it in a valid folder.
    - Required Dataset: ImageNet

    Download the [dataset](https://www.kaggle.com/datasets/ifigotin/imagenetmini-1000) first, and place it in a valid folder.

    - Required OSS Repo:

    Clone the [OSS Repo](https://github.com/yformer/EfficientSAM) first, and place it in a valid folder.
    ```bash
    python efficientSAM.py -m ${SOC_MODEL} -b path/to/build-android/ -s ${DEVICE_SERIAL} --oss_repo path/to/EfficientSAM -p path/to/pretrained_weight -d path/to/ImageNet
    ```
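
    Note that the EfficientSAM-S checkpoint is distributed as a zip; judging by the file extension, it presumably needs to be extracted before being passed via `-p`:
    ```bash
    # Assumption based on the .pt.zip extension: extract the .pt checkpoint first.
    unzip efficient_sam_vits.pt.zip -d path/to/pretrained_weight
    ```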