
Commit 59c90ea

Update README.md
1 parent ae3a9e7 commit 59c90ea

File tree

1 file changed: +16 -11 lines changed

examples/openvino/README.md

Lines changed: 16 additions & 11 deletions
@@ -9,8 +9,7 @@ Below is the layout of the `examples/openvino` directory, which includes the nec
 ```
 examples/openvino
 ├── README.md # Documentation for examples (this file)
-├── aot_openvino_compiler.py # Example script for AoT export
-└── export_and_infer_openvino.py # Example script to export and execute models with python bindings
+└── aot_optimize_and_infer.py # Example script to export and execute models
 ```
 
 # Build Instructions for Examples
@@ -20,13 +19,13 @@ Follow the [instructions](../../backends/openvino/README.md) of **Prerequisites*
 
 ## AOT step:
 
-The export script called `aot_openvino_compiler.py` allows users to export deep learning models from various model suites (TIMM, Torchvision, Hugging Face) to a openvino backend using **Executorch**. Users can dynamically specify the model, input shape, and target device.
+The `aot_optimize_and_infer.py` export script allows users to export deep learning models from various model suites (TIMM, Torchvision, Hugging Face) to an OpenVINO backend using **ExecuTorch**. Users can dynamically specify the model, input shape, and target device.
 
 ### **Usage**
 
 #### **Command Structure**
 ```bash
-python aot_openvino_compiler.py --suite <MODEL_SUITE> --model <MODEL_NAME> --input_shape <INPUT_SHAPE> --device <DEVICE>
+python aot_optimize_and_infer.py --suite <MODEL_SUITE> --model <MODEL_NAME> --input_shape <INPUT_SHAPE> --device <DEVICE>
 ```
 
 #### **Arguments**
@@ -50,6 +49,12 @@ python aot_openvino_compiler.py --suite <MODEL_SUITE> --model <MODEL_NAME> --inp
 - `[1, 3, 224, 224]` (Zsh users: wrap in quotes)
 - `(1, 3, 224, 224)`
 
+- **`--export`** (optional):
+  Save the exported model as a `.pte` file.
+
+- **`--model_file_name`** (optional):
+  Specify a custom file name for the exported model.
+
 - **`--batch_size`**:
   Batch size for validation. Defaults to 1.
   The dataset length must be evenly divisible by the batch size.
@@ -93,31 +98,31 @@ python aot_openvino_compiler.py --suite <MODEL_SUITE> --model <MODEL_NAME> --inp
 
 #### Export a TIMM VGG16 model for the CPU
 ```bash
-python aot_openvino_compiler.py --suite timm --model vgg16 --input_shape [1, 3, 224, 224] --device CPU
+python aot_optimize_and_infer.py --export --suite timm --model vgg16 --input_shape [1, 3, 224, 224] --device CPU
 ```
 
 #### Export a Torchvision ResNet50 model for the GPU
 ```bash
-python aot_openvino_compiler.py --suite torchvision --model resnet50 --input_shape "(1, 3, 256, 256)" --device GPU
+python aot_optimize_and_infer.py --export --suite torchvision --model resnet50 --input_shape "(1, 3, 256, 256)" --device GPU
 ```
 
 #### Export a Hugging Face BERT model for the CPU
 ```bash
-python aot_openvino_compiler.py --suite huggingface --model bert-base-uncased --input_shape "(1, 512)" --device CPU
+python aot_optimize_and_infer.py --export --suite huggingface --model bert-base-uncased --input_shape "(1, 512)" --device CPU
 ```
 #### Export and validate TIMM Resnet50d model for the CPU
 ```bash
-python aot_openvino_compiler.py --suite timm --model vgg16 --input_shape [1, 3, 224, 224] --device CPU --validate --dataset /path/to/dataset
+python aot_optimize_and_infer.py --export --suite timm --model vgg16 --input_shape [1, 3, 224, 224] --device CPU --validate --dataset /path/to/dataset
 ```
 
 #### Export, quantize and validate TIMM Resnet50d model for the CPU
 ```bash
-python aot_openvino_compiler.py --suite timm --model vgg16 --input_shape [1, 3, 224, 224] --device CPU --validate --dataset /path/to/dataset --quantize
+python aot_optimize_and_infer.py --export --suite timm --model vgg16 --input_shape [1, 3, 224, 224] --device CPU --validate --dataset /path/to/dataset --quantize
 ```
 
-#### Export a Torchvision Inception V3 model for the CPU and Execute Inference
+#### Execute Inference with Torchvision Inception V3 model for the CPU
 ```bash
-python aot_openvino_compiler.py --suite torchvision --model inception_v3 --infer --warmup_iter 10 --num_iter 100 --input_shape "(1, 3, 256, 256)" --device CPU
+python aot_optimize_and_infer.py --suite torchvision --model inception_v3 --infer --warmup_iter 10 --num_iter 100 --input_shape "(1, 3, 256, 256)" --device CPU
 ```
 
 ### **Notes**
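The `--input_shape` argument in the diff above accepts both list syntax (`[1, 3, 224, 224]`) and tuple syntax (`(1, 3, 224, 224)`). A minimal sketch of how such a string could be parsed, assuming an `ast.literal_eval`-based approach — `parse_input_shape` is a hypothetical helper, not necessarily how the script does it:

```python
import ast

def parse_input_shape(arg: str) -> tuple:
    """Hypothetical helper: parse a shape string such as "[1, 3, 224, 224]"
    or "(1, 3, 224, 224)" into a tuple of ints."""
    shape = ast.literal_eval(arg)  # safely evaluates list/tuple literals only
    if not isinstance(shape, (list, tuple)):
        raise ValueError(f"Expected a list or tuple of dims, got: {arg!r}")
    return tuple(int(d) for d in shape)

print(parse_input_shape("[1, 3, 224, 224]"))  # (1, 3, 224, 224)
print(parse_input_shape("(1, 512)"))          # (1, 512)
```

Because Zsh expands square brackets, the list form must be quoted on that shell, as the argument notes.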

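The `--batch_size` argument requires the dataset length to be evenly divisible by the batch size. A quick sketch of that validation — `check_batch_size` is a hypothetical helper mirroring the README's stated constraint, not code from the script:

```python
def check_batch_size(dataset_len: int, batch_size: int) -> int:
    """Hypothetical check: return the number of full batches, or raise if
    the dataset cannot be split evenly, per the --batch_size constraint."""
    if batch_size < 1:
        raise ValueError("batch_size must be >= 1")
    if dataset_len % batch_size != 0:
        raise ValueError(
            f"Dataset length {dataset_len} is not divisible by batch size {batch_size}"
        )
    return dataset_len // batch_size

print(check_batch_size(1000, 8))  # 125
```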