`examples/openvino/README.md` (39 additions, 87 deletions)
@@ -9,8 +9,7 @@ Below is the layout of the `examples/openvino` directory, which includes the nec
 ```
 examples/openvino
 ├── README.md                     # Documentation for examples (this file)
-├── aot_openvino_compiler.py      # Example script for AoT export
-└── export_and_infer_openvino.py  # Example script to export and execute models with python bindings
+└── aot_optimize_and_infer.py     # Example script to export and execute models
 ```

 # Build Instructions for Examples
@@ -20,14 +19,10 @@ Follow the [instructions](../../backends/openvino/README.md) of **Prerequisites*

 ## AOT step:

-The export script called `aot_openvino_compiler.py` allows users to export deep learning models from various model suites (TIMM, Torchvision, Hugging Face) to an OpenVINO backend using **ExecuTorch**. Users can dynamically specify the model, input shape, and target device.
+The Python script called `aot_optimize_and_infer.py` allows users to export deep learning models from various model suites (TIMM, Torchvision, Hugging Face) to an OpenVINO backend using **ExecuTorch**. Users can dynamically specify the model, input shape, and target device.
@@ -162,72 +183,3 @@ Run inference with a given model for 10 iterations:
     --model_path=model.pte \
     --num_executions=10
 ```
-
-## Running Python Example with Pybinding:
-
-You can use the `export_and_infer_openvino.py` script to run models with the OpenVINO backend through the Python bindings.
-
-### **Usage**
-
-#### **Command Structure**
-```bash
-python export_and_infer_openvino.py <ARGUMENTS>
-```
-
-#### **Arguments**
-
-**`--suite`** (required if the `--model_path` argument is not used):
-  Specifies the model suite to use. Must be used together with the `--model` argument.
-  Supported values:
-  - `timm` (e.g., VGG16, ResNet50)
-  - `torchvision` (e.g., resnet18, mobilenet_v2)
-  - `huggingface` (e.g., bert-base-uncased). NB: Quantization and validation are not supported yet.
-
-**`--model`** (required if the `--model_path` argument is not used):
-  Name of the model to export. Must be used together with the `--suite` argument.
-  Examples:
-  - For `timm`: `vgg16`, `resnet50`
-  - For `torchvision`: `resnet18`, `mobilenet_v2`
-  - For `huggingface`: `bert-base-uncased`, `distilbert-base-uncased`
-
-**`--model_path`** (required if the `--suite` and `--model` arguments are not used):
-  Path to the saved model file. This argument lets you load a compiled model from a file instead of downloading it from the model suites via the `--suite` and `--model` arguments.
-  Example: `<path to model folder>/resnet50_fp32.pte`
-
-**`--input_shape`** (required for random inputs):
-  Input shape for the model. Provide this as a **list** or **tuple**.
-  Examples:
-  - `[1, 3, 224, 224]` (Zsh users: wrap in quotes)
-  - `(1, 3, 224, 224)`
-
-**`--input_tensor_path`** (optional):
-  Path to the raw input tensor file. If this argument is not provided, a random input tensor is generated with the shape given by the `--input_shape` argument.
-  Example: `<path to the input tensor folder>/input_tensor.pt`
-
-**`--output_tensor_path`** (optional):
-  Path to the file where the raw output tensor will be saved.
-  Example: `<path to the output tensor folder>/output_tensor.pt`
-
-**`--device`** (optional):
-  Target device for the compiled model. Default is `CPU`.
-  Examples: `CPU`, `GPU`
-
-**`--num_iter`** (optional):
-  Number of inference iterations to execute for evaluation. Default is `1`.
-  Examples: `100`, `1000`
-
-**`--warmup_iter`** (optional):
-  Number of warmup inference iterations to execute before evaluation. Default is `0`.
-  Examples: `5`, `10`
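The warmup/evaluation split documented for `--num_iter` and `--warmup_iter` can be sketched as a plain timing loop. This is a minimal illustration, not code from the removed script; `run_once` is a hypothetical stand-in for a single inference call:

```python
import time

def benchmark(run_once, num_iter=1, warmup_iter=0):
    """Average latency of run_once over num_iter timed calls,
    preceded by warmup_iter untimed warmup calls."""
    for _ in range(warmup_iter):
        run_once()  # warmup iterations: executed but excluded from timing
    total = 0.0
    for _ in range(num_iter):
        start = time.perf_counter()
        run_once()
        total += time.perf_counter() - start
    return total / num_iter

# Dummy workload standing in for model inference.
avg_latency = benchmark(lambda: sum(range(10_000)), num_iter=5, warmup_iter=2)
```

Separating warmup from timed iterations avoids counting one-time costs (caches, lazy initialization) in the reported average.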
-
-### **Examples**
-
-#### Execute Torchvision ResNet50 model for the GPU with Random Inputs
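The argument contract documented above (either `--model_path`, or both `--suite` and `--model`) can be modeled with stdlib `argparse`. This is a hypothetical sketch of how the removed script's parser might look, fed flags matching the "Torchvision ResNet50 for the GPU" example; it is not the actual implementation:

```python
import argparse
import ast

def build_parser():
    # Hypothetical model of the documented CLI; not the real script's parser.
    p = argparse.ArgumentParser()
    p.add_argument("--suite", choices=["timm", "torchvision", "huggingface"])
    p.add_argument("--model")
    p.add_argument("--model_path")
    # Accepts "[1, 3, 224, 224]" or "(1, 3, 224, 224)" via literal_eval.
    p.add_argument("--input_shape", type=ast.literal_eval)
    p.add_argument("--device", default="CPU")
    p.add_argument("--num_iter", type=int, default=1)
    p.add_argument("--warmup_iter", type=int, default=0)
    return p

def validate(args):
    # Either --model_path, or both --suite and --model, must be supplied.
    if args.model_path is None and not (args.suite and args.model):
        raise SystemExit("provide --model_path, or both --suite and --model")
    return args

# Flags matching the documented ResNet50 / GPU / random-input example.
args = validate(build_parser().parse_args(
    ["--suite", "torchvision", "--model", "resnet50",
     "--input_shape", "(1, 3, 224, 224)", "--device", "GPU"]))
```

Passing `ast.literal_eval` as the `type=` callable is what lets one flag accept both list and tuple syntax, which is why Zsh users must quote the bracketed form.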