@@ -67,17 +67,18 @@ The basic usage is to run the script like this:
 ./downloader.py --all
 ```
 
-This will download all models into a directory tree rooted in the current
-directory. To download into a different directory, use the `-o`/`--output_dir`
-option:
+This will download all models. The `--all` option can be replaced with
+other filter options to download only a subset of models. See the "Shared options"
+section.
+
+By default, the script will download models into a directory tree rooted
+in the current directory. To download into a different directory, use
+the `-o`/`--output_dir` option:
 
 ```sh
 ./downloader.py --all --output_dir my/download/directory
 ```
 
-The `--all` option can be replaced with other filter options to download only
-a subset of models. See the "Shared options" section.
-
 You may use `--precisions` flag to specify comma separated precisions of weights
 to be downloaded.
 
@@ -221,6 +222,9 @@ This will convert all models into the Inference Engine IR format. Models that
 were originally in that format are ignored. Models in PyTorch and Caffe2 formats will be
 converted in ONNX format first.
 
+The `--all` option can be replaced with other filter options to convert only
+a subset of models. See the "Shared options" section.
+
 The current directory must be the root of a download tree created by the model
 downloader. To specify a different download tree path, use the `-d`/`--download_dir`
 option:
@@ -237,9 +241,6 @@ into a different directory tree, use the `-o`/`--output_dir` option:
 ```
 > Note: models in intermediate format are placed to this directory too.
 
-The `--all` option can be replaced with other filter options to convert only
-a subset of models. See the "Shared options" section.
-
 By default, the script will produce models in every precision that is supported
 for conversion. To only produce models in a specific precision, use the `--precisions`
 option:
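
Putting the options described above together, an end-to-end invocation might look like the sketch below. The model name, the directory paths, and the `converter.py` script name are illustrative assumptions, not taken from the diff above; only the flags themselves (`--precisions`, `-o`/`--output_dir`, `-d`/`--download_dir`, and the filter options replacing `--all`) come from the documented behavior:

```shell
# Download a single model (a "Shared options" filter instead of --all)
# in selected precisions into ./models.
# "face-detection-adas-0001" is an illustrative model name.
./downloader.py --name face-detection-adas-0001 --precisions FP16,FP32 -o models

# Convert it to Inference Engine IR, reading the download tree from ./models
# and writing converted models (and any intermediate-format files) to
# ./models/converted. The converter script name here is an assumption.
./converter.py --name face-detection-adas-0001 -d models -o models/converted --precisions FP16
```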