## Demo
A demo shows the main idea of how to infer a model using OpenVINO™. If your model solves one of the tasks supported by the Open Model Zoo, try to find an appropriate option from [demos](demos/README.md) or [samples](https://docs.openvino.ai/latest/_docs_IE_DG_Samples_Overview.html). Otherwise, you must provide your own demo (C++ or Python).
The demo's name should end with the `_demo` suffix to follow the project's convention.
Demos are required to support the following arguments:
* `-h, --help`: show this help message and exit.
* `-m <MODEL FILE>`: path to an .xml file with a trained model. If the demo uses several models, an extended syntax can be used, like `--mdet`.
* `-i <INPUT>`: an input to process. For vision tasks the input might be a path to a single image or video file, a path to a folder of images, or a numeric camera ID; the default value must be `0`. For speech/audio tasks the input is a path to a WAV file. For NLP tasks the input might be a path to a text file or a quoted sentence of text.
* `-d <DEVICE>`: specify a device to infer on (the list of available devices is shown below). Default is CPU.
* `-o <FILE PATTERN>`: pattern for output file(s) to save.
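
The argument convention above can be sketched with Python's `argparse`; the parser below is a minimal illustrative assumption, not taken from an existing demo, and the long option names (`--model`, `--input`, `--device`, `--output`) are hypothetical:

```python
import argparse

def build_argparser():
    # -h/--help is added automatically by argparse.
    parser = argparse.ArgumentParser(description="Illustrative demo argument parser")
    parser.add_argument("-m", "--model", required=True,
                        help="Path to an .xml file with a trained model")
    parser.add_argument("-i", "--input", default="0",
                        help="Input to process: image/video path, folder of "
                             "images, or numeric camera ID (default: 0)")
    parser.add_argument("-d", "--device", default="CPU",
                        help="Target device to infer on (default: CPU)")
    parser.add_argument("-o", "--output", default="",
                        help="Pattern for output file(s) to save")
    return parser

# Parse a fixed argument list so the sketch is self-contained.
args = build_argparser().parse_args(["-m", "model.xml", "-i", "input.mp4"])
print(args.device)  # CPU
```

With this layout, any arguments the parser does not define (for example an unrecognized `-x`) make `parse_args` exit with a usage message, which keeps demo invocations consistent across the project.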
Add a `README.md` file that describes the demo usage. Update [demos' README.md](demos/README.md) by adding your demo to the list.