# Model Preparing
To prepare models and data for benchmarking, please follow the instructions below.
1. Create a `<working_dir>` directory which will contain models and datasets:

   ```sh
   mkdir <working_dir>
   ```

2. Download models using the OpenVINO Model Downloader tool to the `<working_dir>` directory (see the end-to-end example after this list):

   ```sh
   omz_downloader --all --output_dir <working_dir> --cache_dir <cache_dir>
   ```

3. Convert models using the OpenVINO Model Converter tool to the `<working_dir>` directory:

   ```sh
   omz_converter --output_dir <working_dir> --download_dir <working_dir>
   ```

4. (Optional) Convert models to the INT8 precision:

   1. Prepare configuration files in accordance with `src/configs/quantization_configuration_file_template.xml`. Please use the GUI application (`src/config_maker`).

   2. Quantize models to the INT8 precision using the script `src/quantization/quantization.py` in accordance with `src/quantization/README.md`:

      ```sh
      python3 ~/dl-benchmark/src/quantization/quantization.py -c <config_path>
      ```
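For illustration, here is a minimal end-to-end sketch of the steps above. The model name `resnet-50-tf`, the directories, and the configuration path are assumptions for this example, not values prescribed by the repository; substitute your own:

```sh
# Create the working directory (illustrative path)
mkdir ~/models

# Download and convert a single model; the model name is an example,
# any model known to the Model Downloader works here
omz_downloader --name resnet-50-tf --output_dir ~/models --cache_dir ~/omz_cache
omz_converter --name resnet-50-tf --output_dir ~/models --download_dir ~/models

# (Optional) quantize to INT8; the config path is hypothetical and must
# point to a file prepared from the quantization configuration template
python3 ~/dl-benchmark/src/quantization/quantization.py -c ~/configs/quantization_config.xml
```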
[TBD]
## Exporting PyTorch Models to ONNX Format

`omz_converter` supports exporting PyTorch models to the ONNX format. For more information, see "Exporting a PyTorch Model to ONNX Format".
1. Create a `<working_dir>` directory which will contain models and datasets:

   ```sh
   mkdir <working_dir>
   ```

2. Download models using the OpenVINO Model Downloader tool to the `<working_dir>` directory (a sketch for a single PyTorch model follows this list):

   ```sh
   omz_downloader --all --output_dir <working_dir> --cache_dir <cache_dir>
   ```

   or

   ```sh
   omz_downloader --name <model_name> --output_dir <working_dir> --cache_dir <cache_dir>
   ```

3. Convert models using the OpenVINO Model Converter tool to the `<working_dir>` directory:

   ```sh
   omz_converter --output_dir <working_dir> --download_dir <working_dir>
   ```

   `output_dir` will contain the models converted to the ONNX format.
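As a minimal sketch, assuming the Open Model Zoo PyTorch model `resnet-50-pytorch` is the one you need (the model name and directories are illustrative):

```sh
mkdir ~/models

# Download the PyTorch model and convert it; for PyTorch models the
# converter goes through ONNX, so the output directory will contain
# the exported .onnx file
omz_downloader --name resnet-50-pytorch --output_dir ~/models --cache_dir ~/omz_cache
omz_converter --name resnet-50-pytorch --output_dir ~/models --download_dir ~/models
```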
TBD