
Model preparation

Intel® Distribution of OpenVINO™ Toolkit

To prepare models and data for benchmarking, please follow the instructions below. A worked example for a single model is given after the steps.

  1. Create a <working_dir> directory that will contain models and datasets.

    mkdir <working_dir>
  2. Download models to the <working_dir> directory using the OpenVINO Model Downloader tool:

    omz_downloader --all --output_dir <working_dir> --cache_dir <cache_dir>
  3. Convert models to the <working_dir> directory using the OpenVINO Model Converter tool:

    omz_converter --output_dir <working_dir> --download_dir <working_dir>
  4. (Optional) Quantize models to INT8 precision:

    1. Prepare configuration files in accordance with src/configs/quantization_configuration_file_template.xml. Please use the GUI application (src/config_maker).

    2. Quantize models to INT8 precision using the script src/quantization/quantization.py in accordance with src/quantization/README.md.

      python3 ~/dl-benchmark/src/quantization/quantization.py -c <config_path>
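
A minimal end-to-end sketch for a single model is shown below. The model name resnet-50-pytorch, the ~/models working directory, and the ~/omz_cache cache directory are assumptions used only for illustration; substitute your own values.

    # Assumed example: prepare one Open Model Zoo model end to end
    mkdir -p ~/models                            # <working_dir>
    omz_downloader --name resnet-50-pytorch --output_dir ~/models --cache_dir ~/omz_cache
    omz_converter --name resnet-50-pytorch --output_dir ~/models --download_dir ~/models
    # Optional: quantize to INT8 with a configuration file prepared as in step 4
    python3 ~/dl-benchmark/src/quantization/quantization.py -c <config_path>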

Intel® Optimization for Caffe

[TBD]

Intel® Optimization for TensorFlow

[TBD]

ONNX Runtime

omz_converter supports exporting PyTorch models to the ONNX format. For more information, see Exporting a PyTorch Model to ONNX Format.

  1. Create a <working_dir> directory that will contain models and datasets.

    mkdir <working_dir>
  2. Download models to the <working_dir> directory using the OpenVINO Model Downloader tool:

    omz_downloader --all --output_dir <working_dir> --cache_dir <cache_dir>

    or

    omz_downloader --name <model_name> --output_dir <working_dir> --cache_dir <cache_dir>
  3. Convert models to the <working_dir> directory using the OpenVINO Model Converter tool:

    omz_converter --output_dir <working_dir> --download_dir <working_dir>

    The output directory (--output_dir) will contain the models converted to ONNX format; a worked example for a single model is shown below.
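
For example, to export a single PyTorch model to ONNX (the model name mobilenet-v2-pytorch is an assumption for illustration; any PyTorch model from the Open Model Zoo can be used):

    # Assumed example: download a PyTorch model and export it to ONNX
    omz_downloader --name mobilenet-v2-pytorch --output_dir <working_dir> --cache_dir <cache_dir>
    omz_converter --name mobilenet-v2-pytorch --output_dir <working_dir> --download_dir <working_dir>
    # The exported .onnx file is placed under <working_dir> (typically in public/<model_name>/)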

TensorFlow Lite

[TBD]
