NewFeatureIdeas
Valentina edited this page Jan 5, 2023 · 18 revisions
New features:
- Extend the set of inference frameworks and models covered by regular benchmarks. Add support for inference using OpenCV (DNN module, with and without the Inference Engine backend), MXNet, and PyTorch.
- Validate models for the following frameworks: TensorFlow Lite and ONNX Runtime.
- Develop Docker containers for TensorFlow Lite and ONNX Runtime.
- Verify Docker containers for the following frameworks: Intel® Optimization for Caffe and Intel® Optimization for TensorFlow.
- Collect performance and accuracy metrics for the following frameworks: Intel® Optimization for Caffe, Intel® Optimization for TensorFlow, TensorFlow Lite, and ONNX Runtime.
- Test the prepared step-by-step manual.
- Automate regular testing of the main scenario's correctness, as well as the collection of performance and accuracy metrics for the available deep models, using generally accepted CI/CD tools.
- Develop a demo that showcases the main scenario of the DLI benchmarking system.
- Develop and/or integrate utilities for converting models from one storage format to another: provide the ability to convert models between the framework formats supported by the DLI benchmarking system, or to the ONNX format.
- Extend the set of hardware platforms for regular performance measurements (Raspberry Pi 4, 8 GB), subject to hardware availability.
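The performance-metric collection mentioned above could follow a pattern like the minimal sketch below. It times a framework-agnostic inference callable over repeated runs and reports latency and throughput. The function name `benchmark` and its parameters are illustrative assumptions, not DLI's actual API, and the lambda stands in for a real framework call (e.g. an ONNX Runtime session run).

```python
import statistics
import time

def benchmark(infer, warmup=5, iterations=50):
    """Measure per-inference latency (seconds) and throughput (FPS).

    `infer` is any zero-argument callable that runs one inference.
    This is a hypothetical helper, not part of the DLI codebase.
    """
    for _ in range(warmup):  # warm-up runs are excluded from the stats
        infer()
    latencies = []
    for _ in range(iterations):
        start = time.perf_counter()
        infer()
        latencies.append(time.perf_counter() - start)
    return {
        "median_latency_s": statistics.median(latencies),
        "avg_latency_s": sum(latencies) / len(latencies),
        "fps": len(latencies) / sum(latencies),
    }

# Stub standing in for a framework inference call:
metrics = benchmark(lambda: sum(range(10_000)))
```

Reporting the median alongside the mean is a common choice here, since a single slow outlier run (e.g. a cache miss or OS scheduling hiccup) skews the mean but not the median.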
Papers:
- An overview of frameworks for deep neural network inference and a comparison of their functionality.