How to accelerate inference speed by using an inference engine (e.g., Intel OpenVINO, NVIDIA TensorRT) instead of Caffe? #914

@jackgao0323

Description

Type of Issue

  • Help wanted

Your System Configuration

  1. OpenPose version: Latest GitHub code

  2. General configuration:

    • Installation mode: CMake
    • Operating system: Windows 10
    • Release or Debug mode: Release
    • Compiler: VS2015 Community
  3. Non-default settings:

    • 3-D Reconstruction module added?: No
    • Any other custom CMake configuration with respect to the default version?: No
  4. 3rd-party software:

    • Caffe version: Default from OpenPose
    • CMake version: 3.8.0
    • OpenCV version: OpenPose default
  5. If Windows system:

    • Portable demo or compiled library?

I want to accelerate the inference speed by using an inference engine (e.g., Intel OpenVINO or NVIDIA TensorRT) instead of Caffe.
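
To make the question concrete, the pattern I have in mind is roughly the sketch below: keep OpenPose's pre- and post-processing (resizing, NMS, part connection) exactly as they are and only swap the network forward pass. The `Net`, `TensorRTNet`, and `runNetwork` names are my own illustration, not actual OpenPose classes or functions.

```cpp
// Illustrative pattern only: hide the inference engine behind the same
// abstraction that currently wraps Caffe, so the rest of the pipeline
// never has to know which engine produced the output blob.
#include <vector>

// Abstract backend: run a preprocessed NCHW float blob through the
// network and return the raw output blob (heatmaps + PAFs).
struct Net
{
    virtual ~Net() = default;
    virtual std::vector<float> forward(const std::vector<float>& inputBlob) = 0;
};

// Engine-specific backend; the body would call the TensorRT (or
// OpenVINO) runtime instead of Caffe's forward pass.
struct TensorRTNet : Net
{
    std::vector<float> forward(const std::vector<float>& inputBlob) override
    {
        // ... enqueue inputBlob on the engine and copy the output back ...
        (void)inputBlob;
        return {}; // placeholder
    }
};

// The pose pipeline would only ever see the Net interface:
std::vector<float> runNetwork(Net& net, const std::vector<float>& inputBlob)
{
    return net.forward(inputBlob);
}
```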

I would like to know how to obtain the input that is fed to the Caffe model in your program, and what I should do with the output once I have it.
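
For the input side, if I read the code correctly, OpenPose resizes the frame to the network input size and normalizes pixels to roughly [-0.5, 0.5] before handing the blob to Caffe. A minimal sketch of that preprocessing (`makeInputBlob` is my own helper, not an OpenPose function; I believe the real code also keeps the aspect ratio and pads rather than stretching, which I have omitted for brevity):

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Build an NCHW float blob the way OpenPose (to my understanding)
// feeds Caffe: resize to the net input size, then map each uchar
// pixel to roughly [-0.5, 0.5] via x/256 - 0.5.
std::vector<float> makeInputBlob(const cv::Mat& bgrFrame, int netW, int netH)
{
    cv::Mat resized;
    cv::resize(bgrFrame, resized, cv::Size(netW, netH));
    std::vector<float> blob(3u * netW * netH);
    for (int c = 0; c < 3; ++c)          // channel planes first (NCHW)
        for (int y = 0; y < netH; ++y)
            for (int x = 0; x < netW; ++x)
                blob[(static_cast<size_t>(c) * netH + y) * netW + x] =
                    resized.at<cv::Vec3b>(y, x)[c] / 256.f - 0.5f;
    return blob;
}
```

On the output side, my understanding is that the output blob holds the body-part confidence heatmaps followed by the PAF channels (for example, 57 channels at 1/8 of the input resolution for the COCO model), and that this blob still needs to go through OpenPose's NMS and part-connection steps before keypoints come out. So my plan would be to feed the engine's output back into that part of the pipeline, if that is the right place to hook in.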

Thank you very much.
