
Increase the inference speed #2851

@Howardqlz

Description

📚 Documentation

Inference with a Faster R-CNN model on 2 CPUs is very slow for me (about 20 s per image). While looking for ways to speed it up, I saw somebody mention that there is a script in Tools that converts a PyTorch model to Caffe2 to increase inference speed, but I can't find it.
So does detectron2 support any ways to speed up inference on CPU?

  • Tool/caffe2_convert.py
