Closed
Labels
documentation — Problems about existing documentation or comments
Description
📚 Documentation
Inference with a Faster R-CNN model on a 2-core CPU is very slow (about 20 s per image). While looking for ways to speed this up, I read that there is a script under Tools that converts a PyTorch model to Caffe2 to increase inference speed, but I can't find it.
So does detectron2 support any ways to speed up inference on CPU?
- Tool/caffe2_convert.py
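I can't confirm the exact location of a Caffe2 conversion script, but as a general illustration of cutting CPU inference latency in PyTorch, TorchScript tracing plus freezing removes Python-interpreter overhead from the forward pass. This is a minimal sketch with a hypothetical stand-in CNN, not the actual detectron2 Faster R-CNN or its documented export path:

```python
import torch
import torch.nn as nn

# Hypothetical small CNN standing in for a detection backbone;
# the real detectron2 faster_rcnn model is not reproduced here.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1),
    nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1),
    nn.ReLU(),
).eval()

example = torch.randn(1, 3, 64, 64)

with torch.no_grad():
    # Trace the eager model into a static TorchScript graph,
    # then freeze it to inline weights and fold constants.
    traced = torch.jit.trace(model, example)
    traced = torch.jit.freeze(traced)
    out = traced(example)

print(tuple(out.shape))  # (1, 16, 64, 64)
```

The frozen module can also be saved with `traced.save(...)` and reloaded in a Python-free C++ runtime, which is the same motivation behind exporting to Caffe2.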