Inference is implemented on each GPU #2730

@jiaerfei

Description

Following the code in tools/train_net.py, I adapted the build_evaluator function.
[Screenshot from 2021-03-11 13-07-33]
I intended to run evaluation after a certain number of iterations, and I trained with three GPUs. In practice, however, the test set was loaded onto every GPU during inference. I would like to run inference on a single GPU only. I would appreciate any suggestions on this issue.
[Screenshot from 2021-03-11 13-08-01]
[Screenshot from 2021-03-11 13-27-11]
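A common pattern for this (detectron2 exposes the rank check as `detectron2.utils.comm.is_main_process()`) is to gate the evaluation call on the process rank, so only the main process runs inference while the other ranks skip it. The sketch below is framework-free and illustrative only; `maybe_evaluate` and its arguments are hypothetical names, not a detectron2 API:

```python
def is_main_process(rank: int) -> bool:
    """Rank 0 is conventionally the main process in a distributed job."""
    return rank == 0


def maybe_evaluate(rank: int, eval_fn):
    """Run eval_fn only on the main process; other ranks skip inference.

    In a real multi-GPU job the non-main ranks would typically wait at a
    barrier (e.g. torch.distributed.barrier()) instead of returning early,
    so all processes resume training together.
    """
    if is_main_process(rank):
        return eval_fn()
    return None  # non-main ranks do no inference work


# Example with three simulated ranks: only rank 0 evaluates.
results = [maybe_evaluate(r, lambda: {"AP": 0.5}) for r in range(3)]
```

In a hook-based trainer, this gate would wrap the body of the periodic evaluation hook, so the test set is built and iterated on one GPU only.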

Labels: needs-more-info (More info is needed to complete the issue)
