
Hey, does this allow inference on CPU after model deployment? #148


Description

@ArpanGyawali

I am working on an LVO detection task. After creating a custom model, can I convert it to TorchScript, deploy it with Docker, and run inference on a device with no GPU?
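For reference, a minimal sketch of the workflow being asked about: exporting a model to TorchScript and loading it for CPU-only inference. The model class, input shape, and file name below are hypothetical placeholders, not the project's actual code:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the custom LVO detection model.
class LVOModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, 3), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(8, 2),
        )

    def forward(self, x):
        return self.net(x)

# --- Export step (can run on a GPU or CPU machine) ---
model = LVOModel().eval()
example = torch.randn(1, 1, 224, 224)       # assumed input shape
scripted = torch.jit.trace(model, example)  # or torch.jit.script(model)
scripted.save("model.pt")

# --- Inference on a CPU-only device (e.g. inside a Docker container) ---
# map_location="cpu" remaps any GPU tensors, so a CUDA-trained
# model still loads on a machine without a GPU.
cpu_model = torch.jit.load("model.pt", map_location="cpu").eval()
with torch.inference_mode():
    out = cpu_model(torch.randn(1, 1, 224, 224))
print(out.shape)
```

In general, yes: TorchScript archives are device-agnostic, so the container only needs a CPU build of PyTorch installed; inference will be slower than on a GPU but works the same way.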
