You need to set the export mode in your YAML config to `torch`:

```yaml
optimization:
  export_mode: torch # options: torch, onnx, openvino
```
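As a quick sanity check before launching inference, you can validate that the `export_mode` field holds one of the supported values. This is a minimal stdlib-only sketch; the `check_export_mode` helper is hypothetical and not part of anomalib:

```python
# Hypothetical helper (not part of anomalib): validate the export_mode
# field of a parsed config dict before running inference.
ALLOWED_EXPORT_MODES = {"torch", "onnx", "openvino"}

def check_export_mode(config: dict) -> str:
    """Return the export_mode if valid, otherwise raise ValueError."""
    mode = config.get("optimization", {}).get("export_mode")
    if mode not in ALLOWED_EXPORT_MODES:
        raise ValueError(
            f"export_mode must be one of {sorted(ALLOWED_EXPORT_MODES)}, got {mode!r}"
        )
    return mode

# Mirrors the YAML snippet above after parsing.
config = {"optimization": {"export_mode": "torch"}}
print(check_export_mode(config))  # -> torch
```

A typo such as `export_mode: pytorch` would then fail fast with a clear error instead of surfacing later as a missing weights file.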

Then you use the inferencer like this:

```shell
# To get help about the arguments, run:
python tools/inference/torch_inference.py --help

# Example Torch inference command:
python tools/inference/torch_inference.py \
    --weights results/padim/mvtec/bottle/run/weights/torch/model.pt \
    --input datasets/MVTec/bottle/test/broken_large/000.png \
    --output results/padim/mvtec/bottle/images
```

Answer selected by samet-akcay
This discussion was converted from issue #1466 on January 12, 2024 21:52.