-
I built the docker images from here: And traced a torchscript model (
Calling the
The binary outputs something like this:
Since I'm not a C++ developer, I'm trying to replicate the C++ code that underpins that binary in Python. So far, I've come up with this:

import cv2
import numpy as np
import torch
model_file = "/some/where/model.ts"
image_file = "/some/where/input.jpg"
with torch.no_grad():
    model = torch.jit.load(model_file)

    # Figure out which device the model lives on by inspecting its buffers.
    device = None
    for b in model.buffers():
        device = b.device
        break
    if device is None:
        raise Exception("No buffers?")

    # Load the image and build a (1, C, H, W) float32 tensor from it.
    input_img = cv2.imread(image_file)
    height, width = input_img.shape[:2]
    channels = 3
    assert(height % 32 == 0 and width % 32 == 0)
    inp = torch.from_numpy(input_img)
    print("inp", inp.shape)
    inp = torch.as_tensor(input_img.astype("float32")).permute(2, 0, 1).unsqueeze(0)
    print("inp", inp.shape)

    # im_info carries (height, width, scale) for the exported model.
    im_info = torch.from_numpy(np.array([float(height), float(width), 1.0])).unsqueeze(0)
    print("im_info", im_info.shape)

    inputs = (inp, im_info)
    output = model.forward(inputs)

However, the call to model.forward(inputs) raises the following error:

RuntimeError: The following operation failed in the TorchScript interpreter.
Traceback of TorchScript, serialized code (most recent call last):
File "code/__torch__/detectron2/export/caffe2_modeling.py", line 17, in forward
x = torch.div(torch.sub(data, _4, alpha=1), _3)
_5, _6, _7, _8, _9, _10, _11, = (_2).forward(x, )
_12 = (_1).forward(_5, _6, _7, _8, _9, im_info, )
~~~~~~~~~~~ <--- HERE
_13 = (_0).forward(_12, _5, _6, _7, _10, im_info, _11, )
_14, _15, _16, _17, = _13
File "code/__torch__/detectron2/export/c10.py", line 24, in forward
scores = torch.detach(_6)
bbox_deltas = torch.detach(_7)
_16, _17 = ops._caffe2.GenerateProposals(scores, bbox_deltas, im_info, _4, 0.25, 1000, 1000, 0.69999999999999996, 0., True, -180, 180, 1., False, None)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
scores0 = torch.detach(_8)
bbox_deltas0 = torch.detach(_9)
Traceback of TorchScript, original code (most recent call last):
/home/appuser/detectron2_repo/detectron2/export/c10.py(203): _generate_proposals
/home/appuser/detectron2_repo/detectron2/export/c10.py(256): forward
/home/appuser/.local/lib/python3.6/site-packages/torch/nn/modules/module.py(860): _slow_forward
/home/appuser/.local/lib/python3.6/site-packages/torch/nn/modules/module.py(887): _call_impl
/home/appuser/detectron2_repo/detectron2/export/caffe2_modeling.py(272): forward
/usr/lib/python3.6/contextlib.py(52): inner
/home/appuser/.local/lib/python3.6/site-packages/torch/nn/modules/module.py(860): _slow_forward
/home/appuser/.local/lib/python3.6/site-packages/torch/nn/modules/module.py(887): _call_impl
/home/appuser/.local/lib/python3.6/site-packages/torch/jit/_trace.py(940): trace_module
/home/appuser/.local/lib/python3.6/site-packages/torch/jit/_trace.py(742): trace
/home/appuser/detectron2_repo/detectron2/export/api.py(134): export_torchscript
./export_model.py(53): export_caffe2_tracing
./export_model.py(172): <module>
RuntimeError: [enforce fail at generate_proposals_op.cc:291] im_info_tensor.template IsType<float>(). double

This is where I'm currently stuck, as I have no idea how to interpret this error... The Python virtual environment that I used:
Thanks for any insight!
Replies: 1 comment 1 reply
-
The error says im_info has to be float, but double is given.
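
In the script above, np.array([float(height), float(width), 1.0]) defaults to NumPy's float64, so torch.from_numpy hands the traced model a double tensor for im_info. A minimal sketch of one possible fix, reusing the names from the snippet above (the dimensions here are just placeholders), is to build im_info explicitly as float32:

import numpy as np
import torch

height, width = 480, 640  # placeholder values; use the real image shape

# Build the (height, width, scale) row as float32 so torch.from_numpy
# yields a float tensor instead of a double one.
im_info = torch.from_numpy(
    np.array([height, width, 1.0], dtype=np.float32)
).unsqueeze(0)
print(im_info.dtype)  # torch.float32

Calling im_info.float() on the existing tensor, or constructing it with torch.tensor([[height, width, 1.0]], dtype=torch.float32), should work just as well; the key point is that the GenerateProposals op checks IsType<float>() and rejects float64 input.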