Does anyone know why this input format leads to a NoneType error?
Basically, I followed the official detectron2 docs and constructed an input in the format
x = torch.randn(3,224,224)
inputs = [{'image':x,'height':224,'width':224}]
but it does not work. When I run torch.onnx.export (full script below), I get these errors:
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1090, in _slow_forward
result = self.forward(*input, **kwargs)
File "/Users/nemo/Documents//VideoPose3D-main/inference/detectron2/modeling/meta_arch/rcnn.py", line 146, in forward
return self.inference(batched_inputs)
File "/Users/nemo/Documents/VideoPose3D-main/inference/detectron2/modeling/meta_arch/rcnn.py", line 199, in inference
images = self.preprocess_image(batched_inputs)
File "/Users/nemo/Documents/VideoPose3D-main/inference/detectron2/modeling/meta_arch/rcnn.py", line 224, in preprocess_image
images = [x["image"].to(self.device) for x in batched_inputs]
TypeError: 'NoneType' object is not iterable
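For what it's worth, the list-of-dicts format itself matches what the detectron2 docs describe for inference with GeneralizedRCNN, so I would expect a plain forward call to accept it (a minimal sketch, assuming the model and inputs built in the full script below):

with torch.no_grad():
    outputs = model(inputs)  # expected: a list with one dict containing 'instances'

So my guess is that the problem is in how torch.onnx.export passes the arguments to forward(), not in the dicts themselves. Here is the full script: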
from detectron2.config import get_cfg
from detectron2 import model_zoo
from detectron2.modeling import build_model
from detectron2.checkpoint import DetectionCheckpointer
import torch
cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file("COCO-Detection/faster_rcnn_R_101_FPN_3x.yaml"))
cfg.MODEL.DEVICE = "cpu"
model = build_model(cfg)
# optimizer = optim.SGD(model.parameters(), lr=0.001, momentum=0.9)
model.eval()
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-Detection/faster_rcnn_R_101_FPN_3x.yaml")
DetectionCheckpointer(model).load(cfg.MODEL.WEIGHTS)
# predictor = DefaultPredictor(cfg)
x = torch.randn(3,224,224)
inputs = [{'image':x,'height':224,'width':224}]
torch.onnx.export(model,                    # model being run
                  inputs,                   # model input (or a tuple for multiple inputs)
                  "model_t1.onnx",          # where to save the model (can be a file or file-like object)
                  export_params=True,       # store the trained parameter weights inside the model file
                  opset_version=10,         # the ONNX version to export the model to
                  do_constant_folding=True, # whether to execute constant folding for optimization
                  input_names=['input'],    # the model's input names
                  output_names=['output'])  # the model's output names
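If I read the torch.onnx.export documentation correctly, when the last element of the args you pass is a dict, it is interpreted as the keyword arguments of forward() rather than as a positional input; since inputs here ends in a dict, forward() may end up being traced without a usable batched_inputs, which would explain the None. A sketch of the workaround the PyTorch docs suggest, appending an empty dict so the real input stays positional (everything else unchanged from the call above):

torch.onnx.export(model,
                  (inputs, {}),             # trailing empty dict means "no keyword arguments"
                  "model_t1.onnx",
                  export_params=True,
                  opset_version=10,
                  do_constant_folding=True,
                  input_names=['input'],
                  output_names=['output'])

Even then, tracing GeneralizedRCNN directly may still fail because its inputs and outputs are not plain tensors; as far as I know detectron2 provides export helpers for this case (e.g. TracingAdapter in detectron2.export).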