Can't use model 'ssd_mobilenet_v2_coco.engine' #429
Unanswered
zoubin2019102976
asked this question in Q&A
Replies: 1 comment
-
They haven't built a model that will work with the new JetPack release. They are working on it. See #430 (comment)
-
My environment:
ubuntu 18.04 aarch64
jetpack jetson-nano-jp451-sd-card-image
jetbot v0.4.3
tensorflow 2.4.0+nv21.3
tensorrt 7.1.3.0
torch 1.6.0
torchvision 0.7.0a0+78ed10c
setuptools 49.6.0
Model version:
ssd_mobilenet_v2_coco.engine: v0.4 (latest)
When I execute the command:
from jetbot import ObjectDetector
from jetbot import Camera
model = ObjectDetector('ssd_mobilenet_v2_coco.engine')
camera = Camera.instance(width=300, height=300)
detections = model.value(camera.value)
print(detections)
I get this error information:
[TensorRT] ERROR: coreReadArchive.cpp (38) - Serialization Error in verifyHeader: 0 (Version tag does not match)
[TensorRT] ERROR: INVALID_STATE: std::exception
[TensorRT] ERROR: INVALID_CONFIG: Deserialize the cuda engine failed.
Traceback (most recent call last):
File "object_follow.py", line 4, in <module>
model = ObjectDetector('ssd_mobilenet_v2_coco.engine')
File "/usr/local/lib/python3.6/dist-packages/jetbot-0.4.3-py3.6.egg/jetbot/object_detection.py", line 29, in __init__
output_names=[TRT_OUTPUT_NAME, TRT_OUTPUT_NAME + '_1'])
File "/usr/local/lib/python3.6/dist-packages/jetbot-0.4.3-py3.6.egg/jetbot/tensorrt_model.py", line 59, in __init__
self.context = self.engine.create_execution_context()
AttributeError: 'NoneType' object has no attribute 'create_execution_context'
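The AttributeError at the bottom of the traceback is a symptom, not the cause: TensorRT's deserialize_cuda_engine() returns None (rather than raising) when the engine file's version tag does not match the installed TensorRT runtime, and jetbot then calls create_execution_context() on that None. A minimal sketch of a guard that surfaces the real failure (require_engine is a hypothetical helper, not part of jetbot or TensorRT):

```python
def require_engine(engine, engine_path, trt_version):
    """Raise a descriptive error when TensorRT fails to deserialize an engine.

    deserialize_cuda_engine() yields None when the serialized engine was
    built with a different TensorRT version, which later produces the
    confusing 'NoneType' AttributeError seen in the traceback above.
    """
    if engine is None:
        raise RuntimeError(
            "Could not deserialize '%s'; it was probably built with a "
            "different TensorRT version than the installed %s. "
            "Rebuild the engine on this JetPack release."
            % (engine_path, trt_version)
        )
    return engine

# Intended usage on the Jetson (requires TensorRT, shown as comments):
#   import tensorrt as trt
#   logger = trt.Logger(trt.Logger.WARNING)
#   with open('ssd_mobilenet_v2_coco.engine', 'rb') as f, \
#           trt.Runtime(logger) as runtime:
#       engine = require_engine(runtime.deserialize_cuda_engine(f.read()),
#                               'ssd_mobilenet_v2_coco.engine',
#                               trt.__version__)
#   context = engine.create_execution_context()
```

With this guard, a version-mismatched engine fails immediately with a message naming the file and the installed TensorRT version, instead of the opaque AttributeError.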
If anyone can help me, I would be very grateful.