Replies: 3 comments 1 reply
-
Please try this config: "configs/mmdet/detection/detection_tensorrt-fp16_dynamic-64x64-608x608.py"
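For reference, a minimal sketch (assuming an mmdeploy 0.x checkout and its mmcv-style configs) that prints what each of the two configs asks TensorRT to build. The fp16 config caps the dynamic shape range at 608x608 and enables fp16, so the builder needs far less workspace than the 1344x1344 config:

```python
# Sketch: compare the two deploy configs (assumes mmdeploy 0.x + mmcv installed).
from mmcv import Config

for name in (
        'configs/mmdet/detection/detection_tensorrt_dynamic-320x320-1344x1344.py',
        'configs/mmdet/detection/detection_tensorrt-fp16_dynamic-64x64-608x608.py'):
    cfg = Config.fromfile(name)
    print(name)
    print('  fp16_mode:', cfg.backend_config.common_config.get('fp16_mode', False))
    print('  input shapes:', cfg.backend_config.model_inputs[0].input_shapes['input'])
```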
1 reply
-
Has anyone fixed this problem?
0 replies
-
Same issue here, did you solve it?
0 replies
-
1. docker build docker/GPU/ -t mmdeploy:inside --build-arg USE_SRC_INSIDE=true
2. docker run -itd --gpus all -v D:/dockerdir/docker_mmdeploy/:/stworksp/ -p 0.0.0.0::8888 -p 0.0.0.0::6006 -p 0.0.0.0::8080 -p 0.0.0.0::8081 -p 0.0.0.0::8082 -p 0.0.0.0::7070 -p 0.0.0.0::7071 -p 0.0.0.0::22 --ipc=host --name mmdeploy1 --privileged=true mmdeploy:inside
3. pip install -U openmim
4. mim install mmengine==0.1.0
5. git clone https://github.com/open-mmlab/mmdetection.git
   cd mmdetection
   pip install -v -e .
6. mim download mmdet --config yolov3_d53_mstrain-608_273e_coco --dest .
7. python ./tools/deploy.py \
       configs/mmdet/detection/detection_tensorrt_dynamic-320x320-1344x1344.py \
       $PATH_TO_MMDET/configs/yolo/yolov3_d53_mstrain-608_273e_coco.py \
       $PATH_TO_MMDET/checkpoints/yolo/yolov3_d53_mstrain-608_273e_coco.pth \
       $PATH_TO_MMDET/demo/demo.jpg \
       --work-dir work_dir \
       --show \
       --device cuda:0
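Note that the config in step 7 asks TensorRT to support inputs up to 1344x1344, well beyond YOLOv3's 608 training size, and its default builder workspace is 1 GiB. A minimal sketch (assuming mmdeploy 0.x; the dumped filename is made up for illustration) of widening the workspace, or shrinking the shape range, before re-running deploy.py:

```python
# Sketch: tweak the deploy config, then pass the dumped file to tools/deploy.py
# (assumes mmdeploy 0.x; 'detection_trt_custom.py' is an illustrative name).
from mmcv import Config

cfg = Config.fromfile(
    'configs/mmdet/detection/detection_tensorrt_dynamic-320x320-1344x1344.py')
print(cfg.backend_config.model_inputs[0].input_shapes)  # current dynamic range

cfg.backend_config.common_config.max_workspace_size = 1 << 31  # raise to 2 GiB
cfg.dump('detection_trt_custom.py')  # use this file in place of the original config
```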
(base) root@1b481a3ba279:/stworksp/pro/test/test1# ls
demo.jpg end2end.onnx yolov3_d53_mstrain-608_273e_coco.py yolov3_d53_mstrain-608_273e_coco_20210518_115020-a2c3acb8.pth
(base) root@1b481a3ba279:/stworksp/pro/test/test1# python /root/workspace/mmdeploy/tools/deploy.py \
2023-01-18 05:07:47,682 - mmdeploy - INFO - Start pipeline mmdeploy.apis.pytorch2onnx.torch2onnx in subprocess
load checkpoint from local path: ./yolov3_d53_mstrain-608_273e_coco_20210518_115020-a2c3acb8.pth
/root/workspace/mmdetection/mmdet/datasets/utils.py:66: UserWarning: "ImageToTensor" pipeline is replaced by "DefaultFormatBundle" for batch inference. It is recommended to manually replace it in the test data pipeline in your config file.
warnings.warn(
2023-01-18 05:07:54,684 - mmdeploy - WARNING - DeprecationWarning: get_onnx_config will be deprecated in the future.
2023-01-18 05:07:54,685 - mmdeploy - INFO - Export PyTorch model to ONNX: ./end2end.onnx.
2023-01-18 05:07:54,919 - mmdeploy - WARNING - Can not find torch._C._jit_pass_onnx_autograd_function_process, function rewrite will not be applied
2023-01-18 05:07:54,922 - mmdeploy - WARNING - Can not find torch._C._jit_pass_onnx_deduplicate_initializers, function rewrite will not be applied
/root/workspace/mmdeploy/mmdeploy/core/optimizers/function_marker.py:158: TracerWarning: Converting a tensor to a Python integer might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
ys_shape = tuple(int(s) for s in ys.shape)
/opt/conda/lib/python3.8/site-packages/torch/nn/functional.py:3631: UserWarning: Default upsampling behavior when mode=bilinear is changed to align_corners=False since 0.4.0. Please specify align_corners=True if the old behavior is desired. See the documentation of nn.Upsample for details.
warnings.warn(
/root/workspace/mmdetection/mmdet/models/dense_heads/yolo_head.py:127: UserWarning: DeprecationWarning: `anchor_generator` is deprecated, please use "prior_generator" instead
  warnings.warn('DeprecationWarning: `anchor_generator` is deprecated, '
/root/workspace/mmdetection/mmdet/core/anchor/anchor_generator.py:333: UserWarning: `grid_anchors` would be deprecated soon. Please use `grid_priors`
  warnings.warn('`grid_anchors` would be deprecated soon. '
/root/workspace/mmdetection/mmdet/core/anchor/anchor_generator.py:369: UserWarning: `single_level_grid_anchors` would be deprecated soon. Please use `single_level_grid_priors`
  warnings.warn(
/root/workspace/mmdetection/mmdet/core/bbox/coder/yolo_bbox_coder.py:73: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
assert pred_bboxes.size(-1) == bboxes.size(-1) == 4
/root/workspace/mmdeploy/mmdeploy/pytorch/functions/topk.py:58: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if k > size:
/root/workspace/mmdeploy/mmdeploy/codebase/mmdet/core/post_processing/bbox_nms.py:266: TracerWarning: Converting a tensor to a Python integer might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
dets, labels = TRTBatchedNMSop.apply(boxes, scores, int(scores.shape[-1]),
/root/workspace/mmdeploy/mmdeploy/mmcv/ops/nms.py:143: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
out_boxes = min(num_boxes, after_topk)
WARNING: The shape inference of mmdeploy::TRTBatchedNMS type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of mmdeploy::TRTBatchedNMS type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of mmdeploy::TRTBatchedNMS type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of mmdeploy::TRTBatchedNMS type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of mmdeploy::TRTBatchedNMS type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of mmdeploy::TRTBatchedNMS type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
2023-01-18 05:08:01,545 - mmdeploy - INFO - Execute onnx optimize passes.
2023-01-18 05:08:02,580 - mmdeploy - INFO - Finish pipeline mmdeploy.apis.pytorch2onnx.torch2onnx
2023-01-18 05:08:04,758 - mmdeploy - INFO - Start pipeline mmdeploy.apis.utils.utils.to_backend in subprocess
2023-01-18 05:08:04,862 - mmdeploy - INFO - Successfully loaded tensorrt plugins from /root/workspace/mmdeploy/mmdeploy/lib/libmmdeploy_tensorrt_ops.so
[01/18/2023-05:08:05] [TRT] [I] [MemUsageChange] Init CUDA: CPU +456, GPU +0, now: CPU 541, GPU 1194 (MiB)
[01/18/2023-05:08:05] [TRT] [I] [MemUsageSnapshot] Begin constructing builder kernel library: CPU 541 MiB, GPU 1194 MiB
[01/18/2023-05:08:06] [TRT] [I] [MemUsageSnapshot] End constructing builder kernel library: CPU 696 MiB, GPU 1238 MiB
[01/18/2023-05:08:06] [TRT] [W] parsers/onnx/onnx2trt_utils.cpp:364: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[01/18/2023-05:08:06] [TRT] [W] parsers/onnx/onnx2trt_utils.cpp:392: One or more weights outside the range of INT32 was clamped
[01/18/2023-05:08:07] [TRT] [I] No importer registered for op: TRTBatchedNMS. Attempting to import as plugin.
[01/18/2023-05:08:07] [TRT] [I] Searching for plugin: TRTBatchedNMS, plugin_version: 1, plugin_namespace:
[01/18/2023-05:08:07] [TRT] [I] Successfully created plugin: TRTBatchedNMS
/bin/bash: nvcc: command not found
[01/18/2023-05:08:08] [TRT] [W] TensorRT was linked against cuBLAS/cuBLASLt 11.6.5 but loaded cuBLAS/cuBLASLt 11.5.1
[01/18/2023-05:08:08] [TRT] [I] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +748, GPU +320, now: CPU 1921, GPU 1558 (MiB)
[01/18/2023-05:08:08] [TRT] [I] [MemUsageChange] Init cuDNN: CPU +127, GPU +56, now: CPU 2048, GPU 1614 (MiB)
[01/18/2023-05:08:08] [TRT] [I] Local timing cache in use. Profiling results in this builder pass will not be stored.
[01/18/2023-05:09:16] [TRT] [I] Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output.
[01/18/2023-05:09:34] [TRT] [W] Myelin graph with multiple dynamic values may have poor performance if they differ. Dynamic values are:
[01/18/2023-05:09:34] [TRT] [W] (# 2 (SHAPE input))
[01/18/2023-05:09:34] [TRT] [W] (ONNX_RESIZE (+ (CEIL_DIV (+ (# 3 (SHAPE input)) -32) 32) 1) 2.000000e+00)
[01/18/2023-05:09:34] [TRT] [W] (ONNX_RESIZE (+ (CEIL_DIV (+ (# 2 (SHAPE input)) -32) 32) 1) 2.000000e+00)
[01/18/2023-05:09:34] [TRT] [W] (# 3 (SHAPE input))
[01/18/2023-05:09:34] [TRT] [W] (ONNX_RESIZE (ONNX_RESIZE (+ (CEIL_DIV (+ (# 3 (SHAPE input)) -32) 32) 1) 2.000000e+00) 2.000000e+00)
[01/18/2023-05:09:34] [TRT] [W] (ONNX_RESIZE (ONNX_RESIZE (+ (CEIL_DIV (+ (# 2 (SHAPE input)) -32) 32) 1) 2.000000e+00) 2.000000e+00)
[01/18/2023-05:09:34] [TRT] [W] Skipping tactic 0 due to insuficient memory on requested size of 2442276864 detected for tactic 0.
[01/18/2023-05:09:34] [TRT] [E] 10: [optimizer.cpp::computeCosts::2011] Error Code 10: Internal Error (Could not find any implementation for node {ForeignNode[971_57...Concat_510]}.)
Process Process-3:
Traceback (most recent call last):
File "/opt/conda/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
self.run()
File "/opt/conda/lib/python3.8/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/root/workspace/mmdeploy/mmdeploy/apis/core/pipeline_manager.py", line 107, in call
ret = func(*args, **kwargs)
File "/root/workspace/mmdeploy/mmdeploy/apis/utils/utils.py", line 95, in to_backend
return backend_mgr.to_backend(
File "/root/workspace/mmdeploy/mmdeploy/backend/tensorrt/backend_manager.py", line 129, in to_backend
onnx2tensorrt(
File "/root/workspace/mmdeploy/mmdeploy/backend/tensorrt/onnx2tensorrt.py", line 79, in onnx2tensorrt
from_onnx(
File "/root/workspace/mmdeploy/mmdeploy/backend/tensorrt/utils.py", line 233, in from_onnx
assert engine is not None, 'Failed to create TensorRT engine'
AssertionError: Failed to create TensorRT engine
2023-01-18 05:09:35,373 - mmdeploy - ERROR - `mmdeploy.apis.utils.utils.to_backend` with Call id: 1 failed. exit.
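The build dies while the builder is timing tactics: it skips tactic 0 for lack of ~2.3 GiB of workspace and then reports "Could not find any implementation" for the NMS subgraph, which is how TensorRT surfaces an out-of-workspace condition. A minimal sketch (using the from_onnx helper visible in the traceback; the shapes and workspace value are illustrative, not a confirmed fix) of rebuilding the engine from the already-exported end2end.onnx with a 608x608 cap and more workspace:

```python
# Sketch: rebuild the engine from end2end.onnx with a tighter shape range and
# a bigger workspace (mmdeploy 0.x API, as seen in the traceback above).
from mmdeploy.backend.tensorrt.init_plugins import load_tensorrt_plugin
from mmdeploy.backend.tensorrt.utils import from_onnx

load_tensorrt_plugin()  # registers TRTBatchedNMS before building

from_onnx(
    'end2end.onnx',
    'end2end',  # writes end2end.engine next to the ONNX file
    input_shapes=dict(
        input=dict(
            min_shape=[1, 3, 320, 320],
            opt_shape=[1, 3, 608, 608],
            max_shape=[1, 3, 608, 608])),  # cap at the model's 608 input size
    max_workspace_size=1 << 32,  # 4 GiB; lower this on small GPUs
    fp16_mode=True,
    device_id=0)
```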