Avoid OOM when converting from PyTorch to ONNX #1300

Unanswered question, asked by abetancordelrosario in Q&A
The following command triggers the error:

```shell
python tools/deploy.py configs/mmseg/segmentation_onnxruntime_static-1024x2048.py ../mmsegmentation/configs/swin/upernet_swin_base_patch4_window12_512x512_160k_ade20k_pretrain_384x384_22K.py ../mmsegmentation/upernet_swin_base_patch4_window12_512x512_160k_ade20k_pretrain_384x384_22K_20210531_125459-429057bf.pth ../../../shared/demosc.jpeg --work-dir work-dir --show --device cuda:1 --dump-info
```
```
...................
2022-11-03 11:18:15.119049825 [E:onnxruntime:, sequential_executor.cc:339 Execute] Non-zero status code returned while running ArgMax node. Name:'ArgMax_4134' Status Message: /onnxruntime_src/onnxruntime/core/framework/bfc_arena.cc:330 void* onnxruntime::BFCArena::AllocateRawInternal(size_t, bool) Failed to allocate memory for requested buffer of size 10066329600
2022-11-03:11:18:15 - root - ERROR - Error in execution: Non-zero status code returned while running ArgMax node. Name:'ArgMax_4134' Status Message: /onnxruntime_src/onnxruntime/core/framework/bfc_arena.cc:330 void* onnxruntime::BFCArena::AllocateRawInternal(size_t, bool) Failed to allocate memory for requested buffer of size 10066329600
Traceback (most recent call last):
  File "/root/workspace/mmdeploy/mmdeploy/utils/utils.py", line 41, in target_wrapper
    result = target(*args, **kwargs)
  File "/root/workspace/mmdeploy/mmdeploy/apis/visualize.py", line 72, in visualize_model
    result = task_processor.run_inference(model, model_inputs)[0]
  File "/root/workspace/mmdeploy/mmdeploy/codebase/mmseg/deploy/segmentation.py", line 197, in run_inference
    return model(**model_inputs, return_loss=False, rescale=True)
  File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/root/workspace/mmdeploy/mmdeploy/codebase/mmseg/deploy/segmentation_model.py", line 85, in forward
    outputs = self.forward_test(input_img, img_metas, *args, **kwargs)
  File "/root/workspace/mmdeploy/mmdeploy/codebase/mmseg/deploy/segmentation_model.py", line 110, in forward_test
    outputs = self.wrapper({self.input_name: imgs})
  File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/root/workspace/mmdeploy/mmdeploy/backend/onnxruntime/wrapper.py", line 97, in forward
    self.__ort_execute(self.io_binding)
  File "/root/workspace/mmdeploy/mmdeploy/utils/timer.py", line 66, in fun
    result = func(*args, *kwargs)
  File "/root/workspace/mmdeploy/mmdeploy/backend/onnxruntime/wrapper.py", line 113, in __ort_execute
    self.sess.run_with_iobinding(io_binding)
  File "/opt/conda/lib/python3.8/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 229, in run_with_iobinding
    self._sess.run_with_iobinding(iobinding._iobinding, run_options)
RuntimeError: Error in execution: Non-zero status code returned while running ArgMax node. Name:'ArgMax_4134' Status Message: /onnxruntime_src/onnxruntime/core/framework/bfc_arena.cc:330 void onnxruntime::BFCArena::AllocateRawInternal(size_t, bool) Failed to allocate memory for requested buffer of size 10066329600
2022-11-03 11:18:16,223 - mmdeploy - ERROR - visualize onnxruntime model failed.
```
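A minimal sketch (not part of the original post) of where a request this large could come from. The ADE20K model predicts 150 classes, and `run_inference` is called with `rescale=True`, so the logits are resized to the input image's resolution before the ArgMax. Assuming the input image `demosc.jpeg` is roughly 4096×4096 (a hypothetical resolution chosen because the arithmetic matches exactly), a float32 logits tensor of shape (1, 150, 4096, 4096) accounts for the full 10,066,329,600-byte allocation:

```python
# Hypothetical breakdown of the failed allocation reported by BFCArena.
# Assumption: the ArgMax input is a float32 tensor of shape
# (1, num_classes, H, W) after the logits are rescaled to the
# original image resolution.

num_classes = 150              # ADE20K class count
bytes_per_float32 = 4
failed_bytes = 10_066_329_600  # size reported in the error message

h = w = 4096                   # hypothetical resolution of demosc.jpeg
logits_bytes = num_classes * h * w * bytes_per_float32

print(logits_bytes)            # 10066329600 — matches the failed request
print(logits_bytes / 2**30)    # ≈ 9.375 GiB
```

If that is the cause, reducing the input image resolution (or using a deploy config whose static shape matches the data) would shrink the ArgMax buffer proportionally.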