When I run RandLA-Net on the Toronto3D dataset, I hit this bug. It seems to be a multiprocessing error. Have you encountered it before?
AttributeError: Can't pickle local object 'SemSegRandomSampler.get_point_sampler.<locals>._random_centered_gen'
[W CudaIPCTypes.cpp:15] Producer process has been terminated before all shared CUDA tensors released. See Note [Sharing CUDA tensors]
training: 0%| | 0/50 [00:00<?, ?it/s]
Traceback (most recent call last):
File "/export/home2/hanxiaobing/Documents/Open3D-ML-code/Open3D-ML/scripts/run_pipeline.py", line 246, in
sys.exit(main())
File "/export/home2/hanxiaobing/Documents/Open3D-ML-code/Open3D-ML/scripts/run_pipeline.py", line 180, in main
pipeline.run_train()
File "/export/home2/hanxiaobing/anaconda3/envs/Open3D-ML-Pytorch/lib/python3.10/site-packages/open3d/_ml3d/torch/pipelines/semantic_segmentation.py", line 406, in run_train
for step, inputs in enumerate(tqdm(train_loader, desc='training')):
File "/export/home2/hanxiaobing/anaconda3/envs/Open3D-ML-Pytorch/lib/python3.10/site-packages/tqdm/std.py", line 1195, in iter
for obj in iterable:
File "/export/home2/hanxiaobing/anaconda3/envs/Open3D-ML-Pytorch/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 438, in iter
return self._get_iterator()
File "/export/home2/hanxiaobing/anaconda3/envs/Open3D-ML-Pytorch/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 384, in _get_iterator
return _MultiProcessingDataLoaderIter(self)
File "/export/home2/hanxiaobing/anaconda3/envs/Open3D-ML-Pytorch/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1048, in init
w.start()
File "/export/home2/hanxiaobing/anaconda3/envs/Open3D-ML-Pytorch/lib/python3.10/multiprocessing/process.py", line 121, in start
self._popen = self._Popen(self)
File "/export/home2/hanxiaobing/anaconda3/envs/Open3D-ML-Pytorch/lib/python3.10/multiprocessing/context.py", line 224, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "/export/home2/hanxiaobing/anaconda3/envs/Open3D-ML-Pytorch/lib/python3.10/multiprocessing/context.py", line 291, in _Popen
return Popen(process_obj)
File "/export/home2/hanxiaobing/anaconda3/envs/Open3D-ML-Pytorch/lib/python3.10/multiprocessing/popen_forkserver.py", line 35, in init
super().init(process_obj)
File "/export/home2/hanxiaobing/anaconda3/envs/Open3D-ML-Pytorch/lib/python3.10/multiprocessing/popen_fork.py", line 19, in init
self._launch(process_obj)
File "/export/home2/hanxiaobing/anaconda3/envs/Open3D-ML-Pytorch/lib/python3.10/multiprocessing/popen_forkserver.py", line 47, in _launch
reduction.dump(process_obj, buf)
File "/export/home2/hanxiaobing/anaconda3/envs/Open3D-ML-Pytorch/lib/python3.10/multiprocessing/reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
AttributeError: Can't pickle local object 'SemSegRandomSampler.get_point_sampler.<locals>._random_centered_gen'
[W CudaIPCTypes.cpp:15] Producer process has been terminated before all shared CUDA tensors released. See Note [Sharing CUDA tensors]
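For context on the error itself: the failing name 'SemSegRandomSampler.get_point_sampler.<locals>._random_centered_gen' is a function defined inside a method, and Python's pickle cannot serialize such local functions. Here is a minimal sketch that reproduces the same AttributeError outside Open3D-ML (the class and function below are placeholders, not the actual Open3D-ML implementation):

```python
# Standalone illustration (not Open3D-ML code): pickle cannot serialize a
# function defined inside another function, which is exactly what the
# traceback reports for _random_centered_gen.
import pickle


class SamplerLike:
    """Stand-in for SemSegRandomSampler; the real class lives in Open3D-ML."""

    def get_point_sampler(self):
        def _random_centered_gen():  # local function -> not picklable
            return 42

        return _random_centered_gen


if __name__ == "__main__":
    sampler_fn = SamplerLike().get_point_sampler()
    try:
        pickle.dumps(sampler_fn)  # raises the same AttributeError as above
    except AttributeError as err:
        print("pickling failed:", err)
```

Because the traceback goes through popen_forkserver.py, the DataLoader workers are being started with the forkserver start method, which has to pickle the sampler to send it to each worker. Possible things to try (assumptions on my side, not confirmed fixes): run with num_workers=0, switch the multiprocessing start method to 'fork' on Linux, or move the generator out of the closure into a module-level function or callable class.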