Replies: 3 comments 6 replies
-
There have been some code optimisations recently. If you haven't tried the latest dev version, please install from GitHub (https://docs.monai.io/en/latest/installation.html#from-github), or take the latest version of this file https://github.com/Project-MONAI/MONAI/blob/dev/monai/inferers/utils.py and replace the corresponding file in your installed version.
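For reference, installing the dev code can look like this (a sketch based on the linked install docs; treat the exact command as an assumption and check the docs):

```shell
# Install MONAI directly from the GitHub repository
# (requires git and network access; see the install docs linked above).
pip install "git+https://github.com/Project-MONAI/MONAI#egg=monai"

# Confirm which version is now installed.
python -c "import monai; print(monai.__version__)"
```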
-
We've been working on this. With implementation #6254 it is faster by default, and there are additional new buffer options:

| buffer step | buffer dim | MONAI SWI time (s) | peak GPU mem (GB) |
| --- | --- | --- | --- |
| None | 1 | 4.220 | 0.956 |
| 1 | 1 | 3.146 | 1.003 |
| 3 | 1 | 2.949 | 1.368 |
| 5 | 1 | 2.847 | 1.613 |

(benchmark screenshots for monai v1.1 and monai v1.2.0rc1 were attached here as images)

(these are mainly work from @myron and @dongyang0122)
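As a back-of-envelope illustration of the trade-off visible in the numbers above (this is a sketch of the buffering idea, not MONAI's implementation; all sizes below are made up): accumulating several output slices on the GPU before each device-to-host copy reduces the number of transfers, at the cost of a larger peak GPU buffer.

```python
import math

def transfer_profile(n_slices, buffer_steps, bytes_per_slice):
    """Device-to-host copy count and peak GPU-side buffer size when the
    stitched output is flushed to the CPU every `buffer_steps` slices."""
    n_copies = math.ceil(n_slices / buffer_steps)
    peak_buffer = buffer_steps * bytes_per_slice
    return n_copies, peak_buffer

# Illustrative numbers only: 300 output slices of 8 MB each.
for steps in (1, 3, 5):
    copies, peak = transfer_profile(300, steps, 8 * 2**20)
    print(f"buffer_steps={steps}: {copies} copies, {peak / 2**20:.0f} MB buffer")
```

Fewer, larger transfers amortise per-copy overhead, which matches the pattern above: time drops as the buffer step grows while peak GPU memory rises.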
-
if along the
-
Hi,
I'm using MONAI's UNet to segment about 50 organs in CT images (~512x512x300).
I use sliding_window_inference to run the model on new CT images; due to GPU memory limitations I need to keep the input and output volumes on the CPU. The sliding window call (see below) is very slow and the GPU is poorly utilised: using the PyTorch profiler and looking at the trace, the GPU is active for only 5% of the runtime.
Any tips and tricks to speed things up? Inference is done in mixed precision using PyTorch's amp.
Thanks!
edit: formatting
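For context on how much work a single call like this launches, here is a simplified sketch of the window-tiling arithmetic (not MONAI's exact grid computation; the 96^3 ROI and 0.25 overlap are illustrative assumptions, not values from the question):

```python
import math

def window_starts(length, roi, overlap=0.25):
    """Start indices of sliding windows along one dimension
    (simplified dense patch grid with fractional overlap)."""
    if length <= roi:
        return [0]
    step = max(1, int(roi * (1 - overlap)))
    n = math.ceil((length - roi) / step) + 1
    # Clamp the last start so the final window ends at the image border.
    return [min(i * step, length - roi) for i in range(n)]

# Assumed 96^3 ROI with 25% overlap on a 512x512x300 CT volume.
nx = len(window_starts(512, 96))
nz = len(window_starts(300, 96))
print(f"{nx} x {nx} x {nz} = {nx * nx * nz} windows per volume")
```

Under these assumptions that is nearly 200 windows per volume, each a separate forward pass plus host-device copies when the volume lives on the CPU, which is consistent with the low GPU utilisation observed.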