I am using `sliding_window_inference`. For our use case, we need to run this inference on CPU, and performance is significantly worse (roughly 10×) than on GPU. Since the algorithm could be parallelized, I was wondering how we can achieve parallelization for this function.
Answered by KumoLiu, Jul 11, 2023
Hi @pcanas, currently MONAI does not provide parallelization for this computation on CPU; you can try the workarounds described in the reply below. Thanks!
Hi @pcanas, if you want to parallelize inside `sliding_window_inference`, you may need to modify the code manually, or you can increase your `sw_batch_size`. BTW, if you have limited GPU resources, you can also try `SlidingWindowInfererAdapt`, which automatically switches to buffered and then to CPU stitching when running out of memory on GPU (MONAI/monai/inferers/inferer.py, line 519 in 17c1e3a). Hope it helps, thanks!
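For readers who do want to modify the code manually, the idea can be sketched with a stdlib-only toy: split the overlapping windows across CPU workers, run the model on each window, then average the overlapping outputs, which is the same stitching `sliding_window_inference` performs. Note this is a hedged illustration, not MONAI's implementation: `run_model`, `make_windows`, `parallel_sliding_window`, and all parameters below are hypothetical placeholders.

```python
from concurrent.futures import ThreadPoolExecutor


def make_windows(length, roi, step):
    """Start offsets of overlapping 1-D windows covering [0, length)."""
    starts = list(range(0, max(length - roi, 0) + 1, step))
    if starts[-1] + roi < length:
        # Add a final window so the tail of the signal is covered too.
        starts.append(length - roi)
    return starts


def run_model(window):
    # Placeholder "inference": identity. A real use would call the
    # network's forward pass on this patch instead.
    return window


def parallel_sliding_window(signal, roi=4, step=2, workers=4):
    starts = make_windows(len(signal), roi, step)
    # Run each window's inference on a worker thread.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        outputs = list(pool.map(lambda s: run_model(signal[s:s + roi]), starts))
    # Stitch: average the predictions where windows overlap.
    acc = [0.0] * len(signal)
    cnt = [0] * len(signal)
    for s, out in zip(starts, outputs):
        for i, v in enumerate(out):
            acc[s + i] += v
            cnt[s + i] += 1
    return [a / c for a, c in zip(acc, cnt)]
```

One caveat on the design: with pure-Python work, threads are limited by the GIL, so a `ProcessPoolExecutor` (or relying on PyTorch's own intra-op threading, e.g. `torch.set_num_threads`) is usually the better fit for real CPU inference; the thread pool is used here only to keep the sketch dependency-free.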