
Hi @pcanas, if you want to parallelize inside sliding_window_inference, you will need to modify the code manually; alternatively, you can increase your sw_batch_size.
By the way, if you have limited GPU resources, you can also try SlidingWindowInfererAdapt, which automatically switches to buffered stitching and then to CPU stitching when it runs out of memory on the GPU.

class SlidingWindowInfererAdapt(SlidingWindowInferer):
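To illustrate what sliding-window inference does conceptually, here is a minimal NumPy sketch (not MONAI's actual implementation): the input is covered by overlapping windows, the model is run on each window, and overlapping predictions are averaged back together. The function name and 1-D setting are illustrative only; MONAI operates on N-D tensors and supports several blending modes.

```python
import numpy as np

def sliding_window_inference_1d(x, roi, overlap, predict):
    """Conceptual 1-D sketch of sliding-window inference with mean stitching.

    Assumes len(x) >= roi. `predict` stands in for the network forward pass.
    """
    step = max(1, int(roi * (1 - overlap)))  # stride between window starts
    out = np.zeros_like(x, dtype=float)      # accumulated predictions
    count = np.zeros_like(x, dtype=float)    # how many windows covered each position
    starts = list(range(0, len(x) - roi + 1, step))
    if starts[-1] != len(x) - roi:           # make sure the tail is covered
        starts.append(len(x) - roi)
    for s in starts:
        out[s:s + roi] += predict(x[s:s + roi])
        count[s:s + roi] += 1
    return out / count                       # average overlapping predictions
```

Increasing sw_batch_size in MONAI corresponds to running several of these windows through the network in one forward pass, which improves GPU utilization without changing the stitched result.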

Hope it helps, thanks!

Replies: 1 comment 5 replies
Answer selected by pcanas
Category: Q&A
Labels: none yet
2 participants