Note that such a design is also highly relevant for multi-scale networks, multiple instance learning designs, graph neural networks, and vision transformers, as these networks tend to divide larger patches into smaller tiles, store these either as a grid or a bag, and then use spatial relationships as part of the classification pipeline. Today, it is possible to generate bags by using …
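For illustration, here is a minimal NumPy sketch (a hypothetical helper, not an existing FAST function) of how a large patch could be split into a grid of tiles and flattened into a bag of instances:

```python
import numpy as np

def patch_to_bag(patch: np.ndarray, tile: int = 132) -> np.ndarray:
    """Split a (H, W, C) patch into a bag of (tile, tile, C) instances.
    Hypothetical helper; assumes H and W are multiples of `tile`
    (e.g., 528 = 4 * 132)."""
    h, w, c = patch.shape
    rows, cols = h // tile, w // tile
    grid = (patch[:rows * tile, :cols * tile]
            .reshape(rows, tile, cols, tile, c)
            .swapaxes(1, 2))                 # (rows, cols, tile, tile, C) grid
    return grid.reshape(-1, tile, tile, c)   # flatten the grid into a bag

bag = patch_to_bag(np.zeros((528, 528, 3), dtype=np.uint8))
print(bag.shape)  # (16, 132, 132, 3)
```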
A good idea, but we need to do some benchmarks to see if your theory holds before starting to implement something like this in the `PatchGenerator`.
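Something like this OpenSlide timing sketch could be a starting point for such a benchmark, outside FAST (the file path, level, and coordinates are placeholders to adjust for an actual slide):

```python
import time
import openslide

slide = openslide.OpenSlide("example.svs")   # placeholder path
level, x0, y0, tile = 0, 10000, 10000, 132   # placeholder region at full resolution

# 16 individual 132x132 reads covering a 528x528 region
t0 = time.perf_counter()
for row in range(4):
    for col in range(4):
        slide.read_region((x0 + col * tile, y0 + row * tile), level, (tile, tile))
small_reads = time.perf_counter() - t0

# One 528x528 read of the same region
t0 = time.perf_counter()
slide.read_region((x0, y0), level, (4 * tile, 4 * tile))
large_read = time.perf_counter() - t0

print(f"16 small reads: {small_reads:.4f} s, one large read: {large_read:.4f} s")
```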
Patch reading speed is a massive bottleneck which affects most deployment pipelines. It becomes especially prominent when reading small image patches (e.g., `132 x 132` at 20x magnification). It does not really matter how large the network is; reading individual small patches takes A LOT of time and greatly impacts the overall deployment runtime.

A way to get around this problem could be to read larger patches from disk in the `PatchGenerator` PO, and then perform a second patch generation on each of the individual large patches, run inference on each small patch, and stitch the results into a large patch, which again will be stitched to form the prediction on the full WSI.
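In terms of logic, the second patch generation and first stitching step per large patch could look like this minimal NumPy sketch (the `infer` callback and shapes are assumptions for illustration, not FAST's actual `PatchGenerator`/`PatchStitcher` code):

```python
import numpy as np

def process_large_patch(large: np.ndarray, infer, tile: int = 132) -> np.ndarray:
    """Tile a large patch, run inference on each tile, and stitch the tile
    predictions back into one large prediction. `infer` is a placeholder for
    the model call, mapping a (tile, tile, C) patch to a
    (tile, tile, n_classes) prediction."""
    h, w, _ = large.shape
    out = None
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            pred = infer(large[y:y + tile, x:x + tile])
            if out is None:  # allocate once the number of classes is known
                out = np.zeros((h, w, pred.shape[-1]), dtype=pred.dtype)
            out[y:y + tile, x:x + tile] = pred  # stitch into the large prediction
    return out  # this large prediction is then stitched into the full WSI result

# Dummy "model" producing 2 class maps per tile:
dummy = lambda t: np.zeros(t.shape[:2] + (2,), dtype=np.float32)
print(process_large_patch(np.zeros((528, 528, 3)), dummy).shape)  # (528, 528, 2)
```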
If done correctly, and without making the patches too large, the memory overhead should not be that much greater, and should definitely be feasible for low-end devices. In the application above, where I am running a model on `132 x 132` patches, one could instead read a large patch of size `528 x 528` (4x larger per dimension => 4x4 = 16 patches), which is similar to what we normally use for other networks anyway.

To better illustrate the idea, I have made a simple FPL that demonstrates what the pipeline could look like.
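In pyFAST terms, the same nested layout would be roughly the sketch below. The process object names are FAST's own, but the exact `create()`/`connect()` signatures, and whether two `PatchGenerator`s can be chained like this at all, are assumptions:

```python
import fast

importer = fast.WholeSlideImageImporter.create('WSI/example.svs')  # placeholder path

# First level: read large 528x528 patches from disk at 20x
largeGenerator = fast.PatchGenerator.create(528, 528, magnification=20).connect(importer)

# Second level: split every large patch into 132x132 patches
# (one of the two steps suspected not to work today, see the notes below)
smallGenerator = fast.PatchGenerator.create(132, 132).connect(largeGenerator)

network = fast.SegmentationNetwork.create('model.onnx').connect(smallGenerator)  # placeholder model

# Stitch small predictions into large patches, then large patches into the WSI
smallStitcher = fast.PatchStitcher.create().connect(network)
largeStitcher = fast.PatchStitcher.create().connect(smallStitcher)

# Pull data through the pipeline
for prediction in fast.DataStream(largeStitcher):
    pass
```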
Running it results in an error.
Some notes:
- This FPL does not run with `runPipelineCLI` (nor was I expecting it to), as I believe the second `PatchStitcher` PO will be unable to stitch the already stitched results. It could also be the second `PatchGenerator` where the problem lies.
- Would it make sense to add dedicated `LargePatchGenerator` and `LargePatchStitcher` POs that handle this logic? Or of course, the original `PatchGenerator` and `PatchStitcher` POs could be made to handle this scenario. Perhaps that is easier?
- I have set `patch-overlap=0.0` in the FPL, but of course nothing is stopping you from trying to get this working in an overlapping scenario as well (see the sketch after this list).
- One could also gather the `4x4=16` patches into a batch - but if I remember correctly `TensorRT` was not compatible with batch-inference? Or has that been resolved a while ago maybe (it used to be the case with UFF at least)?
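On the overlap note above, here is a tiny sketch of how tile origins could be computed in the overlapping case (a hypothetical helper, not existing FAST code):

```python
def tile_origins(size: int, tile: int, overlap: float = 0.0) -> list:
    """Top-left tile coordinates along one dimension, with fractional overlap
    (0.0 reproduces the non-overlapping case used in the FPL above)."""
    stride = max(1, int(round(tile * (1.0 - overlap))))
    xs = list(range(0, size - tile + 1, stride))
    if xs and xs[-1] != size - tile:  # ensure the border is covered
        xs.append(size - tile)
    return xs

print(tile_origins(528, 132, 0.0))  # [0, 132, 264, 396]
print(tile_origins(528, 132, 0.5))  # [0, 66, 132, 198, 264, 330, 396]
```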