Description
For a few months I've been using ndv for a microscope acquisition GUI that we're working on at the CZ Biohub SF (not open sourced yet)! It's a great resource, and it has made it super easy for us to preview acquisition images in real time as they come off our scope - congrats and thanks for working on it.
A recent issue I noticed in our application is that the VRAM utilization steadily grows with the acquisition. I've traced it down to ndv, and the root cause seems to be our usage pattern:
```python
# Somewhere in a constructor:
# no data is ready at construction time, so I just give it a blank chunk
self._viewer = ndv.ArrayViewer(np.zeros((10, 10, 10)))

# Later on...
for volume in acquisition:  # pseudocode: each volume as we acquire it
    self._viewer.data = volume
```

Every time we run that loop, new handles are created, and their VRAM allocation does not seem to be released when `ArrayViewer._clear_canvas` is called. This leads to steadily increasing GPU usage over time, which seems correlated with an instability in our software (which is how I got down this rabbit hole in the first place).
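To make the suspected mechanism concrete, here's a toy sketch (purely illustrative, not ndv's actual internals; all class names are made up) of how repeated `.data` assignment would leak if each assignment creates fresh handles and "clearing" only detaches them without freeing their backing allocation:

```python
class Texture:
    """Stand-in for a GPU allocation."""
    live_bytes = 0  # class-level counter mimicking total VRAM in use

    def __init__(self, nbytes):
        self.nbytes = nbytes
        Texture.live_bytes += nbytes

    def release(self):
        Texture.live_bytes -= self.nbytes

class Handle:
    def __init__(self, nbytes):
        self.texture = Texture(nbytes)

    def remove(self):
        # detaches from the scene graph only; the texture (and the
        # hypothetical VRAM behind it) is never released
        pass

class Viewer:
    def __init__(self):
        self.handles = []

    def set_data(self, nbytes):
        # "clear" the old handles the way _clear_canvas does...
        while self.handles:
            self.handles.pop().remove()
        # ...then create a fresh handle for the new volume
        self.handles.append(Handle(nbytes))

viewer = Viewer()
for _ in range(100):
    viewer.set_data(10 * 10 * 10 * 8)  # one float64 volume per iteration

print(Texture.live_bytes)  # grows with every assignment: 800000
```

If `Handle.remove` called `self.texture.release()`, the counter would stay at one volume's worth of bytes; without it, usage grows linearly with the number of assignments, which matches what we observe.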
A workaround is to do this instead:

```python
for volume in acquisition:
    # overwrite the data in place; never invoke the setter
    np.copyto(self._viewer.data, volume)
    self._viewer.display_model.current_index.update()
```

But this approach fails when the volumes have different shapes. Is this intended behavior (i.e. is a single `ArrayViewer` not supposed to display differently shaped volumes), or a bug? I suspect it is a bug, given the final comment in `_clear_canvas`:
```python
def _clear_canvas(self) -> None:
    for lut_ctrl in self._lut_controllers.values():
        # self._view.remove_lut_view(lut_ctrl.lut_view)
        while lut_ctrl.handles:
            lut_ctrl.handles.pop().remove()
        # do we need to cleanup the lut views themselves?
```

I won't have time to investigate this further for a little while, but if you can point me in the right direction I would be happy to test or contribute a fix. Thanks again for the great project!
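In case it's useful as a starting point: an untested guess at a fuller cleanup, using only the names visible in the snippet above (`remove_lut_view`, `lut_view`, `_lut_controllers`); the `Mock*` classes are stand-ins I made up so the pattern can run outside ndv, and clearing the controllers dict is my own speculation, not something the snippet shows:

```python
class MockHandle:
    removed = 0  # counts remove() calls across all handles

    def remove(self):
        MockHandle.removed += 1

class MockController:
    def __init__(self):
        self.handles = [MockHandle(), MockHandle()]
        self.lut_view = object()

class Viewer:
    def __init__(self):
        self._lut_controllers = {0: MockController(), 1: MockController()}
        self._view = self  # stand-in: the viewer doubles as its own view
        self.removed_views = []

    def remove_lut_view(self, view):
        self.removed_views.append(view)

    def _clear_canvas(self) -> None:
        for lut_ctrl in self._lut_controllers.values():
            while lut_ctrl.handles:
                lut_ctrl.handles.pop().remove()
            # also tear down the LUT view, so it can't keep handles alive
            self._view.remove_lut_view(lut_ctrl.lut_view)
        # drop the controllers themselves (speculative)
        self._lut_controllers.clear()

v = Viewer()
v._clear_canvas()
print(MockHandle.removed, len(v.removed_views), len(v._lut_controllers))
# 4 2 0
```

Whether that actually returns the VRAM presumably depends on what else holds references to the textures behind those handles, which is the part I haven't been able to dig into yet.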