9 changes: 7 additions & 2 deletions nwbwidgets/timeseries.py
@@ -423,10 +423,15 @@ def _prep_timeseries(time_series: TimeSeries, time_window=None, order=None):
         t_ind_stop = timeseries_time_to_ind(time_series, time_window[1])

     tt = get_timeseries_tt(time_series, t_ind_start, t_ind_stop)
-    unique_sorted_order, inverse_sort = np.unique(order, return_inverse=True)

     if len(time_series.data.shape) > 1:
-        mini_data = time_series.data[t_ind_start:t_ind_stop, unique_sorted_order][:, inverse_sort]
+        unique_sorted_order, inverse_sort = np.unique(order, return_inverse=True)
+        # fancy indexing is not supported in zarr, so we use slice when possible
+        if np.all(np.diff(unique_sorted_order) == 1):
+            unique_sorted_order = slice(unique_sorted_order[0], unique_sorted_order[-1] + 1)
+            mini_data = time_series.data[t_ind_start:t_ind_stop, unique_sorted_order][:, inverse_sort]
+        else:
+            mini_data = np.array(time_series.data[t_ind_start:t_ind_stop])[:, unique_sorted_order][:, inverse_sort]
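A minimal, hypothetical illustration (not part of the PR) of the equivalence the added code relies on: when the requested channels are contiguous, a slice read followed by an in-memory reorder with `inverse_sort` gives the same result as fancy indexing with `order`, while keeping the on-disk read limited to the basic indexing that Zarr supports.

```python
import numpy as np

data = np.arange(20).reshape(4, 5)  # stand-in for time_series.data (time x channels)
order = [3, 1, 2]                   # requested channel order

unique_sorted_order, inverse_sort = np.unique(order, return_inverse=True)  # [1, 2, 3], [2, 0, 1]
assert np.all(np.diff(unique_sorted_order) == 1)  # channels are contiguous

sl = slice(unique_sorted_order[0], unique_sorted_order[-1] + 1)  # slice(1, 4)
np.testing.assert_array_equal(
    data[:, sl][:, inverse_sort],  # slice read, then reorder in memory
    data[:, order],                # equivalent fancy indexing
)
```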
@bendichter (Collaborator) commented on Aug 16, 2023

Can we add a check here and have this work differently only for Zarr dataset objects? I'd prefer to use the simultaneous indexing approach for h5py datasets where we can, so we don't load data into memory when we don't need to. I also think this could, and probably should, be refactored into a data utility function that can be used in other places.

         if np.all(np.isnan(mini_data)):
             return None, tt, None
         gap = np.median(np.nanstd(mini_data, axis=0)) * 20
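One possible shape for the utility function the review comment asks for — a sketch only, under assumptions not confirmed by the PR: the name `index_timeseries_channels` is hypothetical, `zarr` may or may not be installed, and h5py datasets are assumed to accept a sorted index list along one dimension (which they do, per h5py's fancy-indexing rules).

```python
import numpy as np


def index_timeseries_channels(data, t_slice, order):
    """Hypothetical data utility (sketch, not the PR's implementation).

    Read rows ``t_slice`` and columns ``order`` from a 2D dataset, using
    simultaneous indexing when the backend supports it and falling back to
    a slice read or an in-memory read for Zarr arrays.
    """
    unique_sorted_order, inverse_sort = np.unique(order, return_inverse=True)

    try:
        import zarr
        is_zarr = isinstance(data, zarr.Array)
    except ImportError:
        is_zarr = False

    if not is_zarr:
        # h5py datasets and numpy arrays support a sorted index list directly,
        # so only the requested region is read from disk.
        return data[t_slice, unique_sorted_order][:, inverse_sort]

    if np.all(np.diff(unique_sorted_order) == 1):
        # Contiguous channels: basic slicing is supported by Zarr.
        sl = slice(unique_sorted_order[0], unique_sorted_order[-1] + 1)
        return data[t_slice, sl][:, inverse_sort]

    # Non-contiguous channels on Zarr: load the time window, then index in memory.
    return np.asarray(data[t_slice])[:, unique_sorted_order][:, inverse_sort]
```

With a helper along these lines, the branching in `_prep_timeseries` could collapse to something like `mini_data = index_timeseries_channels(time_series.data, slice(t_ind_start, t_ind_stop), order)`.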