It is not uncommon to find datasets with bad metadata, like the one in [1]. Pinging @dcherian, who gave the idea of using `preprocess`, and @rsignell-usgs, who is a pro at finding bad metadata everywhere.

[1] https://nbviewer.jupyter.org/gist/rsignell-usgs/27ba1fdeb934d6fd5b83abe43098a047
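
A minimal sketch of that idea (the glob pattern and the specific metadata repair are hypothetical; only the pattern matters):

```python
import xarray as xr

def preprocess(ds):
    # Hypothetical repair: fix a units attribute the files got wrong,
    # so the individual datasets decode and combine cleanly.
    if "temperature" in ds.variables:
        ds["temperature"].attrs["units"] = "degC"
    return ds

# `preprocess` is applied to each file before the datasets are combined.
ds = xr.open_mfdataset("data/*.nc", preprocess=preprocess, combine="by_coords")
```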

@ocefpaf Sorry for letting this sit so long. From my perspective, adding `preprocess` to `open_dataset` would not add much: `xr.open_dataset(filename).pipe(preprocess)` would be exactly the same (minus a bit of overhead), since `open_mfdataset` just opens all the datasets (and decodes them), applies `preprocess` to each of them, and then combines them.
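
A minimal sketch of that equivalence (the file name and the body of `preprocess` are hypothetical; any function taking and returning a `Dataset` works):

```python
import xarray as xr

def preprocess(ds):
    # Hypothetical fix: drop an attribute that differs between files.
    ds.attrs.pop("history", None)
    return ds

# For a single file these produce the same result; open_mfdataset just
# adds a bit of overhead for the (trivial) combine step.
ds_a = xr.open_mfdataset("file.nc", preprocess=preprocess)
ds_b = xr.open_dataset("file.nc").pipe(preprocess)
```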

Yes, @keewis, that works too, thanks. That would naturally be the easiest thing to do. Still, it might just be nice to have the `preprocess` keyword in `open_dataset` as well.

I kind of like the `.pipe(preprocess)` solution.

@ocefpaf Great, then we can just convert this to a discussion and mark @keewis' answer as the solution.