Conversation
Codecov Report
All modified and coverable lines are covered by tests ✅

Additional details and impacted files

@@           Coverage Diff           @@
##             main     #454   +/-   ##
=======================================
  Coverage   88.64%   88.64%
=======================================
  Files          28       28
  Lines        1752     1752
=======================================
  Hits         1553     1553
  Misses        199      199
Pretty weird one - I can reproduce using act with the Python 3.11 environment, but not locally. I wonder if it's an OS-specific issue, since we only use Ubuntu for the tests. I'm not sure how to debug within the Docker container. @chuckwondo, do you have experience with that?
Thanks for looking into this @maxrjones! I clearly shouldn't have merged the PR; feel free to revert it.
Sorry, I'm not following. I see that this build failed, but it shows as canceled, and I cannot see why: https://github.com/zarr-developers/VirtualiZarr/actions/runs/13580434648/job/37998176988 Where are you seeing something hanging? What are you doing to reproduce via act?
@maxrjones, I pulled the latest from main (containing #420 merged) and ran
I think this is an issue with file locking due to this parameterized fixture writing to the same file. I may have a fix within the next half hour; otherwise I'll reach out for help again. Sorry for causing confusion @chuckwondo - I cancelled the workflow manually after a half hour since it should only take a couple of minutes. I am convinced this is not strictly a network issue based on having also seen the test hang (i.e., run for more than 500% of the usual time) using
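For illustration, the failure mode I have in mind looks roughly like this (a sketch only; the fixture names and dataset contents here are made up, not the actual test code):

```python
import pytest
import xarray as xr

# Suspected problem: every parametrization writes to the same hard-coded
# path, so overlapping use can contend for the same file lock.
@pytest.fixture(params=[None, "group"])
def netcdf4_file_shared(request):
    path = "/tmp/test.nc"  # shared across all parametrizations
    xr.Dataset({"a": ("x", [1, 2, 3])}).to_netcdf(path, group=request.param)
    return path

# Safer variant: pytest's tmp_path gives each test its own directory,
# so no two parametrizations ever touch the same file.
@pytest.fixture(params=[None, "group"])
def netcdf4_file_isolated(request, tmp_path):
    path = tmp_path / "test.nc"  # unique per test invocation
    xr.Dataset({"a": ("x", [1, 2, 3])}).to_netcdf(path, group=request.param)
    return str(path)
```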
chuckwondo left a comment
A couple of ignorable suggestions, so I've approved in case you choose to ignore them.
Thanks for clarifying!
Co-authored-by: Chuck Daniels <chuck@developmentseed.org>
@maxrjones, I don't think this is the issue. I see the current build hanging at this point: [screenshot of the hanging step]. The other build hung similarly: [screenshot]. These tests don't even use the fixture in question.
Yes, I shouldn't have been using such a shot-in-the-dark approach. I think the changes in this PR are good, but not directed towards the problem. I opened #455 as a more targeted change, but there are so many warnings about closing the CachingFileManager on an open file that I don't expect it to actually help 😕. I think it might be necessary to do a full evaluation of where files are opened and closed to figure out if it's truly a problem. It's also possible that it's a network issue with the xarray tutorial dataset. Maybe it's worth saving a copy locally to use and seeing if the test still hangs. As hooked as I am on the problem, I'm a bit out of time to work on it more for now.
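Something along these lines could rule the network in or out (a sketch, assuming the air_temperature tutorial dataset; the helper name and cache path are hypothetical):

```python
from pathlib import Path

import xarray as xr

def cached_tutorial_file(cache_dir: str = "tests/data") -> str:
    """Fetch the xarray tutorial dataset once and reuse the local copy,
    so a genuinely hanging test can be told apart from a slow download."""
    path = Path(cache_dir) / "air_temperature.nc"
    if not path.exists():
        path.parent.mkdir(parents=True, exist_ok=True)
        # xr.tutorial.open_dataset downloads (and caches) the sample data
        xr.tutorial.open_dataset("air_temperature").to_netcdf(path)
    return str(path)
```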
@maxrjones Thanks for investigating this. Apologies, I wasn't able to reproduce this locally, and I hadn't considered using
I think once we have the
The logic you changed in #455 will then no longer even need to exist.
Superseded by #460 |
cc @TomNicholas and @sharkinsspatial
The dependencies are all the same between the passing and hanging workflows, so I think this was actually caused by a change in #420. We should probably revert the PR if it might take a while to fix the issue.
This is the only test that uses the netcdf4_inlined_ref fixture, so I think it's probably something going on there, but I haven't looked into it at all.
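For anyone picking this up: generating kerchunk references with small chunks inlined is presumably what that fixture does under the hood. A rough sketch of the general technique (inline_threshold is a real SingleHdf5ToZarr parameter; the function and paths here are hypothetical, not the actual fixture code):

```python
import json

from kerchunk.hdf import SingleHdf5ToZarr

def write_inlined_refs(netcdf_path: str, refs_path: str) -> None:
    # Chunks smaller than inline_threshold bytes are embedded directly in
    # the reference JSON instead of being referenced by byte range.
    refs = SingleHdf5ToZarr(netcdf_path, inline_threshold=500).translate()
    with open(refs_path, "w") as f:
        json.dump(refs, f)
```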