fix: discard generated pystubs when in GHA environment #4855
Conversation
Re: AcademySoftwareFoundation#4854

Currently, we use `cibuildwheel`'s test-command to both generate new type stubs and validate existing type stubs. Typically, this results in the generation of `{dist_path}/OpenImageIO/__init__.pyi`, where `{dist_path}` is the directory in which the generated .whl lives. This is fine -- necessary, even -- for locally generating the python stubs, but the addition of an "OpenImageIO" directory causes problems for our "upload_pypi" github task.

This PR is a workaround: it forces the stub generation script to use $PWD as the output path (instead of `{dist_path}`), which should be suitable for stub-validation purposes. In the future, we will have to revisit this if we decide to replace the local python-stubs generation process with an automated CI-based process.

Signed-off-by: Zach Lewis <[email protected]>
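For illustration, here is a rough sketch of the workaround's logic. This is hypothetical, not the project's actual stub-generation script; the function name and the `GITHUB_ACTIONS` environment check are assumptions:

```python
import os

def stub_output_dir(dist_path: str) -> str:
    """Pick where generated .pyi stubs should go (hypothetical sketch).

    In a GitHub Actions environment (GHA sets GITHUB_ACTIONS=true), write
    stubs to $PWD so that no OpenImageIO/ directory lands next to the
    wheels that the upload_pypi job consumes; locally, keep the stubs
    beside the generated .whl as before.
    """
    if os.environ.get("GITHUB_ACTIONS") == "true":
        return os.getcwd()
    return os.path.join(dist_path, "OpenImageIO")
```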
|
@chadrik -- does this make sense to you? |
|
Is this ready to merge? I have no idea how to judge if it's gonna fix the broken upload to pypi. |
|
Here's my understanding of the problem:

I believe the change proposed here would break the feature that allows developers to download the generated stubs, since the download step merges all `cibw-*` artifacts:

```yaml
- uses: actions/download-artifact@fa0a91b85d4f404e444e00e005971372dc801d16 # v4.1.8
  with:
    pattern: cibw-*
    path: dist
    merge-multiple: true
```

So a possible alternative that will hopefully solve the pypi upload problem without collateral damage would be to place the stubs in a separate upload:

```yaml
- uses: actions/upload-artifact@6f51ac03b9356f520e9adb1b1b7802705f340c2b # v4.5.0
  with:
    name: cibw-wheels-${{ matrix.python }}-${{ matrix.manylinux }}
    path: |
      ./wheelhouse/*.whl
- uses: actions/upload-artifact@6f51ac03b9356f520e9adb1b1b7802705f340c2b # v4.5.0
  with:
    name: stubs-${{ matrix.python }}-${{ matrix.manylinux }}
    path: |
      ./wheelhouse/OpenImageIO/__init__.pyi
  # if stub validation fails we want to upload the stubs for users to review.
  # keep the python build in sync with the version specified in the
  # tool.cibuildwheel.overrides section of pyproject.toml
  if: always() && ${{ matrix.python =~ 'cp311-manylinux_.*64' }}
```

Note the conditional is untested, so it requires vetting. |
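One caveat worth flagging while vetting: as far as I know, GitHub Actions `if:` expressions have no `=~` regex operator, so that match would need to be rewritten with `contains()`/`startsWith()` or precomputed in a shell step. For clarity, here is a rough Python equivalent of what the conditional is trying to select (the exact tag strings are assumptions based on the matrix names above):

```python
import re

# Intended selection, per the (untested) workflow conditional: upload the
# stubs artifact only for CPython 3.11 manylinux builds whose arch ends in
# "64" (e.g. x86_64 or aarch64).
_STUB_BUILD = re.compile(r"cp311-manylinux_.*64")

def should_upload_stubs(build_tag: str) -> bool:
    """Return True if this matrix entry should upload the stubs artifact."""
    return _STUB_BUILD.search(build_tag) is not None
```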
|
Where are we on this? Also, I think something additional has broken in the last two days: now I'm getting wheel-building failures on Mac only that I don't understand. If I do test pushes on my branch, it is failing at the same commit that succeeded a couple days ago, so I'm assuming maybe something changed on the runners or with some dependency? You can see the logs here: |
|
Oh, whoops -- I didn't see that Chad had replied. His solution is more robust, so lemme implement it, and then I'll have a look at those logs and I'll see if I can figure out what's going on with the mac wheels build. |
This reverts commit b72b375. Signed-off-by: Zach Lewis <[email protected]>
|
Looking at the logs, it seems the macos runners were experiencing a fatal error trying to pull from |
|
Why would problems with gitlab/libtiff only cause Mac Intel wheels to fail? |
|
And also... why do you think that it's related to libtiff in the first place? |
|
I thought it was related to libtiff, because that's what this particular job seemed to indicate: I suspect it was an issue with a single Mac Intel job, which caused the rest of the Mac Intel jobs to shut down and report a failure as well. Anyway, that no longer seems to be an issue. But now we seem to be experiencing something else. Two suspicious things:
|
I think there are two openexr's on the system and it's mixing headers from one with library from another. And a second look at the log indicates something weird with tiff, too. You can see that it first fails to find libtiff, then builds libtiff 4.6 locally, then re-does the find and thinks it gets 4.0.9! So I think that there, too, maybe there's a system install of an older version that's messing with things. But why did it all work until a couple days ago? Maybe they updated the Mac Intel runner images? |
|
Oh, no, the openexr one may be a non-problem, and simpler. It's using openexr 3.2 and imath 3.1, which is fine. Their versions are not kept in sync. |
This could be a red herring. I'm pretty sure I've come across this before. I could be mistaken, but I think that's a bug in how we're reporting the re-found library version, and that it doesn't actually reflect the version of the library that was just autobuilt. Specifically, if a too-early version of the library is found and we need to build a newer version, the logs still claim the older version is being built, but I think the variable is ultimately being set correctly by the time the re-find has completed. I think. (There IS something weird with how we're building TIFF and libdeflate, but I haven't been able to put my finger on exactly what; it causes problems if you try to reuse autobuilt libraries for incremental builds of OIIO...)
Hmm. |
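If that reporting-bug hypothesis is right, the shape of the problem would be something like this toy sketch (pure illustration in Python, not the project's actual CMake logic):

```python
def logged_version(find_cache: dict, pkg: str, built_version: str) -> str:
    """Toy model of the suspected stale-version report.

    The version cached by the first (too-old) find survives into the
    post-build re-find, so the log prints the old version even though the
    freshly autobuilt library is the one actually used.
    """
    # Buggy pattern: prefer the stale cached entry instead of clearing
    # it before re-finding the just-built library.
    return find_cache.get(pkg, built_version)
```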
|
Reminder that the next scheduled 3.0 release is Monday, and as far as I can tell, we are not building Intel Mac wheels correctly and I'm not sure the upload-to-PyPI works either. |
|
Hey, I apologize for not working out a fix in time for the 3.0.10 release -- I hope to put this bad boy to bed some time this week. If you're cool with releasing a 3.0.10.1 hotfix, we should be able to verify that all python wheely stuff works as expected before the 3.1 release in twelve days. At this moment, it's really not clear to me why or how things have changed with the intel mac runners, or why the system OpenEXR 3.1 headers are being found and prioritized over the auto-built 3.2 headers. In a worst-case scenario, we might have to roll back OpenEXR to 3.1 for the intel mac wheels, just to get things out the door... but I'd obviously rather fix this properly. |
|
Of course, we can do a tweak release any time we need to fix this. |
|
@zachlewis What do you think we should do here? Even disabling Intel Mac wheels entirely, while very much not ideal, is better than the status quo, which is that we haven't been publishing any python wheels for the last 2 or 3 patch releases. |
|
Today is the day where I figure out wtf is going on here and get stuff working. Agreed that disabling the intel mac wheels entirely is preferable to not releasing any wheels... |
|
...okay, tomorrow's the day where I figure out wtf is going wrong. I've whittled things down to a place where now the problem seems to be a CMake-4 thing happening with one of the OCIO dependencies, yaml-cpp... and I know we have an auto-builder recipe for yaml-cpp, so I'll be looking into that next... It's very very strange that these problems are suddenly occurring in the first place... |
instead of LAST Signed-off-by: Zach Lewis <[email protected]>
Signed-off-by: Zach Lewis <[email protected]>
Signed-off-by: Zach Lewis <[email protected]>
Signed-off-by: Zach Lewis <[email protected]>
This reverts commit bf6d1a5. Signed-off-by: Zach Lewis <[email protected]>
This reverts commit 211d61c. Signed-off-by: Zach Lewis <[email protected]>
- revert to deployment target 10.15
- don't use CIBW_ENVIRONMENT to find the C / CXX compiler
- unset CMAKE_FIND_FRAMEWORK=NEVER
- unset NO_SYSTEM_FROM_IMPORTED=ON

Signed-off-by: Zach Lewis <[email protected]>
This reverts commit 557c3d3. Signed-off-by: Zach Lewis <[email protected]>
hopefully, the improved IGNORE_HOMEBREWED_DEPS business will take care of needing to manually uninstall homebrewed stuff in a CI step, but it's not the end of the world if we have to revert this commit to get stuff to work. Signed-off-by: Zach Lewis <[email protected]>
Signed-off-by: Zach Lewis <[email protected]>
Signed-off-by: Zach Lewis <[email protected]>
|
The plot has thickened: OCIO is having an extremely similar problem building the macOS Intel wheels against OpenEXR. But... I think I've solved the problem? There's a Homebrew-installed OpenEXR 3.4.0 preinstalled on the macos-13 runner, apparently, and it looks like it's being installed to $HOMEBREW_PREFIX/Cellar. I think this must be new behavior, either in the latest OpenEXR recipe or in Homebrew itself. Anyway, manually brew-uninstalling openexr and imath before building the wheels seems to fix the problem for us, and right now I'm checking to see if I've made the IGNORE_HOMEBREWED_DEPS heuristics smart enough to ignore brew-installed OpenEXR without requiring the brew-uninstall step... |
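For illustration, here is a tiny Python sketch of the kind of heuristic IGNORE_HOMEBREWED_DEPS might apply. The real logic lives in the project's CMake; the function name and default prefix here are assumptions:

```python
import os

def is_homebrewed(prefix: str, homebrew_prefix: str = "/usr/local") -> bool:
    """Guess whether a discovered install prefix points into Homebrew's
    tree (Cellar/ or opt/), so it can be skipped when resolving deps.

    On Intel macOS Homebrew lives under /usr/local; on Apple Silicon it
    is /opt/homebrew, which callers can pass explicitly.
    """
    norm = os.path.normpath(prefix)
    return any(
        norm.startswith(os.path.join(homebrew_prefix, sub) + os.sep)
        for sub in ("Cellar", "opt")
    )
```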
Signed-off-by: Zach Lewis <[email protected]>
Signed-off-by: Zach Lewis <[email protected]>
Signed-off-by: Zach Lewis <[email protected]>
Signed-off-by: Zach Lewis <[email protected]>
|
Looks like we'll have to manually brew-uninstall openexr to get stuff working for the macos-intel wheels after all... ...but, finally, @lgritz, all our wheels seem to be building! Functionally, this PR is good to merge; but I can take a stab at cleaning up the git history if you'd like! |
No need! The only kind of merge we do in this project is a full squash, so it will all be smashed into a single commit automatically. |
|
I wonder if another way to have solved the problem would be to change our build_OpenEXR.cmake to default to using OpenEXR 3.4 and build_Imath.cmake to use 3.2.1. (Or just use the environment variable.) I think the issue is that we are bitten once again by subtle binary incompatibilities between OpenEXR built against Imath 3.1 versus OpenEXR built against Imath 3.2. |
|
Hah... well, yeah, in hindsight, of course that would have made the most sense...! It took me a long time to realize that. Hmm. Maybe I need to tweak the CMAKE_PREFIX_PATH to make sure we're not picking up any CMakeConfigs... (Not planning on making any other changes to this PR, feedback pending.) |
|
This failed a Windows job -- which may just be a spurious glitch; I'm rerunning it now, but I won't hold this up over it, since it is clearly unrelated to your patch. Is this ready to merge as-is from your perspective, @zachlewis? |
|
Yes sir! |
|
I feel like the title of this PR doesn't really reflect what it's about, after all this. Would you like to take a stab at editing it (there's a button at the top for that) so that it will be more accurate in the git history and release notes? |
|
Also, one more thing... and it may be that we won't have a clear answer until I do a tagged release. It seems to me that last time I tried to do a 3.0 release, it acted like it tried to upload to PyPI upon pushing the tag, then didn't try upon the release when it was supposed to? Does this ring a bell? I could be wrong, but I remembered another issue lurking. Perhaps after I backport this to 3.0 I should do a release (even though it's not the start of the month) just to force a PyPI update since we haven't had one for a while, and then we'll find out. |
Merged 333b63b into AcademySoftwareFoundation:main
Signed-off-by: Zach Lewis <[email protected]>
Signed-off-by: Zach Lewis <[email protected]>
Signed-off-by: Zach Lewis <[email protected]>