aarch64 build fixes for 2.19.1 #465
Conversation
Hi! This is the friendly automated conda-forge-linting service. I just wanted to let you know that I linted all conda-recipes in your PR. I do have some suggestions for making it better though... For recipe/meta.yaml:
This message was generated by GitHub Actions workflow run https://github.com/conda-forge/conda-forge-webservices/actions/runs/21003886856. Examine the logs at this URL for more detail.
also need to check this line from the build output:
This is fine. We always use the newest C++ standard library implementation, even if you compile with older compiler versions. That works because the standard libraries (libcxx and libstdcxx) are highly backwards compatible.
Force-pushed from d2a2a5f to dd5aec0
Same error as you noted in the OP already. This will probably require patching the bazel files.
Spent some time trying to get to the bottom of the bazel system and its interaction with building the wheels. I could not see how the platforms// etc. gets populated, or an equivalent 'target' platform, but a simple (dirty) change of telling the python script that the target is arm64 resolved it. Do any of the maintainers here know a better attribute/variable that would be the actual build target? My patch is simple, but the build now completes successfully. (Obviously this patch is conditional on being a linux-aarch64 build.)
The docs suggest that '@platforms//cpu' is supposed to reflect the result of bazel's --cpu setting, in our case the target cpu (aarch64), so why that's not the case in practice, I don't know.
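To illustrate the kind of (dirty) workaround described above, here is a minimal, hypothetical Python sketch; the function name and the manylinux tag are my own assumptions, not the actual patch. The idea is to prefer the target_platform environment variable that conda-build exports during cross-compilation over the build host's own architecture when choosing a wheel platform tag:

```python
import os
import platform


def wheel_platform_tag():
    """Pick the wheel platform tag for the *target*, not the build host.

    Hypothetical sketch: during cross-compilation conda-build sets
    ``target_platform`` (e.g. ``linux-aarch64``), so prefer it over the
    host's ``platform.machine()``, which reports the *build* machine.
    """
    target = os.environ.get("target_platform", "")
    if target == "linux-aarch64":
        return "manylinux2014_aarch64"
    # Fall back to the build host's architecture (native build).
    return f"manylinux2014_{platform.machine()}"
```

On an x86_64 build host cross-compiling for linux-aarch64, this returns the aarch64 tag instead of the host's, which is the behavior the patch above forces.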
@conda-forge-admin, please rerender |
@h-vetinari - can you take a look at this? I think it's ready for review. It doesn't build in the CI (timeouts?) but does build locally, which is pretty much normal for me with this feedstock.
Force-pushed from fe4d01a to ea73259
Hi! This is the friendly automated conda-forge-linting service. I was trying to look for recipes to lint for you, but it appears we have a merge conflict. Please try to merge or rebase with the base branch to resolve this conflict. Please ping the 'conda-forge/core' team (using the …
Force-pushed from ea73259 to f393eea
Thanks - roger that, I'm on it.
Force-pushed from f393eea to 8ef40b1
- remove previous python package id limits
- fix build issue in xla

Co-Authored-By: H. Vetinari <h.vetinari@gmx.com>
Force-pushed from 8ef40b1 to 27a5ae3
Hi! This is the friendly automated conda-forge-linting service. I just wanted to let you know that I linted all conda-recipes in your PR. I do have some suggestions for making it better though... For recipe/meta.yaml:
This message was generated by GitHub Actions workflow run https://github.com/conda-forge/conda-forge-webservices/actions/runs/21560039270. Examine the logs at this URL for more detail.
Force-pushed from 27a5ae3 to da1833b
…rch64 to aarch64 rather than the build host, which is passed in.
Force-pushed from 9348602 to d0e345e
Uh oh. And yet this build is supposed to not be CUDA.
Hmm, a bit more of the cut-and-paste context:
Hmm - so that's the host (build platform). It looks like it's actually just that the icu78-compatible tensorflows have not been built yet, which were committed this morning. I'll wait!
@h-vetinari - OK, it's good now for your review. The icu 78 builds are live, the aarch64 build completes just fine, and I checked one of the CUDA x86 builds - that also worked.
h-vetinari left a comment:
Nice job on fixing the aarch build! Next stop aarch+CUDA? 🙃
I'm going to merge this with the skip, because we don't need to rebuild the x64 packages here.
This PR will get us going again on linux-aarch64.
Work in progress:
Locally this reaches the end of the compilation phase (no C++ compile errors :0), but there's a wheels problem:
a) it looks to me like it's trying to make x86_64 wheels when doing aarch64, although that does not error (surely it should); smells like it's from the bazel part of the build:
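The "surely it should error" point above could be enforced by a sanity check on the produced wheel filenames. This is a hedged sketch only; check_wheel_arch is a hypothetical helper, not part of the feedstock, relying just on the standard wheel naming convention where the filename ends in the platform tag:

```python
def check_wheel_arch(wheel_name: str, expected_arch: str) -> bool:
    """Return True if the wheel's platform tag matches the expected CPU arch.

    Hypothetical helper: wheel filenames end in ``-<platform_tag>.whl``,
    e.g. ``tensorflow-2.19.1-cp312-cp312-linux_x86_64.whl``, so the last
    dash-separated component carries the architecture.
    """
    platform_tag = wheel_name[: -len(".whl")].split("-")[-1]
    return platform_tag.endswith(expected_arch)


# An x86_64 wheel slipping out of an aarch64 build would fail this check:
assert check_wheel_arch(
    "tensorflow-2.19.1-cp312-cp312-linux_aarch64.whl", "aarch64"
)
```

Failing the build when this check returns False would surface the wrong-architecture wheels immediately instead of silently packaging them.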
Checklist
- Reset the build number to 0 (if the version changed)
- Re-rendered with the latest conda-smithy (use the phrase @conda-forge-admin, please rerender in a comment in this PR for automated rerendering)