Add s390x Support for Codeserver Notebook #2573
Conversation
Signed-off-by: Harshad Reddy Nalla <[email protected]>

- …me-manifests RHOAIENG-28184: apply runtime image via the params-latest.env using kustomize
- …ekton Remove upstream tekton pipelines that incorporated on downstream by nightly sync
- add Python 3.12 ODH Workbench image references to `params-latest.env`
- …8512 Update the params-latest file with the new registries
- Remove runtime-rocm-tensorflow py312 from the params-latest and commit-latest files
- Fix image references on main branch as the old sha references were the placeholders
- …ng-rhods Fix ordering on downstream beta py312 images
  Co-authored-by: Jiri Daněk <[email protected]>
- …eam-pytorch-rocm-63 Update PyTorch ROCm from version 6.2.4 to 6.3
- Sync `rhds:main` from `odh:main`
- Sync odh-io/main to rhds/main
[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files. Approvers can indicate their approval by writing `/approve` in a comment.
Important

Review skipped: more than 25% of the files were skipped due to the max files limit. The review is being skipped to prevent a low-quality review. 43 of the 150 files are above the max files limit of 100. Please upgrade to the Pro plan to get higher limits. You can disable this status message in your CodeRabbit configuration.

Walkthrough

Adds s390x support across the UBI9 Python 3.12 codeserver build: Dockerfile updates, devel environment setup for s390x, an s390x-specific code-server patch/build path, and dependency exclusions for s390x in pylock.toml and pyproject.toml. Also installs tar, adjusts yum/dnf cache cleanup, and refines OpenBLAS handling by architecture.

Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes

Pre-merge checks and finishing touches: ❌ failed checks (1 warning), ✅ passed checks (2 passed).

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
codeserver/ubi9-python-3.12/devel_env_setup.sh (1)

Lines 46-53: Honor CMAKE_ARGS in build_pyarrow.

You export CMAKE_ARGS later, but the cmake invocation ignores it. Include it to pass Python and prefix hints.

```diff
-        cmake -DCMAKE_BUILD_TYPE=release \
+        cmake ${CMAKE_ARGS} -DCMAKE_BUILD_TYPE=release \
             -DCMAKE_INSTALL_PREFIX=/usr/local \
             -DARROW_PYTHON=ON \
             -DARROW_BUILD_TESTS=OFF \
             -DARROW_JEMALLOC=ON \
             -DARROW_BUILD_STATIC="OFF" \
             -DARROW_PARQUET=ON \
             ..
```

codeserver/ubi9-python-3.12/Dockerfile.cpu (2)

Lines 95-101: s390x likely needs the OpenBLAS runtime install as well.

If wheels (numpy/scipy) link against the OpenBLAS built in whl-cache, s390x will miss the shared libs at runtime without `make install` here.

```diff
-# (ARCH-ppc64le): since wheels are compiled from source, we need shared libs available at runtime
+# (ARCH-ppc64le,s390x): since wheels are compiled from source, we need shared libs available at runtime
 RUN --mount=type=cache,from=whl-cache,source=/root/OpenBLAS,target=/OpenBlas,rw \
     bash -c ' \
-        if [[ $(uname -m) == "ppc64le" ]]; then \
+        if [[ $(uname -m) == "ppc64le" ]] || [[ $(uname -m) == "s390x" ]]; then \
             PREFIX=/usr/ make install -C /OpenBlas; \
         fi '
```

Lines 151-157: Ensure rpm2cpio is installed by adding rpm-build.

Add `rpm-build` to the install line so `rpm2cpio` is always available:

```diff
-    dnf install -y cpio && dnf -y clean all && \
+    dnf install -y cpio rpm-build && dnf -y clean all && \
     cd / && rpm2cpio "/code-server-rpm/code-server-${CODESERVER_VERSION/v/}-${TARGETARCH}.rpm" | cpio -idmv
```
🧹 Nitpick comments (9)
codeserver/ubi9-python-3.12/get_code_server_rpm.sh (6)

Lines 4-8: Update the banner to reflect the multi-arch build.

The script now builds for amd64, arm64, ppc64le, and s390x; the comment still says ppc64le-only. Please update it to avoid confusion.

Lines 18-21: Tighten architecture validation and the error message.

The current else branch covers unknown arches, but the message omits the uname value. Consider explicit validation and listing the supported arches for clarity. Apply this minimal tweak:

```diff
-ARCH="${UNAME_TO_GOARCH[$(uname -m)]}"
+UNAME_M="$(uname -m)"
+ARCH="${UNAME_TO_GOARCH[$UNAME_M]}"

-if [[ "$ARCH" == "amd64" || "$ARCH" == "arm64" ||"$ARCH" == "ppc64le" || "$ARCH" == "s390x" ]]; then
+if [[ "$ARCH" == "amd64" || "$ARCH" == "arm64" || "$ARCH" == "ppc64le" || "$ARCH" == "s390x" ]]; then
 …
 else
-    # we shall not download rpm for other architectures
-    echo "Unsupported architecture: $ARCH" >&2
+    echo "Unsupported architecture: uname -m=${UNAME_M}. Supported: x86_64, aarch64, ppc64le, s390x" >&2
     exit 1
 fi
```

Based on learnings
Lines 53-61: Pin NFPM/NVM versions or allow an override for reproducibility.

Fetching "latest" from GitHub makes builds non-deterministic and rate-limit prone. Either pin versions or allow env overrides for CI. Example:

```diff
-NFPM_VERSION=$(curl -s "https://api.github.com/repos/goreleaser/nfpm/releases/latest" | jq -r '.tag_name') \
+NFPM_VERSION=${NFPM_VERSION:-$(curl -s "https://api.github.com/repos/goreleaser/nfpm/releases/latest" | jq -r '.tag_name')} \
 …
-NVM_VERSION=$(curl -s "https://api.github.com/repos/nvm-sh/nvm/releases/latest" | jq -r '.tag_name') \
+NVM_VERSION=${NVM_VERSION:-$(curl -s "https://api.github.com/repos/nvm-sh/nvm/releases/latest" | jq -r '.tag_name')} \
```
Lines 65-106: Externalize the s390x patch with provenance, and validate that it applies.

Generating a large patch via heredoc is brittle. Prefer committing it under patches/ with a series entry and a comment/link to the VSCodium/IBM source. This improves reviewability and reuse. Also consider failing early with a clear message if the patch no longer applies due to upstream changes.
Please add a reference URL for the patch origin (commit/tag), and confirm the patch still applies to CODESERVER_VERSION=v4.104.0 on rebase.
Lines 108-110: Guard against a missing patches/series.

The loop assumes patches/series exists. Add a check to skip when it is absent, to avoid failing unrelated builds. Apply:

```diff
 source ${NVM_DIR}/nvm.sh
-while IFS= read -r src_patch; do echo "patches/$src_patch"; patch -p1 < "patches/$src_patch"; done < patches/series
+if [[ -f patches/series ]]; then
+    while IFS= read -r src_patch; do
+        echo "patches/$src_patch"
+        patch -p1 < "patches/$src_patch"
+    done < patches/series
+fi
```

Based on learnings
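The guarded loop can also be factored into a function, which makes the skip path testable on its own; a sketch assuming the same `patches/series` layout (the function name is illustrative):

```shell
# Apply patches listed in <dir>/series via patch -p1, skipping cleanly
# when no series file exists so unrelated builds are unaffected.
apply_patch_series() {
    patch_dir="$1"
    series="${patch_dir}/series"
    if [ ! -f "$series" ]; then
        echo "No ${series}; skipping patch application."
        return 0
    fi
    while IFS= read -r src_patch; do
        [ -n "$src_patch" ] || continue   # tolerate blank lines
        echo "${patch_dir}/${src_patch}"
        patch -p1 < "${patch_dir}/${src_patch}"
    done < "$series"
}
```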
Lines 111-116: Prefer npm ci for lockfile fidelity.

Use npm ci to ensure the dependency tree matches the patched lockfile and to improve reproducibility.

```diff
-npm cache clean --force
-npm install
+npm ci --prefer-offline
```

codeserver/ubi9-python-3.12/Dockerfile.cpu (3)
Lines 58-64: Prefer POSIX-compliant sourcing (or set SHELL to bash).

Using `source` assumes bash. For portability, use the POSIX dot command.

```diff
-    source ./devel_env_setup.sh && \
+    . ./devel_env_setup.sh && \
```
Lines 92-94: OS packages: LGTM; consider consistent dnf flags to shrink the image.

The install looks good. To reduce size and keep parity with the upgrade step, add `--setopt=tsflags=nodocs --setopt=install_weak_deps=0`.

```diff
-RUN dnf install -y tar perl mesa-libGL skopeo && dnf clean all && rm -rf /var/cache/dnf
+RUN dnf install -y --setopt=tsflags=nodocs --setopt=install_weak_deps=0 tar perl mesa-libGL skopeo \
+    && dnf clean all && rm -rf /var/cache/dnf
```
Lines 144-146: Replace the /dev/null sentinel hack with real marker files.

The current pattern works but is opaque. Use descriptive marker files, aligned with prior feedback and issue tracking.

```diff
-# wait for rpm-base stage (rpm builds for ppc64le and s390x)
-COPY --from=rpm-base /tmp/control /dev/null
+# wait for rpm-base stage (rpm builds for ppc64le and s390x)
+COPY --from=rpm-base /tmp/control /tmp/.rpm-base.ready
```

Also mirror the change for whl-cache later:

```diff
-# wait for whl-cache stage (builds uv cache)
-COPY --from=whl-cache /tmp/control /dev/null
+# wait for whl-cache stage (builds uv cache)
+COPY --from=whl-cache /tmp/control /tmp/.whl-cache.ready
```

Based on learnings
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (5)

- codeserver/ubi9-python-3.12/Dockerfile.cpu (4 hunks)
- codeserver/ubi9-python-3.12/devel_env_setup.sh (2 hunks)
- codeserver/ubi9-python-3.12/get_code_server_rpm.sh (2 hunks)
- codeserver/ubi9-python-3.12/pylock.toml (4 hunks)
- codeserver/ubi9-python-3.12/pyproject.toml (2 hunks)
👮 Files not reviewed due to content moderation or server errors (1)
- codeserver/ubi9-python-3.12/pylock.toml
🧰 Additional context used
🧠 Learnings (11)
📓 Common learnings
Learnt from: jiridanek
PR: opendatahub-io/notebooks#0
File: :0-0
Timestamp: 2025-08-05T17:24:08.616Z
Learning: jiridanek requested PR review for #1521 covering s390x architecture support improvements, demonstrating continued focus on systematic multi-architecture compatibility enhancements in the opendatahub-io/notebooks repository through clean implementation with centralized configuration, proper CI integration, and architecture-aware testing patterns.
📚 Learning: 2025-09-10T21:01:46.464Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#0
File: :0-0
Timestamp: 2025-09-10T21:01:46.464Z
Learning: jiridanek requested GitHub issue creation for banner comment documentation update in codeserver/ubi9-python-3.12/get_code_server_rpm.sh during PR #2356 review. Issue #2395 was created to update outdated banner comment that only mentioned ppc64le support when script now builds RPMs for amd64, arm64, and ppc64le architectures, with specific diff showing the required changes from lines 4-8, continuing the established pattern of systematic documentation improvements through detailed issue tracking.
Applied to files:
codeserver/ubi9-python-3.12/get_code_server_rpm.sh
codeserver/ubi9-python-3.12/Dockerfile.cpu
📚 Learning: 2025-09-05T12:34:48.372Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2227
File: codeserver/ubi9-python-3.12/get_code_server_rpm.sh:65-66
Timestamp: 2025-09-05T12:34:48.372Z
Learning: jiridanek requested GitHub issue creation for patches mechanism improvement in codeserver/ubi9-python-3.12/get_code_server_rpm.sh during PR #2227 review. GitHub issue #2318 was created addressing fragile patches application that assumes patches/series always exists, proposing conditional patch handling with proper validation, error handling, and documentation, assigned to jiridanek, continuing the established pattern of systematic code quality improvements through detailed issue tracking.
Applied to files:
codeserver/ubi9-python-3.12/get_code_server_rpm.sh
📚 Learning: 2025-09-05T12:35:44.985Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2227
File: codeserver/ubi9-python-3.12/get_code_server_rpm.sh:18-19
Timestamp: 2025-09-05T12:35:44.985Z
Learning: jiridanek requested GitHub issue creation for architecture validation guard in codeserver/ubi9-python-3.12/get_code_server_rpm.sh during PR #2227 review. Issue #2320 was successfully created addressing missing validation for unknown architectures in UNAME_TO_GOARCH mapping lookup where empty ARCH values could cause silent failures, with comprehensive problem description, detailed proposed solution with code example, specific acceptance criteria, implementation considerations, and proper context linking, continuing the established pattern of systematic code quality improvements through detailed issue tracking.
Applied to files:
codeserver/ubi9-python-3.12/get_code_server_rpm.sh
📚 Learning: 2025-09-05T12:35:44.985Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2227
File: codeserver/ubi9-python-3.12/get_code_server_rpm.sh:18-19
Timestamp: 2025-09-05T12:35:44.985Z
Learning: jiridanek requested GitHub issue creation for architecture validation guard in codeserver/ubi9-python-3.12/get_code_server_rpm.sh during PR #2227 review. The issue addresses missing validation for unknown architectures in UNAME_TO_GOARCH mapping lookup where empty ARCH values could cause silent failures, proposing defensive programming with clear error messages, supported architecture listing, and proper exit codes, continuing the established pattern of systematic code quality improvements through detailed issue tracking.
Applied to files:
codeserver/ubi9-python-3.12/get_code_server_rpm.sh
📚 Learning: 2025-09-05T10:05:35.575Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#1513
File: runtimes/datascience/ubi9-python-3.12/pylock.toml:180-180
Timestamp: 2025-09-05T10:05:35.575Z
Learning: In Python lock files for the datascience runtime, both bcrypt and paramiko packages are excluded from s390x platform using the marker "platform_machine != 's390x'" due to compatibility issues on IBM System z mainframe architecture.
Applied to files:
codeserver/ubi9-python-3.12/pylock.toml
📚 Learning: 2025-09-05T12:25:09.719Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2227
File: codeserver/ubi9-python-3.12/Dockerfile.cpu:122-123
Timestamp: 2025-09-05T12:25:09.719Z
Learning: jiridanek requested GitHub issue creation for Docker multi-stage synchronization improvement in codeserver/ubi9-python-3.12/Dockerfile.cpu during PR #2227 review. The issue addresses sentinel file pattern using /tmp/control copied to /dev/null for stage coordination between rpm-base, whl-cache, and codeserver stages, proposing semantic improvements with descriptive file names, inline documentation, and elimination of /dev/null hack while maintaining multi-architecture build functionality for ppc64le support.
Applied to files:
codeserver/ubi9-python-3.12/Dockerfile.cpu
📚 Learning: 2025-09-12T08:27:00.439Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2432
File: jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu:232-249
Timestamp: 2025-09-12T08:27:00.439Z
Learning: jiridanek requested GitHub issue creation for Rust toolchain availability during s390x builds in jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu during PR #2432 review. The issue addresses PATH availability problems where Rust/cargo installed in cpu-base stage at /opt/.cargo/bin may not be accessible during uv pip install step in jupyter-datascience stage, proposing three solution approaches: immediate environment variable fix, builder stage pattern following codeserver approach, and ENV declaration fix, with comprehensive acceptance criteria covering build reliability, multi-architecture compatibility, and alignment with established patterns, continuing the systematic infrastructure improvement tracking methodology.
Applied to files:
codeserver/ubi9-python-3.12/Dockerfile.cpu
📚 Learning: 2025-09-12T08:27:00.439Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2432
File: jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu:232-249
Timestamp: 2025-09-12T08:27:00.439Z
Learning: jiridanek requested GitHub issue creation for Rust toolchain availability during s390x builds in jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu during PR #2432 review. Issue #2435 was created addressing PATH availability problems where Rust/cargo installed in cpu-base stage at /opt/.cargo/bin may not be accessible during uv pip install step in jupyter-datascience stage, proposing three solution approaches: immediate environment variable fix, builder stage pattern following codeserver approach, and ENV declaration fix, with comprehensive acceptance criteria covering build reliability, multi-architecture compatibility, and alignment with established patterns, continuing the systematic infrastructure improvement tracking methodology.
Applied to files:
codeserver/ubi9-python-3.12/Dockerfile.cpu
📚 Learning: 2025-09-05T12:25:58.344Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2227
File: codeserver/ubi9-python-3.12/Dockerfile.cpu:126-128
Timestamp: 2025-09-05T12:25:58.344Z
Learning: jiridanek expressed concern about code-server installation complexity in codeserver/ubi9-python-3.12/Dockerfile.cpu during PR #2227 review, describing the multi-stage cache mount approach as a "hack" that worries them, leading to GitHub issue creation for systematic architectural improvement addressing maintainability concerns.
Applied to files:
codeserver/ubi9-python-3.12/Dockerfile.cpu
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: build (codeserver-ubi9-python-3.12, 3.12, linux/amd64, false) / build
- GitHub Check: build (codeserver-ubi9-python-3.12, 3.12, linux/arm64, false) / build
- GitHub Check: Red Hat Konflux / odh-workbench-codeserver-datascience-cpu-py312-ubi9-on-pull-request
🔇 Additional comments (3)
codeserver/ubi9-python-3.12/pyproject.toml (1)

Line 20: Env markers look good; confirm the impact.

Excluding skl2onnx and py-spy on s390x via platform_machine markers is correct and consistent. Please confirm no notebook features rely on these on s390x, or document the degradation.

Also applies to: line 29
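For reference, PEP 508 environment markers of this shape look like the following in a pyproject.toml dependency list (a hypothetical fragment; the PR's actual version pins are not reproduced here):

```toml
[project]
dependencies = [
    # Skipped on IBM Z, where these packages are excluded:
    "skl2onnx; platform_machine != 's390x'",
    "py-spy; platform_machine != 's390x'",
]
```

Resolvers evaluate `platform_machine` against the target platform, so the same lockfile can serve all architectures while omitting these packages on s390x.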
codeserver/ubi9-python-3.12/get_code_server_rpm.sh (1)

Lines 120-122: Verify the rpm filename arch suffix.

Confirm the generated RPM path matches the ${ARCH} mapping (e.g., amd64 vs x86_64) for all arches; some tooling emits x86_64/aarch64 in RPM filenames. Adjust the mv pattern if needed.
codeserver/ubi9-python-3.12/Dockerfile.cpu (1)

Lines 142-143: Extra OS packages: LGTM.

The `jq`, `git-lfs`, and `libsndfile` install and cleanup are fine.
codeserver/ubi9-python-3.12/devel_env_setup.sh, lines 71-73:

```shell
    dnf install -y cmake gcc gcc-toolset-13 fribidi-devel lcms2-devel && \
        openjpeg2-devel libraqm-devel libimagequant-devel tcl-devel tk-devel && \
        dnf clean all && rm -rf /var/cache/dnf;
```
Broken dnf install chain on s390x.

The second line starts with package names, not a command. This will fail with "command not found". Fix by consolidating the packages and cleaning the cache:

```diff
-    dnf install -y cmake gcc gcc-toolset-13 fribidi-devel lcms2-devel && \
-        openjpeg2-devel libraqm-devel libimagequant-devel tcl-devel tk-devel && \
-        dnf clean all && rm -rf /var/cache/dnf;
+    dnf install -y \
+        cmake gcc gcc-toolset-13 fribidi-devel lcms2-devel \
+        openjpeg2-devel libraqm-devel libimagequant-devel tcl-devel tk-devel
+    dnf clean all && rm -rf /var/cache/dnf
```
📝 Committable suggestion

‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```shell
    dnf install -y \
        cmake gcc gcc-toolset-13 fribidi-devel lcms2-devel \
        openjpeg2-devel libraqm-devel libimagequant-devel tcl-devel tk-devel
    dnf clean all && rm -rf /var/cache/dnf
```
🤖 Prompt for AI Agents
In codeserver/ubi9-python-3.12/devel_env_setup.sh around lines 71 to 73, the dnf
install chain is broken because the second line begins with package names (not a
command) causing a “command not found” on s390x; fix by consolidating all
packages into a single dnf install -y <packages> command (no standalone
package-only line), keep the trailing && dnf clean all && rm -rf /var/cache/dnf
to clean up, and ensure proper line continuation backslashes or a single-line
command so the shell treats the whole install as one command.
codeserver/ubi9-python-3.12/devel_env_setup.sh, lines 134-137:

```shell
if [[ $(uname -m) != "ppc64le" ]] && [[ $(uname -m) != "ppc64le" ]]; then
    # only for mounting on other than s390x and ppc64le
    mkdir -p /root/OpenBLAS/
fi
```
Typo in the arch condition; s390x is not excluded.

The condition checks ppc64le twice, so s390x hosts will incorrectly create /root/OpenBLAS/.

```diff
-if [[ $(uname -m) != "ppc64le" ]] && [[ $(uname -m) != "ppc64le" ]]; then
+if [[ $(uname -m) != "ppc64le" ]] && [[ $(uname -m) != "s390x" ]]; then
     # only for mounting on other than s390x and ppc64le
     mkdir -p /root/OpenBLAS/
 fi
```
🤖 Prompt for AI Agents
In codeserver/ubi9-python-3.12/devel_env_setup.sh around lines 134 to 137, the
architecture check mistakenly repeats "ppc64le" instead of excluding "s390x",
causing /root/OpenBLAS/ to be created on s390x hosts; update the condition to
check both architectures correctly, e.g. use: if [[ $(uname -m) != "ppc64le" ]]
&& [[ $(uname -m) != "s390x" ]]; then mkdir -p /root/OpenBLAS/ fi, ensuring the
second comparison is "s390x" (keep the existing quoting/bracketing style).
codeserver/ubi9-python-3.12/Dockerfile.cpu, lines 252-258:

```diff
 # (ARCH-ppc64le and s390x): install packages (eg. pyarrow) that need to be built from source repository on ppc64le and s390x
 RUN --mount=type=cache,target=/root/.cache/uv \
     --mount=type=cache,from=whl-cache,source=/wheelsdir/,target=/wheelsdir/,rw \
     bash -c ' \
-        if [[ $(uname -m) == "ppc64le" ]]; then \
+        if [[ $(uname -m) == "ppc64le" ]] || [[ $(uname -m) == "s390x" ]]; then \
             uv pip install /wheelsdir/*.whl; \
         fi '
```
🛠️ Refactor suggestion | 🟠 Major

Guard the wheel install when the directory is empty.

A bare glob can fail if /wheelsdir has no wheels for a given arch. Add a presence check.

```diff
-        if [[ $(uname -m) == "ppc64le" ]] || [[ $(uname -m) == "s390x" ]]; then \
-            uv pip install /wheelsdir/*.whl; \
-        fi '
+        if [[ $(uname -m) == "ppc64le" ]] || [[ $(uname -m) == "s390x" ]]; then \
+            if compgen -G "/wheelsdir/*.whl" > /dev/null; then \
+                uv pip install /wheelsdir/*.whl; \
+            else \
+                echo "No prebuilt wheels found in /wheelsdir; skipping."; \
+            fi; \
+        fi '
```
📝 Committable suggestion

‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```dockerfile
RUN --mount=type=cache,target=/root/.cache/uv \
    --mount=type=cache,from=whl-cache,source=/wheelsdir/,target=/wheelsdir/,rw \
    bash -c ' \
        if [[ $(uname -m) == "ppc64le" ]] || [[ $(uname -m) == "s390x" ]]; then \
            if compgen -G "/wheelsdir/*.whl" > /dev/null; then \
                uv pip install /wheelsdir/*.whl; \
            else \
                echo "No prebuilt wheels found in /wheelsdir; skipping."; \
            fi; \
        fi '
```
🤖 Prompt for AI Agents
In codeserver/ubi9-python-3.12/Dockerfile.cpu around lines 252 to 258, the RUN
step blindly globs /wheelsdir/*.whl which will fail if the directory is empty;
modify the shell logic to first check for the presence of at least one .whl file
(e.g., using a safe test like checking if ls or a shell glob finds any files)
and only call `uv pip install /wheelsdir/*.whl` when that check succeeds,
otherwise skip installation for that architecture.
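The presence check can be exercised outside Docker; a minimal sketch where the install step is stubbed with echo so it runs anywhere (a portable glob test is used in place of bash's `compgen -G`, with the same semantics):

```shell
# Install wheels from a directory only if at least one *.whl exists.
# `set -- <glob>` expands the pattern; if nothing matched, $1 is the
# literal unexpanded pattern, which -e rejects.
install_wheels() {
    dir="$1"
    set -- "${dir}"/*.whl
    if [ -e "$1" ]; then
        echo "installing: $*"    # real Dockerfile: uv pip install "$@"
    else
        echo "No prebuilt wheels found in ${dir}; skipping."
    fi
}
```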
@Meghagaur: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests.

Full PR test history. Your PR dashboard. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
Summary
Enable s390x architecture support for Codeserver notebook Targeted for 3.0 Release while preserving existing functionality for amd64, arm64, and ppc64le.
Description
Changes added
How Has This Been Tested?
Self checklist (all need to be checked):

- [ ] `make test` (`gmake` on macOS) before asking for review
- [ ] Changes to `Dockerfile.konflux` files should be done in `odh/notebooks` and automatically synced to `rhds/notebooks`. For Konflux-specific changes, modify `Dockerfile.konflux` files directly in `rhds/notebooks`, as these require special attention in the downstream repository and flow to the upcoming RHOAI release.

Merge criteria:
Summary by CodeRabbit