Conversation

@Meghagaur Meghagaur commented Oct 14, 2025

Summary

Enable s390x architecture support for the Codeserver notebook, targeted for the 3.0 release, while preserving existing functionality for amd64, arm64, and ppc64le.

Description

Changes added

  • Added a patch in get_code_server_rpm.sh to build VS Code for s390x, resolving the WASM magic-number error in the web-tree-sitter module
  • Added target flags for building OpenBLAS from source in devel_env_setup.sh
  • Added the packages required by pillow and scipy in devel_env_setup.sh
  • Skipped py-spy and skl2onnx in pyproject.toml
  • Updated the vsix files with updated extensions from Open-vsx.com
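
The per-architecture gating these changes rely on can be sketched as follows (a minimal illustration, not the exact script; the mapping mirrors the UNAME_TO_GOARCH table quoted from get_code_server_rpm.sh later in this thread, and the function name is ours):

```shell
# Hedged sketch: map uname -m output to the GOARCH-style names the
# build scripts use, then gate architecture-specific work on the result.
map_arch() {
    case "$1" in
        x86_64)  echo amd64 ;;
        aarch64) echo arm64 ;;
        ppc64le) echo ppc64le ;;
        s390x)   echo s390x ;;
        *)       echo "Unsupported architecture: $1" >&2; return 1 ;;
    esac
}

ARCH="$(map_arch "$(uname -m)")" || exit 1
if [ "$ARCH" = "s390x" ]; then
    # s390x-only steps (e.g. the web-tree-sitter patch) would run here
    :
fi
```

The same pattern gates the OpenBLAS and wheel-install steps in the Dockerfile, just inlined as `[[ $(uname -m) == "s390x" ]]` tests.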

How Has This Been Tested?

Self checklist (all need to be checked):

  • Ensure that you have run make test (gmake on macOS) before asking for review
  • Changes to everything except Dockerfile.konflux files should be done in odh/notebooks and automatically synced to rhds/notebooks. For Konflux-specific changes, modify Dockerfile.konflux files directly in rhds/notebooks as these require special attention in the downstream repository and flow to the upcoming RHOAI release.

Merge criteria:

  • The commits are squashed in a cohesive manner and have meaningful messages.
  • Testing instructions have been added in the PR body (for PRs involving changes that are not immediately obvious).
  • The developer has manually tested the changes and verified that they work.

Summary by CodeRabbit

  • New Features
    • Added s390x architecture support across the development environment and code-server build flow.
  • Chores
    • Introduced s390x-specific setup (toolchains, OpenBLAS prep, PyArrow build steps).
    • Applied s390x patching and build process for code-server (including dependency adjustments).
    • Excluded incompatible packages on s390x via platform markers (ml-dtypes, onnx, py-spy, skl2onnx).
    • Added tar installation and corrected cache cleanup path in the image build.
  • Documentation
    • Updated comments and notes to reflect support for both ppc64le and s390x.

dchourasia and others added 30 commits June 24, 2025 00:16
…me-manifests

RHOAIENG-28184: apply runtime image via the params-latest.env using kustomize
…ekton

Remove upstream tekton pipelines that incorporated on downstream by nightly sync
add Python 3.12 ODH Workbench image references to `params-latest.env`
…8512

Update the params-latest file with the new registries
* Remove runtime-rocm-tensorflow py312 from the params-latest and commit-latest files

* Fix image references on main branch as the old sha references were the placeholders
…ng-rhods

Fix ordering on downstream beta py312 images
@openshift-ci openshift-ci bot requested review from daniellutz and dibryant October 14, 2025 09:41

openshift-ci bot commented Oct 14, 2025

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please assign jstourac for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@github-actions github-actions bot added the review-requested GitHub Bot creates notification on #pr-review-ai-ide-team slack channel label Oct 14, 2025

coderabbitai bot commented Oct 14, 2025

Important

Review skipped

More than 25% of the files skipped due to max files limit. The review is being skipped to prevent a low-quality review.

43 files out of 150 files are above the max files limit of 100. Please upgrade to Pro plan to get higher limits.

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.

Walkthrough

Adds s390x support across the UBI9 Python 3.12 codeserver build: Dockerfile updates, devel environment setup for s390x, s390x-specific code-server patch/build path, and dependency exclusions for s390x in pylock.toml and pyproject.toml. Also installs tar, adjusts yum/dnf cache cleanup, and refines OpenBLAS handling by architecture.

Changes

  • Dockerfile updates (multi-arch tweaks) — codeserver/ubi9-python-3.12/Dockerfile.cpu
    Notes and conditionals expanded to include s390x alongside ppc64le; installs tar; switches cache cleanup to /var/cache/dnf; multi-arch gating for wheel installs and comments.
  • Dev environment setup (s390x path) — codeserver/ubi9-python-3.12/devel_env_setup.sh
    Adds s390x-specific toolchain/deps (Rust, cargo, build tools), OpenBLAS checks, CMAKE/Python prefix config, and PyArrow build; restructures OpenBLAS dir creation to exclude ppc64le and s390x.
  • code-server RPM build (s390x patch flow) — codeserver/ubi9-python-3.12/get_code_server_rpm.sh
    Adds s390x architecture handling with a patch that updates web-tree-sitter deps in package files; runs npm clean/install/build and release with KEEP_MODULES for s390x; retains the existing flow for other arches.
  • Dependency constraints (exclude on s390x) — codeserver/ubi9-python-3.12/pylock.toml, codeserver/ubi9-python-3.12/pyproject.toml
    Adds marker = "platform_machine != 's390x'" to ml-dtypes, onnx, py-spy, skl2onnx in pylock; adds platform markers for skl2onnx and py-spy in pyproject to skip installs on s390x.
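
For illustration, the pyproject entries described above take roughly this shape (a sketch only; the real file may differ in section layout and surrounding entries):

```toml
[project]
dependencies = [
    # Skipped on s390x via PEP 508 environment markers; the resolver
    # evaluates platform_machine at install time and omits these there.
    "py-spy; platform_machine != 's390x'",
    "skl2onnx; platform_machine != 's390x'",
]
```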

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)

  • Docstring Coverage ⚠️ Warning — Docstring coverage is 0.00%, below the required threshold of 80.00%. Run @coderabbitai generate docstrings to improve docstring coverage.

✅ Passed checks (2 passed)

  • Title Check ✅ Passed — The title clearly and concisely summarizes the main change by noting the addition of s390x support to the Codeserver notebook, accurately reflecting the core update without unnecessary detail.
  • Description Check ✅ Passed — The description follows the repository template by providing a detailed ## Description section outlining specific changes, a ## How Has This Been Tested? section with concrete test steps and links, and the required self-checklist and merge criteria, ensuring all mandatory template sections are present and populated with substantive content.



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (3)
codeserver/ubi9-python-3.12/devel_env_setup.sh (1)

46-53: Honor CMAKE_ARGS in build_pyarrow.

You export CMAKE_ARGS later, but the cmake invocation ignores it. Include it to pass Python and prefix hints.

-    cmake -DCMAKE_BUILD_TYPE=release \
+    cmake ${CMAKE_ARGS} -DCMAKE_BUILD_TYPE=release \
         -DCMAKE_INSTALL_PREFIX=/usr/local \
         -DARROW_PYTHON=ON \
         -DARROW_BUILD_TESTS=OFF \
         -DARROW_JEMALLOC=ON \
         -DARROW_BUILD_STATIC="OFF" \
         -DARROW_PARQUET=ON \
         ..
codeserver/ubi9-python-3.12/Dockerfile.cpu (2)

95-101: s390x likely needs OpenBLAS runtime install as well.

If wheels (numpy/scipy) link to OpenBLAS built in whl-cache, s390x will miss shared libs at runtime without make install here.

-# (ARCH-ppc64le): since wheels are compiled from source, we need shared libs available at runtime
+# (ARCH-ppc64le,s390x): since wheels are compiled from source, we need shared libs available at runtime
 RUN --mount=type=cache,from=whl-cache,source=/root/OpenBLAS,target=/OpenBlas,rw \
     bash -c ' \
-        if [[ $(uname -m) == "ppc64le" ]]; then \
+        if [[ $(uname -m) == "ppc64le" ]] || [[ $(uname -m) == "s390x" ]]; then \
             PREFIX=/usr/ make install -C /OpenBlas; \
         fi '

151-157: Ensure rpm2cpio is installed by adding rpm-build

Add rpm-build to the install line so rpm2cpio is always available:

-    dnf install -y cpio && dnf -y clean all && \
+    dnf install -y cpio rpm-build && dnf -y clean all && \
     cd / && rpm2cpio "/code-server-rpm/code-server-${CODESERVER_VERSION/v/}-${TARGETARCH}.rpm" | cpio -idmv
🧹 Nitpick comments (9)
codeserver/ubi9-python-3.12/get_code_server_rpm.sh (6)

4-8: Update banner to reflect multi-arch build.

Script now builds for amd64, arm64, ppc64le, and s390x; comment still says ppc64le-only. Please update to avoid confusion.


18-21: Tighten architecture validation and error message.

Current else covers unknown arches, but message omits uname. Consider explicit validation and listing supported arches for clarity.

Apply this minimal tweak:

-ARCH="${UNAME_TO_GOARCH[$(uname -m)]}"
+UNAME_M="$(uname -m)"
+ARCH="${UNAME_TO_GOARCH[$UNAME_M]}"

-if [[ "$ARCH" == "amd64" || "$ARCH" == "arm64" ||"$ARCH" == "ppc64le" || "$ARCH" == "s390x" ]]; then
+if [[ "$ARCH" == "amd64" || "$ARCH" == "arm64" || "$ARCH" == "ppc64le" || "$ARCH" == "s390x" ]]; then
   …
 else
-  # we shall not download rpm for other architectures
-  echo "Unsupported architecture: $ARCH" >&2
+  echo "Unsupported architecture: uname -m=${UNAME_M}. Supported: x86_64, aarch64, ppc64le, s390x" >&2
   exit 1
 fi

Based on learnings


53-61: Pin NFPM/NVM versions or allow override for reproducibility.

Fetching “latest” from GitHub makes builds non-deterministic and rate-limit prone. Either pin versions or allow env overrides for CI.

Example:

-NFPM_VERSION=$(curl -s "https://api.github.com/repos/goreleaser/nfpm/releases/latest" | jq -r '.tag_name') \
+NFPM_VERSION=${NFPM_VERSION:-$(curl -s "https://api.github.com/repos/goreleaser/nfpm/releases/latest" | jq -r '.tag_name')} \
-NVM_VERSION=$(curl -s "https://api.github.com/repos/nvm-sh/nvm/releases/latest" | jq -r '.tag_name') \
+NVM_VERSION=${NVM_VERSION:-$(curl -s "https://api.github.com/repos/nvm-sh/nvm/releases/latest" | jq -r '.tag_name')} \

65-106: Externalize the s390x patch with provenance and validate apply.

Generating a large patch via heredoc is brittle. Prefer committing it under patches/ with a series entry and a comment/link to the VSCodium/IBM source. This improves reviewability and reuse. Also consider failing early with a clear message if patch no longer applies due to upstream changes.

Please add a reference URL for the patch origin (commit/tag), and confirm the patch still applies to CODESERVER_VERSION=v4.104.0 on rebase.


108-110: Guard missing patches/series.

Loop assumes patches/series exists. Add a check to skip when absent to avoid failing unrelated builds.
Apply:

-source ${NVM_DIR}/nvm.sh
-while IFS= read -r src_patch; do echo "patches/$src_patch"; patch -p1 < "patches/$src_patch"; done < patches/series
+source ${NVM_DIR}/nvm.sh
+if [[ -f patches/series ]]; then
+  while IFS= read -r src_patch; do
+    echo "patches/$src_patch"
+    patch -p1 < "patches/$src_patch"
+  done < patches/series
+fi

Based on learnings


111-116: Prefer npm ci for lockfile fidelity.

Use npm ci to ensure dependency tree matches patched lockfile and improve reproducibility.

-npm cache clean --force
-npm install
+npm ci --prefer-offline
codeserver/ubi9-python-3.12/Dockerfile.cpu (3)

58-64: Prefer POSIX-compliant sourcing (or set SHELL to bash).

Using source assumes bash. For portability, use the POSIX dot command.

-    source ./devel_env_setup.sh && \
+    . ./devel_env_setup.sh && \

92-94: OS packages: LGTM; consider consistent dnf flags to shrink image.

Install looks good. To reduce size and keep parity with the upgrade step, add --setopt=tsflags=nodocs --setopt=install_weak_deps=0.

-RUN dnf install -y tar perl mesa-libGL skopeo && dnf clean all && rm -rf /var/cache/dnf
+RUN dnf install -y --setopt=tsflags=nodocs --setopt=install_weak_deps=0 tar perl mesa-libGL skopeo \
+    && dnf clean all && rm -rf /var/cache/dnf

144-146: Replace the /dev/null sentinel hack with real marker files.

Current pattern works but is opaque. Use descriptive marker files, aligned with prior feedback and issue tracking.

-# wait for rpm-base stage (rpm builds for ppc64le and s390x)
-COPY --from=rpm-base /tmp/control /dev/null
+# wait for rpm-base stage (rpm builds for ppc64le and s390x)
+COPY --from=rpm-base /tmp/control /tmp/.rpm-base.ready

Also mirror the change for whl-cache later:

-# wait for whl-cache stage (builds uv cache)
-COPY --from=whl-cache /tmp/control /dev/null
+# wait for whl-cache stage (builds uv cache)
+COPY --from=whl-cache /tmp/control /tmp/.whl-cache.ready

Based on learnings

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 7d644ac and 1470a8a.

📒 Files selected for processing (5)
  • codeserver/ubi9-python-3.12/Dockerfile.cpu (4 hunks)
  • codeserver/ubi9-python-3.12/devel_env_setup.sh (2 hunks)
  • codeserver/ubi9-python-3.12/get_code_server_rpm.sh (2 hunks)
  • codeserver/ubi9-python-3.12/pylock.toml (4 hunks)
  • codeserver/ubi9-python-3.12/pyproject.toml (2 hunks)
👮 Files not reviewed due to content moderation or server errors (1)
  • codeserver/ubi9-python-3.12/pylock.toml
🧰 Additional context used
🧠 Learnings (11)
📓 Common learnings
Learnt from: jiridanek
PR: opendatahub-io/notebooks#0
File: :0-0
Timestamp: 2025-08-05T17:24:08.616Z
Learning: jiridanek requested PR review for #1521 covering s390x architecture support improvements, demonstrating continued focus on systematic multi-architecture compatibility enhancements in the opendatahub-io/notebooks repository through clean implementation with centralized configuration, proper CI integration, and architecture-aware testing patterns.
📚 Learning: 2025-09-10T21:01:46.464Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#0
File: :0-0
Timestamp: 2025-09-10T21:01:46.464Z
Learning: jiridanek requested GitHub issue creation for banner comment documentation update in codeserver/ubi9-python-3.12/get_code_server_rpm.sh during PR #2356 review. Issue #2395 was created to update outdated banner comment that only mentioned ppc64le support when script now builds RPMs for amd64, arm64, and ppc64le architectures, with specific diff showing the required changes from lines 4-8, continuing the established pattern of systematic documentation improvements through detailed issue tracking.

Applied to files:

  • codeserver/ubi9-python-3.12/get_code_server_rpm.sh
  • codeserver/ubi9-python-3.12/Dockerfile.cpu
📚 Learning: 2025-08-05T17:24:08.616Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#0
File: :0-0
Timestamp: 2025-08-05T17:24:08.616Z
Learning: jiridanek requested PR review for #1521 covering s390x architecture support improvements, demonstrating continued focus on systematic multi-architecture compatibility enhancements in the opendatahub-io/notebooks repository through clean implementation with centralized configuration, proper CI integration, and architecture-aware testing patterns.

Applied to files:

  • codeserver/ubi9-python-3.12/get_code_server_rpm.sh
📚 Learning: 2025-09-05T12:34:48.372Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2227
File: codeserver/ubi9-python-3.12/get_code_server_rpm.sh:65-66
Timestamp: 2025-09-05T12:34:48.372Z
Learning: jiridanek requested GitHub issue creation for patches mechanism improvement in codeserver/ubi9-python-3.12/get_code_server_rpm.sh during PR #2227 review. GitHub issue #2318 was created addressing fragile patches application that assumes patches/series always exists, proposing conditional patch handling with proper validation, error handling, and documentation, assigned to jiridanek, continuing the established pattern of systematic code quality improvements through detailed issue tracking.

Applied to files:

  • codeserver/ubi9-python-3.12/get_code_server_rpm.sh
📚 Learning: 2025-09-05T12:35:44.985Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2227
File: codeserver/ubi9-python-3.12/get_code_server_rpm.sh:18-19
Timestamp: 2025-09-05T12:35:44.985Z
Learning: jiridanek requested GitHub issue creation for architecture validation guard in codeserver/ubi9-python-3.12/get_code_server_rpm.sh during PR #2227 review. Issue #2320 was successfully created addressing missing validation for unknown architectures in UNAME_TO_GOARCH mapping lookup where empty ARCH values could cause silent failures, with comprehensive problem description, detailed proposed solution with code example, specific acceptance criteria, implementation considerations, and proper context linking, continuing the established pattern of systematic code quality improvements through detailed issue tracking.

Applied to files:

  • codeserver/ubi9-python-3.12/get_code_server_rpm.sh
📚 Learning: 2025-09-05T12:35:44.985Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2227
File: codeserver/ubi9-python-3.12/get_code_server_rpm.sh:18-19
Timestamp: 2025-09-05T12:35:44.985Z
Learning: jiridanek requested GitHub issue creation for architecture validation guard in codeserver/ubi9-python-3.12/get_code_server_rpm.sh during PR #2227 review. The issue addresses missing validation for unknown architectures in UNAME_TO_GOARCH mapping lookup where empty ARCH values could cause silent failures, proposing defensive programming with clear error messages, supported architecture listing, and proper exit codes, continuing the established pattern of systematic code quality improvements through detailed issue tracking.

Applied to files:

  • codeserver/ubi9-python-3.12/get_code_server_rpm.sh
📚 Learning: 2025-09-05T10:05:35.575Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#1513
File: runtimes/datascience/ubi9-python-3.12/pylock.toml:180-180
Timestamp: 2025-09-05T10:05:35.575Z
Learning: In Python lock files for the datascience runtime, both bcrypt and paramiko packages are excluded from s390x platform using the marker "platform_machine != 's390x'" due to compatibility issues on IBM System z mainframe architecture.

Applied to files:

  • codeserver/ubi9-python-3.12/pylock.toml
📚 Learning: 2025-09-05T12:25:09.719Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2227
File: codeserver/ubi9-python-3.12/Dockerfile.cpu:122-123
Timestamp: 2025-09-05T12:25:09.719Z
Learning: jiridanek requested GitHub issue creation for Docker multi-stage synchronization improvement in codeserver/ubi9-python-3.12/Dockerfile.cpu during PR #2227 review. The issue addresses sentinel file pattern using /tmp/control copied to /dev/null for stage coordination between rpm-base, whl-cache, and codeserver stages, proposing semantic improvements with descriptive file names, inline documentation, and elimination of /dev/null hack while maintaining multi-architecture build functionality for ppc64le support.

Applied to files:

  • codeserver/ubi9-python-3.12/Dockerfile.cpu
📚 Learning: 2025-09-12T08:27:00.439Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2432
File: jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu:232-249
Timestamp: 2025-09-12T08:27:00.439Z
Learning: jiridanek requested GitHub issue creation for Rust toolchain availability during s390x builds in jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu during PR #2432 review. The issue addresses PATH availability problems where Rust/cargo installed in cpu-base stage at /opt/.cargo/bin may not be accessible during uv pip install step in jupyter-datascience stage, proposing three solution approaches: immediate environment variable fix, builder stage pattern following codeserver approach, and ENV declaration fix, with comprehensive acceptance criteria covering build reliability, multi-architecture compatibility, and alignment with established patterns, continuing the systematic infrastructure improvement tracking methodology.

Applied to files:

  • codeserver/ubi9-python-3.12/Dockerfile.cpu
📚 Learning: 2025-09-12T08:27:00.439Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2432
File: jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu:232-249
Timestamp: 2025-09-12T08:27:00.439Z
Learning: jiridanek requested GitHub issue creation for Rust toolchain availability during s390x builds in jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu during PR #2432 review. Issue #2435 was created addressing PATH availability problems where Rust/cargo installed in cpu-base stage at /opt/.cargo/bin may not be accessible during uv pip install step in jupyter-datascience stage, proposing three solution approaches: immediate environment variable fix, builder stage pattern following codeserver approach, and ENV declaration fix, with comprehensive acceptance criteria covering build reliability, multi-architecture compatibility, and alignment with established patterns, continuing the systematic infrastructure improvement tracking methodology.

Applied to files:

  • codeserver/ubi9-python-3.12/Dockerfile.cpu
📚 Learning: 2025-09-05T12:25:58.344Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2227
File: codeserver/ubi9-python-3.12/Dockerfile.cpu:126-128
Timestamp: 2025-09-05T12:25:58.344Z
Learning: jiridanek expressed concern about code-server installation complexity in codeserver/ubi9-python-3.12/Dockerfile.cpu during PR #2227 review, describing the multi-stage cache mount approach as a "hack" that worries them, leading to GitHub issue creation for systematic architectural improvement addressing maintainability concerns.

Applied to files:

  • codeserver/ubi9-python-3.12/Dockerfile.cpu
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
  • GitHub Check: build (codeserver-ubi9-python-3.12, 3.12, linux/amd64, false) / build
  • GitHub Check: build (codeserver-ubi9-python-3.12, 3.12, linux/arm64, false) / build
  • GitHub Check: Red Hat Konflux / odh-workbench-codeserver-datascience-cpu-py312-ubi9-on-pull-request
🔇 Additional comments (3)
codeserver/ubi9-python-3.12/pyproject.toml (1)

20-20: Env markers look good; confirm impact.

Excluding skl2onnx and py-spy on s390x via platform_machine markers is correct and consistent. Please confirm no notebook features rely on these on s390x, or document the degradation.

Also applies to: 29-29

codeserver/ubi9-python-3.12/get_code_server_rpm.sh (1)

120-122: Verify rpm filename arch suffix.

Confirm the generated RPM path matches ${ARCH} mapping (e.g., amd64 vs x86_64) for all arches; some tooling emits x86_64/aarch64 in RPM filenames. Adjust the mv pattern if needed.
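
One way to sanity-check this is to compute the expected filename and test for its presence after the build (a hedged sketch: the release-packages path, version, and helper name are assumptions based on the rpm2cpio invocation quoted elsewhere in this review):

```shell
# Hedged sketch: reconstruct the RPM filename the Dockerfile later
# consumes (code-server-${CODESERVER_VERSION/v/}-${TARGETARCH}.rpm)
# and verify the build actually produced it.
expected_rpm_name() {
    local version="$1" arch="$2"
    # ${version#v} strips a leading "v" from the tag, e.g. v4.104.0 -> 4.104.0
    echo "code-server-${version#v}-${arch}.rpm"
}

RPM_NAME="$(expected_rpm_name "v4.104.0" "amd64")"   # illustrative values
if [ ! -f "release-packages/${RPM_NAME}" ]; then
    echo "expected release-packages/${RPM_NAME} not found" >&2
fi
```

If the packaging tool emits `x86_64`/`aarch64` suffixes instead of the GOARCH names, the mv pattern (or this expected name) would need a reverse mapping.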

codeserver/ubi9-python-3.12/Dockerfile.cpu (1)

142-143: Extra OS packages: LGTM.

jq, git-lfs, and libsndfile install and cleanup are fine.

Comment on lines +71 to +73
dnf install -y cmake gcc gcc-toolset-13 fribidi-devel lcms2-devel && \
openjpeg2-devel libraqm-devel libimagequant-devel tcl-devel tk-devel && \
dnf clean all && rm -rf /var/cache/dnf;

⚠️ Potential issue | 🔴 Critical

Broken dnf install chain on s390x.

The second line starts with package names, not a command. This will fail with “command not found”.

Fix by consolidating packages and cleaning cache:

-    dnf install -y cmake gcc gcc-toolset-13 fribidi-devel lcms2-devel && \
-    openjpeg2-devel libraqm-devel libimagequant-devel tcl-devel tk-devel && \
-    dnf clean all && rm -rf /var/cache/dnf;
+    dnf install -y \
+      cmake gcc gcc-toolset-13 fribidi-devel lcms2-devel \
+      openjpeg2-devel libraqm-devel libimagequant-devel tcl-devel tk-devel
+    dnf clean all && rm -rf /var/cache/dnf
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
dnf install -y cmake gcc gcc-toolset-13 fribidi-devel lcms2-devel && \
openjpeg2-devel libraqm-devel libimagequant-devel tcl-devel tk-devel && \
dnf clean all && rm -rf /var/cache/dnf;
dnf install -y \
cmake gcc gcc-toolset-13 fribidi-devel lcms2-devel \
openjpeg2-devel libraqm-devel libimagequant-devel tcl-devel tk-devel
dnf clean all && rm -rf /var/cache/dnf
🤖 Prompt for AI Agents
In codeserver/ubi9-python-3.12/devel_env_setup.sh around lines 71 to 73, the dnf
install chain is broken because the second line begins with package names (not a
command) causing a “command not found” on s390x; fix by consolidating all
packages into a single dnf install -y <packages> command (no standalone
package-only line), keep the trailing && dnf clean all && rm -rf /var/cache/dnf
to clean up, and ensure proper line continuation backslashes or a single-line
command so the shell treats the whole install as one command.

Comment on lines +134 to 137
if [[ $(uname -m) != "ppc64le" ]] && [[ $(uname -m) != "ppc64le" ]]; then
# only for mounting on other than s390x and ppc64le
mkdir -p /root/OpenBLAS/
fi

⚠️ Potential issue | 🟠 Major

Typo in arch condition; s390x not excluded.

The condition checks ppc64le twice. s390x hosts will incorrectly create /root/OpenBLAS/.

-if [[ $(uname -m) != "ppc64le" ]] && [[ $(uname -m) != "ppc64le" ]]; then
+if [[ $(uname -m) != "ppc64le" ]] && [[ $(uname -m) != "s390x" ]]; then
    # only for mounting on other than s390x and ppc64le
    mkdir -p /root/OpenBLAS/
 fi
🤖 Prompt for AI Agents
In codeserver/ubi9-python-3.12/devel_env_setup.sh around lines 134 to 137, the
architecture check mistakenly repeats "ppc64le" instead of excluding "s390x",
causing /root/OpenBLAS/ to be created on s390x hosts; update the condition to
check both architectures correctly, e.g. use: if [[ $(uname -m) != "ppc64le" ]]
&& [[ $(uname -m) != "s390x" ]]; then mkdir -p /root/OpenBLAS/ fi, ensuring the
second comparison is "s390x" (keep the existing quoting/bracketing style).

Comment on lines +252 to 258
# (ARCH-ppc64le and s390x): install packages (eg. pyarrow) that need to be built from source repository on ppc64le and s390x
RUN --mount=type=cache,target=/root/.cache/uv \
--mount=type=cache,from=whl-cache,source=/wheelsdir/,target=/wheelsdir/,rw \
bash -c ' \
if [[ $(uname -m) == "ppc64le" ]]; then \
if [[ $(uname -m) == "ppc64le" ]] || [[ $(uname -m) == "s390x" ]]; then \
uv pip install /wheelsdir/*.whl; \
fi '

🛠️ Refactor suggestion | 🟠 Major

Guard wheel install when directory is empty.

Bare glob can fail if /wheelsdir has no wheels for a given arch. Add a presence check.

-        if [[ $(uname -m) == "ppc64le" ]] || [[ $(uname -m) == "s390x" ]]; then \
-            uv pip install /wheelsdir/*.whl; \
-        fi '
+        if [[ $(uname -m) == "ppc64le" ]] || [[ $(uname -m) == "s390x" ]]; then \
+            if compgen -G "/wheelsdir/*.whl" > /dev/null; then \
+                uv pip install /wheelsdir/*.whl; \
+            else \
+                echo "No prebuilt wheels found in /wheelsdir; skipping."; \
+            fi; \
+        fi '
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
# (ARCH-ppc64le and s390x): install packages (eg. pyarrow) that need to be built from source repository on ppc64le and s390x
RUN --mount=type=cache,target=/root/.cache/uv \
--mount=type=cache,from=whl-cache,source=/wheelsdir/,target=/wheelsdir/,rw \
bash -c ' \
if [[ $(uname -m) == "ppc64le" ]]; then \
if [[ $(uname -m) == "ppc64le" ]] || [[ $(uname -m) == "s390x" ]]; then \
uv pip install /wheelsdir/*.whl; \
fi '
RUN --mount=type=cache,target=/root/.cache/uv \
--mount=type=cache,from=whl-cache,source=/wheelsdir/,target=/wheelsdir/,rw \
bash -c ' \
if [[ $(uname -m) == "ppc64le" ]] || [[ $(uname -m) == "s390x" ]]; then \
if compgen -G "/wheelsdir/*.whl" > /dev/null; then \
uv pip install /wheelsdir/*.whl; \
else \
echo "No prebuilt wheels found in /wheelsdir; skipping."; \
fi; \
fi '
🤖 Prompt for AI Agents
In codeserver/ubi9-python-3.12/Dockerfile.cpu around lines 252 to 258, the RUN
step blindly globs /wheelsdir/*.whl which will fail if the directory is empty;
modify the shell logic to first check for the presence of at least one .whl file
(e.g., using a safe test like checking if ls or a shell glob finds any files)
and only call `uv pip install /wheelsdir/*.whl` when that check succeeds,
otherwise skip installation for that architecture.

@openshift-ci openshift-ci bot added size/xxl and removed size/l labels Oct 14, 2025

openshift-ci bot commented Oct 14, 2025

@Meghagaur: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name Commit Details Required Rerun command
ci/prow/notebook-jupyter-ubi9-python-3-12-pr-image-mirror 3a48dd4 link true /test notebook-jupyter-ubi9-python-3-12-pr-image-mirror
ci/prow/rocm-runtime-pt-ubi9-python-3-12-pr-image-mirror 3a48dd4 link true /test rocm-runtime-pt-ubi9-python-3-12-pr-image-mirror
ci/prow/runtime-cuda-pt-ubi9-python-3-12-pr-image-mirror 3a48dd4 link true /test runtime-cuda-pt-ubi9-python-3-12-pr-image-mirror
ci/prow/notebook-rocm-jupyter-ubi9-python-3-12-pr-image-mirror 3a48dd4 link true /test notebook-rocm-jupyter-ubi9-python-3-12-pr-image-mirror
ci/prow/runtime-cuda-tf-ubi9-python-3-12-pr-image-mirror 3a48dd4 link true /test runtime-cuda-tf-ubi9-python-3-12-pr-image-mirror
ci/prow/notebook-cuda-jupyter-ubi9-python-3-12-pr-image-mirror 3a48dd4 link true /test notebook-cuda-jupyter-ubi9-python-3-12-pr-image-mirror
ci/prow/notebook-cuda-jupyter-tf-ubi9-python-3-12-pr-image-mirror 3a48dd4 link true /test notebook-cuda-jupyter-tf-ubi9-python-3-12-pr-image-mirror
ci/prow/notebook-jupyter-tai-ubi9-python-3-12-pr-image-mirror 3a48dd4 link true /test notebook-jupyter-tai-ubi9-python-3-12-pr-image-mirror
ci/prow/runtime-ubi9-python-3-12-pr-image-mirror 3a48dd4 link true /test runtime-ubi9-python-3-12-pr-image-mirror
ci/prow/notebook-rocm-jupyter-pt-ubi9-python-3-12-pr-image-mirror 3a48dd4 link true /test notebook-rocm-jupyter-pt-ubi9-python-3-12-pr-image-mirror
ci/prow/notebook-cuda-jupyter-pt-ubi9-python-3-12-pr-image-mirror 3a48dd4 link true /test notebook-cuda-jupyter-pt-ubi9-python-3-12-pr-image-mirror

Full PR test history. Your PR dashboard.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

@Meghagaur Meghagaur closed this Oct 14, 2025
@Meghagaur Meghagaur deleted the megha/s390x-codeserver branch October 14, 2025 14:31