RHAIENG-1238: fix(codeserver/ppc64le): remove EPEL from final image of Codeserver #2561
Conversation
Signed-off-by: Md. Shafi Hussain <[email protected]>
Walkthrough

Removed EPEL and image-processing package installs for ppc64le in the cpu-base stage of the Dockerfile. Extended devel_env_setup.sh with a new build_pillow() routine, made the pyarrow version explicit, installed patchelf on ppc64le, parsed PILLOW_VERSION from pylock.toml, and built and staged a Pillow wheel before installing wheels.
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes

Pre-merge checks and finishing touches: ❌ Failed checks (2 warnings), ✅ Passed checks (1 passed)
/build-konflux
Actionable comments posted: 1
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- codeserver/ubi9-python-3.12/Dockerfile.cpu (0 hunks)
- codeserver/ubi9-python-3.12/devel_env_setup.sh (3 hunks)
💤 Files with no reviewable changes (1)
- codeserver/ubi9-python-3.12/Dockerfile.cpu
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: build (codeserver-ubi9-python-3.12, 3.12, linux/arm64, false) / build
- GitHub Check: build (codeserver-ubi9-python-3.12, 3.12, linux/amd64, false) / build
- GitHub Check: Red Hat Konflux / odh-workbench-codeserver-datascience-cpu-py312-ubi9-on-pull-request
/build-konflux
looks good so far, let us know when you're satisfied with the konflux build results and this is ready to merge, thanks
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: jiridanek

The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
Co-authored-by: Jiri Daněk <[email protected]>
New changes are detected. LGTM label has been removed.
/build-konflux
We can have it merged. Konflux builds still seem flaky. Feel free to have it merged if it's all good.
Actionable comments posted: 1
🧹 Nitpick comments (6)
codeserver/ubi9-python-3.12/devel_env_setup.sh (6)
Lines 27-29: Avoid persisting auditwheel; run it via uvx.

Installing auditwheel into the image is unnecessary; use uvx to run it transiently.

- uv pip install auditwheel
- auditwheel repair pillow*.whl
+ uvx --from auditwheel auditwheel repair pillow*.whl
Lines 97-101: Version parsing from pylock.toml is fragile (drops rc/post/local).

The regex only captures digits and dots; pre-releases (rc), post releases, or local versions will be misparsed. Use tomllib to read the TOML reliably.

Example replacement (adjust keys to your actual lock schema):

- PYARROW_VERSION=$(grep -A1 '"pyarrow"' pylock.toml | grep -Eo '\b[0-9\.]+\b')
+ PYARROW_VERSION=$(python - <<'PY'
+import tomllib, sys
+with open("pylock.toml","rb") as f:
+    d = tomllib.load(f)
+# Adapt this lookup to your lock structure:
+def find(pkg):
+    # common patterns: {'package':[{'name':'pyarrow','version':'X'}]}
+    for key,val in d.items():
+        if isinstance(val, list):
+            for it in val:
+                if isinstance(it, dict) and it.get("name")==pkg and "version" in it:
+                    return it["version"]
+    raise SystemExit(f"version for {pkg} not found")
+print(find("pyarrow"))
+PY
+)
- PILLOW_VERSION=$(grep -A1 '"pillow"' pylock.toml | grep -Eo '\b[0-9\.]+\b')
+ PILLOW_VERSION=$(python - <<'PY'
+import tomllib, sys
+with open("pylock.toml","rb") as f:
+    d = tomllib.load(f)
+def find(pkg):
+    for key,val in d.items():
+        if isinstance(val, list):
+            for it in val:
+                if isinstance(it, dict) and it.get("name")==pkg and "version" in it:
+                    return it["version"]
+    raise SystemExit(f"version for {pkg} not found")
+print(find("pillow"))
+PY
+)

If the structure differs, I can tailor the lookup once you share an example of pylock.toml.
Lines 93-95: Avoid building PKG_CONFIG_PATH by scanning the entire filesystem.

Finding every pkgconfig dir across / is slow and non-deterministic. Set known paths explicitly.

- export PKG_CONFIG_PATH=$(find / -type d -name "pkgconfig" 2>/dev/null | tr '\n' ':')
+ export PKG_CONFIG_PATH="/usr/lib64/pkgconfig:/usr/local/lib64/pkgconfig:/opt/OpenBLAS/lib/pkgconfig:/usr/lib/pkgconfig:/usr/local/lib/pkgconfig"
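A variant that keeps the explicit-path idea but tolerates layout differences between images: assemble the variable from a fixed candidate list, keeping only directories that exist. The candidate paths here are illustrative for a UBI9-style image, not a verified set:

```shell
# Assemble PKG_CONFIG_PATH deterministically from known candidates instead of
# scanning the whole filesystem; missing directories are simply skipped.
PKG_CONFIG_PATH=""
for d in /usr/lib64/pkgconfig /usr/local/lib64/pkgconfig \
         /opt/OpenBLAS/lib/pkgconfig /usr/lib/pkgconfig /usr/local/lib/pkgconfig; do
  if [ -d "$d" ]; then
    # Append with a ":" separator only when the variable is already non-empty.
    PKG_CONFIG_PATH="${PKG_CONFIG_PATH:+${PKG_CONFIG_PATH}:}${d}"
  fi
done
export PKG_CONFIG_PATH
```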
Lines 17-19: Ensure temp dirs are cleaned up on early failure.

With set -e, a failure skips the rm -rf. Add traps in both builders.

  TEMP_BUILD_DIR=$(mktemp -d)
+ trap 'rm -rf "${TEMP_BUILD_DIR}"' EXIT
  cd ${TEMP_BUILD_DIR}
@@
- rm -rf ${TEMP_BUILD_DIR}
+ rm -rf ${TEMP_BUILD_DIR}
+ trap - EXIT

Apply the same pattern in build_pyarrow().

Also applies to: 31-33, 39-41, 62-64
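The trap pattern can be exercised in isolation. This sketch (the tracking file and simulated failure are illustrative, not part of the actual script) shows the temp dir being removed even when a step fails under set -e:

```shell
# A child shell simulates a build step that fails partway through; the EXIT
# trap still removes the temp dir, which the trailing `rm -rf` would have
# missed because set -e aborts before reaching it.
export TRACK_FILE=$(mktemp)
sh -ec '
  TEMP_BUILD_DIR=$(mktemp -d)
  trap "rm -rf \"${TEMP_BUILD_DIR}\"" EXIT
  echo "${TEMP_BUILD_DIR}" > "${TRACK_FILE}"
  false                       # simulated build failure
  rm -rf "${TEMP_BUILD_DIR}"  # never reached
' || true
if [ -d "$(cat "${TRACK_FILE}")" ]; then echo "leaked"; else echo "cleaned"; fi
# -> cleaned
```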
Line 103: Install wheels strictly from local artifacts.

To prevent network resolution from pulling in extras from indexes, install using only local wheels.

- uv pip install ${WHEEL_DIR}/*.whl
+ uv pip install --no-index --find-links "${WHEEL_DIR}" ${WHEEL_DIR}/*.whl
Lines 20-21: Minor: use echo for banner lines (readability).

The ":" builtin ignores its arguments, so these banners are discarded; echo makes the intent clear in logs.

- : ================== Installing Pillow ==================
+ echo "================== Installing Pillow =================="
@@
- : ================= Fix Pillow Wheel ====================
+ echo "================= Fix Pillow Wheel ===================="

Also applies to: 25-26
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
codeserver/ubi9-python-3.12/devel_env_setup.sh
(3 hunks)
🧰 Additional context used
🧠 Learnings (4)
📚 Learning: 2025-07-16T00:17:10.313Z
Learnt from: grdryn
PR: opendatahub-io/notebooks#1396
File: jupyter/tensorflow/ubi9-python-3.12/Pipfile:13-14
Timestamp: 2025-07-16T00:17:10.313Z
Learning: When checking PyPI wheel availability for aarch64 architecture, the actual wheel filenames use "manylinux2014_aarch64" pattern, not "linux_aarch64". This caused incorrect analysis in PR #1396 where CodeRabbit falsely claimed CUDA companion packages lacked aarch64 wheels when searching with the wrong pattern. The correct approach is to search for "manylinux2014_aarch64" or use a more generic "aarch64" pattern when verifying wheel availability for ARM64 compatibility.
Applied to files:
codeserver/ubi9-python-3.12/devel_env_setup.sh
📚 Learning: 2025-08-19T11:40:33.157Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#1998
File: runtimes/pytorch/ubi9-python-3.11/Dockerfile.cuda:23-24
Timestamp: 2025-08-19T11:40:33.157Z
Learning: jiridanek corrected CodeRabbit's false assessment about uv pip install --build-constraints flag during PR #1998 review. User demonstrated that uv 0.8.12 uses the plural form --build-constraints (not --build-constraint as incorrectly suggested by CodeRabbit), verified through direct command line help output, showing the importance of accepting corrections when evidence contradicts initial analysis.
Applied to files:
codeserver/ubi9-python-3.12/devel_env_setup.sh
📚 Learning: 2025-08-06T15:19:32.488Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#968
File: runtimes/rocm-pytorch/ubi9-python-3.12/Dockerfile.rocm:79-88
Timestamp: 2025-08-06T15:19:32.488Z
Learning: jiridanek corrected CodeRabbit's false assessment about uv pip install --index-strategy=unsafe-best-match flag during PR #968 review. The flag controls index selection strategy for choosing between multiple package registries (like PyPI and supplementary registries), not hash verification behavior as incorrectly suggested by CodeRabbit.
Applied to files:
codeserver/ubi9-python-3.12/devel_env_setup.sh
📚 Learning: 2025-08-02T10:26:52.388Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#968
File: runtimes/tensorflow/ubi9-python-3.12/Dockerfile.cuda:152-158
Timestamp: 2025-08-02T10:26:52.388Z
Learning: CodeRabbit made an incorrect assessment about uv pip install --strict flag support during PR #968 review, suggesting its removal when the flag is actually supported in both uv 0.8.0 and 0.8.4. jiridanek correctly demonstrated the flag's availability through command line verification, showing the importance of thorough verification and accepting corrections when evidence contradicts initial analysis.
Applied to files:
codeserver/ubi9-python-3.12/devel_env_setup.sh
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (5)
- GitHub Check: Red Hat Konflux / odh-workbench-codeserver-datascience-cpu-py312-ubi9-on-pull-request
- GitHub Check: build (codeserver-ubi9-python-3.12, 3.12, linux/arm64, false) / build
- GitHub Check: build (codeserver-ubi9-python-3.12, 3.12, linux/amd64, false) / build
- GitHub Check: code-static-analysis
- GitHub Check: pytest-tests
🔇 Additional comments (1)
codeserver/ubi9-python-3.12/devel_env_setup.sh (1)
Lines 69-71: Good catch adding patchelf for auditwheel.

patchelf is required by `auditwheel repair`; thanks for including it.
dnf install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-9.noarch.rpm
dnf install -y cmake gcc-toolset-13 fribidi-devel lcms2-devel \
# patchelf: needed by `auditwheel repair`
dnf install -y cmake gcc-toolset-13 fribidi-devel lcms2-devel patchelf \
    libimagequant-devel libraqm-devel openjpeg2-devel tcl-devel tk-devel
🧩 Analysis chain
Ensure EPEL and build-only deps aren’t present in the final image.
PR goal is to remove EPEL from the final image. This script enables EPEL and installs -devel packages; please disable/remove EPEL and drop build-only deps after wheel installation, and clean caches.
Add a cleanup block after installing wheels:
uv pip install ${WHEEL_DIR}/*.whl
+
+ # Cleanup: ensure EPEL and build-only deps do not remain in final image
+ dnf remove -y cmake gcc-toolset-13 fribidi-devel lcms2-devel libimagequant-devel \
+ libraqm-devel openjpeg2-devel tcl-devel tk-devel patchelf || true
+ dnf remove -y epel-release || true
+ rm -f /etc/yum.repos.d/epel*.repo /etc/yum.repos.d/epel*.repo.rpmnew || true
+ # Remove rust toolchain if not needed at runtime
+ rm -rf "$HOME/.cargo" "$HOME/.rustup" || true
+ dnf clean all && rm -rf /var/cache/dnf /var/cache/yum || true
Please confirm whether this script runs in a builder-only stage; if yes, cleanup may move to the Dockerfile stage boundary instead. Based on PR objectives.
Also applies to: 103-103
🏁 Script executed:
#!/bin/bash
# List all Dockerfiles
echo "Dockerfiles in repo:"
find . -type f -iname '*dockerfile*'
# Search for multi-stage builder patterns
echo -e "\nBuilder stages:"
rg -nP 'FROM\s+\S+\s+as\s+\w+' -C3
# Locate invocation of devel_env_setup.sh
echo -e "\nScript usage:"
rg -n 'devel_env_setup\.sh' -C3
Length of output: 5396
🏁 Script executed:
#!/bin/bash
echo "codeserver/ubi9-python-3.12/Dockerfile.cpu stages:"
rg -n '^FROM' codeserver/ubi9-python-3.12/Dockerfile.cpu
echo -e "\njupyter/trustyai/ubi9-python-3.12/Dockerfile.cpu stages:"
rg -n '^FROM' jupyter/trustyai/ubi9-python-3.12/Dockerfile.cpu
Length of output: 720
Append cleanup of EPEL and build‐only deps to devel_env_setup.sh
devel_env_setup.sh is run in the final image stage for both codeserver and jupyter-trustyai, so add a cleanup step after wheel installation to remove epel-release, build-time packages, caches, and the Rust toolchain.
🤖 Prompt for AI Agents
In codeserver/ubi9-python-3.12/devel_env_setup.sh around lines 68 to 71, append
a cleanup step after wheel installation that removes the EPEL release package
and build-only packages, uninstalls the Rust toolchain, and clears package and
pip caches: remove epel-release-rpm (dnf remove -y epel-release-latest-9.noarch
or epel-release), run dnf remove/erase or dnf autoremove -y for the build deps
you installed (cmake gcc-toolset-13 fribidi-devel lcms2-devel patchelf
libimagequant-devel libraqm-devel openjpeg2-devel tcl-devel tk-devel) so only
runtime packages remain, uninstall Rust (rustup self uninstall -y or remove
rustc/cargo packages if installed), then run dnf clean all && rm -rf
/var/cache/dnf /root/.cache/pip to shrink the final image; ensure these commands
run only after wheel building is complete and do not remove runtime packages
needed later.
No, devel_env_setup.sh is only sourced from the `FROM registry.access.redhat.com/ubi9/python-312:latest AS whl-cache` builder stage, not from the final image.
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Examine the Dockerfile to see stages and where devel_env_setup.sh is used
echo "=== Examining codeserver/ubi9-python-3.12/Dockerfile.cpu ==="
cat codeserver/ubi9-python-3.12/Dockerfile.cpu
echo -e "\n\n=== Checking for devel_env_setup.sh usage ==="
rg -n 'devel_env_setup\.sh' codeserver/ubi9-python-3.12/Dockerfile.cpu -A5 -B5
echo -e "\n\n=== Checking for EPEL in final stages ==="
rg -nP 'epel.*rpm|epel-release' codeserver/ubi9-python-3.12/Dockerfile.cpu -C5
Length of output: 14541
@jiridanek, you're absolutely correct, and I apologize for the confusion.

Looking at the Dockerfile structure:

- whl-cache builder stage (line 42): FROM registry.access.redhat.com/ubi9/python-312:latest AS whl-cache
  - This is where devel_env_setup.sh is sourced (line 61)
  - EPEL and build dependencies are installed here
- cpu-base stage (line 73): FROM ${BASE_IMAGE} AS cpu-base
  - No EPEL installation here
- codeserver final stage (line 119): FROM cpu-base AS codeserver
  - No EPEL installation here either

The builder stage artifacts (wheels and OpenBLAS) are mounted into the final stages using cache mounts:

- Lines 94-99: OpenBLAS mounted from whl-cache
- Lines 135-139: Wheels directory mounted from whl-cache

Since devel_env_setup.sh runs only in the isolated whl-cache builder stage, and that stage is never included in the final image, no cleanup is needed. The PR correctly achieves its goal of removing EPEL from the final image by isolating it to the builder stage. My previous suggestion was incorrect.
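The isolation described above can be sketched as a minimal multi-stage Dockerfile. The stage names follow the comment, but the base images, paths, and mount flags here are illustrative, not the exact contents of Dockerfile.cpu:

```dockerfile
# Builder stage: EPEL and -devel packages live only here and are discarded
# with the stage; only the staged wheels are consumed later.
FROM registry.access.redhat.com/ubi9/python-312:latest AS whl-cache
RUN dnf install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-9.noarch.rpm
# ... install build deps, source devel_env_setup.sh, stage wheels in /wheels ...

# Final stage: starts from a clean base; EPEL never appears in its layers.
FROM registry.access.redhat.com/ubi9/python-312:latest AS codeserver
RUN --mount=type=cache,from=whl-cache,source=/wheels,target=/wheels \
    pip install --no-index /wheels/*.whl
```

Because the mount only exists for the duration of that RUN instruction, neither the wheels directory nor anything from the builder stage's package set ends up in the final image's layers.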
🧠 Learnings used
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2475
File: codeserver/ubi9-python-3.12/get_code_server_rpm.sh:46-49
Timestamp: 2025-09-16T13:32:46.717Z
Learning: In opendatahub-io/notebooks repository codeserver/ubi9-python-3.12/get_code_server_rpm.sh, even when automake is commented out from explicit dnf install dependencies, it may still be available in the build environment through transitive dependencies or pre-installed packages in the base image, allowing autotools-based builds like xorg-macros to succeed despite the explicit dependency removal.
@Shafi-Hussain: The following tests failed, say
Full PR test history. Your PR dashboard. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
@jiridanek all good. All arch builds passed.
https://issues.redhat.com/browse/RHAIENG-1238
This is already in rhoai-2.25
Description
Having the EPEL repo enabled for ppc64le images raises alerts with Conforma.
The libraries coming from EPEL are needed by Pillow, which is built from source on the ppc64le arch.
To avoid installing libraries from EPEL, we build Pillow and repair the wheel so that all the necessary libraries are bundled in the wheel itself.
How Has This Been Tested?
Self checklist (all need to be checked):
- make test (gmake on macOS) before asking for review
- Changes to Dockerfile.konflux files should be done in odh/notebooks and automatically synced to rhds/notebooks. For Konflux-specific changes, modify Dockerfile.konflux files directly in rhds/notebooks, as these require special attention in the downstream repository and flow to the upcoming RHOAI release.

Merge criteria:
Summary by CodeRabbit
New Features
Bug Fixes
Chores