2 changes: 0 additions & 2 deletions codeserver/ubi9-python-3.12/Dockerfile.cpu
@@ -96,8 +96,6 @@ RUN dnf install -y perl mesa-libGL skopeo && dnf clean all && rm -rf /var/cache/
RUN --mount=type=cache,from=whl-cache,source=/root/OpenBLAS,target=/OpenBlas,rw \
bash -c ' \
if [[ $(uname -m) == "ppc64le" ]]; then \
- dnf install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-9.noarch.rpm; \
- dnf install -y lcms2 libraqm libimagequant openjpeg2; \
PREFIX=/usr/ make install -C /OpenBlas; \
fi '

31 changes: 29 additions & 2 deletions codeserver/ubi9-python-3.12/devel_env_setup.sh
@@ -9,10 +9,32 @@ set -eoux pipefail
export WHEEL_DIR=${WHEEL_DIR:-"/wheelsdir"}
mkdir -p ${WHEEL_DIR}

build_pillow() {
CURDIR=$(pwd)

export PILLOW_VERSION=$1

TEMP_BUILD_DIR=$(mktemp -d)
cd ${TEMP_BUILD_DIR}

: ================== Installing Pillow ==================
git clone --recursive https://github.com/python-pillow/Pillow.git -b ${PILLOW_VERSION}
cd Pillow
uv build --wheel --out-dir /pillowwheel

: ================= Fix Pillow Wheel ====================
cd /pillowwheel
uv pip install auditwheel
auditwheel repair pillow*.whl
mv wheelhouse/pillow*.whl ${WHEEL_DIR}

cd ${CURDIR}
rm -rf ${TEMP_BUILD_DIR}
}
build_pyarrow() {
CURDIR=$(pwd)

- export PYARROW_VERSION=${1:-$(curl -s https://api.github.com/repos/apache/arrow/releases/latest | jq -r '.tag_name' | grep -Eo "[0-9\.]+")}
+ export PYARROW_VERSION=$1

TEMP_BUILD_DIR=$(mktemp -d)
cd ${TEMP_BUILD_DIR}
@@ -44,7 +66,8 @@ build_pyarrow() {
if [[ $(uname -m) == "ppc64le" ]]; then
# install development packages
dnf install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-9.noarch.rpm
- dnf install -y cmake gcc-toolset-13 fribidi-devel lcms2-devel \
+ # patchelf: needed by `auditwheel repair`
+ dnf install -y cmake gcc-toolset-13 fribidi-devel lcms2-devel patchelf \
libimagequant-devel libraqm-devel openjpeg2-devel tcl-devel tk-devel
Comment on lines 68 to 71

@coderabbitai coderabbitai bot (Contributor), Oct 10, 2025:
⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

Ensure EPEL and build-only deps aren’t present in the final image.

PR goal is to remove EPEL from the final image. This script enables EPEL and installs -devel packages; please disable/remove EPEL and drop build-only deps after wheel installation, and clean caches.

Add a cleanup block after installing wheels:

     uv pip install ${WHEEL_DIR}/*.whl
+
+    # Cleanup: ensure EPEL and build-only deps do not remain in final image
+    dnf remove -y cmake gcc-toolset-13 fribidi-devel lcms2-devel libimagequant-devel \
+        libraqm-devel openjpeg2-devel tcl-devel tk-devel patchelf || true
+    dnf remove -y epel-release || true
+    rm -f /etc/yum.repos.d/epel*.repo /etc/yum.repos.d/epel*.repo.rpmnew || true
+    # Remove rust toolchain if not needed at runtime
+    rm -rf "$HOME/.cargo" "$HOME/.rustup" || true
+    dnf clean all && rm -rf /var/cache/dnf /var/cache/yum || true

Please confirm whether this script runs in a builder-only stage; if yes, cleanup may move to the Dockerfile stage boundary instead. Based on PR objectives.

Also applies to: 103-103


🏁 Script executed:

#!/bin/bash
# List all Dockerfiles
echo "Dockerfiles in repo:"
find . -type f -iname '*dockerfile*'
# Search for multi-stage builder patterns
echo -e "\nBuilder stages:"
rg -nP 'FROM\s+\S+\s+as\s+\w+' -C3
# Locate invocation of devel_env_setup.sh
echo -e "\nScript usage:"
rg -n 'devel_env_setup\.sh' -C3

Length of output: 5396


🏁 Script executed:

#!/bin/bash
echo "codeserver/ubi9-python-3.12/Dockerfile.cpu stages:"
rg -n '^FROM' codeserver/ubi9-python-3.12/Dockerfile.cpu

echo -e "\njupyter/trustyai/ubi9-python-3.12/Dockerfile.cpu stages:"
rg -n '^FROM' jupyter/trustyai/ubi9-python-3.12/Dockerfile.cpu

Length of output: 720


Append cleanup of EPEL and build‐only deps to devel_env_setup.sh
devel_env_setup.sh is run in the final image stage for both codeserver and jupyter-trustyai, so add a cleanup step after wheel installation to remove epel-release, build-time packages, caches, and the Rust toolchain.

🤖 Prompt for AI Agents
In codeserver/ubi9-python-3.12/devel_env_setup.sh around lines 68 to 71, append a cleanup step after wheel installation that removes the EPEL release package and the build-only packages, uninstalls the Rust toolchain, and clears package and pip caches. Remove EPEL with `dnf remove -y epel-release`; remove the build deps that were installed (cmake gcc-toolset-13 fribidi-devel lcms2-devel patchelf libimagequant-devel libraqm-devel openjpeg2-devel tcl-devel tk-devel) with `dnf remove -y` or `dnf autoremove -y` so only runtime packages remain; uninstall Rust (`rustup self uninstall -y`, or remove the rustc/cargo packages if they were installed); then run `dnf clean all && rm -rf /var/cache/dnf /root/.cache/pip` to shrink the final image. Ensure these commands run only after wheel building is complete and do not remove runtime packages needed later.

@jiridanek (Member) replied:
no, devel_env_setup.sh is only sourced from FROM registry.access.redhat.com/ubi9/python-312:latest AS whl-cache builder stage, not from the final image

@coderabbitai coderabbitai bot (Contributor) replied:
🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Examine the Dockerfile to see stages and where devel_env_setup.sh is used
echo "=== Examining codeserver/ubi9-python-3.12/Dockerfile.cpu ==="
cat codeserver/ubi9-python-3.12/Dockerfile.cpu

echo -e "\n\n=== Checking for devel_env_setup.sh usage ==="
rg -n 'devel_env_setup\.sh' codeserver/ubi9-python-3.12/Dockerfile.cpu -A5 -B5

echo -e "\n\n=== Checking for EPEL in final stages ==="
rg -nP 'epel.*rpm|epel-release' codeserver/ubi9-python-3.12/Dockerfile.cpu -C5

Length of output: 14541


@jiridanek, you're absolutely correct, and I apologize for the confusion.

Looking at the Dockerfile structure:

  1. whl-cache builder stage (line 42): FROM registry.access.redhat.com/ubi9/python-312:latest AS whl-cache

    • This is where devel_env_setup.sh is sourced (line 61)
    • EPEL and build dependencies are installed here
  2. cpu-base stage (line 73): FROM ${BASE_IMAGE} AS cpu-base

    • No EPEL installation here
  3. codeserver final stage (line 119): FROM cpu-base AS codeserver

    • No EPEL installation here either

The builder stage artifacts (wheels and OpenBLAS) are mounted into the final stages using cache mounts:

  • Lines 94-99: OpenBLAS mounted from whl-cache
  • Lines 135-139: Wheels directory mounted from whl-cache

Since devel_env_setup.sh runs only in the isolated whl-cache builder stage, and that stage is never included in the final image, no cleanup is needed. The PR correctly achieves its goal of removing EPEL from the final image by isolating it to the builder stage. My previous suggestion was incorrect.
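For reference, a minimal sketch of that stage layout (the stage names, the /wheelsdir path, and the cache-mount pattern come from the discussion above; everything else is simplified and illustrative, not the actual Dockerfile):

ARG BASE_IMAGE

# Builder stage: EPEL and the -devel build packages live only here
FROM registry.access.redhat.com/ubi9/python-312:latest AS whl-cache
COPY devel_env_setup.sh ./
# Installs EPEL + build deps and builds the wheels into /wheelsdir
RUN bash -c 'source ./devel_env_setup.sh'

# Final stage: starts from a clean base and never inherits whl-cache layers
FROM ${BASE_IMAGE} AS cpu-base
# Wheels are mounted only for the duration of this RUN, so neither EPEL
# nor the -devel packages end up in the resulting image
RUN --mount=type=cache,from=whl-cache,source=/wheelsdir,target=/wheelsdir,rw \
    bash -c 'if [[ $(uname -m) == "ppc64le" ]]; then uv pip install /wheelsdir/*.whl; fi'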


🧠 Learnings used
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2475
File: codeserver/ubi9-python-3.12/get_code_server_rpm.sh:46-49
Timestamp: 2025-09-16T13:32:46.717Z
Learning: In opendatahub-io/notebooks repository codeserver/ubi9-python-3.12/get_code_server_rpm.sh, even when automake is commented out from explicit dnf install dependencies, it may still be available in the build environment through transitive dependencies or pre-installed packages in the base image, allowing autotools-based builds like xorg-macros to succeed despite the explicit dependency removal.


# install rust
@@ -73,6 +96,10 @@ if [[ $(uname -m) == "ppc64le" ]]; then

PYARROW_VERSION=$(grep -A1 '"pyarrow"' pylock.toml | grep -Eo '\b[0-9\.]+\b')
build_pyarrow ${PYARROW_VERSION}

PILLOW_VERSION=$(grep -A1 '"pillow"' pylock.toml | grep -Eo '\b[0-9\.]+\b')
build_pillow ${PILLOW_VERSION}

uv pip install ${WHEEL_DIR}/*.whl
else
# only for mounting on non-ppc64le