
Conversation

Contributor

@bhagyashrigai bhagyashrigai commented Sep 11, 2025

Description

This PR introduces ppc64le (IBM Power) support for the jupyter-datascience notebook images.
The changes do not impact other platforms (x86_64, arm64, etc.).
The image builds successfully locally.

codeflare-sdk is skipped on ppc64le by updating pyproject.toml, and onnx and openblas are built from source on Power.
pylock.toml was therefore regenerated using the following command:

uv pip compile pyproject.toml --output-file pylock.toml --format pylock.toml --generate-hashes --emit-index-url --python-version=3.12 --python-platform linux --no-annotate

How Has This Been Tested?

Successfully tested builds through Konflux for ppc64le, x86_64, and arm64 [https://github.com/red-hat-data-services/pull/1520]
https://konflux-ui.apps.stone-prod-p02.hjvn.p1.openshiftapps.com/ns/rhoai-tenant/applications/automation/pipelineruns/odh-workbench-jupyter-datascience-cpu-py312-on-pull-reques282fx

Screenshot: konflux_done_9sept
Merge criteria:

  • The commits are squashed in a cohesive manner and have meaningful messages.
  • Testing instructions have been added in the PR body (for PRs involving changes that are not immediately obvious).
  • The developer has manually tested the changes and verified that the changes work.

Summary by CodeRabbit

  • New Features

    • Added Power (ppc64le) build flow with prebuilt ONNX/OpenBLAS provisioning and Power-specific packaging/startup steps.
  • Bug Fixes

    • Prevented incompatible wheels from being installed on ppc64le to reduce build/runtime failures.
  • Chores

    • Updated packaging to gate many packages and one SDK on non-ppc64le architectures; added unixODBC and pkg-config environment adjustments.

Contributor

openshift-ci bot commented Sep 11, 2025

Skipping CI for Draft Pull Request.
If you want CI signal for your change, please convert it to an actual PR.
You can still manually trigger a test run with /test all

Contributor

coderabbitai bot commented Sep 11, 2025

Walkthrough

Adds PPC64LE-aware multi-stage builders and conditional runtime install logic to the Jupyter datascience Dockerfile; gates many wheel entries in pylock.toml by architecture and excludes codeflare-sdk on ppc64le via pyproject.toml.

Changes

Cohort / File(s): Summary of Changes

  • PPC64LE multi-stage build and runtime wiring (jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu): Adds common-builder, onnx-builder, and openblas-builder stages; introduces ARGs (OPENBLAS_VERSION, ONNX_VERSION); copies builder artifacts into the final image; adds a conditional ppc64le RUN block to install ONNX/OpenBLAS or skip on other arches; adds unixODBC-devel, sets ENV PKG_CONFIG_PATH, and integrates the pylock.toml-based install and Elyra setup utils.
  • Architecture-gated dependency lockfile (jupyter/datascience/ubi9-python-3.12/pylock.toml): Adds marker = "platform_machine != 'ppc64le'" to many package entries to exclude PPC64LE wheels; updates grpcio and py-spy markers to include the same architecture constraint; versions and hashes unchanged.
  • Project dependency constraint (jupyter/datascience/ubi9-python-3.12/pyproject.toml): Adds ; platform_machine != 'ppc64le' to the codeflare-sdk~=0.31.0 dependency to prevent installation on PPC64LE.
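For illustration, the gating pattern lands in the two TOML files roughly as follows (entries abbreviated; hashes and the lockfile's other fields are omitted, and the codeflare-sdk version is shown as cited in this walkthrough):

# pyproject.toml: PEP 508 marker on the dependency itself
dependencies = [
    "codeflare-sdk~=0.31.0; platform_machine != 'ppc64le'",
]

# pylock.toml: uv propagates the marker into the locked entry
[[packages]]
name = "codeflare-sdk"
version = "0.31.0"
marker = "platform_machine != 'ppc64le'"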

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Pre-merge checks (3 passed)

✅ Passed checks (3 passed)
  • Description Check: ✅ Passed. Check skipped - CodeRabbit’s high-level summary is enabled.
  • Title Check: ✅ Passed. The title succinctly identifies the primary change—updating the Dockerfile.cpu in the jupyter-datascience image to add ppc64le (Power) support—and aligns with the PR objectives and the changes in the diff. It is specific to the affected file and target architecture, avoids extraneous detail, and would be understandable to a reviewer scanning PR history. The phrasing is slightly informal but still accurately summarizes the main change.
  • Docstring Coverage: ✅ Passed. No functions found in the changes. Docstring coverage check skipped.

Tip

👮 Agentic pre-merge checks are now available in preview!

Pro plan users can now enable pre-merge checks in their settings to enforce checklists before merging PRs.

  • Built-in checks – Quickly apply ready-made checks to enforce title conventions, require pull request descriptions that follow templates, validate linked issues for compliance, and more.
  • Custom agentic checks – Define your own rules using CodeRabbit’s advanced agentic capabilities to enforce organization-specific policies and workflows. For example, you can instruct CodeRabbit’s agent to verify that API documentation is updated whenever API schema files are modified in a PR. Note: Up to 5 custom checks are currently allowed during the preview period. Pricing for this feature will be announced in a few weeks.

Please see the documentation for more information.

Example:

reviews:
  pre_merge_checks:
    custom_checks:
      - name: "Undocumented Breaking Changes"
        mode: "warning"
        instructions: |
          Pass/fail criteria: All breaking changes to public APIs, CLI flags, environment variables, configuration keys, database schemas, or HTTP/GraphQL endpoints must be documented in the "Breaking Change" section of the PR description and in CHANGELOG.md. Exclude purely internal or private changes (e.g., code not exported from package entry points or explicitly marked as internal).

Please share your feedback with us on this Discord post.


📜 Recent review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 9cfa50f and 7ca6955.

📒 Files selected for processing (1)
  • jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu (3 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (10)
  • GitHub Check: Red Hat Konflux / odh-workbench-jupyter-datascience-cpu-py312-ubi9-on-pull-request
  • GitHub Check: build (rocm-jupyter-pytorch-ubi9-python-3.12, 3.12, linux/amd64, false) / build
  • GitHub Check: build (cuda-jupyter-pytorch-llmcompressor-ubi9-python-3.12, 3.12, linux/amd64, false) / build
  • GitHub Check: build (cuda-jupyter-tensorflow-ubi9-python-3.12, 3.12, linux/amd64, false) / build
  • GitHub Check: build (jupyter-trustyai-ubi9-python-3.12, 3.12, linux/amd64, false) / build
  • GitHub Check: build (cuda-jupyter-tensorflow-ubi9-python-3.12, 3.12, linux/arm64, false) / build
  • GitHub Check: build (jupyter-datascience-ubi9-python-3.12, 3.12, linux/amd64, false) / build
  • GitHub Check: build (rocm-jupyter-tensorflow-ubi9-python-3.12, 3.12, linux/amd64, false) / build
  • GitHub Check: build (cuda-jupyter-pytorch-ubi9-python-3.12, 3.12, linux/amd64, false) / build
  • GitHub Check: code-static-analysis

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.

Contributor

openshift-ci bot commented Sep 11, 2025

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please assign daniellutz for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci openshift-ci bot added the size/l label Sep 11, 2025
@github-actions github-actions bot added the review-requested GitHub Bot creates notification on #pr-review-ai-ide-team slack channel label Sep 11, 2025
@openshift-ci openshift-ci bot added size/l and removed size/l labels Sep 11, 2025
@bhagyashrigai bhagyashrigai marked this pull request as ready for review September 11, 2025 06:20
@openshift-ci openshift-ci bot requested review from atheo89 and dibryant September 11, 2025 06:20
@openshift-ci openshift-ci bot added size/l and removed size/l labels Sep 11, 2025
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

🧹 Nitpick comments (23)
jupyter/datascience/ubi9-python-3.12/pyproject.toml (1)

19-19: Optional: Prefer allow-list of supported arches to avoid future surprises.

Safer to explicitly include known-good arches rather than exclude PPC, which would otherwise permit s390x/riscv64, etc., if ever built.

-    "codeflare-sdk~=0.30.0; platform_machine != 'ppc64le'",
+    "codeflare-sdk~=0.30.0; platform_machine == 'x86_64' or platform_machine == 'aarch64'",
jupyter/datascience/ubi9-python-3.12/pylock.toml (16)

108-110: aiohttp-cors: likely pure-python; avoid unnecessary PPC64LE gating.

Unless this is only pulled via codeflare-sdk extras, drop the marker in source (pyproject) and let it install on ppc64le.


539-543: colorful: pure-python; unnecessary PPC64LE exclusion.

Keep parity across arches unless dependency chain forces exclusion.


726-731: distlib: pure-python tooling; exclusion likely overbroad.

If not strictly tied to a blocked dependency, allow on ppc64le.


769-773: filelock: pure-python; keep on ppc64le.

File locking is cross-platform; removing may regress installers/caches.


1007-1011: fsspec: pure-python core I/O; avoid excluding on ppc64le.

This underpins smart-open and many FS backends. Prefer parity unless broken.


1190-1194: invoke: pure-python; exclusion probably unnecessary.

If only used by codeflare/ray toolchains, keep; else restore for parity.


1541-1545: markdown-it-py: pure-python; prefer enabling on ppc64le.


1688-1692: mdurl: pure-python; same concern—avoid ppc64le gating unless required.


2154-2180: opencensus/opencensus-context: pure-python; reconsider exclusion.

Telemetry should remain consistent across arches unless backend libs block it.


2167-2173: openshift-client: pure-python; excluding reduces cluster UX on ppc64le.

If functional, keep it enabled.


2175-2201: opentelemetry-* (API/SDK/exporter/semconv): pure-python; avoid blanket PPC64LE exclusion.

Only gate parts that depend on native libs.


2753-2758: pydantic v1: primarily pure-python (C speedups optional).

Unless solely transitive via gated deps, keep it enabled on ppc64le.


3246-3250: rich: pure-python; safe on ppc64le.

Recommend removing the platform gate for parity.


3609-3613: smart-open: pure-python; avoid excluding unless a backend driver fails on ppc64le.


3838-3842: virtualenv: pure-python; exclusion may hinder tooling.

If not used at runtime, fine; otherwise keep parity.


2631-2636: pyarrow excluded on ppc64le — either build it in a builder stage or document the absence

jupyter/datascience/ubi9-python-3.12/pylock.toml pins pyarrow 21.0.0 with marker "platform_machine != 'ppc64le'", removing Parquet/Arrow support on ppc64le and degrading DS UX.

  • Preferred: add a multi-stage builder to compile a ppc64le pyarrow wheel and include it in the image (follow the existing ONNX/OpenBLAS builder pattern; see the sketch after this list).
  • Alternative: explicitly document pyarrow/parquet is unavailable on ppc64le and provide clear workarounds (e.g., use x86_64 image or install wheel externally).
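As a rough, hypothetical sketch only: a pyarrow-builder stage mirroring the ONNX builder pattern. Arrow's C++ libraries must be installed before the Python wheel can compile, and the stage name, version, and output path here are assumptions, not a verified recipe.

FROM common-builder AS pyarrow-builder
ARG PYARROW_VERSION=21.0.0
RUN if [ "$(uname -m)" = "ppc64le" ]; then \
        # force a source build of the pinned version; requires Arrow C++ first
        pip wheel --no-binary pyarrow "pyarrow==${PYARROW_VERSION}" -w /root/pyarrow_wheel; \
    else \
        # empty dir so a later COPY --from=pyarrow-builder does not fail
        mkdir -p /root/pyarrow_wheel; \
    fi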
jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu (6)

75-81: Make RUN shell POSIX-safe: replace “source” with “.” or set SHELL to bash

source is a bashism; portability is better with . /opt/rh/gcc-toolset-13/enable. Alternatively set SHELL ["/bin/bash","-lc"] for the builder stages.

-    source /opt/rh/gcc-toolset-13/enable && \
+    . /opt/rh/gcc-toolset-13/enable && \

159-160: Harden PKG_CONFIG_PATH export

Append instead of overwrite to avoid clobbering any upstream value.

-ENV PKG_CONFIG_PATH=/usr/local/lib/pkgconfig/
+ENV PKG_CONFIG_PATH=/usr/local/lib/pkgconfig${PKG_CONFIG_PATH:+:$PKG_CONFIG_PATH}

91-91: Avoid version drift: declare OPENBLAS_VERSION once

Define the default once (top-level) and reference it in stages without defaults to keep versions in sync.

-FROM common-builder AS openblas-builder
-ARG OPENBLAS_VERSION=0.3.30
+FROM common-builder AS openblas-builder
+ARG OPENBLAS_VERSION
@@
-FROM jupyter-minimal AS jupyter-datascience
-ARG OPENBLAS_VERSION=0.3.30
+FROM jupyter-minimal AS jupyter-datascience
+ARG OPENBLAS_VERSION

Additionally, add near the top (after ARG BASE_IMAGE):

ARG OPENBLAS_VERSION=0.3.30

Also applies to: 134-134


71-85: Pin ONNX build inputs tighter

Consider pinning submodules via git clone --depth 1 --branch ${ONNX_VERSION} and using pip install --no-cache-dir -r requirements.txt to reduce build time and improve determinism.
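A sketch of the pinned clone, assuming upstream onnx tags releases as v${ONNX_VERSION} and that the builder stage emits its wheel under /root/onnx_wheel as the COPY into the final image implies:

FROM common-builder AS onnx-builder
ARG ONNX_VERSION=1.19.0
RUN if [ "$(uname -m)" = "ppc64le" ]; then \
        # shallow, tag-pinned clone instead of the default branch
        git clone --depth 1 --branch "v${ONNX_VERSION}" \
            --recurse-submodules --shallow-submodules \
            https://github.com/onnx/onnx.git /root/onnx && \
        cd /root/onnx && \
        pip install --no-cache-dir -r requirements.txt && \
        pip wheel --no-cache-dir . -w /root/onnx_wheel; \
    fi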


97-97: OpenBLAS build-time threads cap is aggressive

NUM_THREADS=120 bakes in a high default and can overwhelm smaller Power nodes. Consider omitting (runtime can control with OMP_NUM_THREADS) or lowering.
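For example (the TARGET and the cap value are illustrative; runtime can still lower thread use via OMP_NUM_THREADS or OPENBLAS_NUM_THREADS, but never above the compiled-in maximum):

# cap the compiled-in maximum at 64 instead of 120
make -j"$(nproc)" TARGET=POWER9 NUM_THREADS=64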


152-153: Drop unixODBC-devel and remove redundant rm -rf /var/cache/yum

pylock.toml contains pyodbc 5.2.0 with manylinux wheels for CPython 3.12 (manylinux_2_17 x86_64/aarch64), so pyodbc will install from a wheel in this image — unixODBC-devel can be removed. On UBI9 dnf clean all is sufficient; remove rm -rf /var/cache/yum.

File: jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu (lines 152-153)

-RUN dnf install -y jq unixODBC unixODBC-devel postgresql git-lfs libsndfile libxcrypt-compat && \
-    dnf clean all && rm -rf /var/cache/yum
+RUN dnf install -y jq unixODBC postgresql git-lfs libsndfile libxcrypt-compat && \
+    dnf clean all
📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between e3f47c1 and bad1322.

📒 Files selected for processing (3)
  • jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu (3 hunks)
  • jupyter/datascience/ubi9-python-3.12/pylock.toml (22 hunks)
  • jupyter/datascience/ubi9-python-3.12/pyproject.toml (1 hunks)
🧰 Additional context used
🧠 Learnings (15)
📓 Common learnings
Learnt from: jiridanek
PR: opendatahub-io/notebooks#0
File: :0-0
Timestamp: 2025-08-05T17:24:08.616Z
Learning: jiridanek requested PR review for #1521 covering s390x architecture support improvements, demonstrating continued focus on systematic multi-architecture compatibility enhancements in the opendatahub-io/notebooks repository through clean implementation with centralized configuration, proper CI integration, and architecture-aware testing patterns.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2215
File: runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu:0-0
Timestamp: 2025-09-05T12:10:50.856Z
Learning: jiridanek requested GitHub issue creation for Dockerfile environment variable refactoring during PR #2215 review. Issue addresses build-only variables (OPENBLAS_VERSION, ONNX_VERSION, GRPC_PYTHON_BUILD_SYSTEM_OPENSSL) being unnecessarily written to /etc/profile.d/ppc64le.sh in runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu, causing variable duplication across stages, unreliable sourcing in non-login build contexts, and violation of DRY principles. The issue includes comprehensive problem description covering affected lines 30-37, detailed impact analysis of build reliability and maintenance overhead, three solution options with centralized ARG/ENV approach as recommended, clear acceptance criteria for version centralization and build-only variable cleanup, and specific implementation guidance with code examples, continuing the established pattern of systematic infrastructure improvements through detailed issue tracking.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2215
File: runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu:0-0
Timestamp: 2025-09-05T11:27:31.040Z
Learning: jiridanek requested GitHub issue creation for build toolchain optimization in datascience runtime during PR #2215 review. Issue #2308 was created addressing unnecessary build dependencies (gcc-toolset-13, cmake, ninja-build, rust, cargo) in final runtime image for ppc64le architecture, covering comprehensive problem analysis with specific line numbers, multiple solution options for builder-only toolchains, clear acceptance criteria for size reduction and security improvement, detailed implementation guidance for package segregation, and proper context linking to PR #2215 review comment, continuing the established pattern of systematic infrastructure improvements through detailed issue tracking.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2215
File: runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu:0-0
Timestamp: 2025-09-05T12:10:50.856Z
Learning: jiridanek requested GitHub issue creation for Dockerfile environment variable refactoring during PR #2215 review. Issue #2311 was created addressing build-only variables (OPENBLAS_VERSION, ONNX_VERSION, GRPC_PYTHON_BUILD_SYSTEM_OPENSSL) being unnecessarily written to /etc/profile.d/ppc64le.sh in runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu, causing variable duplication across stages, unreliable sourcing in non-login build contexts, and violation of DRY principles. The issue includes comprehensive problem description covering affected lines 30-37, detailed impact analysis of build reliability and maintenance overhead, three solution options with centralized ARG/ENV approach as recommended, clear acceptance criteria for version centralization and build-only variable cleanup, and specific implementation guidance with code examples, assigned to jiridanek, continuing the established pattern of systematic infrastructure improvements through detailed issue tracking.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#1320
File: jupyter/pytorch/ubi9-python-3.12/Dockerfile.cuda:65-66
Timestamp: 2025-07-09T12:31:02.033Z
Learning: jiridanek requested GitHub issue creation for MSSQL repo file hardcoding problem during PR #1320 review. Issue #1363 was created and updated with comprehensive problem description covering hardcoded x86_64 MSSQL repo files breaking multi-architecture builds across 10 affected Dockerfiles (including datascience, CUDA, ROCm, and TrustyAI variants), detailed root cause analysis, three solution options with code examples, clear acceptance criteria for all image types, implementation guidance following established multi-architecture patterns, and proper context linking, continuing the established pattern of systematic code quality improvements through detailed issue tracking.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2227
File: codeserver/ubi9-python-3.12/Dockerfile.cpu:122-123
Timestamp: 2025-09-05T12:25:09.719Z
Learning: jiridanek requested GitHub issue creation for Docker multi-stage synchronization improvement in codeserver/ubi9-python-3.12/Dockerfile.cpu during PR #2227 review. The issue addresses sentinel file pattern using /tmp/control copied to /dev/null for stage coordination between rpm-base, whl-cache, and codeserver stages, proposing semantic improvements with descriptive file names, inline documentation, and elimination of /dev/null hack while maintaining multi-architecture build functionality for ppc64le support.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2145
File: runtimes/rocm-tensorflow/ubi9-python-3.12/pyproject.toml:75-77
Timestamp: 2025-08-27T15:33:28.871Z
Learning: jiridanek requested GitHub issue creation for multi-architecture environment exploration during PR #2145 review, specifically to investigate architecture-specific environments for ROCm TensorFlow wheel optimization across x86_64, aarch64, ppc64le, and s390x architectures, noting uncertainty about implementation benefits but wanting systematic exploration through issue tracking. Issue #2158 was created with comprehensive analysis covering current limitations, investigation areas, multiple solution options, and clear acceptance criteria.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2145
File: runtimes/rocm-tensorflow/ubi9-python-3.12/pyproject.toml:75-77
Timestamp: 2025-08-27T15:33:28.871Z
Learning: jiridanek requested GitHub issue creation for multi-architecture environment exploration during PR #2145 review, specifically to investigate architecture-specific environments for ROCm TensorFlow wheel optimization across x86_64, aarch64, ppc64le, and s390x architectures, noting uncertainty about implementation benefits but wanting systematic exploration through issue tracking.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#1333
File: runtimes/rocm-tensorflow/ubi9-python-3.12/Dockerfile.rocm:50-50
Timestamp: 2025-07-08T19:30:01.738Z
Learning: jiridanek requested GitHub issue creation for multi-architecture support in ROCm TensorFlow image during PR #1333 review. Issue #1346 was created with comprehensive problem description covering hardcoded x86_64 architecture breaking multi-arch support, detailed impact analysis, three solution options (runtime detection, BuildKit TARGETARCH integration, hybrid approach) with pros/cons analysis, comprehensive acceptance criteria covering core requirements and testing, phased implementation guidance, related files identification, and proper context linking, continuing the established pattern of systematic code quality improvements through detailed issue tracking.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#1320
File: jupyter/pytorch/ubi9-python-3.12/Dockerfile.cuda:42-52
Timestamp: 2025-07-09T12:29:56.162Z
Learning: jiridanek requested GitHub issue creation for OpenShift client architecture mapping problem affecting 29 Dockerfiles during PR #1320 review. Issue was created with comprehensive analysis covering all affected files using $(uname -m) returning 'aarch64' but OpenShift mirror expecting 'arm64', systematic solution using BuildKit TARGETARCH mapping with proper amd64→x86_64 and arm64→arm64 conversion, detailed acceptance criteria, and implementation guidance, continuing the established pattern of systematic code quality improvements through detailed issue tracking.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2227
File: codeserver/ubi9-python-3.12/Dockerfile.cpu:218-218
Timestamp: 2025-09-05T12:29:07.819Z
Learning: jiridanek requested GitHub issue creation for uv multi-stage Docker build architectural investigation during PR #2227 review. The current implementation uses a three-stage build with whl-cache stage for wheel building/caching, base stage for OS setup, and final codeserver stage for offline installation using --offline flag and cache mounts. The pattern separates build phase (internet access, build tools) from install phase (offline, faster) while supporting multi-architecture builds (x86_64, ppc64le) with sentinel file coordination using /tmp/control files.
📚 Learning: 2025-09-05T10:05:35.575Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#1513
File: runtimes/datascience/ubi9-python-3.12/pylock.toml:180-180
Timestamp: 2025-09-05T10:05:35.575Z
Learning: In Python lock files for the datascience runtime, both bcrypt and paramiko packages are excluded from s390x platform using the marker "platform_machine != 's390x'" due to compatibility issues on IBM System z mainframe architecture.

Applied to files:

  • jupyter/datascience/ubi9-python-3.12/pylock.toml
📚 Learning: 2025-08-29T15:18:43.229Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2215
File: runtimes/datascience/ubi9-python-3.12/pylock.toml:110-110
Timestamp: 2025-08-29T15:18:43.229Z
Learning: The file `runtimes/datascience/ubi9-python-3.12/pylock.toml` is a generated file and should not be edited directly. Changes should be made to the source pyproject.toml file instead.

Applied to files:

  • jupyter/datascience/ubi9-python-3.12/pylock.toml
📚 Learning: 2025-07-20T20:47:36.509Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#1396
File: runtimes/tensorflow/ubi9-python-3.12/Dockerfile.cuda:124-127
Timestamp: 2025-07-20T20:47:36.509Z
Learning: jiridanek identified that ARM64 wheels for h5py 3.14.0 are available on PyPI but being ignored due to AMD64-only dependency locking with --platform=linux/amd64. This causes unnecessary hdf5-devel package installation in ARM64 TensorFlow images when the ARM64 wheel h5py-3.14.0-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl could be used instead. The Pipfile.lock only contains 2 hashes for h5py, confirming limited platform consideration during lock generation.

Applied to files:

  • jupyter/datascience/ubi9-python-3.12/pylock.toml
📚 Learning: 2025-08-29T15:18:43.229Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2215
File: runtimes/datascience/ubi9-python-3.12/pylock.toml:110-110
Timestamp: 2025-08-29T15:18:43.229Z
Learning: When uv compiles dependencies with platform restrictions (like `platform_machine != 'ppc64le'`), it automatically propagates these markers to transitive dependencies in the generated lock files. This is expected behavior and doesn't require manual intervention in source files.

Applied to files:

  • jupyter/datascience/ubi9-python-3.12/pylock.toml
📚 Learning: 2025-08-27T14:49:24.112Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2145
File: runtimes/pytorch+llmcompressor/ubi9-python-3.12/Dockerfile.cuda:152-159
Timestamp: 2025-08-27T14:49:24.112Z
Learning: jiridanek requested GitHub issue creation for CUDA version alignment in pytorch+llmcompressor runtime during PR #2145 review. Issue #2148 was created addressing the mismatch between Dockerfile CUDA 12.6 and pylock.toml cu124 PyTorch wheels. The issue includes comprehensive problem description covering affected files (runtimes/pytorch+llmcompressor/ubi9-python-3.12/Dockerfile.cuda and pylock.toml), detailed solution with PyTorch index URL update from cu124 to cu126, lock regeneration steps using uv, clear acceptance criteria for wheel alignment verification, and proper context linking to PR #2145 review comment, assigned to jiridanek.

Applied to files:

  • jupyter/datascience/ubi9-python-3.12/pylock.toml
📚 Learning: 2025-09-05T12:10:50.856Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2215
File: runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu:0-0
Timestamp: 2025-09-05T12:10:50.856Z
Learning: jiridanek requested GitHub issue creation for Dockerfile environment variable refactoring during PR #2215 review. Issue addresses build-only variables (OPENBLAS_VERSION, ONNX_VERSION, GRPC_PYTHON_BUILD_SYSTEM_OPENSSL) being unnecessarily written to /etc/profile.d/ppc64le.sh in runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu, causing variable duplication across stages, unreliable sourcing in non-login build contexts, and violation of DRY principles. The issue includes comprehensive problem description covering affected lines 30-37, detailed impact analysis of build reliability and maintenance overhead, three solution options with centralized ARG/ENV approach as recommended, clear acceptance criteria for version centralization and build-only variable cleanup, and specific implementation guidance with code examples, continuing the established pattern of systematic infrastructure improvements through detailed issue tracking.

Applied to files:

  • jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu
📚 Learning: 2025-09-05T12:10:50.856Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2215
File: runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu:0-0
Timestamp: 2025-09-05T12:10:50.856Z
Learning: jiridanek requested GitHub issue creation for Dockerfile environment variable refactoring during PR #2215 review. Issue #2311 was created addressing build-only variables (OPENBLAS_VERSION, ONNX_VERSION, GRPC_PYTHON_BUILD_SYSTEM_OPENSSL) being unnecessarily written to /etc/profile.d/ppc64le.sh in runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu, causing variable duplication across stages, unreliable sourcing in non-login build contexts, and violation of DRY principles. The issue includes comprehensive problem description covering affected lines 30-37, detailed impact analysis of build reliability and maintenance overhead, three solution options with centralized ARG/ENV approach as recommended, clear acceptance criteria for version centralization and build-only variable cleanup, and specific implementation guidance with code examples, assigned to jiridanek, continuing the established pattern of systematic infrastructure improvements through detailed issue tracking.

Applied to files:

  • jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu
📚 Learning: 2025-09-05T12:25:09.719Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2227
File: codeserver/ubi9-python-3.12/Dockerfile.cpu:122-123
Timestamp: 2025-09-05T12:25:09.719Z
Learning: jiridanek requested GitHub issue creation for Docker multi-stage synchronization improvement in codeserver/ubi9-python-3.12/Dockerfile.cpu during PR #2227 review. The issue addresses sentinel file pattern using /tmp/control copied to /dev/null for stage coordination between rpm-base, whl-cache, and codeserver stages, proposing semantic improvements with descriptive file names, inline documentation, and elimination of /dev/null hack while maintaining multi-architecture build functionality for ppc64le support.

Applied to files:

  • jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu
📚 Learning: 2025-09-05T11:27:31.040Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2215
File: runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu:0-0
Timestamp: 2025-09-05T11:27:31.040Z
Learning: jiridanek requested GitHub issue creation for build toolchain optimization in datascience runtime during PR #2215 review. Issue #2308 was created addressing unnecessary build dependencies (gcc-toolset-13, cmake, ninja-build, rust, cargo) in final runtime image for ppc64le architecture, covering comprehensive problem analysis with specific line numbers, multiple solution options for builder-only toolchains, clear acceptance criteria for size reduction and security improvement, detailed implementation guidance for package segregation, and proper context linking to PR #2215 review comment, continuing the established pattern of systematic infrastructure improvements through detailed issue tracking.

Applied to files:

  • jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu
📚 Learning: 2025-09-05T12:29:07.819Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2227
File: codeserver/ubi9-python-3.12/Dockerfile.cpu:218-218
Timestamp: 2025-09-05T12:29:07.819Z
Learning: jiridanek requested GitHub issue creation for uv multi-stage Docker build architectural investigation during PR #2227 review. The current implementation uses a three-stage build with whl-cache stage for wheel building/caching, base stage for OS setup, and final codeserver stage for offline installation using --offline flag and cache mounts. The pattern separates build phase (internet access, build tools) from install phase (offline, faster) while supporting multi-architecture builds (x86_64, ppc64le) with sentinel file coordination using /tmp/control files.

Applied to files:

  • jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu
📚 Learning: 2025-08-12T08:40:55.286Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#1909
File: runtimes/pytorch+llmcompressor/ubi9-python-3.11/Dockerfile.cuda:11-15
Timestamp: 2025-08-12T08:40:55.286Z
Learning: jiridanek requested GitHub issue creation for redundant CUDA upgrade optimization during PR #1909 review. Analysis revealed all 14 CUDA Dockerfiles contain redundant `yum upgrade -y` commands in cuda-base stages that execute after base stages already performed comprehensive `dnf upgrade` via pre-upgrade blocks, causing unnecessary CI latency and build inefficiency. Issue includes complete scope analysis with specific line numbers, investigation framework requiring NVIDIA upstream documentation review, multiple solution options, comprehensive acceptance criteria covering systematic testing and performance measurement, and proper context linking to PR #1909 review comment.

Applied to files:

  • jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu
📚 Learning: 2025-07-07T11:08:48.524Z
Learnt from: atheo89
PR: opendatahub-io/notebooks#1258
File: codeserver/ubi9-python-3.11/Dockerfile.cpu:32-32
Timestamp: 2025-07-07T11:08:48.524Z
Learning: atheo89 requested GitHub issue creation for multi-architecture Dockerfile improvements during PR #1258 review, specifically for enhancing structural consistency across Docker stages, replacing $(uname -m) with ${TARGETARCH} for cross-architecture builds, and adding OCI-compliant metadata labels. Issue #1332 was created with comprehensive problem description, phased implementation approach, detailed acceptance criteria, implementation guidance with code examples, and proper context linking, continuing the established pattern of systematic code quality improvements.

Applied to files:

  • jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu
📚 Learning: 2025-07-04T10:41:13.061Z
Learnt from: grdryn
PR: opendatahub-io/notebooks#1320
File: rstudio/rhel9-python-3.11/Dockerfile.cuda:34-35
Timestamp: 2025-07-04T10:41:13.061Z
Learning: In the opendatahub-io/notebooks repository, when adapting NVIDIA CUDA Dockerfiles, the project intentionally maintains consistency with upstream NVIDIA patterns even when it might involve potential risks like empty variable expansions in package installation commands. This is considered acceptable because the containers only run on RHEL 9 with known yum/dnf behavior, and upstream consistency is prioritized over defensive coding practices.

Applied to files:

  • jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu
📚 Learning: 2025-08-07T12:39:01.997Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#1720
File: jupyter/tensorflow/ubi9-python-3.12/requirements.txt:1531-1558
Timestamp: 2025-08-07T12:39:01.997Z
Learning: In opendatahub-io/notebooks, the ROCm TensorFlow Python 3.12 UBI9 image (runtimes/rocm-tensorflow/ubi9-python-3.12/Dockerfile.rocm) was missing libxcrypt-compat, which is required for MySQL SASL2 plugin authentication with mysql-connector-python==9.3.0 on Python 3.12 UBI9. Issue #1722 was created to track this, following the established pattern for systematic dependency consistency and runtime compatibility across all Python 3.12 UBI9 images.

Applied to files:

  • jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu
🔇 Additional comments (11)
jupyter/datascience/ubi9-python-3.12/pyproject.toml (2)

19-19: LGTM: Correct PPC64LE exclusion marker for codeflare-sdk.

Using the PEP 508 marker platform_machine != 'ppc64le' is appropriate and keeps x86_64/arm64 unaffected. Matches typical platform.machine() on Power.


19-19: Verify lock/install honor the marker across stages.

jupyter/datascience/ubi9-python-3.12/pylock.toml contains codeflare-sdk (v0.30.0 at ~lines 525–527) but no platform marker was observed; jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu showed no locked-install invocation. Ensure the lockfile preserves/expresses the platform marker or the Dockerfile uses a locked install that consumes the lockfile so ppc64le builds never install codeflare or its transitive deps.
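A hedged example of such a locked install, assuming the image's uv version understands PEP 751 pylock files:

# markers are evaluated at install time, so on ppc64le this skips
# codeflare-sdk and every entry gated with platform_machine != 'ppc64le'
uv pip install --no-cache -r pylock.toml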

jupyter/datascience/ubi9-python-3.12/pylock.toml (9)

216-221: bcrypt: verify need for PPC64LE exclusion.

We historically excluded on s390x; ppc64le may work via wheels/source. Excluding may break Paramiko auth flows. Confirm build/install status on ppc64le; if OK, remove platform gating in source and re-lock.


526-530: codeflare-sdk exclusion matches PR intent.

Marker aligns with “Skip codeflare-sdk on ppc64le.” LGTM.


1102-1107: grpcio: compiled; PPC64LE exclusion is reasonable if no wheels and source build isn’t planned.

Ensure all grpcio consumers are gated similarly to avoid partial installs.


1754-1759: msgpack: compiled; exclusion may be justified.

If msgpack is needed (e.g., Ray, telemetry), either build from source in builder stage or keep gated. Decide explicitly.


2282-2286: paramiko: depends on cryptography/bcrypt; verify PPC64LE feasibility.

If crypto stack is available on ppc64le, retain Paramiko; otherwise gate intentionally.


2616-2621: py-spy: platform-specific binary; exclusion is reasonable.

No ppc64le wheels typically available.


3167-3172: ray: exclusion is consistent with skipping codeflare-sdk.

Confirm no other components rely on ray on ppc64le.


3923-3928: wrapt: native extension; exclusion acceptable if wheels/build unavailable.

If needed by dependent libs, consider a builder-stage compile.


105-110: Incorrect — no packages in jupyter/datascience/ubi9-python-3.12/pylock.toml use the marker platform_machine != 'ppc64le'.
I ran the provided verification script against that file; it returned no matches, so the PPC64LE-exclusion audit is not applicable.

Likely an incorrect or invalid review comment.

Comment on lines +56 to +67
#######################################################
# common-builder (for Power-only)
#######################################################
FROM cpu-base AS common-builder
USER root
RUN if [ "$(uname -m)" = "ppc64le" ]; then \
dnf install -y gcc-toolset-13 cmake ninja-build git wget unzip && \
dnf clean all; \
else \
echo "Skipping common-builder package install on non-Power"; \
fi

Contributor


🛠️ Refactor suggestion

Prefer TARGETARCH over uname -m for build-time conditionals

Using ${TARGETARCH} is more robust in cross-builds and aligns with our multi-arch patterns. Replace $(uname -m) checks with $TARGETARCH in all conditional RUNs for Power-only steps.

@@
-FROM cpu-base AS common-builder
+FROM cpu-base AS common-builder
+ARG TARGETARCH
@@
-RUN if [ "$(uname -m)" = "ppc64le" ]; then \
+RUN if [ "$TARGETARCH" = "ppc64le" ]; then \
@@
-FROM common-builder AS onnx-builder
+FROM common-builder AS onnx-builder
+ARG TARGETARCH
@@
-RUN if [ "$(uname -m)" = "ppc64le" ]; then \
+RUN if [ "$TARGETARCH" = "ppc64le" ]; then \
@@
-FROM common-builder AS openblas-builder
+FROM common-builder AS openblas-builder
+ARG TARGETARCH
@@
-RUN if [ "$(uname -m)" = "ppc64le" ]; then \
+RUN if [ "$TARGETARCH" = "ppc64le" ]; then \
@@
-FROM jupyter-minimal AS jupyter-datascience
+FROM jupyter-minimal AS jupyter-datascience
+ARG TARGETARCH
@@
-RUN if [ "$(uname -m)" = "ppc64le" ]; then \
+RUN if [ "$TARGETARCH" = "ppc64le" ]; then \

Also applies to: 71-86, 90-101, 165-172

🤖 Prompt for AI Agents
In jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu around lines 56-67, the
RUN step uses `$(uname -m)` to gate Power-only installs; replace this build-time
runtime check with the build-arg/ARG-based TARGETARCH variable (use
`${TARGETARCH}`) in the conditional so cross-builds work correctly. Update the
RUN conditionals at the other locations noted (lines ~71-86, 90-101, 165-172)
the same way: ensure there is an ARG TARGETARCH earlier in the Dockerfile or
rely on Docker buildkit provided TARGETARCH, and change `if [ "$(uname -m)" =
"ppc64le" ]; then` to `if [ "${TARGETARCH}" = "ppc64le" ]; then` (or equivalent
POSIX test) so the Power-only package installs run based on TARGETARCH during
build.

Comment on lines +162 to +169
COPY --from=openblas-builder /root/OpenBLAS-${OPENBLAS_VERSION} /openblas
COPY --from=onnx-builder /root/onnx_wheel/ /onnxwheels/

# Power-specific ONNX/OpenBLAS installation
RUN if [ "$(uname -m)" = "ppc64le" ]; then \
        pip install /onnxwheels/*.whl && \
        PREFIX=/usr/local make -C /openblas install && \
        rm -rf /onnxwheels /openblas; \
Contributor

@coderabbitai coderabbitai bot Sep 11, 2025


💡 Verification agent

🧩 Analysis chain

Confirm ONNX presence in pylock.toml to avoid double-install

If onnx is present in pylock.toml for non-Power arches, the separate wheel install on Power is fine. Ensure onnx is excluded from the lock for ppc64le to prevent conflicting versions.


🏁 Script executed:

#!/bin/bash
# Confirm ONNX markers in pylock.toml for ppc64le
rg -nP '(?i)^\s*name\s*=\s*"onnx"|platform_machine.*ppc64le' jupyter/datascience/ubi9-python-3.12/pylock.toml -C3

Length of output: 22231


Exclude onnx from pylock.toml for ppc64le to avoid double-install

pylock.toml contains onnx (1.19.0) with no ppc64le exclusion while Dockerfile.cpu installs /onnxwheels/*.whl on ppc64le — this will produce duplicate/conflicting installs.
Action: add marker = "platform_machine != 'ppc64le'" to the onnx [[packages]] entry in jupyter/datascience/ubi9-python-3.12/pylock.toml (approx. lines 2107–2111), or remove onnx from the lock and let jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu (lines 162–169) handle the ppc64le wheel install.

🤖 Prompt for AI Agents
In jupyter/datascience/ubi9-python-3.12/pylock.toml (around lines 2107–2111) and
jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu (lines 162–169): pylock.toml
currently includes onnx (1.19.0) without excluding ppc64le while the Dockerfile
installs the ppc64le wheel, causing duplicate/conflicting installs; fix by
editing the onnx [[packages]] entry in pylock.toml to add marker =
"platform_machine != 'ppc64le'" so the lockfile skips onnx on ppc64le, or
alternatively remove the onnx entry from the lockfile entirely and rely on the
Dockerfile.cpu wheel install for ppc64le, then regenerate/commit the lockfile if
changed.

Member


this is difficult; I want to compile some wheels in a separate builder stage of a multistage docker build; is there a way to give uv the result and let it install that (and error out if my builder stage built the wrong version, e.g. different from what's in pylock.toml)?

Contributor



Given your multi-architecture build requirements, I'd recommend Option 1 with --find-links as it provides the validation you want while working within uv's intended workflow patterns.
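The options themselves are elided above, but a minimal sketch of the --find-links approach, assuming the builder stage drops its wheels into /onnxwheels, could be:

# --no-index restricts resolution to the local directory, and the pinned
# requirement makes uv fail if the builder produced a different version
uv pip install --no-index --find-links=/onnxwheels "onnx==${ONNX_VERSION}"

This validates the version (though not the lockfile hash, since a locally built wheel hashes differently from the pinned upstream artifact).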

🐰 curious ears perked up, learning about advanced build orchestration


🧠 Learnings used
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2145
File: jupyter/trustyai/ubi9-python-3.12/Dockerfile.cpu:134-140
Timestamp: 2025-08-27T15:05:31.656Z
Learning: jiridanek requested GitHub issue creation for CUDA wheel optimization in TrustyAI CPU image during PR #2145 review. The CPU Dockerfile currently uses pylock.toml with CUDA-enabled PyTorch wheels (torch==2.6.0+cu126) which was previously discussed with harshad16 and grdryn but deferred. Issue created with comprehensive problem analysis covering unnecessary CUDA wheels in CPU-only image, multiple solution options including lock regeneration and separate CPU/CUDA files, clear acceptance criteria for wheel optimization verification, and proper context linking to PR #2145 review comment.

Learnt from: jiridanek
PR: opendatahub-io/notebooks#2145
File: Makefile:451-457
Timestamp: 2025-08-27T16:04:49.874Z
Learning: jiridanek requested GitHub issue creation for uv command optimization during PR #2145 review, specifically to explore using `uv export -o pylock.toml` instead of `uv pip compile pyproject.toml` for lockfile generation. Issue addresses the technical question of which uv command is more appropriate given the current workflow that uses both `uv lock` and pylock.toml generation. The investigation will cover technical differences, workflow integration, performance comparison, and alignment with uv best practices to determine the optimal approach for the project's lockfile generation pipeline.

Learnt from: jiridanek
PR: opendatahub-io/notebooks#2145
File: Makefile:451-457
Timestamp: 2025-08-27T16:04:49.874Z
Learning: jiridanek requested GitHub issue creation for uv command optimization during PR #2145 review, specifically to explore using `uv export -o pylock.toml` instead of `uv pip compile pyproject.toml` for lockfile generation. Issue #2163 was created addressing the technical question of which uv command is more appropriate given the current workflow that uses both `uv lock` and pylock.toml generation. The investigation covers technical differences, workflow integration, performance comparison, and alignment with uv best practices to determine the optimal approach for the project's lockfile generation pipeline across 18+ notebook/runtime environments.

Learnt from: jiridanek
PR: opendatahub-io/notebooks#0
File: :0-0
Timestamp: 2025-07-01T10:41:56.419Z
Learning: jiridanek's team uses containerized dependency locking for cross-platform compatibility in opendatahub-io/notebooks. They run `pipenv lock` inside UBI9 containers with specific platform arguments (`--platform=linux/amd64 --python-version 3.12`) to avoid host OS dependency conflicts when generating Pipfile.lock files.

Learnt from: jiridanek
PR: opendatahub-io/notebooks#1972
File: runtimes/pytorch+llmcompressor/ubi9-python-3.12/Pipfile:11-14
Timestamp: 2025-08-18T12:32:34.595Z
Learning: In opendatahub-io/notebooks, jiridanek's team uses a sophisticated dependency management workaround where conflicting packages like vllm are placed in [dev-packages] sections of Pipfiles. This allows `pipenv lock` to generate complete dependency resolution during lock file creation, while runtime builds use `micropipenv install` (without --dev) to avoid installing the conflicting dev dependencies. This strategy enables successful Pipfile.lock generation despite package incompatibilities like vllm conflicting with compressed-tensors.

Learnt from: jiridanek
PR: opendatahub-io/notebooks#1998
File: runtimes/pytorch+llmcompressor/ubi9-python-3.11/Dockerfile.cuda:159-161
Timestamp: 2025-08-19T11:45:12.501Z
Learning: jiridanek requested GitHub issue creation for duplicated micropipenv installation cleanup in pytorch+llmcompressor images during PR #1998 review. Analysis confirmed duplication exists in both pytorch+llmcompressor Dockerfiles with micropipenv installed twice: unpinned early install (lines 23/36) for Pipfile.lock deployment and pinned later install (lines 160/248) in requirements.txt block. Issue #1999 created with comprehensive problem analysis covering exact line numbers and affected files, three solution options (remove early install, consolidate installations, conditional logic), detailed acceptance criteria covering build testing and functionality verification, implementation notes for coordination with version pinning efforts, and proper context linking to PR #1998 review comment.

Learnt from: jiridanek
PR: opendatahub-io/notebooks#0
File: :0-0
Timestamp: 2025-08-30T07:57:33.466Z
Learning: uv's official documentation recommends against using both uv.lock and requirements.txt (or exported formats like pylock.toml) simultaneously. The recommended approach is to use either uv.lock for uv-native workflows OR exported formats for compatibility, but not both. If both are needed, uv suggests opening an issue to discuss the specific use case.

Learnt from: jiridanek
PR: opendatahub-io/notebooks#2317
File: codeserver/ubi9-python-3.12/get_code_server_rpm.sh:31-37
Timestamp: 2025-09-05T13:16:48.754Z
Learning: jiridanek requested GitHub issue creation for build tools installation unification across builder images during PR #2317 review. Issue #2322 was created addressing inconsistent build dependency management patterns across different builder images, proposing multiple solution approaches including Development Tools group installation, centralized configuration, and layered approaches, with comprehensive acceptance criteria covering auditing, standardization, regression prevention, and multi-architecture support (x86_64, ppc64le, aarch64, s390x), continuing the established pattern of systematic infrastructure improvements through detailed issue tracking.

Learnt from: jiridanek
PR: opendatahub-io/notebooks#1179
File: jupyter/utils/install_pandoc.sh:1-1
Timestamp: 2025-09-05T07:46:50.781Z
Learning: jiridanek requested GitHub issue creation during PR #1179 review to explore installing Pandoc from EPEL repository for ppc64le architecture as an alternative to building from source, noting that EPEL packages are acceptable unlike CentOS Stream in red-hat-data-services/notebooks. Issue #2281 was successfully created with comprehensive problem description covering build complexity concerns, multiple solution options, clear acceptance criteria, and proper context linking, continuing the established pattern of systematic infrastructure improvements through detailed issue tracking.

Learnt from: jiridanek
PR: opendatahub-io/notebooks#2185
File: jupyter/pytorch/ubi9-python-3.12/Dockerfile.cuda:109-117
Timestamp: 2025-08-29T08:48:55.985Z
Learning: jiridanek prefers to implement systematic cleanup improvements through dedicated GitHub issues (like #2076) rather than applying individual point fixes during PR reviews, maintaining consistency with their established pattern of comprehensive code quality improvements.

Learnt from: jiridanek
PR: opendatahub-io/notebooks#1320
File: jupyter/pytorch/ubi9-python-3.12/Dockerfile.cuda:65-66
Timestamp: 2025-07-09T12:31:02.033Z
Learning: jiridanek requested GitHub issue creation for MSSQL repo file hardcoding problem during PR #1320 review. Issue #1363 was created and updated with comprehensive problem description covering hardcoded x86_64 MSSQL repo files breaking multi-architecture builds across 10 affected Dockerfiles (including datascience, CUDA, ROCm, and TrustyAI variants), detailed root cause analysis, three solution options with code examples, clear acceptance criteria for all image types, implementation guidance following established multi-architecture patterns, and proper context linking, continuing the established pattern of systematic code quality improvements through detailed issue tracking.

Learnt from: jiridanek
PR: opendatahub-io/notebooks#2145
File: runtimes/rocm-tensorflow/ubi9-python-3.12/pyproject.toml:75-77
Timestamp: 2025-08-27T15:33:28.871Z
Learning: jiridanek requested GitHub issue creation for multi-architecture environment exploration during PR #2145 review, specifically to investigate architecture-specific environments for ROCm TensorFlow wheel optimization across x86_64, aarch64, ppc64le, and s390x architectures, noting uncertainty about implementation benefits but wanting systematic exploration through issue tracking. Issue #2158 was created with comprehensive analysis covering current limitations, investigation areas, multiple solution options, and clear acceptance criteria.

Learnt from: jiridanek
PR: opendatahub-io/notebooks#1333
File: runtimes/rocm-tensorflow/ubi9-python-3.12/Dockerfile.rocm:50-50
Timestamp: 2025-07-08T19:30:01.738Z
Learning: jiridanek requested GitHub issue creation for multi-architecture support in ROCm TensorFlow image during PR #1333 review. Issue #1346 was created with comprehensive problem description covering hardcoded x86_64 architecture breaking multi-arch support, detailed impact analysis, three solution options (runtime detection, BuildKit TARGETARCH integration, hybrid approach) with pros/cons analysis, comprehensive acceptance criteria covering core requirements and testing, phased implementation guidance, related files identification, and proper context linking, continuing the established pattern of systematic code quality improvements through detailed issue tracking.

Learnt from: jiridanek
PR: opendatahub-io/notebooks#1320
File: jupyter/pytorch/ubi9-python-3.12/Dockerfile.cuda:42-52
Timestamp: 2025-07-09T12:29:56.162Z
Learning: jiridanek requested GitHub issue creation for OpenShift client architecture mapping problem affecting 29 Dockerfiles during PR #1320 review. Issue was created with comprehensive analysis covering all affected files using $(uname -m) returning 'aarch64' but OpenShift mirror expecting 'arm64', systematic solution using BuildKit TARGETARCH mapping with proper amd64→x86_64 and arm64→arm64 conversion, detailed acceptance criteria, and implementation guidance, continuing the established pattern of systematic code quality improvements through detailed issue tracking.

Member

@coderabbitai previous answer is truncated, please analyze this again and print all your advice and options

Contributor

@jiridanek Here's a comprehensive analysis of options for using pre-built wheels from multi-stage Docker builds with uv validation:

Option 1: --find-links with lockfile validation (Recommended)

RUN uv pip install --find-links /onnxwheels --find-links /openblas \
    --requirements=./pylock.toml --verify-hashes

Pros:

  • uv validates wheel versions against pylock.toml
  • Errors if builder stage produced wrong version
  • Works with existing lockfile structure
  • uv prefers local wheels over remote when versions match

Cons:

  • Requires keeping onnx in pylock.toml (with platform_machine != 'ppc64le' marker)
  • May still try to download if local wheel doesn't match exactly

Option 2: --offline + --find-links (Strictest)

RUN uv pip install --offline --find-links /onnxwheels --find-links /openblas \
    --requirements=./pylock.toml

Pros:

  • Forces use of local wheels only
  • No network access prevents fallback downloads
  • Still validates against lockfile requirements

Cons:

  • Requires ALL dependencies as local wheels
  • More complex build orchestration needed

Option 3: Hybrid approach - Exclude + separate install

# Install custom wheels first
RUN pip install /onnxwheels/*.whl
# Then install remaining from lockfile (with onnx excluded for ppc64le)
RUN uv pip install --requirements=./pylock.toml

Pros:

  • Clean separation of custom vs standard packages
  • Simple to understand and debug

Cons:

  • Less validation between custom wheels and lockfile versions
  • Two separate install commands

Option 4: Custom wheel index

# Serve and install in one RUN (a process backgrounded in a previous RUN
# does not survive into the next layer); /wheels must expose a PEP 503
# simple/ layout for --index-url to resolve packages
RUN python -m http.server 8000 --directory /wheels & \
    sleep 1 && \
    uv pip install --index-url http://localhost:8000/simple \
    --extra-index-url https://pypi.org/simple --requirements=./pylock.toml

Pros:

  • Full index protocol compliance
  • Works with all uv features

Cons:

  • Requires HTTP server in container
  • More complex setup

Option 5: Pre-validation approach

# Validate the wheel against the lockfile expectation before installing
# (done in shell: a bare multi-line RUN python -c breaks the Dockerfile parser)
RUN wheel="$(ls /onnxwheels/onnx-*.whl | head -1)" && \
    case "$wheel" in \
      *onnx-1.19.0-*) echo "wheel version OK: $wheel" ;; \
      *) echo "wheel version mismatch: $wheel" >&2; exit 1 ;; \
    esac
RUN pip install /onnxwheels/*.whl && \
    uv pip install --requirements=./pylock.toml

Pros:

  • Explicit version validation
  • Fails fast on version mismatches
  • Clear error messages

Cons:

  • Manual validation logic
  • Not DRY with lockfile versions

Recommendation for your use case:

Option 1 with proper pylock.toml markers is ideal because:

  1. You get uv's built-in validation
  2. It errors on version mismatches automatically
  3. Works with your existing multi-stage pattern
  4. Requires minimal changes to current approach

Update pylock.toml to exclude onnx on ppc64le:

[[packages]]
name = "onnx"
version = "1.19.0"
marker = "platform_machine != 'ppc64le'"
# ... rest of entry

Then use --find-links in your Dockerfile:

RUN uv pip install --find-links /onnxwheels \
    --requirements=./pylock.toml --verify-hashes

This gives you the validation you want while leveraging uv's intended workflow patterns.

🐰 thoughtful whiskers twitching, excited about sophisticated build orchestration solutions


🧠 Learnings used
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2145
File: jupyter/trustyai/ubi9-python-3.12/Dockerfile.cpu:134-140
Timestamp: 2025-08-27T15:05:31.656Z
Learning: jiridanek requested GitHub issue creation for CUDA wheel optimization in TrustyAI CPU image during PR #2145 review. The CPU Dockerfile currently uses pylock.toml with CUDA-enabled PyTorch wheels (torch==2.6.0+cu126) which was previously discussed with harshad16 and grdryn but deferred. Issue created with comprehensive problem analysis covering unnecessary CUDA wheels in CPU-only image, multiple solution options including lock regeneration and separate CPU/CUDA files, clear acceptance criteria for wheel optimization verification, and proper context linking to PR #2145 review comment.

Learnt from: jiridanek
PR: opendatahub-io/notebooks#2227
File: codeserver/ubi9-python-3.12/Dockerfile.cpu:126-128
Timestamp: 2025-09-05T12:25:58.344Z
Learning: jiridanek expressed concern about code-server installation complexity in codeserver/ubi9-python-3.12/Dockerfile.cpu during PR #2227 review, describing the multi-stage cache mount approach as a "hack" that worries them. GitHub issue #2315 was created addressing the architectural complexity with comprehensive problem description, four solution options, clear acceptance criteria, and implementation considerations, assigned to jiridanek, continuing the established pattern of systematic technical improvement tracking.

Learnt from: jiridanek
PR: opendatahub-io/notebooks#2215
File: runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu:0-0
Timestamp: 2025-09-05T12:10:50.856Z
Learning: jiridanek requested GitHub issue creation for Dockerfile environment variable refactoring during PR #2215 review. Issue #2311 was created addressing build-only variables (OPENBLAS_VERSION, ONNX_VERSION, GRPC_PYTHON_BUILD_SYSTEM_OPENSSL) being unnecessarily written to /etc/profile.d/ppc64le.sh in runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu, causing variable duplication across stages, unreliable sourcing in non-login build contexts, and violation of DRY principles. The issue includes comprehensive problem description covering affected lines 30-37, detailed impact analysis of build reliability and maintenance overhead, three solution options with centralized ARG/ENV approach as recommended, clear acceptance criteria for version centralization and build-only variable cleanup, and specific implementation guidance with code examples, assigned to jiridanek, continuing the established pattern of systematic infrastructure improvements through detailed issue tracking.

Learnt from: jiridanek
PR: opendatahub-io/notebooks#2227
File: codeserver/ubi9-python-3.12/Dockerfile.cpu:218-218
Timestamp: 2025-09-05T12:29:07.819Z
Learning: jiridanek requested GitHub issue creation for uv multi-stage Docker build architectural investigation during PR #2227 review. The current implementation uses a three-stage build with whl-cache stage for wheel building/caching, base stage for OS setup, and final codeserver stage for offline installation using --offline flag and cache mounts. The pattern separates build phase (internet access, build tools) from install phase (offline, faster) while supporting multi-architecture builds (x86_64, ppc64le) with sentinel file coordination using /tmp/control files.

Learnt from: jiridanek
PR: opendatahub-io/notebooks#2215
File: runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu:0-0
Timestamp: 2025-09-05T11:27:31.040Z
Learning: jiridanek requested GitHub issue creation for build toolchain optimization in datascience runtime during PR #2215 review. Issue #2308 was created addressing unnecessary build dependencies (gcc-toolset-13, cmake, ninja-build, rust, cargo) in final runtime image for ppc64le architecture, covering comprehensive problem analysis with specific line numbers, multiple solution options for builder-only toolchains, clear acceptance criteria for size reduction and security improvement, detailed implementation guidance for package segregation, and proper context linking to PR #2215 review comment, continuing the established pattern of systematic infrastructure improvements through detailed issue tracking.

Learnt from: jiridanek
PR: opendatahub-io/notebooks#2227
File: codeserver/ubi9-python-3.12/Dockerfile.cpu:122-123
Timestamp: 2025-09-05T12:25:09.719Z
Learning: jiridanek requested GitHub issue creation for Docker multi-stage synchronization improvement in codeserver/ubi9-python-3.12/Dockerfile.cpu during PR #2227 review. The issue addresses sentinel file pattern using /tmp/control copied to /dev/null for stage coordination between rpm-base, whl-cache, and codeserver stages, proposing semantic improvements with descriptive file names, inline documentation, and elimination of /dev/null hack while maintaining multi-architecture build functionality for ppc64le support.


Member

this makes no sense,

marker = "platform_machine != 'ppc64le'"

means uv won't try to install onnx on ppc64le, but I do want it to install onnx, from the wheel provided in /onnxwheels/*.whl

Contributor

@jiridanek You're absolutely right - my previous suggestion was incorrect. Excluding onnx from pylock.toml would prevent installation entirely on ppc64le.

Here's the correct approach to get uv validation with your custom wheel:

Keep onnx in pylock.toml (no platform exclusion):

[[packages]]
name = "onnx"
version = "1.19.0"
# No marker needed - let uv find and validate the local wheel

Use --find-links to provide the custom wheel:

RUN uv pip install --find-links /onnxwheels \
    --requirements=./pylock.toml --verify-hashes

This way:

  1. pylock.toml declares onnx==1.19.0 as required on all platforms
  2. --find-links /onnxwheels makes uv discover your custom wheel
  3. uv validates the wheel version matches the lockfile requirement
  4. uv will error if your builder stage produced the wrong version (e.g., 1.18.0 instead of 1.19.0)
  5. On non-ppc64le, uv falls back to PyPI since no local wheel exists

The current pip install /onnxwheels/*.whl approach bypasses uv's validation entirely, which is why switching to --find-links gives you the validation you want while still installing your custom wheel.
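
For concreteness, here's a minimal end-to-end sketch of that flow (stage and path names such as base, onnx-builder, and /onnxwheels are illustrative placeholders, not necessarily the PR's exact ones):

FROM base AS onnx-builder
# ... build the onnx wheel for ppc64le into /onnxwheels ...

FROM base AS final
COPY --from=onnx-builder /onnxwheels /onnxwheels
# uv picks up the local wheel when one exists (ppc64le) and falls back
# to PyPI elsewhere, validating the version against pylock.toml either way
RUN uv pip install --find-links /onnxwheels --requirements=./pylock.toml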

🐰 ears perked up, realizing the validation flow should work seamlessly


🧠 Learnings used
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2145
File: jupyter/trustyai/ubi9-python-3.12/Dockerfile.cpu:134-140
Timestamp: 2025-08-27T15:05:31.656Z
Learning: jiridanek requested GitHub issue creation for CUDA wheel optimization in TrustyAI CPU image during PR #2145 review. The CPU Dockerfile currently uses pylock.toml with CUDA-enabled PyTorch wheels (torch==2.6.0+cu126) which was previously discussed with harshad16 and grdryn but deferred. Issue created with comprehensive problem analysis covering unnecessary CUDA wheels in CPU-only image, multiple solution options including lock regeneration and separate CPU/CUDA files, clear acceptance criteria for wheel optimization verification, and proper context linking to PR #2145 review comment.

Learnt from: jiridanek
PR: opendatahub-io/notebooks#2227
File: codeserver/ubi9-python-3.12/Dockerfile.cpu:218-218
Timestamp: 2025-09-05T12:29:07.819Z
Learning: jiridanek requested GitHub issue creation for uv multi-stage Docker build architectural investigation during PR #2227 review. The current implementation uses a three-stage build with whl-cache stage for wheel building/caching, base stage for OS setup, and final codeserver stage for offline installation using --offline flag and cache mounts. The pattern separates build phase (internet access, build tools) from install phase (offline, faster) while supporting multi-architecture builds (x86_64, ppc64le) with sentinel file coordination using /tmp/control files.

Learnt from: jiridanek
PR: opendatahub-io/notebooks#2145
File: Makefile:451-457
Timestamp: 2025-08-27T16:04:49.874Z
Learning: jiridanek requested GitHub issue creation for uv command optimization during PR #2145 review, specifically to explore using `uv export -o pylock.toml` instead of `uv pip compile pyproject.toml` for lockfile generation. Issue #2163 was created addressing the technical question of which uv command is more appropriate given the current workflow that uses both `uv lock` and pylock.toml generation. The investigation covers technical differences, workflow integration, performance comparison, and alignment with uv best practices to determine the optimal approach for the project's lockfile generation pipeline across 18+ notebook/runtime environments.

Learnt from: jiridanek
PR: opendatahub-io/notebooks#2145
File: Makefile:451-457
Timestamp: 2025-08-27T16:04:49.874Z
Learning: jiridanek requested GitHub issue creation for uv command optimization during PR #2145 review, specifically to explore using `uv export -o pylock.toml` instead of `uv pip compile pyproject.toml` for lockfile generation. Issue addresses the technical question of which uv command is more appropriate given the current workflow that uses both `uv lock` and pylock.toml generation. The investigation will cover technical differences, workflow integration, performance comparison, and alignment with uv best practices to determine the optimal approach for the project's lockfile generation pipeline.

Learnt from: jiridanek
PR: opendatahub-io/notebooks#0
File: :0-0
Timestamp: 2025-08-30T07:57:33.466Z
Learning: uv's official documentation recommends against using both uv.lock and requirements.txt (or exported formats like pylock.toml) simultaneously. The recommended approach is to use either uv.lock for uv-native workflows OR exported formats for compatibility, but not both. If both are needed, uv suggests opening an issue to discuss the specific use case.

Learnt from: jiridanek
PR: opendatahub-io/notebooks#1396
File: runtimes/tensorflow/ubi9-python-3.12/Dockerfile.cuda:124-127
Timestamp: 2025-07-20T20:47:36.509Z
Learning: jiridanek identified that ARM64 wheels for h5py 3.14.0 are available on PyPI but being ignored due to AMD64-only dependency locking with --platform=linux/amd64. This causes unnecessary hdf5-devel package installation in ARM64 TensorFlow images when the ARM64 wheel h5py-3.14.0-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl could be used instead. The Pipfile.lock only contains 2 hashes for h5py, confirming limited platform consideration during lock generation.

Learnt from: jiridanek
PR: opendatahub-io/notebooks#2215
File: runtimes/datascience/ubi9-python-3.12/pylock.toml:110-110
Timestamp: 2025-08-29T15:18:43.229Z
Learning: When uv compiles dependencies with platform restrictions (like `platform_machine != 'ppc64le'`), it automatically propagates these markers to transitive dependencies in the generated lock files. This is expected behavior and doesn't require manual intervention in source files.

Learnt from: jiridanek
PR: opendatahub-io/notebooks#2227
File: codeserver/ubi9-python-3.12/Dockerfile.cpu:122-123
Timestamp: 2025-09-05T12:25:09.719Z
Learning: jiridanek requested GitHub issue creation for Docker multi-stage synchronization improvement in codeserver/ubi9-python-3.12/Dockerfile.cpu during PR #2227 review. The issue addresses sentinel file pattern using /tmp/control copied to /dev/null for stage coordination between rpm-base, whl-cache, and codeserver stages, proposing semantic improvements with descriptive file names, inline documentation, and elimination of /dev/null hack while maintaining multi-architecture build functionality for ppc64le support.

Learnt from: jiridanek
PR: opendatahub-io/notebooks#2145
File: runtimes/pytorch+llmcompressor/ubi9-python-3.12/Dockerfile.cuda:152-159
Timestamp: 2025-08-27T14:49:24.112Z
Learning: jiridanek requested GitHub issue creation for CUDA version alignment in pytorch+llmcompressor runtime during PR #2145 review. Issue #2148 was created addressing the mismatch between Dockerfile CUDA 12.6 and pylock.toml cu124 PyTorch wheels. The issue includes comprehensive problem description covering affected files (runtimes/pytorch+llmcompressor/ubi9-python-3.12/Dockerfile.cuda and pylock.toml), detailed solution with PyTorch index URL update from cu124 to cu126, lock regeneration steps using uv, clear acceptance criteria for wheel alignment verification, and proper context linking to PR #2145 review comment, assigned to jiridanek.

Learnt from: jiridanek
PR: opendatahub-io/notebooks#0
File: :0-0
Timestamp: 2025-08-27T15:12:21.684Z
Learning: jiridanek requested GitHub issue creation for ROCm TensorFlow wheel optimization during PR #2145 review, specifically to move tensorflow-rocm URL from dependencies to tool.uv.sources pattern and add required-environments for x86_64 architecture validation. This follows uv-native patterns for cleaner dependency management and better lock-time guarantees.

Learnt from: jiridanek
PR: opendatahub-io/notebooks#2215
File: runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu:0-0
Timestamp: 2025-09-05T11:27:31.040Z
Learning: jiridanek requested GitHub issue creation for build toolchain optimization in datascience runtime during PR #2215 review. Issue #2308 was created addressing unnecessary build dependencies (gcc-toolset-13, cmake, ninja-build, rust, cargo) in final runtime image for ppc64le architecture, covering comprehensive problem analysis with specific line numbers, multiple solution options for builder-only toolchains, clear acceptance criteria for size reduction and security improvement, detailed implementation guidance for package segregation, and proper context linking to PR #2215 review comment, continuing the established pattern of systematic infrastructure improvements through detailed issue tracking.

Learnt from: jiridanek
PR: opendatahub-io/notebooks#2215
File: runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu:0-0
Timestamp: 2025-09-05T12:10:50.856Z
Learning: jiridanek requested GitHub issue creation for Dockerfile environment variable refactoring during PR #2215 review. Issue addresses build-only variables (OPENBLAS_VERSION, ONNX_VERSION, GRPC_PYTHON_BUILD_SYSTEM_OPENSSL) being unnecessarily written to /etc/profile.d/ppc64le.sh in runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu, causing variable duplication across stages, unreliable sourcing in non-login build contexts, and violation of DRY principles. The issue includes comprehensive problem description covering affected lines 30-37, detailed impact analysis of build reliability and maintenance overhead, three solution options with centralized ARG/ENV approach as recommended, clear acceptance criteria for version centralization and build-only variable cleanup, and specific implementation guidance with code examples, continuing the established pattern of systematic infrastructure improvements through detailed issue tracking.

Learnt from: jiridanek
PR: opendatahub-io/notebooks#2215
File: runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu:0-0
Timestamp: 2025-09-05T12:10:50.856Z
Learning: jiridanek requested GitHub issue creation for Dockerfile environment variable refactoring during PR #2215 review. Issue #2311 was created addressing build-only variables (OPENBLAS_VERSION, ONNX_VERSION, GRPC_PYTHON_BUILD_SYSTEM_OPENSSL) being unnecessarily written to /etc/profile.d/ppc64le.sh in runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu, causing variable duplication across stages, unreliable sourcing in non-login build contexts, and violation of DRY principles. The issue includes comprehensive problem description covering affected lines 30-37, detailed impact analysis of build reliability and maintenance overhead, three solution options with centralized ARG/ENV approach as recommended, clear acceptance criteria for version centralization and build-only variable cleanup, and specific implementation guidance with code examples, assigned to jiridanek, continuing the established pattern of systematic infrastructure improvements through detailed issue tracking.

Learnt from: jiridanek
PR: opendatahub-io/notebooks#0
File: :0-0
Timestamp: 2025-08-05T17:24:08.616Z
Learning: jiridanek requested PR review for #1521 covering s390x architecture support improvements, demonstrating continued focus on systematic multi-architecture compatibility enhancements in the opendatahub-io/notebooks repository through clean implementation with centralized configuration, proper CI integration, and architecture-aware testing patterns.

Learnt from: jiridanek
PR: opendatahub-io/notebooks#2317
File: codeserver/ubi9-python-3.12/get_code_server_rpm.sh:31-37
Timestamp: 2025-09-05T13:16:48.754Z
Learning: jiridanek requested GitHub issue creation for build tools installation unification across builder images during PR #2317 review. Issue #2322 was created addressing inconsistent build dependency management patterns across different builder images, proposing multiple solution approaches including Development Tools group installation, centralized configuration, and layered approaches, with comprehensive acceptance criteria covering auditing, standardization, regression prevention, and multi-architecture support (x86_64, ppc64le, aarch64, s390x), continuing the established pattern of systematic infrastructure improvements through detailed issue tracking.

Learnt from: jiridanek
PR: opendatahub-io/notebooks#1909
File: runtimes/pytorch+llmcompressor/ubi9-python-3.11/Dockerfile.cuda:11-15
Timestamp: 2025-08-12T08:40:55.286Z
Learning: jiridanek requested GitHub issue creation for redundant CUDA upgrade optimization during PR #1909 review. Analysis revealed all 14 CUDA Dockerfiles contain redundant `yum upgrade -y` commands in cuda-base stages that execute after base stages already performed comprehensive `dnf upgrade` via pre-upgrade blocks, causing unnecessary CI latency and build inefficiency. Issue includes complete scope analysis with specific line numbers, investigation framework requiring NVIDIA upstream documentation review, multiple solution options, comprehensive acceptance criteria covering systematic testing and performance measurement, and proper context linking to PR #1909 review comment.

Learnt from: atheo89
PR: opendatahub-io/notebooks#1258
File: codeserver/ubi9-python-3.11/Dockerfile.cpu:32-32
Timestamp: 2025-07-07T11:08:48.524Z
Learning: atheo89 requested GitHub issue creation for multi-architecture Dockerfile improvements during PR #1258 review, specifically for enhancing structural consistency across Docker stages, replacing $(uname -m) with ${TARGETARCH} for cross-architecture builds, and adding OCI-compliant metadata labels. Issue #1332 was created with comprehensive problem description, phased implementation approach, detailed acceptance criteria, implementation guidance with code examples, and proper context linking, continuing the established pattern of systematic code quality improvements.

Learnt from: jiridanek
PR: opendatahub-io/notebooks#1179
File: jupyter/utils/install_pandoc.sh:1-1
Timestamp: 2025-09-05T07:46:50.781Z
Learning: jiridanek requested GitHub issue creation during PR #1179 review to explore installing Pandoc from EPEL repository for ppc64le architecture as an alternative to building from source, noting that EPEL packages are acceptable unlike CentOS Stream in red-hat-data-services/notebooks. Issue #2281 was successfully created with comprehensive problem description covering build complexity concerns, multiple solution options, clear acceptance criteria, and proper context linking, continuing the established pattern of systematic infrastructure improvements through detailed issue tracking.

Learnt from: jiridanek
PR: opendatahub-io/notebooks#2185
File: jupyter/pytorch/ubi9-python-3.12/Dockerfile.cuda:109-117
Timestamp: 2025-08-29T08:48:55.985Z
Learning: jiridanek prefers to implement systematic cleanup improvements through dedicated GitHub issues (like #2076) rather than applying individual point fixes during PR reviews, maintaining consistency with their established pattern of comprehensive code quality improvements.

Learnt from: jiridanek
PR: opendatahub-io/notebooks#1320
File: jupyter/pytorch/ubi9-python-3.12/Dockerfile.cuda:65-66
Timestamp: 2025-07-09T12:31:02.033Z
Learning: jiridanek requested GitHub issue creation for MSSQL repo file hardcoding problem during PR #1320 review. Issue #1363 was created and updated with comprehensive problem description covering hardcoded x86_64 MSSQL repo files breaking multi-architecture builds across 10 affected Dockerfiles (including datascience, CUDA, ROCm, and TrustyAI variants), detailed root cause analysis, three solution options with code examples, clear acceptance criteria for all image types, implementation guidance following established multi-architecture patterns, and proper context linking, continuing the established pattern of systematic code quality improvements through detailed issue tracking.

Learnt from: jiridanek
PR: opendatahub-io/notebooks#2145
File: runtimes/rocm-tensorflow/ubi9-python-3.12/pyproject.toml:75-77
Timestamp: 2025-08-27T15:33:28.871Z
Learning: jiridanek requested GitHub issue creation for multi-architecture environment exploration during PR #2145 review, specifically to investigate architecture-specific environments for ROCm TensorFlow wheel optimization across x86_64, aarch64, ppc64le, and s390x architectures, noting uncertainty about implementation benefits but wanting systematic exploration through issue tracking. Issue #2158 was created with comprehensive analysis covering current limitations, investigation areas, multiple solution options, and clear acceptance criteria.

Learnt from: jiridanek
PR: opendatahub-io/notebooks#2145
File: runtimes/rocm-tensorflow/ubi9-python-3.12/pyproject.toml:75-77
Timestamp: 2025-08-27T15:33:28.871Z
Learning: jiridanek requested GitHub issue creation for multi-architecture environment exploration during PR #2145 review, specifically to investigate architecture-specific environments for ROCm TensorFlow wheel optimization across x86_64, aarch64, ppc64le, and s390x architectures, noting uncertainty about implementation benefits but wanting systematic exploration through issue tracking.

Learnt from: jiridanek
PR: opendatahub-io/notebooks#1333
File: runtimes/rocm-tensorflow/ubi9-python-3.12/Dockerfile.rocm:50-50
Timestamp: 2025-07-08T19:30:01.738Z
Learning: jiridanek requested GitHub issue creation for multi-architecture support in ROCm TensorFlow image during PR #1333 review. Issue #1346 was created with comprehensive problem description covering hardcoded x86_64 architecture breaking multi-arch support, detailed impact analysis, three solution options (runtime detection, BuildKit TARGETARCH integration, hybrid approach) with pros/cons analysis, comprehensive acceptance criteria covering core requirements and testing, phased implementation guidance, related files identification, and proper context linking, continuing the established pattern of systematic code quality improvements through detailed issue tracking.

Learnt from: jiridanek
PR: opendatahub-io/notebooks#1320
File: jupyter/pytorch/ubi9-python-3.12/Dockerfile.cuda:42-52
Timestamp: 2025-07-09T12:29:56.162Z
Learning: jiridanek requested GitHub issue creation for OpenShift client architecture mapping problem affecting 29 Dockerfiles during PR #1320 review. Issue was created with comprehensive analysis covering all affected files using $(uname -m) returning 'aarch64' but OpenShift mirror expecting 'arm64', systematic solution using BuildKit TARGETARCH mapping with proper amd64→x86_64 and arm64→arm64 conversion, detailed acceptance criteria, and implementation guidance, continuing the established pattern of systematic code quality improvements through detailed issue tracking.

Member
@jiridanek jiridanek Sep 11, 2025

how may I provide --verify-hashes when I'm building the onnx wheel as part of the Dockerfile build and I don't know what hash it will have when creating pylock.toml

Contributor

@jiridanek You're hitting a fundamental challenge with hash verification for build-time-generated wheels. Here are the practical approaches:

Option 1: Split installation (Recommended)

# Install custom wheels without hash verification
RUN uv pip install --find-links /onnxwheels --no-deps onnx
# Then install remaining packages with hash verification
RUN uv pip install --verify-hashes --requirements=./pylock.toml

This maintains hash verification for PyPI packages while allowing your custom wheel.

Option 2: Selective hash verification

# Create a lockfile subset without the onnx entry for hash verification.
# Drop the whole [[packages]] table (this assumes tables are blank-line
# separated); grep -v on the name line alone would leave a corrupt entry.
RUN awk 'BEGIN{RS="";ORS="\n\n"} $0 !~ /name = "onnx"/' pylock.toml > pylock_no_onnx.toml
RUN uv pip install /onnxwheels/*.whl  # Custom wheel
RUN uv pip install --verify-hashes --requirements=pylock_no_onnx.toml

Option 3: Generate hash during build

# Build the wheel, then capture its hash; a heredoc keeps the multi-line
# script valid under BuildKit (a bare multi-line RUN breaks the parser)
RUN cd /onnxwheels && python - <<'EOF' > wheel_hash.txt
import glob, hashlib
wheel = glob.glob('*.whl')[0]
with open(wheel, 'rb') as f:
    hash_val = 'sha256:' + hashlib.sha256(f.read()).hexdigest()
print(f'{wheel} has hash {hash_val}')
EOF

# Then use the hash in installation (more complex)

Option 4: Use --no-deps for find-links

RUN uv pip install --find-links /onnxwheels --no-deps \
    --verify-hashes --requirements=./pylock.toml

I'd recommend Option 1 as it's clean, maintains security for PyPI packages, and works reliably with your multi-stage build pattern. You get hash verification for 99% of packages while allowing the few custom-built ones to install without it.
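
If the split should apply only on ppc64le, a minimal sketch (the uname check is illustrative; your Dockerfile may gate this differently, e.g. via TARGETARCH):

RUN if [ "$(uname -m)" = "ppc64le" ]; then \
      uv pip install --find-links /onnxwheels --no-deps onnx; \
    fi && \
    uv pip install --verify-hashes --requirements=./pylock.toml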


✏️ Learnings added
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2406
File: jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu:162-169
Timestamp: 2025-09-11T07:45:16.965Z
Learning: jiridanek asked about using --verify-hashes with custom-built wheels in Docker multi-stage builds where the wheel hash is unknown until build completion. The solution involves splitting installation into two phases: first install custom wheels without hash verification, then install remaining packages from lockfile with --verify-hashes to maintain security for PyPI packages while allowing custom wheels.

Note: Learnings are effective only in the context of similar code segments. To apply general coding guidelines, please configure review instructions. You can manage existing learnings in the app.


🧠 Learnings used
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2145
File: Makefile:451-457
Timestamp: 2025-08-27T16:04:49.874Z
Learning: jiridanek requested GitHub issue creation for uv command optimization during PR #2145 review, specifically to explore using `uv export -o pylock.toml` instead of `uv pip compile pyproject.toml` for lockfile generation. Issue #2163 was created addressing the technical question of which uv command is more appropriate given the current workflow that uses both `uv lock` and pylock.toml generation. The investigation covers technical differences, workflow integration, performance comparison, and alignment with uv best practices to determine the optimal approach for the project's lockfile generation pipeline across 18+ notebook/runtime environments.

Learnt from: jiridanek
PR: opendatahub-io/notebooks#0
File: :0-0
Timestamp: 2025-08-30T07:57:33.466Z
Learning: uv's official documentation recommends against using both uv.lock and requirements.txt (or exported formats like pylock.toml) simultaneously. The recommended approach is to use either uv.lock for uv-native workflows OR exported formats for compatibility, but not both. If both are needed, uv suggests opening an issue to discuss the specific use case.

Learnt from: jiridanek
PR: opendatahub-io/notebooks#1396
File: runtimes/tensorflow/ubi9-python-3.12/Dockerfile.cuda:124-127
Timestamp: 2025-07-20T20:47:36.509Z
Learning: jiridanek identified that ARM64 wheels for h5py 3.14.0 are available on PyPI but being ignored due to AMD64-only dependency locking with --platform=linux/amd64. This causes unnecessary hdf5-devel package installation in ARM64 TensorFlow images when the ARM64 wheel h5py-3.14.0-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl could be used instead. The Pipfile.lock only contains 2 hashes for h5py, confirming limited platform consideration during lock generation.

Learnt from: jiridanek
PR: opendatahub-io/notebooks#2145
File: runtimes/pytorch+llmcompressor/ubi9-python-3.12/Dockerfile.cuda:152-159
Timestamp: 2025-08-27T14:49:24.112Z
Learning: jiridanek requested GitHub issue creation for CUDA version alignment in pytorch+llmcompressor runtime during PR #2145 review. Issue #2148 was created addressing the mismatch between Dockerfile CUDA 12.6 and pylock.toml cu124 PyTorch wheels. The issue includes comprehensive problem description covering affected files (runtimes/pytorch+llmcompressor/ubi9-python-3.12/Dockerfile.cuda and pylock.toml), detailed solution with PyTorch index URL update from cu124 to cu126, lock regeneration steps using uv, clear acceptance criteria for wheel alignment verification, and proper context linking to PR #2145 review comment, assigned to jiridanek.

Learnt from: jiridanek
PR: opendatahub-io/notebooks#1998
File: runtimes/pytorch+llmcompressor/ubi9-python-3.11/Dockerfile.cuda:159-161
Timestamp: 2025-08-19T11:45:12.501Z
Learning: jiridanek requested GitHub issue creation for duplicated micropipenv installation cleanup in pytorch+llmcompressor images during PR #1998 review. Analysis confirmed duplication exists in both pytorch+llmcompressor Dockerfiles with micropipenv installed twice: unpinned early install (lines 23/36) for Pipfile.lock deployment and pinned later install (lines 160/248) in requirements.txt block. Issue #1999 created with comprehensive problem analysis covering exact line numbers and affected files, three solution options (remove early install, consolidate installations, conditional logic), detailed acceptance criteria covering build testing and functionality verification, implementation notes for coordination with version pinning efforts, and proper context linking to PR #1998 review comment.

Learnt from: jiridanek
PR: opendatahub-io/notebooks#0
File: :0-0
Timestamp: 2025-08-22T12:15:38.859Z
Learning: jiridanek requested automation for digest resolution using Registry v2 API during PR #2103 review. A comprehensive GitHub issue #2104 was successfully created covering Python and bash implementation approaches for resolving image tags to sha256 digests in manifests/base/params-latest.env, with detailed acceptance criteria, CI/CD integration options, authentication handling for quay.io registry, and proper context linking, continuing the established pattern of systematic code quality improvements through detailed issue tracking.

Learnt from: jiridanek
PR: opendatahub-io/notebooks#0
File: :0-0
Timestamp: 2025-09-10T21:02:52.909Z
Learning: jiridanek requested GitHub issue creation for external tool version pinning in codeserver/ubi9-python-3.12/get_code_server_rpm.sh during PR #2356 review. The issue addresses floating "latest" versions for NFPM and NVM tools causing build flakiness and SBOM drift, proposing pinned versions with environment variable overrides and optional checksum validation, continuing the established pattern of systematic code quality improvements through detailed issue tracking.

Learnt from: jiridanek
PR: opendatahub-io/notebooks#2215
File: runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu:0-0
Timestamp: 2025-09-05T11:27:31.040Z
Learning: jiridanek requested GitHub issue creation for build toolchain optimization in datascience runtime during PR #2215 review. Issue #2308 was created addressing unnecessary build dependencies (gcc-toolset-13, cmake, ninja-build, rust, cargo) in final runtime image for ppc64le architecture, covering comprehensive problem analysis with specific line numbers, multiple solution options for builder-only toolchains, clear acceptance criteria for size reduction and security improvement, detailed implementation guidance for package segregation, and proper context linking to PR #2215 review comment, continuing the established pattern of systematic infrastructure improvements through detailed issue tracking.

Learnt from: jiridanek
PR: opendatahub-io/notebooks#0
File: :0-0
Timestamp: 2025-08-05T17:24:08.616Z
Learning: jiridanek requested PR review for #1521 covering s390x architecture support improvements, demonstrating continued focus on systematic multi-architecture compatibility enhancements in the opendatahub-io/notebooks repository through clean implementation with centralized configuration, proper CI integration, and architecture-aware testing patterns.

Learnt from: jiridanek
PR: opendatahub-io/notebooks#2317
File: codeserver/ubi9-python-3.12/get_code_server_rpm.sh:31-37
Timestamp: 2025-09-05T13:16:48.754Z
Learning: jiridanek requested GitHub issue creation for build tools installation unification across builder images during PR #2317 review. Issue #2322 was created addressing inconsistent build dependency management patterns across different builder images, proposing multiple solution approaches including Development Tools group installation, centralized configuration, and layered approaches, with comprehensive acceptance criteria covering auditing, standardization, regression prevention, and multi-architecture support (x86_64, ppc64le, aarch64, s390x), continuing the established pattern of systematic infrastructure improvements through detailed issue tracking.

Learnt from: atheo89
PR: opendatahub-io/notebooks#1258
File: codeserver/ubi9-python-3.11/Dockerfile.cpu:32-32
Timestamp: 2025-07-07T11:08:48.524Z
Learning: atheo89 requested GitHub issue creation for multi-architecture Dockerfile improvements during PR #1258 review, specifically for enhancing structural consistency across Docker stages, replacing $(uname -m) with ${TARGETARCH} for cross-architecture builds, and adding OCI-compliant metadata labels. Issue #1332 was created with comprehensive problem description, phased implementation approach, detailed acceptance criteria, implementation guidance with code examples, and proper context linking, continuing the established pattern of systematic code quality improvements.

Learnt from: jiridanek
PR: opendatahub-io/notebooks#1179
File: jupyter/utils/install_pandoc.sh:1-1
Timestamp: 2025-09-05T07:46:50.781Z
Learning: jiridanek requested GitHub issue creation during PR #1179 review to explore installing Pandoc from EPEL repository for ppc64le architecture as an alternative to building from source, noting that EPEL packages are acceptable unlike CentOS Stream in red-hat-data-services/notebooks. Issue #2281 was successfully created with comprehensive problem description covering build complexity concerns, multiple solution options, clear acceptance criteria, and proper context linking, continuing the established pattern of systematic infrastructure improvements through detailed issue tracking.

Learnt from: jiridanek
PR: opendatahub-io/notebooks#2185
File: jupyter/pytorch/ubi9-python-3.12/Dockerfile.cuda:109-117
Timestamp: 2025-08-29T08:48:55.985Z
Learning: jiridanek prefers to implement systematic cleanup improvements through dedicated GitHub issues (like #2076) rather than applying individual point fixes during PR reviews, maintaining consistency with their established pattern of comprehensive code quality improvements.

Learnt from: jiridanek
PR: opendatahub-io/notebooks#2145
File: runtimes/rocm-tensorflow/ubi9-python-3.12/pyproject.toml:75-77
Timestamp: 2025-08-27T15:33:28.871Z
Learning: jiridanek requested GitHub issue creation for multi-architecture environment exploration during PR #2145 review, specifically to investigate architecture-specific environments for ROCm TensorFlow wheel optimization across x86_64, aarch64, ppc64le, and s390x architectures, noting uncertainty about implementation benefits but wanting systematic exploration through issue tracking. Issue #2158 was created with comprehensive analysis covering current limitations, investigation areas, multiple solution options, and clear acceptance criteria.

Learnt from: jiridanek
PR: opendatahub-io/notebooks#1333
File: runtimes/rocm-tensorflow/ubi9-python-3.12/Dockerfile.rocm:50-50
Timestamp: 2025-07-08T19:30:01.738Z
Learning: jiridanek requested GitHub issue creation for multi-architecture support in ROCm TensorFlow image during PR #1333 review. Issue #1346 was created with comprehensive problem description covering hardcoded x86_64 architecture breaking multi-arch support, detailed impact analysis, three solution options (runtime detection, BuildKit TARGETARCH integration, hybrid approach) with pros/cons analysis, comprehensive acceptance criteria covering core requirements and testing, phased implementation guidance, related files identification, and proper context linking, continuing the established pattern of systematic code quality improvements through detailed issue tracking.

Comment on lines +165 to +172
# Power-specific ONNX/OpenBLAS installation
RUN if [ "$(uname -m)" = "ppc64le" ]; then \
pip install /onnxwheels/*.whl && \
PREFIX=/usr/local make -C /openblas install && \
rm -rf /onnxwheels /openblas; \
else \
echo "Skipping ONNX/OpenBLAS install on non-Power"; \
fi

🛠️ Refactor suggestion

⚠️ Potential issue

Fix: OpenBLAS “make install” runs in the final image, where make isn't installed

The final stage calls make -C /openblas install, but make isn’t installed in the runtime image. Install artifacts in the builder and copy them into the final image instead to avoid toolchain bleed and build failures.

Apply this diff in the openblas-builder stage and final stage:

@@
 FROM common-builder AS openblas-builder
-ARG OPENBLAS_VERSION=0.3.30
+ARG OPENBLAS_VERSION=0.3.30
 WORKDIR /root
 RUN if [ "$(uname -m)" = "ppc64le" ]; then \
     wget https://github.com/OpenMathLib/OpenBLAS/releases/download/v${OPENBLAS_VERSION}/OpenBLAS-${OPENBLAS_VERSION}.zip && \
     unzip OpenBLAS-${OPENBLAS_VERSION}.zip && \
     cd OpenBLAS-${OPENBLAS_VERSION} && \
-    make -j$(nproc) TARGET=POWER9 BINARY=64 USE_OPENMP=1 USE_THREAD=1 NUM_THREADS=120 DYNAMIC_ARCH=1 INTERFACE64=0; \
+    make -j$(nproc) TARGET=POWER9 BINARY=64 USE_OPENMP=1 USE_THREAD=1 NUM_THREADS=120 DYNAMIC_ARCH=1 INTERFACE64=0 && \
+    make install PREFIX=/usr/local DESTDIR=/root/openblas-stage; \
     else \
-    mkdir -p OpenBLAS-${OPENBLAS_VERSION}; \
+    mkdir -p openblas-stage; \
     echo "Skipping OpenBLAS build on non-Power"; \
     fi
@@
-# Copy OpenBLAS,ONNX wheels for Power
-COPY --from=openblas-builder /root/OpenBLAS-${OPENBLAS_VERSION} /openblas
+# Copy OpenBLAS,ONNX wheels for Power
+COPY --from=openblas-builder /root/openblas-stage/usr/local/ /usr/local/
 COPY --from=onnx-builder /root/onnx_wheel/ /onnxwheels/
@@
-# Power-specific ONNX/OpenBLAS installation
-RUN if [ "$(uname -m)" = "ppc64le" ]; then \
-    pip install /onnxwheels/*.whl && \
-    PREFIX=/usr/local make -C /openblas install && \
-    rm -rf /onnxwheels /openblas; \
-    else \
-    echo "Skipping ONNX/OpenBLAS install on non-Power"; \
-    fi
+# Power-specific ONNX installation
+RUN if [ "$(uname -m)" = "ppc64le" ]; then \
+    pip install /onnxwheels/*.whl && \
+    rm -rf /onnxwheels; \
+  else \
+    echo "Skipping ONNX install on non-Power"; \
+  fi
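
As a follow-up sanity check, a hypothetical smoke test could be appended after the COPY steps (not part of the suggested diff; assumes numpy is already installed and /usr/local/lib is on the linker path):

# Hypothetical verification that the staged OpenBLAS is actually visible
RUN if [ "$(uname -m)" = "ppc64le" ]; then \
        ldconfig -p | grep -i openblas && \
        python3 -c "import numpy; numpy.show_config()"; \
    fi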

Committable suggestion skipped: line range outside the PR's diff.

@jiridanek
Member

It looks legit; I think I'll apply the TARGETARCH suggestion from CodeRabbit and maybe indent the bash scripts a bit.
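
For reference, a minimal sketch of the TARGETARCH variant (illustrative only, not the final commit; assumes BuildKit, which sets TARGETARCH to "ppc64le" for Power builds):

# TARGETARCH comes from BuildKit, so no shelling out to uname is needed
ARG TARGETARCH
RUN if [ "$TARGETARCH" = "ppc64le" ]; then \
        pip install /onnxwheels/*.whl && rm -rf /onnxwheels; \
    else \
        echo "Skipping ONNX install on non-Power"; \
    fi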

@jiridanek
Member

Trying a slightly modified version in https://github.com/jiridanek/notebooks/actions/runs/17637719754.

Reformatted the bash a little.

jiridanek pushed a commit to jiridanek/notebooks that referenced this pull request Sep 11, 2025
jiridanek added a commit to jiridanek/notebooks that referenced this pull request Sep 11, 2025
@bhagyashrigai
Contributor Author

Hi @jiridanek,
It looks like some test cases are failing; could you please guide me on that?

@openshift-ci openshift-ci bot added size/l and removed size/l labels Sep 11, 2025
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 15

🧹 Nitpick comments (2)
jupyter/datascience/ubi9-python-3.12/pylock.toml (2)

1107-1113: grpcio gating: confirm intent and downstreams.

If you exclude grpcio on ppc64le, ensure no remaining deps (e.g., opentelemetry exporters, ray) require it. Alternative is building from source with OpenSSL on ppc64le in the builder stage.

If building is feasible:

- marker = "python_full_version >= '3.10' and platform_machine != 'ppc64le'"
+ marker = "python_full_version >= '3.10'"

…and ensure the builder stage has:
ENV GRPC_PYTHON_BUILD_SYSTEM_OPENSSL=1
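
A hedged sketch of such a builder stage (the stage name, wheel output path, and the assumption that openssl-devel is present in common-builder are all illustrative, not verified against this repo):

FROM common-builder AS grpcio-builder
ENV GRPC_PYTHON_BUILD_SYSTEM_OPENSSL=1
# Build a grpcio wheel against the system OpenSSL instead of the bundled copy;
# in a real build, pin grpcio to the version locked in pylock.toml.
RUN if [ "$(uname -m)" = "ppc64le" ]; then \
        pip wheel --no-binary grpcio grpcio -w /root/grpcio_wheel; \
    else \
        mkdir -p /root/grpcio_wheel; \
    fi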


3989-3995: wrapt gating: verify dependents (e.g., opentelemetry-instrumentation).

wrapt builds from sdist on Power; excluding it plus the telemetry packages significantly reduces functionality. Prefer allowing build-from-source or keeping telemetry optional.

If keeping telemetry, remove the marker:

- marker = "platform_machine != 'ppc64le'"
📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between bad1322 and 9cfa50f.

📒 Files selected for processing (2)
  • jupyter/datascience/ubi9-python-3.12/pylock.toml (22 hunks)
  • jupyter/datascience/ubi9-python-3.12/pyproject.toml (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • jupyter/datascience/ubi9-python-3.12/pyproject.toml
🧰 Additional context used
🧠 Learnings (4)
📓 Common learnings
Learnt from: jiridanek
PR: opendatahub-io/notebooks#0
File: :0-0
Timestamp: 2025-08-05T17:24:08.616Z
Learning: jiridanek requested PR review for #1521 covering s390x architecture support improvements, demonstrating continued focus on systematic multi-architecture compatibility enhancements in the opendatahub-io/notebooks repository through clean implementation with centralized configuration, proper CI integration, and architecture-aware testing patterns.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2215
File: runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu:0-0
Timestamp: 2025-09-05T11:27:31.040Z
Learning: jiridanek requested GitHub issue creation for build toolchain optimization in datascience runtime during PR #2215 review. Issue #2308 was created addressing unnecessary build dependencies (gcc-toolset-13, cmake, ninja-build, rust, cargo) in final runtime image for ppc64le architecture, covering comprehensive problem analysis with specific line numbers, multiple solution options for builder-only toolchains, clear acceptance criteria for size reduction and security improvement, detailed implementation guidance for package segregation, and proper context linking to PR #2215 review comment, continuing the established pattern of systematic infrastructure improvements through detailed issue tracking.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2215
File: runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu:0-0
Timestamp: 2025-09-05T12:10:50.856Z
Learning: jiridanek requested GitHub issue creation for Dockerfile environment variable refactoring during PR #2215 review. Issue addresses build-only variables (OPENBLAS_VERSION, ONNX_VERSION, GRPC_PYTHON_BUILD_SYSTEM_OPENSSL) being unnecessarily written to /etc/profile.d/ppc64le.sh in runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu, causing variable duplication across stages, unreliable sourcing in non-login build contexts, and violation of DRY principles. The issue includes comprehensive problem description covering affected lines 30-37, detailed impact analysis of build reliability and maintenance overhead, three solution options with centralized ARG/ENV approach as recommended, clear acceptance criteria for version centralization and build-only variable cleanup, and specific implementation guidance with code examples, continuing the established pattern of systematic infrastructure improvements through detailed issue tracking.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2215
File: runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu:0-0
Timestamp: 2025-09-05T12:10:50.856Z
Learning: jiridanek requested GitHub issue creation for Dockerfile environment variable refactoring during PR #2215 review. Issue #2311 was created addressing build-only variables (OPENBLAS_VERSION, ONNX_VERSION, GRPC_PYTHON_BUILD_SYSTEM_OPENSSL) being unnecessarily written to /etc/profile.d/ppc64le.sh in runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu, causing variable duplication across stages, unreliable sourcing in non-login build contexts, and violation of DRY principles. The issue includes comprehensive problem description covering affected lines 30-37, detailed impact analysis of build reliability and maintenance overhead, three solution options with centralized ARG/ENV approach as recommended, clear acceptance criteria for version centralization and build-only variable cleanup, and specific implementation guidance with code examples, assigned to jiridanek, continuing the established pattern of systematic infrastructure improvements through detailed issue tracking.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2145
File: runtimes/rocm-tensorflow/ubi9-python-3.12/pyproject.toml:75-77
Timestamp: 2025-08-27T15:33:28.871Z
Learning: jiridanek requested GitHub issue creation for multi-architecture environment exploration during PR #2145 review, specifically to investigate architecture-specific environments for ROCm TensorFlow wheel optimization across x86_64, aarch64, ppc64le, and s390x architectures, noting uncertainty about implementation benefits but wanting systematic exploration through issue tracking. Issue #2158 was created with comprehensive analysis covering current limitations, investigation areas, multiple solution options, and clear acceptance criteria.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#1320
File: jupyter/pytorch/ubi9-python-3.12/Dockerfile.cuda:42-52
Timestamp: 2025-07-09T12:29:56.162Z
Learning: jiridanek requested GitHub issue creation for OpenShift client architecture mapping problem affecting 29 Dockerfiles during PR #1320 review. Issue was created with comprehensive analysis covering all affected files using $(uname -m) returning 'aarch64' but OpenShift mirror expecting 'arm64', systematic solution using BuildKit TARGETARCH mapping with proper amd64→x86_64 and arm64→arm64 conversion, detailed acceptance criteria, and implementation guidance, continuing the established pattern of systematic code quality improvements through detailed issue tracking.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#1320
File: jupyter/pytorch/ubi9-python-3.12/Dockerfile.cuda:65-66
Timestamp: 2025-07-09T12:31:02.033Z
Learning: jiridanek requested GitHub issue creation for MSSQL repo file hardcoding problem during PR #1320 review. Issue #1363 was created and updated with comprehensive problem description covering hardcoded x86_64 MSSQL repo files breaking multi-architecture builds across 10 affected Dockerfiles (including datascience, CUDA, ROCm, and TrustyAI variants), detailed root cause analysis, three solution options with code examples, clear acceptance criteria for all image types, implementation guidance following established multi-architecture patterns, and proper context linking, continuing the established pattern of systematic code quality improvements through detailed issue tracking.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2227
File: codeserver/ubi9-python-3.12/Dockerfile.cpu:122-123
Timestamp: 2025-09-05T12:25:09.719Z
Learning: jiridanek requested GitHub issue creation for Docker multi-stage synchronization improvement in codeserver/ubi9-python-3.12/Dockerfile.cpu during PR #2227 review. The issue addresses sentinel file pattern using /tmp/control copied to /dev/null for stage coordination between rpm-base, whl-cache, and codeserver stages, proposing semantic improvements with descriptive file names, inline documentation, and elimination of /dev/null hack while maintaining multi-architecture build functionality for ppc64le support.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#1333
File: runtimes/rocm-tensorflow/ubi9-python-3.12/Dockerfile.rocm:50-50
Timestamp: 2025-07-08T19:30:01.738Z
Learning: jiridanek requested GitHub issue creation for multi-architecture support in ROCm TensorFlow image during PR #1333 review. Issue #1346 was created with comprehensive problem description covering hardcoded x86_64 architecture breaking multi-arch support, detailed impact analysis, three solution options (runtime detection, BuildKit TARGETARCH integration, hybrid approach) with pros/cons analysis, comprehensive acceptance criteria covering core requirements and testing, phased implementation guidance, related files identification, and proper context linking, continuing the established pattern of systematic code quality improvements through detailed issue tracking.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2227
File: codeserver/ubi9-python-3.12/Dockerfile.cpu:218-218
Timestamp: 2025-09-05T12:29:07.819Z
Learning: jiridanek requested GitHub issue creation for uv multi-stage Docker build architectural investigation during PR #2227 review. The current implementation uses a three-stage build with whl-cache stage for wheel building/caching, base stage for OS setup, and final codeserver stage for offline installation using --offline flag and cache mounts. The pattern separates build phase (internet access, build tools) from install phase (offline, faster) while supporting multi-architecture builds (x86_64, ppc64le) with sentinel file coordination using /tmp/control files.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#1720
File: jupyter/pytorch+llmcompressor/ubi9-python-3.11/requirements.txt:2659-2680
Timestamp: 2025-08-07T12:41:48.997Z
Learning: For opendatahub-io/notebooks, rpds-py 0.27.0 provides manylinux wheels for Python 3.11 and 3.12 on x86_64 and aarch64, so no Rust build is needed for these platforms. For s390x and ppc64le, wheels are not available, so a Rust build stage or version pinning is required if those images are built.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#1396
File: runtimes/tensorflow/ubi9-python-3.12/Dockerfile.cuda:124-127
Timestamp: 2025-07-20T20:47:36.509Z
Learning: jiridanek identified that ARM64 wheels for h5py 3.14.0 are available on PyPI but being ignored due to AMD64-only dependency locking with --platform=linux/amd64. This causes unnecessary hdf5-devel package installation in ARM64 TensorFlow images when the ARM64 wheel h5py-3.14.0-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl could be used instead. The Pipfile.lock only contains 2 hashes for h5py, confirming limited platform consideration during lock generation.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2180
File: .tekton/odh-base-image-cuda-py312-ubi9-pull-request.yaml:36-39
Timestamp: 2025-08-28T12:42:23.404Z
Learning: Konflux build system in opendatahub-io/notebooks repository requires "linux/x86_64" platform identifier format for Tekton PipelineRun build-platforms parameter, not the OCI-standard "linux/amd64". This is consistently used across all .tekton/ pipeline files and enforced by scripts/generate_pull_request_pipelineruns.py type definitions. Docker/buildah contexts use "linux/amd64" while Konflux/Tekton contexts use "linux/x86_64".
📚 Learning: 2025-09-05T10:05:35.575Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#1513
File: runtimes/datascience/ubi9-python-3.12/pylock.toml:180-180
Timestamp: 2025-09-05T10:05:35.575Z
Learning: In Python lock files for the datascience runtime, both bcrypt and paramiko packages are excluded from s390x platform using the marker "platform_machine != 's390x'" due to compatibility issues on IBM System z mainframe architecture.

Applied to files:

  • jupyter/datascience/ubi9-python-3.12/pylock.toml
📚 Learning: 2025-07-20T20:47:36.509Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#1396
File: runtimes/tensorflow/ubi9-python-3.12/Dockerfile.cuda:124-127
Timestamp: 2025-07-20T20:47:36.509Z
Learning: jiridanek identified that ARM64 wheels for h5py 3.14.0 are available on PyPI but being ignored due to AMD64-only dependency locking with --platform=linux/amd64. This causes unnecessary hdf5-devel package installation in ARM64 TensorFlow images when the ARM64 wheel h5py-3.14.0-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl could be used instead. The Pipfile.lock only contains 2 hashes for h5py, confirming limited platform consideration during lock generation.

Applied to files:

  • jupyter/datascience/ubi9-python-3.12/pylock.toml
📚 Learning: 2025-08-29T15:18:43.229Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2215
File: runtimes/datascience/ubi9-python-3.12/pylock.toml:110-110
Timestamp: 2025-08-29T15:18:43.229Z
Learning: When uv compiles dependencies with platform restrictions (like `platform_machine != 'ppc64le'`), it automatically propagates these markers to transitive dependencies in the generated lock files. This is expected behavior and doesn't require manual intervention in source files.

Applied to files:

  • jupyter/datascience/ubi9-python-3.12/pylock.toml
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
  • GitHub Check: Red Hat Konflux / odh-workbench-jupyter-datascience-cpu-py312-ubi9-on-pull-request
  • GitHub Check: build (jupyter-datascience-ubi9-python-3.12, 3.12, linux/amd64, false) / build
  • GitHub Check: code-static-analysis
🔇 Additional comments (6)
jupyter/datascience/ubi9-python-3.12/pylock.toml (6)

1759-1765: msgpack gating: verify necessity.

msgpack can usually be built from sdist on ppc64le. Excluding it may impact not just ray (already gated) but other consumers as well.

If you keep it, ensure dependent stacks won’t degrade; otherwise remove the marker to allow source builds.


2621-2627: py-spy gating on ppc64le looks correct.

py-spy lacks Power wheels; excluding it prevents build pain without critical functionality loss.


2636-2642: pyarrow gating: document user-visible behavior change.

Excluding PyArrow on ppc64le is acceptable if intended, but please note that parquet/arrow features will be unavailable on Power images.

Optionally consider building from source in a builder stage and copying wheels if size/time is acceptable.


2765-2771: pydantic-core may require Rust on ppc64le.

No marker here is fine, but ensure the ppc64le builder stage includes a Rust toolchain (maturin/cargo) or a prebuilt wheel source. Otherwise installs will fail.
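
If source builds are the chosen route, a minimal sketch for the builder stage (package names assume UBI9 AppStream; unverified):

# Hypothetical addition to common-builder so maturin-based sdists
# such as pydantic-core can compile on Power
RUN if [ "$(uname -m)" = "ppc64le" ]; then \
        dnf install -y rust cargo && dnf clean all; \
    fi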


3227-3233: ray gating on ppc64le is reasonable.

Ray wheels for Power are not commonly published; excluding avoids heavy, fragile builds.


534-536: Align pylock markers with pyproject and Dockerfile install path

Dockerfile installs from pylock.toml (uv pip install --requirements=./pylock.toml), so pyproject markers are ignored — ensure every intended platform exclusion is encoded in jupyter/datascience/ubi9-python-3.12/pylock.toml and do not exclude pure-Python packages you still need. Verified pylock.toml already contains many "platform_machine != 'ppc64le'" markers (examples: aiohttp-cors, bcrypt, grpcio, paramiko, pyarrow, ray, py-spy, virtualenv, wrapt). Update pyproject markers if necessary and regenerate the lock so markers propagate to transitives.
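
A quick way to sanity-check any marker against the Power environment is to evaluate it with the packaging library (illustrative; packaging is importable in any pip/uv toolchain):

python3 - <<'EOF'
from packaging.markers import Marker

m = Marker("platform_machine != 'ppc64le'")
# evaluate() overlays the given dict on the current interpreter's environment
print(m.evaluate({"platform_machine": "ppc64le"}))  # False -> excluded on Power
print(m.evaluate({"platform_machine": "x86_64"}))   # True  -> installed elsewhere
EOF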

Comment on lines 105 to 111
[[packages]]
name = "aiohttp-cors"
version = "0.8.1"
marker = "platform_machine != 'ppc64le'"
sdist = { url = "https://files.pythonhosted.org/packages/6f/6d/d89e846a5444b3d5eb8985a6ddb0daef3774928e1bfbce8e84ec97b0ffa7/aiohttp_cors-0.8.1.tar.gz", upload-time = 2025-03-31T14:16:20Z, size = 38626, hashes = { sha256 = "ccacf9cb84b64939ea15f859a146af1f662a6b1d68175754a07315e305fb1403" } }
wheels = [{ url = "https://files.pythonhosted.org/packages/98/3b/40a68de458904bcc143622015fff2352b6461cd92fd66d3527bf1c6f5716/aiohttp_cors-0.8.1-py3-none-any.whl", upload-time = 2025-03-31T14:16:18Z, size = 25231, hashes = { sha256 = "3180cf304c5c712d626b9162b195b1db7ddf976a2a25172b35bb2448b890a80d" } }]


🛠️ Refactor suggestion

Do not gate pure-Python aiohttp-cors on ppc64le.

This exclusion is unnecessary and may drop a transitive dep on Power builds.

Apply:

- marker = "platform_machine != 'ppc64le'"

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In jupyter/datascience/ubi9-python-3.12/pylock.toml around lines 105 to 111 the
aiohttp-cors package is incorrectly marked with marker = "platform_machine !=
'ppc64le'", which excludes the pure-Python package from ppc64le builds and can
drop transitive dependencies; remove the marker line from this [[packages]]
entry so aiohttp-cors is available on all platforms (leave sdist and wheels
entries unchanged).

Comment on lines 222 to 228
[[packages]]
name = "bcrypt"
version = "4.3.0"
marker = "platform_machine != 'ppc64le'"
sdist = { url = "https://files.pythonhosted.org/packages/bb/5d/6d7433e0f3cd46ce0b43cd65e1db465ea024dbb8216fb2404e919c2ad77b/bcrypt-4.3.0.tar.gz", upload-time = 2025-02-28T01:24:09Z, size = 25697, hashes = { sha256 = "3a3fd2204178b6d2adcf09cb4f6426ffef54762577a7c9b54c159008cb288c18" } }
wheels = [
{ url = "https://files.pythonhosted.org/packages/bf/2c/3d44e853d1fe969d229bd58d39ae6902b3d924af0e2b5a60d17d4b809ded/bcrypt-4.3.0-cp313-cp313t-macosx_10_12_universal2.whl", upload-time = 2025-02-28T01:22:34Z, size = 483719, hashes = { sha256 = "f01e060f14b6b57bbb72fc5b4a83ac21c443c9a2ee708e04a10e9192f90a6281" } },

⚠️ Potential issue

bcrypt should not be excluded on ppc64le; past exclusions targeted s390x.

This likely regressed from the s390x-only workaround noted historically. Excluding here may break auth flows that rely on bcrypt.

Fix:

- marker = "platform_machine != 'ppc64le'"
+ # Keep available on ppc64le. If needed, only exclude on s390x:
+ # marker = "platform_machine != 's390x'"
🤖 Prompt for AI Agents
In jupyter/datascience/ubi9-python-3.12/pylock.toml around lines 222 to 228, the
bcrypt package is incorrectly marked as excluded for platform_machine ==
'ppc64le' (marker = "platform_machine != 'ppc64le'"), which should not be
excluded — the historical workaround targeted s390x; change the marker so bcrypt
is not excluded on ppc64le (either remove the marker entirely or replace it with
the correct condition for s390x, e.g. platform_machine != 's390x'), and keep the
sdist/wheel entries unchanged so bcrypt remains available on ppc64le while still
avoiding the intended problematic platform.

Comment on lines 534 to +537
sdist = { url = "https://files.pythonhosted.org/packages/d4/72/7991b33a0aeabee250bedb9a1fa0f30aa49f5739718df6f4b18aeb62df90/codeflare_sdk-0.31.0.tar.gz", upload-time = 2025-09-11T08:19:58Z, size = 89503, hashes = { sha256 = "f68e4cbb60e5a0b55fd14b55addf45eaee6c5bf49f0dd7d30ca340f4b6fd564c" } }
wheels = [{ url = "https://files.pythonhosted.org/packages/ad/84/8005beea8a1773be2860bb90fc8d2cbd4826a72ad2d3f29172f38caa0aca/codeflare_sdk-0.31.0-py3-none-any.whl", upload-time = 2025-09-11T08:19:56Z, size = 140633, hashes = { sha256 = "f477b7f382c3b85002aff6e566a76e54405a4ade4211041adbfc11f798896276" } }]



⚠️ Potential issue

codeflare-sdk missing platform marker despite pyproject exclusion.

pylock.toml is the source of truth during uv --requirements installs; without a marker, ppc64le will still try to install codeflare-sdk.

Add the marker:

 [[packages]]
 name = "codeflare-sdk"
 version = "0.31.0"
+marker = "platform_machine != 'ppc64le'"
 sdist = { url = ".../codeflare_sdk-0.31.0.tar.gz", ... }
 wheels = [{ url = ".../codeflare_sdk-0.31.0-py3-none-any.whl", ... }]

Also re-run:
uv pip compile pyproject.toml --output-file pylock.toml --format pylock.toml --generate-hashes --emit-index-url --python-version=3.12 --python-platform linux
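
After regenerating, a quick grep can confirm the marker actually landed in the lock (illustrative check):

grep -A 2 'name = "codeflare-sdk"' jupyter/datascience/ubi9-python-3.12/pylock.toml
# expected output includes: marker = "platform_machine != 'ppc64le'"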

Comment on lines 544 to 550
[[packages]]
name = "colorful"
version = "0.5.7"
marker = "platform_machine != 'ppc64le'"
sdist = { url = "https://files.pythonhosted.org/packages/0c/0c/d180ebf230b771907f46981023a80f62cf592d49673cc5f8a5993aa67bb6/colorful-0.5.7.tar.gz", upload-time = 2025-06-30T15:24:03Z, size = 209487, hashes = { sha256 = "c5452179b56601c178b03d468a5326cc1fe37d9be81d24d0d6bdab36c4b93ad8" } }
wheels = [{ url = "https://files.pythonhosted.org/packages/e2/98/0d791b3d1eaed89d7d370b5cf9b8079b124da0545559417f394ba21b5532/colorful-0.5.7-py2.py3-none-any.whl", upload-time = 2025-06-30T15:24:02Z, size = 201475, hashes = { sha256 = "495dd3a23151a9568cee8a90fc1174c902ad7ef06655f50b6bddf9e80008da69" } }]


🛠️ Refactor suggestion

colorful is pure-Python; avoid excluding on ppc64le.

Remove:

- marker = "platform_machine != 'ppc64le'"
🤖 Prompt for AI Agents
In jupyter/datascience/ubi9-python-3.12/pylock.toml around lines 544 to 550, the
`colorful` package is marked with `marker = "platform_machine != 'ppc64le'"`
which incorrectly excludes ppc64le despite being pure-Python; remove the
`marker` entry for this package so the package applies to all platforms (delete
the marker key from the [[packages]] block for colorful and keep the
sdist/wheels entries intact).

Comment on lines 731 to 737
[[packages]]
name = "distlib"
version = "0.4.0"
marker = "platform_machine != 'ppc64le'"
sdist = { url = "https://files.pythonhosted.org/packages/96/8e/709914eb2b5749865801041647dc7f4e6d00b549cfe88b65ca192995f07c/distlib-0.4.0.tar.gz", upload-time = 2025-07-17T16:52:00Z, size = 614605, hashes = { sha256 = "feec40075be03a04501a973d81f633735b4b69f98b05450592310c0f401a4e0d" } }
wheels = [{ url = "https://files.pythonhosted.org/packages/33/6b/e0547afaf41bf2c42e52430072fa5658766e3d65bd4b03a563d1b6336f57/distlib-0.4.0-py2.py3-none-any.whl", upload-time = 2025-07-17T16:51:58Z, size = 469047, hashes = { sha256 = "9659f7d87e46584a30b5780e43ac7a2143098441670ff0a49d5f9034c54a6c16" } }]


🛠️ Refactor suggestion

distlib is pure-Python; do not gate it.

Remove:

- marker = "platform_machine != 'ppc64le'"
🤖 Prompt for AI Agents
In jupyter/datascience/ubi9-python-3.12/pylock.toml around lines 731-737 the
distlib package is incorrectly conditioned with marker = "platform_machine !=
'ppc64le'"; distlib is pure-Python and must not be gated. Edit the distlib
[[packages]] entry to remove the marker line entirely (leave name, version,
sdist and wheels intact) so the package is applied unconditionally.

Comment on lines 2159 to 2214
[[packages]]
name = "opencensus"
version = "0.11.4"
marker = "platform_machine != 'ppc64le'"
sdist = { url = "https://files.pythonhosted.org/packages/15/a7/a46dcffa1b63084f9f17fe3c8cb20724c4c8f91009fd0b2cfdb27d5d2b35/opencensus-0.11.4.tar.gz", upload-time = 2024-01-03T18:04:07Z, size = 64966, hashes = { sha256 = "cbef87d8b8773064ab60e5c2a1ced58bbaa38a6d052c41aec224958ce544eff2" } }
wheels = [{ url = "https://files.pythonhosted.org/packages/b5/ed/9fbdeb23a09e430d87b7d72d430484b88184633dc50f6bfb792354b6f661/opencensus-0.11.4-py2.py3-none-any.whl", upload-time = 2024-01-03T18:04:05Z, size = 128225, hashes = { sha256 = "a18487ce68bc19900336e0ff4655c5a116daf10c1b3685ece8d971bddad6a864" } }]

[[packages]]
name = "opencensus-context"
version = "0.1.3"
marker = "platform_machine != 'ppc64le'"
sdist = { url = "https://files.pythonhosted.org/packages/4c/96/3b6f638f6275a8abbd45e582448723bffa29c1fb426721dedb5c72f7d056/opencensus-context-0.1.3.tar.gz", upload-time = 2022-08-03T22:20:22Z, size = 4066, hashes = { sha256 = "a03108c3c10d8c80bb5ddf5c8a1f033161fa61972a9917f9b9b3a18517f0088c" } }
wheels = [{ url = "https://files.pythonhosted.org/packages/10/68/162c97ea78c957d68ecf78a5c5041d2e25bd5562bdf5d89a6cbf7f8429bf/opencensus_context-0.1.3-py2.py3-none-any.whl", upload-time = 2022-08-03T22:20:20Z, size = 5060, hashes = { sha256 = "073bb0590007af276853009fac7e4bab1d523c3f03baf4cb4511ca38967c6039" } }]

[[packages]]
name = "openshift-client"
version = "1.0.18"
marker = "platform_machine != 'ppc64le'"
sdist = { url = "https://files.pythonhosted.org/packages/61/4d/96d40621a88e430127f98467b6ef67ff2e39f2d33b9740ea0e470cf2b8bc/openshift-client-1.0.18.tar.gz", upload-time = 2022-09-13T22:08:36Z, size = 78332, hashes = { sha256 = "be3979440cfd96788146a3a1650dabe939d4d516eea0b39f87e66d2ab39495b1" } }
wheels = [{ url = "https://files.pythonhosted.org/packages/1d/23/a84f274a534dfcd8454f20fc19d129a7d832e5816830946670b8c954438c/openshift_client-1.0.18-py2.py3-none-any.whl", upload-time = 2022-09-13T22:08:35Z, size = 75076, hashes = { sha256 = "d8a84080307ccd9556f6c62a3707a3e6507baedee36fa425754f67db9ded528b" } }]

[[packages]]
name = "opentelemetry-api"
version = "1.36.0"
marker = "platform_machine != 'ppc64le'"
sdist = { url = "https://files.pythonhosted.org/packages/27/d2/c782c88b8afbf961d6972428821c302bd1e9e7bc361352172f0ca31296e2/opentelemetry_api-1.36.0.tar.gz", upload-time = 2025-07-29T15:12:06Z, size = 64780, hashes = { sha256 = "9a72572b9c416d004d492cbc6e61962c0501eaf945ece9b5a0f56597d8348aa0" } }
wheels = [{ url = "https://files.pythonhosted.org/packages/bb/ee/6b08dde0a022c463b88f55ae81149584b125a42183407dc1045c486cc870/opentelemetry_api-1.36.0-py3-none-any.whl", upload-time = 2025-07-29T15:11:47Z, size = 65564, hashes = { sha256 = "02f20bcacf666e1333b6b1f04e647dc1d5111f86b8e510238fcc56d7762cda8c" } }]

[[packages]]
name = "opentelemetry-exporter-prometheus"
version = "0.57b0"
marker = "platform_machine != 'ppc64le'"
sdist = { url = "https://files.pythonhosted.org/packages/d6/d8/5f04c6d51c0823c3d8ac973a2a38db6fcf2d040ca3f08fc66b3c14b6e164/opentelemetry_exporter_prometheus-0.57b0.tar.gz", upload-time = 2025-07-29T15:12:09Z, size = 14906, hashes = { sha256 = "9eb15bdc189235cf03c3f93abf56f8ff0ab57a493a189263bd7fe77a4249e689" } }
wheels = [{ url = "https://files.pythonhosted.org/packages/c1/1c/40fb93a7b7e495985393bbc734104d5d20e470811644dd56c2402d683739/opentelemetry_exporter_prometheus-0.57b0-py3-none-any.whl", upload-time = 2025-07-29T15:11:54Z, size = 12922, hashes = { sha256 = "c5b893d1cdd593fb022af2c7de3258c2d5a4d04402ae80d9fa35675fed77f05c" } }]

[[packages]]
name = "opentelemetry-proto"
version = "1.27.0"
marker = "platform_machine != 'ppc64le'"
sdist = { url = "https://files.pythonhosted.org/packages/9a/59/959f0beea798ae0ee9c979b90f220736fbec924eedbefc60ca581232e659/opentelemetry_proto-1.27.0.tar.gz", upload-time = 2024-08-28T21:35:45Z, size = 34749, hashes = { sha256 = "33c9345d91dafd8a74fc3d7576c5a38f18b7fdf8d02983ac67485386132aedd6" } }
wheels = [{ url = "https://files.pythonhosted.org/packages/94/56/3d2d826834209b19a5141eed717f7922150224d1a982385d19a9444cbf8d/opentelemetry_proto-1.27.0-py3-none-any.whl", upload-time = 2024-08-28T21:35:21Z, size = 52464, hashes = { sha256 = "b133873de5581a50063e1e4b29cdcf0c5e253a8c2d8dc1229add20a4c3830ace" } }]

[[packages]]
name = "opentelemetry-sdk"
version = "1.36.0"
marker = "platform_machine != 'ppc64le'"
sdist = { url = "https://files.pythonhosted.org/packages/4c/85/8567a966b85a2d3f971c4d42f781c305b2b91c043724fa08fd37d158e9dc/opentelemetry_sdk-1.36.0.tar.gz", upload-time = 2025-07-29T15:12:16Z, size = 162557, hashes = { sha256 = "19c8c81599f51b71670661ff7495c905d8fdf6976e41622d5245b791b06fa581" } }
wheels = [{ url = "https://files.pythonhosted.org/packages/0b/59/7bed362ad1137ba5886dac8439e84cd2df6d087be7c09574ece47ae9b22c/opentelemetry_sdk-1.36.0-py3-none-any.whl", upload-time = 2025-07-29T15:12:03Z, size = 119995, hashes = { sha256 = "19fe048b42e98c5c1ffe85b569b7073576ad4ce0bcb6e9b4c6a39e890a6c45fb" } }]

[[packages]]
name = "opentelemetry-semantic-conventions"
version = "0.57b0"
marker = "platform_machine != 'ppc64le'"
sdist = { url = "https://files.pythonhosted.org/packages/7e/31/67dfa252ee88476a29200b0255bda8dfc2cf07b56ad66dc9a6221f7dc787/opentelemetry_semantic_conventions-0.57b0.tar.gz", upload-time = 2025-07-29T15:12:17Z, size = 124225, hashes = { sha256 = "609a4a79c7891b4620d64c7aac6898f872d790d75f22019913a660756f27ff32" } }
wheels = [{ url = "https://files.pythonhosted.org/packages/05/75/7d591371c6c39c73de5ce5da5a2cc7b72d1d1cd3f8f4638f553c01c37b11/opentelemetry_semantic_conventions-0.57b0-py3-none-any.whl", upload-time = 2025-07-29T15:12:04Z, size = 201627, hashes = { sha256 = "757f7e76293294f124c827e514c2a3144f191ef175b069ce8d1211e1e38e9e78" } }]


🛠️ Refactor suggestion

OpenCensus/OpenTelemetry packages are pure-Python; gating them removes tracing from ppc64le.

Unless you intend to drop telemetry on Power, these should stay enabled. grpcio can be handled separately.

Apply to each block:

- marker = "platform_machine != 'ppc64le'"

If grpc exporters become an issue, replace exporter packages with HTTP/OTLP alternatives that don’t require grpcio.
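
For instance, the HTTP/protobuf OTLP path avoids grpcio entirely; a hedged sketch (the package name is the upstream one, 4318 is the default OTLP/HTTP collector port, and the endpoint host is a placeholder):

pip install opentelemetry-exporter-otlp-proto-http   # pure HTTP, no grpcio
export OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf
export OTEL_EXPORTER_OTLP_ENDPOINT=http://otel-collector:4318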

🤖 Prompt for AI Agents
In jupyter/datascience/ubi9-python-3.12/pylock.toml around lines 2159 to 2214
the OpenCensus/OpenTelemetry package blocks include marker="platform_machine !=
'ppc64le'" which prevents installation on ppc64le and removes telemetry there;
remove the marker (or change it to allow ppc64le) for each
OpenCensus/OpenTelemetry package block so these pure-Python packages remain
enabled on Power, and if grpcio causes build issues, replace grpc-based
exporters with HTTP/OTLP variants (or gate only grpcio) rather than gating the
telemetry packages themselves.

Comment on lines 2287 to 2293
[[packages]]
name = "paramiko"
version = "4.0.0"
marker = "platform_machine != 'ppc64le'"
sdist = { url = "https://files.pythonhosted.org/packages/1f/e7/81fdcbc7f190cdb058cffc9431587eb289833bdd633e2002455ca9bb13d4/paramiko-4.0.0.tar.gz", upload-time = 2025-08-04T01:02:03Z, size = 1630743, hashes = { sha256 = "6a25f07b380cc9c9a88d2b920ad37167ac4667f8d9886ccebd8f90f654b5d69f" } }
wheels = [{ url = "https://files.pythonhosted.org/packages/a9/90/a744336f5af32c433bd09af7854599682a383b37cfd78f7de263de6ad6cb/paramiko-4.0.0-py3-none-any.whl", upload-time = 2025-08-04T01:02:02Z, size = 223932, hashes = { sha256 = "0e20e00ac666503bf0b4eda3b6d833465a2b7aff2e2b3d79a8bba5ef144ee3b9" } }]


⚠️ Potential issue

paramiko should not be excluded on ppc64le; past workaround was s390x-only.

This likely came from a copy/edit error similar to bcrypt.

Fix:

- marker = "platform_machine != 'ppc64le'"
+ # marker = "platform_machine != 's390x'"

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In jupyter/datascience/ubi9-python-3.12/pylock.toml around lines 2287 to 2293,
the paramiko package has an incorrect marker excluding ppc64le (likely
copy/paste from bcrypt); update the marker to match the intended s390x-only
workaround by changing marker = "platform_machine != 'ppc64le'" to marker =
"platform_machine != 's390x'". Ensure no other fields are modified.

Comment on lines 3306 to 3312
[[packages]]
name = "rich"
version = "13.9.4"
marker = "platform_machine != 'ppc64le'"
sdist = { url = "https://files.pythonhosted.org/packages/ab/3a/0316b28d0761c6734d6bc14e770d85506c986c85ffb239e688eeaab2c2bc/rich-13.9.4.tar.gz", upload-time = 2024-11-01T16:43:57Z, size = 223149, hashes = { sha256 = "439594978a49a09530cff7ebc4b5c7103ef57baf48d5ea3184f21d9a2befa098" } }
wheels = [{ url = "https://files.pythonhosted.org/packages/19/71/39c7c0d87f8d4e6c020a393182060eaefeeae6c01dab6a84ec346f2567df/rich-13.9.4-py3-none-any.whl", upload-time = 2024-11-01T16:43:55Z, size = 242424, hashes = { sha256 = "6049d5e6ec054bf2779ab3358186963bac2ea89175919d699e378b99738c2a90" } }]


🛠️ Refactor suggestion

rich is pure-Python; avoid excluding.

Remove:

- marker = "platform_machine != 'ppc64le'"
🤖 Prompt for AI Agents
In jupyter/datascience/ubi9-python-3.12/pylock.toml around lines 3306 to 3312,
the rich package entry includes the marker "platform_machine != 'ppc64le'" which
wrongly excludes a pure-Python package on ppc64le; remove the marker (delete the
marker line or set it to an empty value) so rich is not platform-restricted,
leaving the sdist and wheels entries intact.

Comment on lines 3669 to 3675
[[packages]]
name = "smart-open"
version = "7.3.1"
marker = "platform_machine != 'ppc64le'"
sdist = { url = "https://files.pythonhosted.org/packages/16/be/bf2d60280a9d7fac98ece2150a22538fa4332cda67d04d9618c8406f791e/smart_open-7.3.1.tar.gz", upload-time = 2025-09-08T10:03:53Z, size = 51405, hashes = { sha256 = "b33fee8dffd206f189d5e704106a8723afb4210d2ff47e0e1f7fbe436187a990" } }
wheels = [{ url = "https://files.pythonhosted.org/packages/e5/d9/460cf1d58945dd771c228c29d5664f431dfc4060d3d092fed40546b11472/smart_open-7.3.1-py3-none-any.whl", upload-time = 2025-09-08T10:03:52Z, size = 61722, hashes = { sha256 = "e243b2e7f69d6c0c96dd763d6fbbedbb4e0e4fc6d74aa007acc5b018d523858c" } }]


🛠️ Refactor suggestion

smart-open is pure-Python; avoid excluding.

Remove:

- marker = "platform_machine != 'ppc64le'"
🤖 Prompt for AI Agents
jupyter/datascience/ubi9-python-3.12/pylock.toml around lines 3669 to 3675: the
smart-open package stanza includes a platform marker excluding ppc64le even
though smart-open is pure-Python; remove the marker line from this [[packages]]
block so the package is not platform-restricted and remains available on
ppc64le, keeping the sdist and wheels entries unchanged.

Comment on lines 3904 to 3910
[[packages]]
name = "virtualenv"
version = "20.34.0"
marker = "platform_machine != 'ppc64le'"
sdist = { url = "https://files.pythonhosted.org/packages/1c/14/37fcdba2808a6c615681cd216fecae00413c9dab44fb2e57805ecf3eaee3/virtualenv-20.34.0.tar.gz", upload-time = 2025-08-13T14:24:07Z, size = 6003808, hashes = { sha256 = "44815b2c9dee7ed86e387b842a84f20b93f7f417f95886ca1996a72a4138eb1a" } }
wheels = [{ url = "https://files.pythonhosted.org/packages/76/06/04c8e804f813cf972e3262f3f8584c232de64f0cde9f703b46cf53a45090/virtualenv-20.34.0-py3-none-any.whl", upload-time = 2025-08-13T14:24:05Z, size = 5983279, hashes = { sha256 = "341f5afa7eee943e4984a9207c025feedd768baff6753cd660c857ceb3e36026" } }]


🛠️ Refactor suggestion

virtualenv is pure-Python; avoid excluding.

This exclusion could break tooling in image builds.

Remove:

- marker = "platform_machine != 'ppc64le'"
🤖 Prompt for AI Agents
In jupyter/datascience/ubi9-python-3.12/pylock.toml around lines 3904-3910, the
virtualenv package block includes a marker excluding it for ppc64le (marker =
"platform_machine != 'ppc64le'"); since virtualenv is pure-Python this exclusion
should be removed to avoid breaking image build tooling; edit the package entry
to delete the marker line (and any related conditional) so the virtualenv
package is available on all platforms, leaving the rest of the package metadata
unchanged.

Updated Dockerfile to fix test case
@openshift-ci openshift-ci bot added size/l and removed size/l labels Sep 12, 2025

openshift-ci bot commented Sep 12, 2025

@bhagyashrigai: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name | Commit | Details | Required | Rerun command
ci/prow/notebook-jupyter-ds-ubi9-python-3-12-pr-image-mirror | 7ca6955 | link | true | /test notebook-jupyter-ds-ubi9-python-3-12-pr-image-mirror
ci/prow/images | 7ca6955 | link | true | /test images
ci/prow/notebooks-py312-ubi9-e2e-tests | 7ca6955 | link | true | /test notebooks-py312-ubi9-e2e-tests

Full PR test history. Your PR dashboard.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

jiridanek pushed and added commits to jiridanek/notebooks that referenced this pull request (Sep 15-16, 2025)
…tahub-io#2406)

# Conflicts:
#	jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu
#	jupyter/datascience/ubi9-python-3.12/pylock.toml
#	jupyter/datascience/ubi9-python-3.12/pyproject.toml

jiridanek pushed a commit that referenced this pull request Sep 16, 2025
jiridanek added a commit that referenced this pull request Sep 16, 2025
@jiridanek
Member

jiridanek commented Sep 16, 2025

The work here could not be merged directly because of conflicts with the main branch, so it's been merged through a follow-up PR instead.

@jiridanek jiridanek closed this Sep 16, 2025