RHAIENG-2846 [3/3]: Hermetic Dockerfile + build patches + Tekton for codeserver (#2985)
Conversation
📝 Walkthrough

Adds comprehensive hermetic/offline build inputs and tooling for the codeserver UBI9 Python 3.12 image: prefetch manifests/locks, repo configs, offline env and setup scripts, patch application, hermetic Dockerfile stages, Tekton pipeline updates, lockfile-generator helpers, and minor top-level ignore and CI adjustments.
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed (1 warning)
Force-pushed: 2b3cb3a → b994a0f → c6f2b1d → 99a8767
Actionable comments posted: 15
🧹 Nitpick comments (13)
codeserver/ubi9-python-3.12/devel_env_setup.sh (1)
22-22: Optional: quote the variable in `mkdir -p`.

While `/wheelsdir` contains no spaces, quoting `${WHEEL_DIR}` is a shellcheck-level best practice and guards against future value changes.

♻️ Proposed fix

```diff
-mkdir -p ${WHEEL_DIR}
+mkdir -p "${WHEEL_DIR}"
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `codeserver/ubi9-python-3.12/devel_env_setup.sh` at line 22, quote the `${WHEEL_DIR}` reference in the `mkdir -p` invocation so it becomes `mkdir -p "${WHEEL_DIR}"`, ensuring the value is handled safely if it ever contains spaces or special characters.

.dockerignore (1)
9-9: Build context will now include the `.git/` directory — verify the size impact.

With `.git/` no longer excluded from the Docker build context, the entire `.git/` tree is sent to the Docker daemon on every build. For a repo with moderate history (or any substantial packaged assets), this can meaningfully inflate build-context transfer time. If Konflux clones with `--depth=1`, this is less of a concern, but worth confirming that CI clones are shallow. A descriptive comment is also clearer than a commented-out pattern:

📝 Suggested comment improvement

```diff
-#.git/ # Need to keep for gitsubmodule
+# .git/ is intentionally NOT ignored: the Dockerfile needs it to resolve the
+# code-server git submodule (COPY .git ...). Ensure CI uses a shallow clone.
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `.dockerignore` at line 9, the commented line `#.git/ # Need to keep for gitsubmodule` causes the `.git/` directory to be included in the Docker build context; either restore an explicit exclusion (add `.git/` to the .dockerignore), or replace the commented pattern with a clear explanatory comment and keep the exclusion removed only after confirming CI uses shallow clones (`git clone --depth=1`) so the build-context size stays acceptable.

.tekton/odh-workbench-codeserver-datascience-cpu-py312-ubi9-push.yaml (1)
60-165: Test and self-hosted extension npm paths add significant prefetch surface — confirm all are required for the code-server source build.

The prefetch-input list includes paths for test infrastructure (`lib/vscode/test/automation`, `test/smoke`, `test/mcp`, `test/monaco`, `test/integration/browser`), test extensions (`vscode-api-tests`, `vscode-colorize-*`, `vscode-test-resolver`, `vscode-selfhost-import-aid`, `vscode-selfhost-test-provider`), and e2e fixtures (`test/e2e/extensions/test-extension`). If these are only needed at test-run time (not during the compilation of code-server itself), removing them would reduce prefetch time and the hermetic dependency footprint.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `.tekton/odh-workbench-codeserver-datascience-cpu-py312-ubi9-push.yaml` around lines 60-165, inspect the build scripts and packaging steps that consume the prefetch-input entries (search for references to these directory names in the build/compile scripts and npm install steps), remove any paths not referenced at build/compile time, keep only the directories required for the source build (e.g., core `lib/vscode` and `build/*`), and document which entries were removed and why so CI can validate the reduced prefetch surface.

codeserver/ubi9-python-3.12/prefetch-input/patches/codeserver-offline-env.sh (1)
20-22: Prefer `$HOME` over `~` for explicit and portable path expansion.

`~` in an unquoted assignment expands correctly in bash but is opaque and not POSIX-portable (e.g., it would fail if the script were sourced by `sh`). Since `HERMETO_OUTPUT` and other paths already use `${}` expansions, consistency favors `$HOME`.

♻️ Proposed fix

```diff
-export ELECTRON_CACHE=~/.cache/electron
+export ELECTRON_CACHE="${HOME}/.cache/electron"
 # Playwright: skip browser download (pre-populated by setup-offline-binaries.sh)
-export PLAYWRIGHT_BROWSERS_PATH=~/.cache/ms-playwright
+export PLAYWRIGHT_BROWSERS_PATH="${HOME}/.cache/ms-playwright"
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `codeserver/ubi9-python-3.12/prefetch-input/patches/codeserver-offline-env.sh` around lines 20-22, update the `ELECTRON_CACHE` and `PLAYWRIGHT_BROWSERS_PATH` assignments to use `${HOME}` for explicit, portable expansion, keeping the style consistent with other variables like `HERMETO_OUTPUT`.

scripts/lockfile-generators/helpers/rpm-lockfile-generate.sh (2)
52-58: EPEL key destination path hardcodes major version `9` while the download URL is dynamic.

If this script ever runs on RHEL 10, `EPEL_KEY` will still point to `RPM-GPG-KEY-EPEL-9` while the curl will correctly download `RPM-GPG-KEY-EPEL-10`, leaving the key at the wrong path.

♻️ Proposed fix — derive the key path from the detected OS version

```diff
-EPEL_KEY="/etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-9"
+EPEL_KEY="/etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-${OS_VER%%.*}"
 if [[ ! -f "$EPEL_KEY" ]]; then
   mkdir -p "$(dirname "$EPEL_KEY")"
   curl -sL "https://dl.fedoraproject.org/pub/epel/RPM-GPG-KEY-EPEL-${OS_VER%%.*}" -o "$EPEL_KEY"
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `scripts/lockfile-generators/helpers/rpm-lockfile-generate.sh` around lines 52-58, change `EPEL_KEY` to derive its filename from `OS_VER` (using the same expansion as in the curl URL, `${OS_VER%%.*}`) so the path matches the downloaded key; keep the existing `mkdir -p` and curl behaviour and update references to `EPEL_KEY` accordingly.
65-68: CRB repo ID hardcodes both OS version and architecture.
`codeready-builder-for-rhel-9-x86_64-rpms` is hard-coded even though `OS_VER` is already resolved dynamically and the container may theoretically run on non-x86_64 infra in the future. The `|| true` silently swallows any failure, so this won't break today, but it means CRB won't be enabled on other architectures or future RHEL versions without code changes.

♻️ Proposed fix — derive repo ID from detected OS and architecture

```diff
-  subscription-manager repos --enable=codeready-builder-for-rhel-9-x86_64-rpms 2>/dev/null || true
+  _crb_repo="codeready-builder-for-rhel-${OS_VER%%.*}-$(uname -m)-rpms"
+  subscription-manager repos --enable="${_crb_repo}" 2>/dev/null || true
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `scripts/lockfile-generators/helpers/rpm-lockfile-generate.sh` around lines 65-68, build the CRB repo ID from the detected OS version and architecture (e.g., `CRB_REPO="codeready-builder-for-rhel-${OS_VER%%.*}-$(uname -m)-rpms"`), enable it with `subscription-manager repos --enable="${CRB_REPO}"`, and instead of silently swallowing errors with `|| true`, check the exit status and emit a clear warning if enabling fails so failures are visible.

codeserver/ubi9-python-3.12/prefetch-input/patches/setup-offline-binaries.sh (1)
71-72: Redundant alias variable `VSCODE_OFFLINE_DIR`.

`VSCODE_OFFLINE_DIR` is set to `"${VSCODE_OFFLINE_CACHE}"` and then used as the target for the next three `cp` commands, but `VSCODE_OFFLINE_CACHE` is already exported by `codeserver-offline-env.sh`. Use it directly.

♻️ Proposed refactor

```diff
-VSCODE_OFFLINE_DIR="${VSCODE_OFFLINE_CACHE}"
-mkdir -p "${VSCODE_OFFLINE_DIR}"
+mkdir -p "${VSCODE_OFFLINE_CACHE}"
 # Copy .vsix extension files
-cp "${HERMETO_OUTPUT}/deps/generic/ms-vscode.js-debug-companion.1.1.3.vsix" "${VSCODE_OFFLINE_DIR}/"
-cp "${HERMETO_OUTPUT}/deps/generic/ms-vscode.js-debug.1.105.0.vsix" "${VSCODE_OFFLINE_DIR}/"
-cp "${HERMETO_OUTPUT}/deps/generic/ms-vscode.vscode-js-profile-table.1.0.10.vsix" "${VSCODE_OFFLINE_DIR}/"
+cp "${HERMETO_OUTPUT}/deps/generic/ms-vscode.js-debug-companion.1.1.3.vsix" "${VSCODE_OFFLINE_CACHE}/"
+cp "${HERMETO_OUTPUT}/deps/generic/ms-vscode.js-debug.1.105.0.vsix" "${VSCODE_OFFLINE_CACHE}/"
+cp "${HERMETO_OUTPUT}/deps/generic/ms-vscode.vscode-js-profile-table.1.0.10.vsix" "${VSCODE_OFFLINE_CACHE}/"
 # Copy Node.js runtime binary (for bundling with VSCode server)
-cp "${HERMETO_OUTPUT}/deps/generic/node-v22.20.0-linux-x64.tar.gz" "${VSCODE_OFFLINE_DIR}/"
+cp "${HERMETO_OUTPUT}/deps/generic/node-v22.20.0-linux-x64.tar.gz" "${VSCODE_OFFLINE_CACHE}/"
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `codeserver/ubi9-python-3.12/prefetch-input/patches/setup-offline-binaries.sh` around lines 71-72, delete the `VSCODE_OFFLINE_DIR` assignment and update the `mkdir -p` and subsequent `cp` commands to reference `VSCODE_OFFLINE_CACHE` directly.

scripts/lockfile-generators/helpers/hermeto-fetch-rpm.sh (2)
85-85: JSON built via string interpolation — fragile if paths contain special characters.

While `mktemp`-generated paths are unlikely to contain JSON-breaking characters, constructing the JSON via `jq` would be more robust:

Proposed fix

```diff
-  HERMETO_JSON="{\"type\": \"rpm\", \"options\": {\"ssl\": {\"client_cert\": \"$C_CERT\", \"client_key\": \"$C_KEY\", \"ca_bundle\": \"$C_CA\"}}}"
+  HERMETO_JSON=$(jq -n \
+    --arg cert "$C_CERT" \
+    --arg key "$C_KEY" \
+    --arg ca "$C_CA" \
+    '{"type":"rpm","options":{"ssl":{"client_cert":$cert,"client_key":$key,"ca_bundle":$ca}}}')
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `scripts/lockfile-generators/helpers/hermeto-fetch-rpm.sh` at line 85, replace the literal JSON construction with a `jq -n` build that uses `--arg` to set `"type":"rpm"` and the nested `options.ssl` object from the `C_CERT`, `C_KEY`, and `C_CA` shell variables, then capture jq's output into `HERMETO_JSON`.
61-104: Cert directory not cleaned up on failure — add a trap.

If hermeto fails (lines 90-95), `set -e` exits the script before reaching the cleanup on line 104, leaving the extracted entitlement certs on disk. A `trap` ensures cleanup in all exit paths.

Proposed fix

```diff
 HERMETO_JSON='{"type": "rpm"}'
 CDN_CERT_DIR=""
+cleanup() { [[ -n "${CDN_CERT_DIR:-}" ]] && rm -rf "$CDN_CERT_DIR"; }
+trap cleanup EXIT
 if [[ -n "$ACTIVATION_KEY" ]] && [[ -n "$ORG" ]]; then
```

Then remove the manual cleanup on line 104:

```diff
-# Clean up extracted certs
-[[ -n "${CDN_CERT_DIR:-}" ]] && rm -rf "$CDN_CERT_DIR"
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `scripts/lockfile-generators/helpers/hermeto-fetch-rpm.sh` around lines 61-104, add a trap at the top of the `CDN_CERT_DIR` usage block to remove `"$CDN_CERT_DIR"` on EXIT (or ERR/EXIT) so cleanup always runs even if hermeto or podman fails, then remove the manual `rm -rf "$CDN_CERT_DIR"` at the end to avoid double-deletion.

codeserver/ubi9-python-3.12/Dockerfile.cpu (3)
537-548: Test log baked into the final production image.

Lines 547-548 copy `/tmp/test_log.txt` from the `tests` stage into the final `codeserver` image. This ensures the test stage runs (since Docker won't build `tests` unless something depends on it), but it permanently adds a test artifact to the production image. If the sole purpose is to force the test stage to execute, consider an alternative like `COPY --from=tests /tmp/test_log.txt /dev/null` (similar to the sentinel pattern used elsewhere), which avoids retaining the file.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `codeserver/ubi9-python-3.12/Dockerfile.cpu` around lines 537-548, change the `COPY --from=tests /tmp/test_log.txt /tmp/test_log.txt` in the final stage to target a non-retained path (e.g., `/dev/null`), or remove the COPY entirely and use a sentinel pattern so the `tests` stage still runs but no `test_log.txt` is preserved in the final image.
499-518: `--no-verify-hashes` disables integrity checking in the final image install.

The rationale is well-documented (locally compiled wheels from whl-cache have different hashes), and hash verification did pass in the `whl-cache` stage. However, this means the `codeserver` stage accepts whatever is in `/wheelsdir/` and `/cachi2/output/deps/pip/` without verification — if a cache mount is corrupted or tampered with, there's no second check. On x86_64/aarch64 where `/wheelsdir/` is empty, the `--no-verify-hashes` flag is unnecessary since all wheels come from the cachi2 cache and would pass verification. Consider making `--no-verify-hashes` conditional on the architecture:

Proposed approach

```diff
-UV_NO_CACHE=false UV_LINK_MODE=copy uv pip install --no-index \
-    --no-verify-hashes \
-    --find-links /wheelsdir/ \
-    --find-links /cachi2/output/deps/pip \
-    --cache-dir /root/.cache/uv \
-    --requirements=./requirements.txt
+HASH_FLAG=""
+if [[ $(uname -m) == "ppc64le" ]] || [[ $(uname -m) == "s390x" ]]; then
+    HASH_FLAG="--no-verify-hashes"
+fi
+UV_NO_CACHE=false UV_LINK_MODE=copy uv pip install --no-index \
+    ${HASH_FLAG} \
+    --find-links /wheelsdir/ \
+    --find-links /cachi2/output/deps/pip \
+    --cache-dir /root/.cache/uv \
+    --requirements=./requirements.txt
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `codeserver/ubi9-python-3.12/Dockerfile.cpu` around lines 499-518, detect the architecture (e.g., via `uname -m` or an existing build ARG) and append `--no-verify-hashes` to the `uv pip install` invocation only for architectures that rely on whl-cache compiled wheels (ppc64le/s390x), preserving hash verification on x86_64/aarch64 where `/wheelsdir/` is empty and wheels can be validated.
202-205: Redundant `COPY` — `pylock.toml` is immediately overwritten.

Line 202 copies the base `pylock.toml` to `./pylock.toml`, but line 205 overwrites the same destination with the flavor-specific file. The same issue exists in `Dockerfile.vath` lines 47-50.

Proposed fix

```diff
-# All arches: requirements and devel script (script is a no-op on all arches; kept for WHEEL_DIR setup)
-COPY ${CODESERVER_SOURCE_CODE}/pylock.toml ./
-
 # copy requirements and scripts
 COPY ${CODESERVER_SOURCE_CODE}/uv.lock.d/pylock.${PYLOCK_FLAVOR}.toml ./pylock.toml
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `codeserver/ubi9-python-3.12/Dockerfile.cpu` around lines 202-205, remove the initial `COPY ${CODESERVER_SOURCE_CODE}/pylock.toml ./` line and keep only the flavor-specific `COPY ${CODESERVER_SOURCE_CODE}/uv.lock.d/pylock.${PYLOCK_FLAVOR}.toml ./pylock.toml` (and make the same change in `Dockerfile.vath` where the duplicate appears) so the final `./pylock.toml` is populated only by the flavor-specific file.

codeserver/ubi9-python-3.12/Dockerfile.vath (1)
47-50: Redundant `COPY` — first `pylock.toml` is immediately overwritten.

Line 47 copies `pylock.toml` into `/root/pylock.toml`, but line 50 overwrites the same path with the flavor-specific `pylock.${PYLOCK_FLAVOR}.toml → ./pylock.toml`. The first COPY creates an unnecessary layer.

Proposed fix

```diff
-COPY ${CODESERVER_SOURCE_CODE}/pylock.toml ./
-
 # copy requirements and scripts
 COPY ${CODESERVER_SOURCE_CODE}/uv.lock.d/pylock.${PYLOCK_FLAVOR}.toml ./pylock.toml
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `codeserver/ubi9-python-3.12/Dockerfile.vath` around lines 47-50, delete the first `COPY ${CODESERVER_SOURCE_CODE}/pylock.toml ./` line and keep only the flavor-specific `COPY ${CODESERVER_SOURCE_CODE}/uv.lock.d/pylock.${PYLOCK_FLAVOR}.toml ./pylock.toml` so the image doesn't create an unnecessary layer and the final `pylock.toml` comes solely from `pylock.${PYLOCK_FLAVOR}.toml`.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@codeserver/ubi9-python-3.12/Dockerfile.cpu`:
- Around line 312-317: Add a blank line immediately after the heredoc terminator
in the RUN block that starts with "RUN /bin/bash <<'EOF'" and ends with "EOF" so
the following comment "### END Install the oc client" is separated; specifically
edit the RUN /bin/bash <<'EOF' ... EOF block to insert one empty line after the
EOF marker to satisfy hadolint parsing requirements for heredocs.
- Around line 378-382: Remove the debug line `ls /tmp/` and fix the rpm filename
used by `rpm2cpio`: don't rely on `${TARGETARCH}` (Docker arch names) — either
use the RPM glob produced by nfpm or map TARGETARCH to RPM arch names; e.g.,
change the `rpm2cpio` invocation from rpm2cpio
"/tmp/code-server-${CODESERVER_VERSION/v/}-${TARGETARCH}.rpm" | cpio -idmv to
use a glob like rpm2cpio "/tmp/code-server-${CODESERVER_VERSION/v/}-*.rpm" |
cpio -idmv (or implement a small mapping from `${TARGETARCH}` →
`x86_64`/`aarch64` before calling `rpm2cpio`) to ensure the RPM file is found.
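If the mapping route is chosen, the Docker-to-RPM arch translation suggested above could be sketched like this (a sketch only; the `TARGETARCH` default and the echoed message are for standalone runs, not part of the Dockerfile):

```shell
#!/bin/bash
set -euo pipefail

# Docker's TARGETARCH uses Go-style names (amd64, arm64); RPM filenames
# use kernel-style names (x86_64, aarch64). Translate before locating the RPM.
TARGETARCH="${TARGETARCH:-amd64}"
case "${TARGETARCH}" in
  amd64)   RPM_ARCH="x86_64"  ;;
  arm64)   RPM_ARCH="aarch64" ;;
  ppc64le) RPM_ARCH="ppc64le" ;;
  s390x)   RPM_ARCH="s390x"   ;;
  *) echo "unsupported TARGETARCH: ${TARGETARCH}" >&2; exit 1 ;;
esac
echo "RPM arch: ${RPM_ARCH}"
# then e.g.:
# rpm2cpio "/tmp/code-server-${CODESERVER_VERSION/v/}-${RPM_ARCH}.rpm" | cpio -idmv
```

The glob alternative (`code-server-*.rpm`) avoids the mapping entirely but silently matches whatever RPM is present, so the explicit `case` fails louder on an unexpected architecture.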
In `@codeserver/ubi9-python-3.12/prefetch-input/artifacts.in.yaml`:
- Line 78: The file artifacts.in.yaml is missing a trailing newline at EOF; open
the file and add a single newline character after the final line containing
"filename: ms-vscode.vscode-js-profile-table.1.0.10.vsix" so the file ends with
a newline character to satisfy the static analysis check.
- Around line 49-52: The artifacts list currently only includes x86_64 tarballs
for argon2 v0.31.2 and v0.28.7 (filenames
v0.31.2/argon2-v0.31.2-napi-v3-linux-x64-glibc.tar.gz and
v0.28.7/argon2-v0.28.7-napi-v3-linux-x64-glibc.tar.gz); add matching arm64 and
ppc64le entries (with correct URLs and filenames for those architectures) so
runtime installs on arm64 and ppc64le don’t fall back to source builds, or
alternatively add a clear documented note in artifacts.in.yaml indicating binary
fallback is acceptable and documenting required build tools and the expected
failure mode for those architectures. Ensure the new entries mirror the existing
keys (url and filename) and reference the exact version strings shown above.
In `@codeserver/ubi9-python-3.12/prefetch-input/artifacts.lock.yaml`:
- Around line 56-64: The aarch64 and ppc64le ripgrep entries have mismatched
download_url/checksum vs. filename (musl) — update each entry so download_url
points to the musl release tarball and replace the sha256 with the musl
checksum; specifically change the aarch64 entry to download
ripgrep-v13.0.0-13-aarch64-unknown-linux-musl.tar.gz and set its checksum to the
musl sha256 (0f308620… as noted), and for the ppc64le v13.0.0-13 entry change
download_url to ripgrep-v13.0.0-13-powerpc64le-unknown-linux-musl.tar.gz and
update its checksum to the musl sha256 (verify the exact value from the GitHub
release before committing).
In `@codeserver/ubi9-python-3.12/prefetch-input/patches/apply-patch.sh`:
- Around line 4-8: Update the misleading header block in apply-patch.sh: replace
the lines that claim "It builds code-server rpm for `ppc64le`" and "For other
architectures, the rpm is downloaded from the available releases" with a concise
description that the script applies patches and fixes (e.g., gcc-toolset, VS
Code patches, npm cache cleanup) across all supported architectures rather than
building RPMs only for ppc64le; ensure the header accurately summarizes the
script's purpose and the patches it applies.
- Line 106: The while loop reading patches/series using "read -r src_patch" can
skip a last non-newline-terminated line; update the loop condition that contains
read -r src_patch so it also processes a non-empty src_patch when read fails
(e.g., add a fallback like "|| [ -n \"$src_patch\" ]") or alternatively ensure
patches/series is newline-terminated before the loop, and keep using src_patch
and the existing patch -p1 < "patches/$src_patch" invocation unchanged.
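The read-loop fix can be sketched with a throwaway series file (the filenames below are fixtures, not the real patch names):

```shell
#!/bin/bash
set -euo pipefail

# Fixture: a series file whose last line has NO trailing newline.
series="$(mktemp)"
printf 'first.patch\nsecond.patch' > "${series}"

count=0
# The `|| [ -n "$src_patch" ]` fallback still processes a final
# non-newline-terminated line, for which `read` returns non-zero.
while IFS= read -r src_patch || [ -n "${src_patch}" ]; do
  count=$((count + 1))
done < "${series}"

rm -f "${series}"
echo "${count}"   # → prints "2": both lines are seen
```

Without the fallback, a plain `while read -r src_patch` would stop at `first.patch` and silently skip the last patch.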
In
`@codeserver/ubi9-python-3.12/prefetch-input/patches/code-server-v4.106.3/test/package.json`:
- Around line 22-29: Replace the Yarn-only "resolutions" block with npm's
equivalent "overrides" in package.json so the transitive dependency pins are
applied during npm installs; update the JSON key name from "resolutions" to
"overrides" and preserve the same mappings for "ansi-regex",
"argon2/@mapbox/node-pre-gyp/tar", "set-value", "tmpl", "path-parse", and
"json-schema" (ensure the resulting "overrides" object is valid JSON and
conforms to npm override syntax).
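The resulting `overrides` block might look like the sketch below. Note that npm expresses the transitive pin `argon2/@mapbox/node-pre-gyp/tar` as nested objects rather than Yarn's slash-path syntax; the `<pinned version>` specifiers are placeholders to be carried over verbatim from the existing `resolutions` block:

```json
{
  "overrides": {
    "ansi-regex": "<pinned version>",
    "set-value": "<pinned version>",
    "tmpl": "<pinned version>",
    "path-parse": "<pinned version>",
    "json-schema": "<pinned version>",
    "argon2": {
      "@mapbox/node-pre-gyp": {
        "tar": "<pinned version>"
      }
    }
  }
}
```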
In
`@codeserver/ubi9-python-3.12/prefetch-input/patches/setup-offline-binaries.sh`:
- Line 1: The script currently has no error trapping, so failures in commands
like cp/tar/unzip can be silently ignored; update the top of the script (before
any commands) to enable strict error handling by adding the bash safety flags so
the script exits on errors, treats unset variables as errors, and fails on
pipeline errors (i.e., add set -euo pipefail right after the shebang), ensuring
missing artifacts in HERMETO_OUTPUT cause immediate, clear failures.
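A small self-contained demonstration of what the flags change (the pipeline here is synthetic, standing in for a failing `cp`/`tar` stage):

```shell
#!/bin/bash
set -euo pipefail
# -e : exit on any failing command
# -u : referencing an unset variable is an error
# -o pipefail : a pipeline fails if ANY stage fails, not just the last one

# Without pipefail, `false | cat` exits 0 because `cat` succeeds;
# with pipefail, the failing first stage propagates.
if (set -euo pipefail; false | cat); then
  status="missed"
else
  status="caught"
fi
echo "${status}"   # → prints "caught"
```

In `setup-offline-binaries.sh` this means a missing artifact in `HERMETO_OUTPUT` aborts the build immediately instead of producing a half-populated cache.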
In `@codeserver/ubi9-python-3.12/prefetch-input/rpms.in.yaml`:
- Line 155: The file ends without a newline which triggers static analysis; open
the rpms.in.yaml and add a single trailing newline at end of file so the final
entry ("zeromq-devel") is terminated by a newline (ensure the file ends with a
line break).
- Around line 33-36: Add the missing s390x architecture to the rpms.in.yaml
header comment and the arches list so s390x dependency resolution is included;
update the header comment (the line beginning with "# The packages cover all
stages and all architectures") to list "s390x" and append "- s390x" to the YAML
"arches:" array so the arches symbol contains x86_64, aarch64, ppc64le, and
s390x.
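The resulting arches block would read as follows (a sketch; surrounding keys of `rpms.in.yaml` omitted):

```yaml
# The packages cover all stages and all architectures:
# x86_64, aarch64, ppc64le, s390x
arches:
  - x86_64
  - aarch64
  - ppc64le
  - s390x
```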
In `@codeserver/ubi9-python-3.12/pylock.toml`:
- Around line 113-122: The lockfile lists cryptography version "46.0.4" (entry
name "cryptography", version "46.0.4", wheels block with RHOAI URLs), which is
vulnerable to CVE-2026-26007; update the package entry to version "46.0.5" and
replace the wheels array with the corresponding cryptography 46.0.5 wheel URLs
and SHA256 hashes as published by the RHOAI registry, then regenerate the
lockfile (pylock.toml) so the new hashes and metadata are consistent once RHOAI
publishes cryptography 46.0.5.
- Around line 586-594: The pylock entry pins pillow version 12.1.0 which is
vulnerable to CVE-2026-25990; update the pillow entry from version "12.1.0" to
"12.1.1" and replace the wheels block with the corresponding 12.1.1 wheel
URLs/hashes once the RHOAI registry publishes them, then regenerate the lockfile
so hashes and urls match (update the entry named "pillow" and the wheels array),
and until the registry is updated, ensure any runtime use of Pillow avoids
opening PSDs by using Image.open(..., formats=[...]) as a temporary mitigation.
In `@codeserver/ubi9-python-3.12/pyproject.toml`:
- Around line 9-10: Remove the duplicated comment lines that repeat the HERMETIC
notes ("Upgraded feast ~=0.59.0" and "Excluded py-spy on ppc64le") so they only
appear once under the [HERMETIC] header in pyproject.toml; locate the duplicate
block (the second occurrence currently at lines 9–10) and delete it, leaving the
original comments (the first occurrence at lines 6–7) intact to avoid redundant
comments.
---
Nitpick comments:
In @.dockerignore:
- Line 9: The commented line "#.git/ # Need to keep for gitsubmodule" causes the
.git/ directory to be included in the Docker build context; either restore an
explicit exclusion (add ".git/" to the .dockerignore) or replace the commented
pattern with a clear explanatory comment and keep the exclusion removed only if
you confirm CI clones use shallow clones (git clone --depth=1) so build context
size is acceptable; update the line and comment in .dockerignore (the existing
commented entry) and verify CI pipeline config uses shallow clones or adjust to
re-exclude .git/.
In @.tekton/odh-workbench-codeserver-datascience-cpu-py312-ubi9-push.yaml:
- Around line 60-165: The prefetch list includes many test/self-host extension
paths (e.g., lib/vscode/test/automation, test/smoke, test/mcp, test/monaco,
test/integration/browser, vscode-api-tests, vscode-colorize-*,
vscode-test-resolver, vscode-selfhost-import-aid, vscode-selfhost-test-provider,
test/e2e/extensions/test-extension) that may only be required for test runs and
not for building code-server; inspect the build scripts and packaging steps that
consume the prefetch-input entries (search for references to these directory
names in your build/compile scripts and npm install steps) and remove any of
these paths from the prefetch-input list that are not referenced at
build/compile time, keeping only the directories required for the source build
(e.g., core lib/vscode and build/*), and document which entries were removed and
why so CI can validate the reduced prefetch surface.
In `@codeserver/ubi9-python-3.12/devel_env_setup.sh`:
- Line 22: Summary: Quote the WHEEL_DIR variable in the mkdir invocation to
follow shellcheck best practices. Update the line that uses mkdir -p
${WHEEL_DIR} to quote the variable reference so it becomes mkdir -p
"${WHEEL_DIR}", ensuring the WHEEL_DIR variable is safely handled if it ever
contains spaces or special characters; locate and modify the occurrence of
${WHEEL_DIR} in devel_env_setup.sh accordingly.
In `@codeserver/ubi9-python-3.12/Dockerfile.cpu`:
- Around line 537-548: The Dockerfile currently copies the test log from the
tests stage into the final production image using "COPY --from=tests
/tmp/test_log.txt /tmp/test_log.txt", which bakes test artifacts into the
production image; update the final stage to avoid retaining the file by changing
that COPY to target a non-retained path (e.g., "COPY --from=tests
/tmp/test_log.txt /dev/null") or remove the COPY entirely and use a sentinel
pattern so the tests stage still runs but no test_log.txt is preserved in the
final codeserver image; locate the COPY in the final stage of the Dockerfile
(reference: stages named "tests" and "codeserver" and the file
/tmp/test_log.txt) and make the swap.
- Around line 499-518: The Dockerfile's uv pip install currently always uses
--no-verify-hashes, which disables integrity checks for packages in /wheelsdir/
and /cachi2/output/deps/pip; update the RUN block that sets
UV_NO_CACHE/UV_LINK_MODE and invokes "uv pip install" so that --no-verify-hashes
is only appended when the build is for architectures that rely on the whl-cache
compiled wheels (e.g., ppc64le/s390x) and omitted for x86_64/aarch64 where
/wheelsdir/ is empty; detect the architecture (e.g., via uname -m or an existing
build ARG) and conditionally add the --no-verify-hashes flag to the "uv pip
install" invocation to preserve hash verification on platforms that can validate
wheels.
- Around line 202-205: The first COPY of the generic pylock.toml is redundant
because it's immediately overwritten by the flavor-specific COPY; remove the
initial COPY ${CODESERVER_SOURCE_CODE}/pylock.toml ./ line and keep only the
COPY ${CODESERVER_SOURCE_CODE}/uv.lock.d/pylock.${PYLOCK_FLAVOR}.toml
./pylock.toml in the Dockerfile.cpu (and make the same change in Dockerfile.vath
where the duplicate appears) so the final ./pylock.toml is populated only by the
flavor-specific file.
In `@codeserver/ubi9-python-3.12/Dockerfile.vath`:
- Around line 47-50: Remove the redundant COPY of pylock.toml that is
immediately overwritten by the flavor-specific copy: delete the first COPY
${CODESERVER_SOURCE_CODE}/pylock.toml ./ line and keep only the flavor-specific
COPY ${CODESERVER_SOURCE_CODE}/uv.lock.d/pylock.${PYLOCK_FLAVOR}.toml
./pylock.toml so the image doesn’t create an unnecessary layer and the final
pylock.toml comes solely from pylock.${PYLOCK_FLAVOR}.toml.
In
`@codeserver/ubi9-python-3.12/prefetch-input/patches/codeserver-offline-env.sh`:
- Around line 20-22: The assignments for ELECTRON_CACHE and
PLAYWRIGHT_BROWSERS_PATH use ~ which is not POSIX-portable and is inconsistent
with other ${} expansions; update these environment variable assignments
(ELECTRON_CACHE and PLAYWRIGHT_BROWSERS_PATH) to use ${HOME} (or $HOME) for
explicit, portable expansion and keep style consistent with other variables like
HERMETO_OUTPUT.
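The expansion difference is easy to demonstrate; a quick sketch using the cache paths named in the prompt:

```shell
#!/bin/sh
# ${HOME} expands everywhere, including inside double quotes; a quoted "~"
# stays a literal tilde and produces a bogus relative path.
ELECTRON_CACHE="${HOME}/.cache/electron"
PLAYWRIGHT_BROWSERS_PATH="${HOME}/.cache/ms-playwright"
BROKEN="~/.cache/electron"   # literal "~", never expanded

echo "${ELECTRON_CACHE}"
echo "${PLAYWRIGHT_BROWSERS_PATH}"
```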
In
`@codeserver/ubi9-python-3.12/prefetch-input/patches/setup-offline-binaries.sh`:
- Around line 71-72: Remove the redundant alias VSCODE_OFFLINE_DIR and use the
exported VSCODE_OFFLINE_CACHE directly: delete the VSCODE_OFFLINE_DIR assignment
and update the mkdir -p and subsequent cp commands that reference
VSCODE_OFFLINE_DIR to reference VSCODE_OFFLINE_CACHE instead (ensure the
directory creation and copies still use mkdir -p "${VSCODE_OFFLINE_CACHE}" and
the three cp targets use "${VSCODE_OFFLINE_CACHE}").
In `@scripts/lockfile-generators/helpers/hermeto-fetch-rpm.sh`:
- Line 85: The HERMETO_JSON string is built via fragile shell interpolation;
replace the literal JSON construction with a robust jq-based build that uses jq
-n and --arg/--argjson to set "type":"rpm" and the nested "options.ssl" object
from the C_CERT, C_KEY, and C_CA shell variables, then capture jq's output into
the HERMETO_JSON variable (referencing HERMETO_JSON and the C_CERT/C_KEY/C_CA
vars to locate the code to change).
- Around line 61-104: The extracted CDN_CERT_DIR can be left behind if the
script exits early (e.g. hermeto/podman failures); add a trap at the top of the
CDC_CERT_DIR usage block to remove "$CDN_CERT_DIR" on EXIT (or on ERR/EXIT) so
cleanup always runs, referencing the CDN_CERT_DIR variable and the
HERMETO_JSON/podman run blocks to ensure the temp dir is removed even on
failures; then remove the manual cleanup line that currently does rm -rf
"$CDN_CERT_DIR" at the end to avoid double-deletion.
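Both hermeto-fetch-rpm.sh suggestions can be sketched together: register the cleanup trap the moment the temp dir exists, and build the JSON with jq instead of string interpolation. This sketch needs jq installed, and the `options.ssl` key names are taken from the prompt above; verify them against hermeto's actual input schema before relying on this:

```shell
#!/bin/sh
set -eu

# Cleanup fires on any exit path, so early hermeto/podman failures no longer
# leak the extracted certificate directory.
CDN_CERT_DIR="$(mktemp -d)"
trap 'rm -rf "${CDN_CERT_DIR}"' EXIT

C_CERT="${CDN_CERT_DIR}/client.crt"   # placeholder cert paths
C_KEY="${CDN_CERT_DIR}/client.key"
C_CA="${CDN_CERT_DIR}/ca.crt"

# jq escapes the values, so odd characters in paths cannot corrupt the JSON
# the way plain string interpolation can.
HERMETO_JSON="$(jq -n \
  --arg cert "${C_CERT}" --arg key "${C_KEY}" --arg ca "${C_CA}" \
  '{type: "rpm", options: {ssl: {client_cert: $cert, client_key: $key, ca_bundle: $ca}}}')"

echo "${HERMETO_JSON}"
```

With the trap in place, the manual `rm -rf "$CDN_CERT_DIR"` at the end of the script can be dropped entirely.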
In `@scripts/lockfile-generators/helpers/rpm-lockfile-generate.sh`:
- Around line 52-58: EPEL_KEY is hardcoded to RPM-GPG-KEY-EPEL-9 while the
downloaded file uses the OS major version from OS_VER; change EPEL_KEY to derive
its filename from OS_VER (e.g. use the same expansion as in the curl URL,
${OS_VER%%.*}) so the path matches the downloaded key, and keep the existing
mkdir -p and curl behaviour; update references to EPEL_KEY accordingly.
- Around line 65-68: Replace the hard-coded CRB repo string with one derived
from the detected OS version and architecture: use the existing OS_VER variable
and compute ARCH (e.g., ARCH="$(uname -m)" with any necessary mappings like
aarch64→aarch64, x86_64→x86_64) to build
CRB_REPO="codeready-builder-for-rhel-${OS_VER}-${ARCH}-rpms", then call
subscription-manager repos --enable="${CRB_REPO}" and, instead of silently
swallowing errors with "|| true", check the command's exit status and emit a
clear warning or error via echo/syslog if enabling fails so failures are
visible; update the line that calls subscription-manager repos and any
surrounding comments to reference CRB_REPO and the ARCH detection.
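Both rpm-lockfile-generate.sh fixes derive names from the same detected values; a sketch where OS_VER is hard-coded for illustration (the real script would read it from the host, e.g. /etc/os-release):

```shell
#!/bin/sh
OS_VER="9.4"          # illustrative; e.g. VERSION_ID from /etc/os-release
ARCH="$(uname -m)"

# EPEL key path must match the file curl downloads: keyed by major version.
EPEL_KEY="/etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-${OS_VER%%.*}"

# CRB repo id follows RHEL's naming convention instead of being hard-coded.
CRB_REPO="codeready-builder-for-rhel-${OS_VER%%.*}-${ARCH}-rpms"

echo "${EPEL_KEY}"
echo "${CRB_REPO}"
# Surface failures instead of swallowing them with "|| true":
# subscription-manager repos --enable="${CRB_REPO}" \
#   || echo "WARNING: could not enable ${CRB_REPO}" >&2
```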
```diff
 RUN /bin/bash <<'EOF'
 set -Eeuxo pipefail
-curl -L https://mirror.openshift.com/pub/openshift-v4/$(uname -m)/clients/ocp/stable/openshift-client-linux.tar.gz \
-    -o /tmp/openshift-client-linux.tar.gz
-tar -xzvf /tmp/openshift-client-linux.tar.gz oc
-rm -f /tmp/openshift-client-linux.tar.gz
+ARCH=$(uname -m)
+tar -xzvf /cachi2/output/deps/generic/openshift-client-linux-${ARCH}.tar.gz oc
 EOF
 ### END Install the oc client
```
Missing blank line after heredoc EOF marker.
There is no blank line between the EOF on line 316 and the comment on line 317. Based on learnings, Hadolint can fail to parse heredocs without a trailing blank line after the EOF marker, leading to errors on subsequent instructions.
Proposed fix

```diff
 tar -xzvf /cachi2/output/deps/generic/openshift-client-linux-${ARCH}.tar.gz oc
 EOF
+
 ### END Install the oc client
```

Based on learnings: "Ensure a blank line after the heredoc EOF marker in Dockerfiles. Hadolint can fail to parse heredocs without the trailing blank line, leading to errors on subsequent instructions. Apply this check to all Dockerfiles across the repo."
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Original:

```dockerfile
RUN /bin/bash <<'EOF'
set -Eeuxo pipefail
curl -L https://mirror.openshift.com/pub/openshift-v4/$(uname -m)/clients/ocp/stable/openshift-client-linux.tar.gz \
    -o /tmp/openshift-client-linux.tar.gz
tar -xzvf /tmp/openshift-client-linux.tar.gz oc
rm -f /tmp/openshift-client-linux.tar.gz
ARCH=$(uname -m)
tar -xzvf /cachi2/output/deps/generic/openshift-client-linux-${ARCH}.tar.gz oc
EOF
### END Install the oc client
```

Suggested:

```dockerfile
RUN /bin/bash <<'EOF'
set -Eeuxo pipefail
ARCH=$(uname -m)
tar -xzvf /cachi2/output/deps/generic/openshift-client-linux-${ARCH}.tar.gz oc
EOF

### END Install the oc client
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@codeserver/ubi9-python-3.12/Dockerfile.cpu` around lines 312 - 317, Add a
blank line immediately after the heredoc terminator in the RUN block that starts
with "RUN /bin/bash <<'EOF'" and ends with "EOF" so the following comment "###
END Install the oc client" is separated; specifically edit the RUN /bin/bash
<<'EOF' ... EOF block to insert one empty line after the EOF marker to satisfy
hadolint parsing requirements for heredocs.
@ysok ^^^ @coderabbitai is right about this one, it causes unnecessary trouble with hadolint
Got it, will update the rest.
Seems like the humans are having a chat. I'll hop back into my burrow for now. If you need me again, just tag @coderabbitai in a new comment, and I'll come hopping out!
```diff
 dnf -y clean all
 cd /
-rpm2cpio "/code-server-rpm/code-server-${CODESERVER_VERSION/v/}-${TARGETARCH}.rpm" | cpio -idmv
+ls /tmp/
+rpm2cpio "/tmp/code-server-${CODESERVER_VERSION/v/}-${TARGETARCH}.rpm" | cpio -idmv
 EOF
```
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Check for nfpm configuration files
find codeserver/ubi9-python-3.12/prefetch-input/code-server -name "*nfpm*" -o -name "nfpm.yaml" 2>/dev/null

Repository: opendatahub-io/notebooks
🏁 Script executed:
#!/bin/bash
# Check the Dockerfile to understand the context better
cat -n codeserver/ubi9-python-3.12/Dockerfile.cpu | sed -n '375,385p'
🏁 Script executed:
#!/bin/bash
# Search for nfpm and package-related configuration
rg -n "nfpm|TARGETARCH" codeserver/ubi9-python-3.12/ --type dockerfile --type yaml --type json 2>/dev/null | head -20
🏁 Script executed:
#!/bin/bash
# Look for the nfpm config in code-server prefetch area
ls -la codeserver/ubi9-python-3.12/prefetch-input/code-server/ 2>/dev/null || echo "Directory may not exist or path is different"
🏁 Script executed:
#!/bin/bash
# Search for nfpm in the entire codebase
rg -l "nfpm" . --type yaml --type json --type toml 2>/dev/null | head -20
🏁 Script executed:
#!/bin/bash
# Search for where TARGETARCH is used in the Dockerfile
rg -n "TARGETARCH" codeserver/ 2>/dev/null | head -30
🏁 Script executed:
#!/bin/bash
# Search for references to code-server RPM or how it's built
rg -n "code-server.*rpm\|rpm2cpio" codeserver/ --type dockerfile 2>/dev/null | head -30
🏁 Script executed:
#!/bin/bash
# Look for any build or packaging configuration files
find codeserver -type f \( -name "*.yaml" -o -name "*.yml" -o -name "Makefile" -o -name "*.sh" \) 2>/dev/null | head -20
🏁 Script executed:
#!/bin/bash
# Check artifacts configuration to see how code-server is obtained
cat -n codeserver/ubi9-python-3.12/prefetch-input/artifacts.lock.yaml | head -40
🏁 Script executed:
#!/bin/bash
# Check for nfpm references in rpms configuration
cat -n codeserver/ubi9-python-3.12/prefetch-input/rpms.lock.yaml | head -50
🏁 Script executed:
#!/bin/bash
# Search for where the RPM filename is determined
rg -n "code-server.*rpm\|\.rpm" codeserver/ubi9-python-3.12/prefetch-input/ --type yaml 2>/dev/null | head -30
🏁 Script executed:
#!/bin/bash
# Check setup-offline-binaries.sh to see how RPM is handled
cat -n codeserver/ubi9-python-3.12/prefetch-input/patches/setup-offline-binaries.sh
🏁 Script executed:
#!/bin/bash
# Search for where code-server RPM is built or created
rg -n "nfpm|code-server.*\.rpm|rpm2cpio" codeserver/ --type dockerfile --type bash --type sh 2>/dev/null | head -50
🏁 Script executed:
#!/bin/bash
# Check if there's an nfpm.yaml or similar in code-server source
find codeserver -name "*nfpm*" -o -name "*.nfpm.yaml" 2>/dev/null
🏁 Script executed:
#!/bin/bash
# Look for where the RPM filename is constructed or how TARGETARCH maps to arch
rg -n "code-server.*rpm\|TARGETARCH.*amd64\|TARGETARCH.*arm64" codeserver/ 2>/dev/null | head -30
🏁 Script executed:
#!/bin/bash
# Check the earlier stages of the Dockerfile to see how RPM is created
head -n 374 codeserver/ubi9-python-3.12/Dockerfile.cpu | tail -n 100
🏁 Script executed:
#!/bin/bash
# Find the rpm-base stage in the Dockerfile
grep -n "FROM.*rpm-base\|^FROM.*AS rpm-base" codeserver/ubi9-python-3.12/Dockerfile.cpu
🏁 Script executed:
#!/bin/bash
# Show the rpm-base stage
sed -n '/^FROM.*AS rpm-base/,/^FROM [^A]/p' codeserver/ubi9-python-3.12/Dockerfile.cpu | head -200
🏁 Script executed:
#!/bin/bash
# Search for where the RPM is actually built (nfpm command)
rg -n "nfpm|\.rpm" codeserver/ubi9-python-3.12/Dockerfile.cpu | head -40
Remove debug statement and fix architecture variable mismatch.
Two issues in this block:

- Line 380: `ls /tmp/` is debug code with no functional purpose — remove it.
- Line 381 — Critical bug: `rpm2cpio` uses `${TARGETARCH}` (Docker convention: `amd64`, `arm64`), but nfpm uses RPM architecture names from `uname -m`: `x86_64`, `aarch64`, etc. The actual RPM filename pattern is `code-server-${CODESERVER_VERSION/v/}-*.rpm` built by nfpm (line 140, rpm-base stage). Relying on `${TARGETARCH}` will fail with "No such file" on x86_64 builds. Use the glob pattern `*.rpm` or map `TARGETARCH` → RPM arch names (`amd64` → `x86_64`, `arm64` → `aarch64`).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@codeserver/ubi9-python-3.12/Dockerfile.cpu` around lines 378 - 382, Remove
the debug line `ls /tmp/` and fix the rpm filename used by `rpm2cpio`: don't
rely on `${TARGETARCH}` (Docker arch names) — either use the RPM glob produced
by nfpm or map TARGETARCH to RPM arch names; e.g., change the `rpm2cpio`
invocation from rpm2cpio
"/tmp/code-server-${CODESERVER_VERSION/v/}-${TARGETARCH}.rpm" | cpio -idmv to
use a glob like rpm2cpio "/tmp/code-server-${CODESERVER_VERSION/v/}-*.rpm" |
cpio -idmv (or implement a small mapping from `${TARGETARCH}` →
`x86_64`/`aarch64` before calling `rpm2cpio`) to ensure the RPM file is found.
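If the explicit-mapping route is preferred over the glob, translating Docker's `TARGETARCH` into RPM arch names is a small lookup; the version value below is illustrative:

```shell
#!/bin/sh
# Docker build args use amd64/arm64; nfpm names RPMs after `uname -m`
# (x86_64/aarch64). ppc64le and s390x are spelled the same in both schemes.
rpm_arch() {
  case "$1" in
    amd64) echo "x86_64" ;;
    arm64) echo "aarch64" ;;
    *)     echo "$1" ;;
  esac
}

CODESERVER_VERSION="v4.106.3"   # illustrative
TARGETARCH="amd64"
echo "code-server-${CODESERVER_VERSION#v}-$(rpm_arch "${TARGETARCH}").rpm"
```

Note that the Dockerfile's `${CODESERVER_VERSION/v/}` is a bashism; the POSIX `#v` prefix strip above is equivalent for a leading `v`.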
```yaml
- url: https://github.com/ranisalt/node-argon2/releases/download/v0.31.2/argon2-v0.31.2-napi-v3-linux-x64-glibc.tar.gz
  filename: v0.31.2/argon2-v0.31.2-napi-v3-linux-x64-glibc.tar.gz
- url: https://github.com/ranisalt/node-argon2/releases/download/v0.28.7/argon2-v0.28.7-napi-v3-linux-x64-glibc.tar.gz
  filename: v0.28.7/argon2-v0.28.7-napi-v3-linux-x64-glibc.tar.gz
```
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Check if argon2 is referenced in build scripts or setup-offline-binaries
echo "=== argon2 references in scripts ==="
rg -n 'argon2' -g '*.sh'
echo -e "\n=== argon2 references in patches ==="
rg -n 'argon2' -g '*.patch' -g '*.diff'
echo -e "\n=== argon2 in package-lock or package.json ==="
rg -n 'argon2' -g 'package*.json' | head -20
🏁 Script executed:
# Search for all node-argon2 entries and their architectures
echo "=== All node-argon2 entries in artifacts.in.yaml ==="
rg -n 'node-argon2' -A 1 codeserver/
echo -e "\n=== Check if other arch prebuilts exist ==="
rg -n 'argon2.*arm64|argon2.*aarch64|argon2.*ppc64le' codeserver/
echo -e "\n=== Total count of argon2 artifacts ==="
rg 'node-argon2' codeserver/ | wc -l
🏁 Script executed:
# Check the full context of the artifacts.in.yaml file for architecture patterns
echo "=== Check architecture patterns in artifacts.in.yaml ==="
fd 'artifacts.in.yaml' -x cat {} | grep -A 1 -B 1 'argon2'
echo -e "\n=== Check if there are arm64 or aarch64 variants anywhere ==="
fd 'artifacts.in.yaml' -x cat {} | grep -i 'arm64\|aarch64'
🏁 Script executed:
# Check if there's any documentation or configuration about supported architectures
echo "=== Look for architecture specifications in config files ==="
fd 'Dockerfile|Containerfile' -x rg -l 'RUN|FROM' {} | head -5
echo -e "\n=== Check the specific codeserver artifacts file ==="
cat codeserver/ubi9-python-3.12/prefetch-input/artifacts.in.yaml | head -60
Add argon2 prebuilts for arm64 and ppc64le architectures.
Only x86_64 prebuilts are provided for argon2 v0.31.2 and v0.28.7, but the build supports arm64 and ppc64le as evidenced by Node.js, Electron, and OpenShift client binaries provided for all three architectures. Since argon2 is a runtime dependency for VSCode authentication, arm64 and ppc64le builds will fail at runtime or require source compilation (which requires build tools not available in the container). Either provide matching prebuilts for these architectures or document that binary fallback is acceptable.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@codeserver/ubi9-python-3.12/prefetch-input/artifacts.in.yaml` around lines 49
- 52, The artifacts list currently only includes x86_64 tarballs for argon2
v0.31.2 and v0.28.7 (filenames
v0.31.2/argon2-v0.31.2-napi-v3-linux-x64-glibc.tar.gz and
v0.28.7/argon2-v0.28.7-napi-v3-linux-x64-glibc.tar.gz); add matching arm64 and
ppc64le entries (with correct URLs and filenames for those architectures) so
runtime installs on arm64 and ppc64le don’t fall back to source builds, or
alternatively add a clear documented note in artifacts.in.yaml indicating binary
fallback is acceptable and documenting required build tools and the expected
failure mode for those architectures. Ensure the new entries mirror the existing
keys (url and filename) and reference the exact version strings shown above.
```yaml
- download_url: https://github.com/microsoft/ripgrep-prebuilt/releases/download/v13.0.0-13/ripgrep-v13.0.0-13-aarch64-unknown-linux-gnu.tar.gz
  checksum: sha256:1b0ca509f8707f2128f1b3ef245c3ea666d49a737431288536d49bd74652d143
  filename: ripgrep-v13.0.0-13-aarch64-unknown-linux-musl.tar.gz
- download_url: https://github.com/microsoft/ripgrep-prebuilt/releases/download/v13.0.0-13/ripgrep-v13.0.0-13-powerpc64le-unknown-linux-gnu.tar.gz
  checksum: sha256:a3fdb2c6ef9d4ff927ca1cb1e56f7aed7913d1be4dd4546aec400118c26452ab
  filename: ripgrep-v13.0.0-13-powerpc64le-unknown-linux-musl.tar.gz
- download_url: https://github.com/microsoft/ripgrep-prebuilt/releases/download/v13.0.0-4/ripgrep-v13.0.0-4-powerpc64le-unknown-linux-gnu.tar.gz
  checksum: sha256:3ddd7c0797c14cefd3ee61f13f15ac219bfecee8e6f6e27fd15c102ef229653a
  filename: ripgrep-v13.0.0-4-powerpc64le-unknown-linux-gnu.tar.gz
```
Wrong download_url and checksum for the aarch64 and ppc64le v13.0.0-13 ripgrep entries — the GNU binary is being fetched under a musl filename.
For the aarch64 entry (Lines 56–58):

- `download_url` points to `aarch64-unknown-linux-gnu.tar.gz`
- `filename` says `aarch64-unknown-linux-musl.tar.gz`
- `checksum` is for the GNU binary (`1b0ca509…`), not the musl binary
The ripgrep-v13.0.0-13-aarch64-unknown-linux-musl.tar.gz release does exist at the same GitHub release with a different sha256 (0f308620…). vscode-ripgrep explicitly changed to consume only aarch64-unknown-linux-musl for arm64 Linux, so @vscode/ripgrep will look for the musl binary by that exact name. Cachi2 will download the GNU binary and store it with the musl filename — but the file content and checksum will be for the wrong binary.
The same URL/filename mismatch exists for ppc64le v13.0.0-13 (Lines 59–61), though the comment in setup-offline-binaries.sh indicates ppc64le uses v13.0.0-4 at runtime, so it's lower impact.
🔧 Proposed fix — use the actual musl URLs and their correct checksums
```diff
-  - download_url: https://github.com/microsoft/ripgrep-prebuilt/releases/download/v13.0.0-13/ripgrep-v13.0.0-13-aarch64-unknown-linux-gnu.tar.gz
-    checksum: sha256:1b0ca509f8707f2128f1b3ef245c3ea666d49a737431288536d49bd74652d143
+  - download_url: https://github.com/microsoft/ripgrep-prebuilt/releases/download/v13.0.0-13/ripgrep-v13.0.0-13-aarch64-unknown-linux-musl.tar.gz
+    checksum: sha256:0f308620a428f56fe871fcc5d7c668c461dfed3244f717b698f3e9e92aca037a
     filename: ripgrep-v13.0.0-13-aarch64-unknown-linux-musl.tar.gz
-  - download_url: https://github.com/microsoft/ripgrep-prebuilt/releases/download/v13.0.0-13/ripgrep-v13.0.0-13-powerpc64le-unknown-linux-gnu.tar.gz
-    checksum: sha256:a3fdb2c6ef9d4ff927ca1cb1e56f7aed7913d1be4dd4546aec400118c26452ab
+  - download_url: https://github.com/microsoft/ripgrep-prebuilt/releases/download/v13.0.0-13/ripgrep-v13.0.0-13-powerpc64le-unknown-linux-musl.tar.gz
+    checksum: sha256:<verify-actual-musl-checksum-from-github-release>
     filename: ripgrep-v13.0.0-13-powerpc64le-unknown-linux-musl.tar.gz
```

Verify the actual ppc64le musl checksum from the GitHub release page before committing.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@codeserver/ubi9-python-3.12/prefetch-input/artifacts.lock.yaml` around lines
56 - 64, The aarch64 and ppc64le ripgrep entries have mismatched
download_url/checksum vs. filename (musl) — update each entry so download_url
points to the musl release tarball and replace the sha256 with the musl
checksum; specifically change the aarch64 entry to download
ripgrep-v13.0.0-13-aarch64-unknown-linux-musl.tar.gz and set its checksum to the
musl sha256 (0f308620… as noted), and for the ppc64le v13.0.0-13 entry change
download_url to ripgrep-v13.0.0-13-powerpc64le-unknown-linux-musl.tar.gz and
update its checksum to the musl sha256 (verify the exact value from the GitHub
release before committing).
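When swapping in the musl URLs, the downloaded tarball can be checked against the value destined for artifacts.lock.yaml before committing; a minimal sketch using a stand-in file:

```shell
#!/bin/sh
# Compare a file's sha256 against the checksum intended for the lockfile.
verify_sha256() {  # usage: verify_sha256 <file> <expected-hex>
  actual="$(sha256sum "$1" | awk '{print $1}')"
  [ "${actual}" = "$2" ]
}

printf 'hello\n' > /tmp/demo-artifact.tar.gz   # stand-in for the real download
verify_sha256 /tmp/demo-artifact.tar.gz \
  5891b5b522d5df086d0ff0b110fbd9d21bb4fc7163af34d08286a2e846f6be03 \
  && echo "checksum OK"
```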
```toml
[[packages]]
name = "cryptography"
version = "46.0.4"
marker = "implementation_name == 'cpython' and sys_platform == 'linux'"
wheels = [
    { url = "https://packages.redhat.com/api/pulp-content/public-rhai/rhoai/3.4-EA1/cpu-ubi9/cryptography-46.0.4-2-cp312-abi3-linux_aarch64.whl", hashes = { sha256 = "0cecc7cb9880de1c61c9ac614e30d656bc7663b00999d29ff9cbc56e873ddac1" } },
    { url = "https://packages.redhat.com/api/pulp-content/public-rhai/rhoai/3.4-EA1/cpu-ubi9/cryptography-46.0.4-2-cp312-abi3-linux_ppc64le.whl", hashes = { sha256 = "541de6f6db7fe5539b5857cbfec902744c2993ebae33b2441e6ec32edf738391" } },
    { url = "https://packages.redhat.com/api/pulp-content/public-rhai/rhoai/3.4-EA1/cpu-ubi9/cryptography-46.0.4-2-cp312-abi3-linux_s390x.whl", hashes = { sha256 = "ececca50e3e724e0778cf9880f2b8461b4465dc72cda3af4120c5c24004da5a9" } },
    { url = "https://packages.redhat.com/api/pulp-content/public-rhai/rhoai/3.4-EA1/cpu-ubi9/cryptography-46.0.4-2-cp312-abi3-linux_x86_64.whl", hashes = { sha256 = "641e9f655b62e29f53ff2233c3e33f33fc969adc3341aa3913656ae9be82d706" } },
]
```
cryptography 46.0.4 is affected by high-severity CVE-2026-26007 (GHSA-r6ph-v2qm-q3c2).
cryptography <= 46.0.4 is missing subgroup validation for SECT curves in public_key_from_numbers, load_der_public_key, and load_pem_public_key, allowing an attacker to supply a small-order subgroup public key that can leak private-key bits via ECDH or enable ECDSA signature forgery. The vulnerability is fixed in 46.0.5.
Since the wheel is sourced from the Red Hat internal RHOAI 3.4-EA1 registry, the fix requires RHOAI to publish cryptography 46.0.5 to that registry and then regenerating this lockfile.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@codeserver/ubi9-python-3.12/pylock.toml` around lines 113 - 122, The lockfile
lists cryptography version "46.0.4" (entry name "cryptography", version
"46.0.4", wheels block with RHOAI URLs), which is vulnerable to CVE-2026-26007;
update the package entry to version "46.0.5" and replace the wheels array with
the corresponding cryptography 46.0.5 wheel URLs and SHA256 hashes as published
by the RHOAI registry, then regenerate the lockfile (pylock.toml) so the new
hashes and metadata are consistent once RHOAI publishes cryptography 46.0.5.
```toml
name = "pillow"
version = "12.1.0"
marker = "implementation_name == 'cpython' and sys_platform == 'linux'"
wheels = [
    { url = "https://packages.redhat.com/api/pulp-content/public-rhai/rhoai/3.4-EA1/cpu-ubi9/pillow-12.1.0-2-cp312-cp312-linux_aarch64.whl", hashes = { sha256 = "331a0ce29141abe2a24e61581cae9355429c9b891b92823bf297acfa42c72df1" } },
    { url = "https://packages.redhat.com/api/pulp-content/public-rhai/rhoai/3.4-EA1/cpu-ubi9/pillow-12.1.0-2-cp312-cp312-linux_ppc64le.whl", hashes = { sha256 = "30b9d7eee6819c1f4988007fbcfd2a7273d63be5cfcefc5e457417838e240088" } },
    { url = "https://packages.redhat.com/api/pulp-content/public-rhai/rhoai/3.4-EA1/cpu-ubi9/pillow-12.1.0-2-cp312-cp312-linux_s390x.whl", hashes = { sha256 = "30a3a5d62d45e9eaa12a526f150ab371891b540b313e5b770e535d008ba39f20" } },
    { url = "https://packages.redhat.com/api/pulp-content/public-rhai/rhoai/3.4-EA1/cpu-ubi9/pillow-12.1.0-2-cp312-cp312-linux_x86_64.whl", hashes = { sha256 = "9f40684adac64414d5f8e5aab195f9faf2f4867ca472faf82e5a19edf3fb7cc1" } },
]
```
pillow 12.1.0 is affected by high-severity CVE-2026-25990 (GHSA-cfh3-3jmp-rvhc).
An out-of-bounds write may be triggered when loading a specially crafted PSD image; the advisory covers Pillow >= 10.3.0, < 12.1.1. A workaround is to use Image.open() with a formats parameter to prevent PSD images from being opened until the package is updated. This vulnerability is fixed in 12.1.1.
Same as cryptography: the wheel is sourced from the RHOAI 3.4-EA1 registry, so the fix requires the registry to publish pillow 12.1.1 followed by lockfile regeneration.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@codeserver/ubi9-python-3.12/pylock.toml` around lines 586 - 594, The pylock
entry pins pillow version 12.1.0 which is vulnerable to CVE-2026-25990; update
the pillow entry from version "12.1.0" to "12.1.1" and replace the wheels block
with the corresponding 12.1.1 wheel URLs/hashes once the RHOAI registry
publishes them, then regenerate the lockfile so hashes and urls match (update
the entry named "pillow" and the wheels array), and until the registry is
updated, ensure any runtime use of Pillow avoids opening PSDs by using
Image.open(..., formats=[...]) as a temporary mitigation.
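Until the registry publishes the fixed wheels, a quick guard can flag the vulnerable pins during lockfile regeneration. This grep-based sketch assumes the `version` line directly follows the `name` line, as in the excerpts above; a real TOML parser would be more robust:

```shell
#!/bin/sh
# Succeed only if <lockfile> does NOT pin <name> at <bad_version>.
check_pin() {  # usage: check_pin <lockfile> <name> <bad_version>
  ! grep -A1 "name = \"$2\"" "$1" | grep -q "version = \"$3\""
}

printf 'name = "pillow"\nversion = "12.1.0"\n' > /tmp/pylock-demo.toml
check_pin /tmp/pylock-demo.toml cryptography 46.0.4 && echo "cryptography: ok"
if ! check_pin /tmp/pylock-demo.toml pillow 12.1.0; then
  echo "pillow: vulnerable pin present"
fi
```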
| Argument | Required | Description |
|----------|----------|-------------|
| `BASE_IMAGE` | Yes | Base image to build from (e.g. `quay.io/opendatahub/odh-base-image-cpu-py312-c9s:latest`) |
| `ARCH` | Yes | Target architecture for RPM naming (`amd64`, `aarch64`, `ppc64le`, `s390x`) |
| `LOCAL_BUILD` | Yes | Set to `true` for local builds. Configures dnf to use the local cachi2 RPM repo instead of Konflux-injected repos |
| `PYLOCK_FLAVOR` | Yes | Python lockfile flavor (`cpu` or `cuda`). Selects `uv.lock.d/pylock.<flavor>.toml` |
🧩 Analysis chain
🏁 Script executed:
# Check how ARCH build-arg is used in Dockerfile.cpu
cd codeserver/ubi9-python-3.12
cat -n Dockerfile.cpu | grep -A2 -B2 'ARCH'
🏁 Script executed:
# Also check if ARCH appears in any shell scripts or patches
find . -type f \( -name "*.sh" -o -name "Dockerfile*" \) -exec grep -l 'ARCH' {} \;
🏁 Script executed:
# Search more specifically in the Dockerfile for how ARCH is used
rg -n 'ARG ARCH|ARCH' codeserver/ubi9-python-3.12/Dockerfile.cpu | head -30
🏁 Script executed:
grep -rn 'ARCH' codeserver/ubi9-python-3.12/prefetch-input/patches/
🏁 Script executed:
# Also check the README to see exact wording
cat -n codeserver/ubi9-python-3.12/README.md | sed -n '88,93p'
ARCH build argument is unused and README misdescribes its purpose.
The README claims `ARCH` is for "Target architecture for RPM naming," but the Dockerfile never uses this build-arg. Instead:

- Line 314 of `Dockerfile.cpu` shadows `ARCH` with a local variable: `ARCH=$(uname -m)`
- RPM filenames use `TARGETARCH` (Docker's built-in), not the `ARCH` argument (line 381)
- In patch scripts, `ARCH` is computed locally from `uname -m` and mapped to GOARCH values
Either remove the unused ARCH build-arg or implement it properly. If it should control architecture selection, update the Dockerfile to use it instead of shadowing it and relying on TARGETARCH and local variable assignment.
Actionable comments posted: 2
🧹 Nitpick comments (3)
codeserver/ubi9-python-3.12/Dockerfile.vath (1)
47-50: Redundant `COPY` of `pylock.toml` — line 47 is immediately overwritten.

`COPY ${CODESERVER_SOURCE_CODE}/pylock.toml ./` (line 47) copies to `./pylock.toml`, which is immediately overwritten by line 50 (`COPY … ./pylock.toml`). The first `COPY` adds a layer with no net effect.

♻️ Proposed fix

```diff
-# All arches: requirements and devel script (script is a no-op on all arches; kept for WHEEL_DIR setup)
-COPY ${CODESERVER_SOURCE_CODE}/pylock.toml ./
-
 # copy requirements and scripts
 COPY ${CODESERVER_SOURCE_CODE}/uv.lock.d/pylock.${PYLOCK_FLAVOR}.toml ./pylock.toml
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@codeserver/ubi9-python-3.12/Dockerfile.vath` around lines 47 - 50, Remove the redundant COPY of pylock.toml: delete or comment out the first COPY ${CODESERVER_SOURCE_CODE}/pylock.toml ./ line (the one that immediately precedes the conditional COPY for pylock.${PYLOCK_FLAVOR}.toml) so only the intended COPY ${CODESERVER_SOURCE_CODE}/uv.lock.d/pylock.${PYLOCK_FLAVOR}.toml ./pylock.toml remains, avoiding an unnecessary Docker layer and overwrite.

codeserver/ubi9-python-3.12/Dockerfile.cpu (1)
202-205: Redundant `COPY` of `pylock.toml` — line 202 is immediately overwritten by line 205.

Line 202 (`COPY ${CODESERVER_SOURCE_CODE}/pylock.toml ./`) copies to `./pylock.toml`, then line 205 overwrites it with the flavor-specific variant. Same dead-layer issue as in `Dockerfile.vath`.

♻️ Proposed fix

```diff
-# All arches: requirements and devel script (script is a no-op on all arches; kept for WHEEL_DIR setup)
-COPY ${CODESERVER_SOURCE_CODE}/pylock.toml ./
-
 # copy requirements and scripts
 COPY ${CODESERVER_SOURCE_CODE}/uv.lock.d/pylock.${PYLOCK_FLAVOR}.toml ./pylock.toml
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@codeserver/ubi9-python-3.12/Dockerfile.cpu` around lines 202 - 205, Redundant COPY of pylock.toml causes an unnecessary image layer; remove the initial COPY ${CODESERVER_SOURCE_CODE}/pylock.toml ./ (the one that places pylock.toml then is immediately overwritten) and keep only the flavor-specific COPY ${CODESERVER_SOURCE_CODE}/uv.lock.d/pylock.${PYLOCK_FLAVOR}.toml ./pylock.toml so the final image contains the intended file without the dead layer.

.tekton/odh-workbench-codeserver-datascience-cpu-py312-ubi9-push.yaml (1)
11-17: Cosmetic: plain scalar vs. `|` block style (and redundant path trigger).

The PR pipeline uses a literal block scalar (`|`) for this annotation, while the push pipeline uses a plain multi-line scalar. Both fold to the same CEL expression, so there is no functional difference, but keeping them consistent avoids confusion.

Additionally, `"codeserver/ubi9-python-3.12/**".pathChanged()` (Line 15) already covers `"codeserver/ubi9-python-3.12/build-args/cpu.conf".pathChanged()` (Line 16), making the latter redundant.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.tekton/odh-workbench-codeserver-datascience-cpu-py312-ubi9-push.yaml around lines 11 - 17, Update the pipelinesascode.tekton.dev/on-cel-expression annotation to match the style used by the PR pipeline (use the same scalar style, e.g., convert the plain multi-line scalar to the block scalar `|` or vice‑versa for consistency) and remove the redundant path trigger by deleting the explicit "codeserver/ubi9-python-3.12/build-args/cpu.conf".pathChanged() entry since "codeserver/ubi9-python-3.12/**".pathChanged() already covers it; edit the annotation value under the pipelinesascode.tekton.dev/on-cel-expression key and remove the redundant pathChanged clause.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.tekton/odh-workbench-codeserver-datascience-cpu-py312-ubi9-push.yaml:
- Around line 55-59: The pipeline lists ppc64le in the pip package prefetch
(binary.arch under the codeserver/ubi9-python-3.12 entry) but the push
pipeline’s build-platforms do not include ppc64le; either remove "ppc64le" from
the binary.arch CSV to match the push build-platforms, or add a corresponding
"ppc64le" entry to the push pipeline build-platforms so prefetched wheels are
actually built; update the CSV in the binary.arch key (or the build-platforms
list) accordingly to keep them consistent.
- Around line 45-48: Add the missing hermetic build flag to the push pipeline so
production builds run with network isolation: in the push pipeline params
section where the build platforms are defined (the block containing the
name/value pair "build-platforms" and its platform list), add a new param
key/value hermetic: "true" alongside the existing params so the push pipeline
mirrors the PR pipeline's hermetic behavior.
---
Duplicate comments:
In `@codeserver/ubi9-python-3.12/Dockerfile.cpu`:
- Around line 312-317: The heredoc in the Dockerfile (the RUN /bin/bash <<'EOF'
... EOF block that installs the oc client) is missing a trailing blank line,
causing hadolint parsing to fail; add a single empty line immediately after the
EOF terminator so the next instruction/comment ("### END Install the oc client")
is separated by a blank line and the heredoc is properly terminated.
- Around line 379-382: Remove the debug "ls /tmp/" line and fix the rpm filename
passed to rpm2cpio: replace the arch-specific "${TARGETARCH}" with the same
arch-agnostic glob used when moving the RPM earlier (use
"code-server-${CODESERVER_VERSION/v/}-*.rpm") or implement an explicit mapping
from TARGETARCH to the rpm uname-mapped names; update the rpm2cpio invocation
that currently references rpm2cpio
"/tmp/code-server-${CODESERVER_VERSION/v/}-${TARGETARCH}.rpm" to use the glob or
mapped name and remove the standalone "ls /tmp/" call.
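One way to implement the explicit mapping suggested above is a small case statement. The sketch below covers the architectures named in this PR and is illustrative, not the Dockerfile's actual code; the helper name is hypothetical:

```shell
#!/bin/sh
# Hypothetical sketch: map Docker's TARGETARCH values to the uname-style
# arch names that appear in the rpm filename, as the review suggests.
target_to_rpm_arch() {
  case "$1" in
    amd64)   echo "x86_64"  ;;
    arm64)   echo "aarch64" ;;
    ppc64le) echo "ppc64le" ;;
    s390x)   echo "s390x"   ;;
    *)       echo "unsupported TARGETARCH: $1" >&2; return 1 ;;
  esac
}

target_to_rpm_arch amd64   # -> x86_64
target_to_rpm_arch arm64   # -> aarch64
```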
---
Nitpick comments:
In @.tekton/odh-workbench-codeserver-datascience-cpu-py312-ubi9-push.yaml:
- Around line 11-17: Update the pipelinesascode.tekton.dev/on-cel-expression
annotation to match the style used by the PR pipeline (use the same scalar
style, e.g., convert the plain multi-line scalar to the block scalar `|` or
vice‑versa for consistency) and remove the redundant path trigger by deleting
the explicit "codeserver/ubi9-python-3.12/build-args/cpu.conf".pathChanged()
entry since "codeserver/ubi9-python-3.12/**".pathChanged() already covers it;
edit the annotation value under the pipelinesascode.tekton.dev/on-cel-expression
key and remove the redundant pathChanged clause.
In `@codeserver/ubi9-python-3.12/Dockerfile.cpu`:
- Around line 202-205: Redundant COPY of pylock.toml causes an unnecessary image
layer; remove the initial COPY ${CODESERVER_SOURCE_CODE}/pylock.toml ./ (the one
that places pylock.toml then is immediately overwritten) and keep only the
flavor-specific COPY
${CODESERVER_SOURCE_CODE}/uv.lock.d/pylock.${PYLOCK_FLAVOR}.toml ./pylock.toml
so the final image contains the intended file without the dead layer.
In `@codeserver/ubi9-python-3.12/Dockerfile.vath`:
- Around line 47-50: Remove the redundant COPY of pylock.toml: delete or comment
out the first COPY ${CODESERVER_SOURCE_CODE}/pylock.toml ./ line (the one that
immediately precedes the conditional COPY for pylock.${PYLOCK_FLAVOR}.toml) so
only the intended COPY
${CODESERVER_SOURCE_CODE}/uv.lock.d/pylock.${PYLOCK_FLAVOR}.toml ./pylock.toml
remains, avoiding an unnecessary Docker layer and overwrite.
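Applied to either Dockerfile, both nitpicks reduce to a single flavor-aware copy; a sketch using the paths quoted above (the ARG default is an assumption):

```dockerfile
# Hypothetical sketch: drop the overwritten generic copy and keep only
# the flavor-specific lockfile.
ARG PYLOCK_FLAVOR=cpu
COPY ${CODESERVER_SOURCE_CODE}/uv.lock.d/pylock.${PYLOCK_FLAVOR}.toml ./pylock.toml
```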
/kfbuild codeserver/ubi9-python-3.12
Builds on AIPCC-based images were also successful but failed in unit tests.

All Konflux image builds are good too, but the sast/ecosystem-scanning check failed with an OOM: https://github.com/opendatahub-io/notebooks/pull/2985/checks?check_run_id=65226566478
…codeserver
Hermetic build for codeserver/ubi9-python-3.12: all dependencies (RPMs,
npm packages, Python wheels, generic artifacts) are prefetched and the
Dockerfile builds entirely offline.
Core changes:
- Rewrite Dockerfile.cpu as a multi-stage hermetic build (rpm-base,
whl-cache, cpu-base, codeserver, tests) using /cachi2/output mounts
- Delete get_code_server_rpm.sh, replaced by apply-patch.sh
- Add offline build patches: apply-patch.sh, setup-offline-binaries.sh,
codeserver-offline-env.sh, tweak-gha.sh (GHA-only memory tuning)
- Add s390x.patch and lockfile overlays for microsoft-authentication
and test/ subdirectories
Build infrastructure:
- Update Makefile to auto-detect cachi2/output and inject --volume +
LOCAL_BUILD=true for hermetic local builds
- Fix scripts/sandbox.py for macOS (glob expansion, .git permission
errors, xattr handling) and deduplicate prerequisites
- Add scripts/lockfile-generators/post-prefetch.sh
CI / Tekton:
- Add timeout: 4h to build-images task in multiarch-combined-pipeline
- Update Tekton PipelineRuns (pull-request + push) to point at
prefetch-input/odh/ and add pipeline timeout
- GHA: add --layers=false and --build-arg GHA_BUILD=true for
codeserver targets; trigger LVM overlay for codeserver builds
Documentation:
- Add docs/hermetic-guide.md
- Update codeserver/ubi9-python-3.12/README.md
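The Makefile auto-detection described in the commit message can be sketched roughly as follows; the function name, paths, and flag layout are assumptions drawn from the bullet list, not the Makefile's actual code:

```shell
#!/bin/sh
# Hypothetical sketch: if a local cachi2 output directory exists, emit the
# extra container-build flags for a hermetic local build.
hermetic_build_flags() {
  out_dir="$1"
  if [ -d "$out_dir" ]; then
    printf -- '--volume %s:/cachi2/output:Z --build-arg LOCAL_BUILD=true\n' "$out_dir"
  fi
}

mkdir -p /tmp/demo-cachi2/output
hermetic_build_flags /tmp/demo-cachi2/output
hermetic_build_flags /tmp/no-such-dir    # prints nothing
```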
@ysok: The following tests failed.

Full PR test history. Your PR dashboard.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.


Summary
Part 3 of 3 for the hermetic codeserver build (RHAIENG-2846). This PR rewrites the Dockerfile to a fully hermetic, multi-stage build using cachi2 mounts, adds build patches for offline code-server compilation, and updates the Tekton pipelines.
Changes
Dockerfile rewrite
`Dockerfile.cpu`: Complete rewrite from a network-dependent build to a 5-stage hermetic build:
- `rpm-base` - builds the code-server RPM from prefetched source using nfpm
- `whl-cache` - installs Python wheels from the cachi2 pip cache
- `cpu-base` - installs OS packages, tools (oc, ripgrep), and VS Code extensions from cachi2 mounts
- `codeserver` - final image: installs the code-server RPM and Python wheels, configures the runtime
- `tests` - validates the build with `micropipenv check`

`Dockerfile.vath` (new): Development/testing Dockerfile variant

Build scripts
- `devel_env_setup.sh`: simplified (removed old source-compilation logic for pillow/pyarrow; now uses RHOAI pre-built wheels)
- `get_code_server_rpm.sh` (deleted): no longer needed (code-server is now built from prefetched submodule source)

Offline build patches (new)
- `patches/apply-patch.sh`: orchestrates offline patching of the code-server source
- `patches/setup-offline-binaries.sh`: symlinks prefetched Node/Electron headers for node-gyp
- `patches/codeserver-offline-env.sh`: configures npm/yarn for fully offline builds
- `patches/code-server-v4.106.3/s390x.patch`: architecture-specific fix for s390x
- `patches/code-server-v4.106.3/{microsoft-authentication,test}/`: replacement package.json + package-lock.json to resolve npm dependencies offline

Tekton pipelines
- `.tekton/*pull-request.yaml` and `.tekton/*push.yaml`: add `prefetch-input` configurations for cachi2 (rpm, generic, pip, npm types), upgrade the build instance (m4xlarge -> m8xlarge), enable `cache-proxy`

Other
- `.dockerignore`: allow `COPY .git` (needed by code-server build scripts)
- `README.md`: updated build instructions for the hermetic flow
- `docs/konflux.md`: added a section on Rust/Cargo version requirements for the py-spy prefetch

Buildability
This PR replaces the old Dockerfile with the new hermetic version. It requires all the prefetch input data from [2/3] and the Python lockfiles from [1/3] to build successfully.
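The stages rely on prefetched content being visible at `/cachi2/output` during `RUN` steps (injected by Konflux, or via the Makefile's `--volume` flag for local builds). A minimal sketch of the pattern — the base image, requirements file, and cache path are illustrative assumptions, not the PR's actual `Dockerfile.cpu`:

```dockerfile
# Hypothetical sketch of a stage consuming prefetched wheels.
FROM registry.access.redhat.com/ubi9/ubi AS whl-cache
WORKDIR /opt/app-root
# Resolve wheels only from the prefetched cachi2 pip output; --no-index
# makes the build fail instead of silently reaching the network.
RUN pip install --no-index --find-links=/cachi2/output/deps/pip -r requirements.txt
```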
Review focus
- `Dockerfile.cpu` (~377 lines changed) - the core hermetic build logic
- `patches/apply-patch.sh` (~112 lines) - offline patching orchestration
- `patches/setup-offline-binaries.sh` (~142 lines) - node-gyp offline setup
- `devel_env_setup.sh` simplification (~149 lines changed)

PR Stack
Made with Cursor
Summary by CodeRabbit
New Features
Documentation
Chores
Bug Fixes