-
Notifications
You must be signed in to change notification settings - Fork 113
[WIP] RHOAIENG-9707 ci: dynamic testing of container images with pytest #629
New issue
Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.
By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.
Already on GitHub? Sign in to your account
base: main
Are you sure you want to change the base?
Conversation
[APPROVALNOTIFIER] This PR is NOT APPROVED This pull-request has been approved by: The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing |
Things to check or do * [ ] how this works with podman * [ ] how to test both kubernetes and docker with the same tests * [ ] how to run the notebook server and check that that works * [ ] generate test_* methods for every target in Makefile * [ ] commandline parameter to specify image to test * [ ] without parameter with image to test it can build the image as part of test For now it uses images from https://github.com/jiridanek/notebooks/actions/runs/10006075164/job/27658080202 and not from the current build.
a0163e2
to
4b32eb2
Compare
PR needs rebase. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. |
@jiridanek: The following tests failed, say
Full PR test history. Your PR dashboard. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here. |
…lux/component-updates/component-update-odh-workbench-jupyter-trustyai-cpu-py311-ubi9-n-v2-22 Update odh-workbench-jupyter-trustyai-cpu-py311-ubi9-n-v2-22 to 269b6f1
@coderabbitai review |
✅ Actions performedReview triggered.
|
WalkthroughThe changes introduce containerized testing infrastructure using Dagger.io and Testcontainers, update development dependencies, and add new test utilities and test cases for validating container images and notebook execution. The README is updated with details about container self-tests, and new configuration files and helper scripts are added to support robust and flexible container-based test workflows. Changes
Sequence Diagram(s)sequenceDiagram
participant Tester
participant Pytest
participant Testcontainers
participant Docker
participant Notebook
Tester->>Pytest: Run tests
Pytest->>Testcontainers: Start Docker container
Testcontainers->>Docker: Launch container
Pytest->>Docker: Copy test data into container
Pytest->>Docker: Install dependencies (e.g., papermill)
Pytest->>Docker: Execute notebook via papermill
Docker->>Notebook: Run notebook
Notebook-->>Docker: Execution complete
Docker-->>Pytest: Return results
Pytest->>Testcontainers: Stop and cleanup container
Suggested labels
Poem
✨ Finishing Touches
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out. 🪧 TipsChatThere are 3 ways to chat with CodeRabbit:
SupportNeed help? Create a ticket on our support page for assistance with any issues or questions. Note: Be mindful of the bot's finite context window. It's strongly recommended to break down tasks such as reading entire modules into smaller chunks. For a focused discussion, use review comments to chat about specific files and their changes, instead of using the PR comments. CodeRabbit Commands (Invoked using PR comments)
Other keywords and placeholders
CodeRabbit Configuration File (
|
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Actionable comments posted: 7
🧹 Nitpick comments (5)
tests/__init__.py (1)
9-10
: Fix spacing around assignment operators.Missing spaces around the
=
operators in environment variable assignments.-os.environ["DO_NOT_TRACK"]= "1" -os.environ["NOTHANKS"]= "1" +os.environ["DO_NOT_TRACK"] = "1" +os.environ["NOTHANKS"] = "1"README.md (1)
67-80
: Consider refining the documentation for clarity.The container selftests section provides good context, but the prominent TODO and uncertainty about the approach might confuse users. Consider:
- Moving the TODO to a less prominent position or converting it to a GitHub issue
- Providing clearer guidance on the current recommended approach
- Adding examples of when each tool might be preferred
##### Container selftests -We're using [Dagger.io](https://dagger.io) to run containers from Python tests. +We currently support [Dagger.io](https://dagger.io) for running containers from Python tests. It has very nice verbose logging by default for every action that is running. -**TODO** need to decide which is more suitable; -ideally the tests should allow switching between Kubernetes and Docker/Podman. -That means tying it to this fancy Dagger thing may not be the smartest thing to do. +We are evaluating support for switching between Kubernetes and Docker/Podman environments. We also considered [Testcontainers.com](https://testcontainers.com/). The main difference between the two is that Dagger creates more abstractions over the container engine. Especially Dagger [does not allow bind-mounting local directories](https://docs.dagger.io/cookbook/#modify-a-copied-directory-or-remote-repository-in-a-container) directly to the container but always copies files in and out.tests/docker_utils.py (2)
42-56
: Consider reducing function complexity.The
container_exec
function has 13 parameters, which makes it difficult to use and maintain. Consider using a configuration object or builder pattern.However, given this is a low-level utility wrapper around Docker's exec API, the current parameter list mirrors the underlying API and may be acceptable for this use case.
114-116
: Clarify or remove uncertain error handling.The comment and exception suggest uncertainty about when this condition could occur. Either provide a clear explanation or handle this case more definitively.
- while self.poll() is None: - raise RuntimeError("Hm could that really happen?") + # Wait for the command to complete + while self.poll() is None: + raise RuntimeError("Command execution did not complete properly")tests/test_check_dagger.py (1)
3-6
: Remove unused imports.The
os
andsys
modules are imported but not used in the active code.-import os import pathlib -import sys import logging
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
poetry.lock
is excluded by!**/*.lock
📒 Files selected for processing (7)
README.md
(1 hunks)pyproject.toml
(1 hunks)tests/__init__.py
(1 hunks)tests/conftest.py
(1 hunks)tests/docker_utils.py
(1 hunks)tests/logs/.gitignore
(1 hunks)tests/test_check_dagger.py
(1 hunks)
🧰 Additional context used
🪛 Ruff (0.11.9)
tests/docker_utils.py
21-21: Use a context manager for opening files
(SIM115)
tests/test_check_dagger.py
3-3: os
imported but unused
Remove unused import: os
(F401)
5-5: sys
imported but unused
Remove unused import: sys
(F401)
295-295: Use a context manager for opening files
(SIM115)
🪛 Pylint (3.3.7)
tests/docker_utils.py
[error] 25-25: function already defined line 23
(E0102)
[refactor] 21-21: Consider using 'with' for resource-allocating operations
(R1732)
[refactor] 42-42: Too many arguments (13/5)
(R0913)
[refactor] 42-42: Too many positional arguments (13/5)
(R0917)
tests/test_check_dagger.py
[refactor] 184-184: Redefining argument with the local name 'cmd'
(R1704)
[refactor] 295-295: Consider using 'with' for resource-allocating operations
(R1732)
🔇 Additional comments (6)
tests/__init__.py (1)
5-6
: Good approach for determining project root path.Using
pathlib.Path(__file__).parent.parent
is a clean and reliable way to establish the project root directory for test utilities.pyproject.toml (1)
13-17
: Verify that the dependency versions are current and secure.The new testing dependencies look appropriate for the container testing infrastructure. Please ensure these versions are current and free from known vulnerabilities.
Are there any known security vulnerabilities in pytest 8.3.2, pytest-subtests 0.13.1, pytest-logger 1.1.1, testcontainers 4.7.2, or pyfakefs 5.6.0?
tests/logs/.gitignore (1)
1-1
: Good practice for ignoring test log files.Properly excluding pytest log files from version control is the right approach for test infrastructure.
tests/conftest.py (3)
16-18
: Proper cleanup implementation.The
pytest_sessionfinish
hook correctly ensures cleanup of testcontainers Reaper instances after test completion. This prevents resource leaks and follows testcontainers best practices.
1-9
: Good use of type checking imports.Using
TYPE_CHECKING
to conditionally import pytest types is an efficient approach that avoids runtime imports while providing proper type hints.
11-13
: Consider security implications of privileged containers.While setting
ryuk_privileged = True
solves SELinux issues with Podman, it grants elevated privileges to the Ryuk container. Ensure this is acceptable for your testing environment and consider documenting any security trade-offs.What are the security implications of running testcontainers Ryuk in privileged mode?
tar = tarfile.open(fileobj=fh, mode="w:gz") | ||
|
||
tar_filter = None | ||
if user or group: | ||
def tar_filter(f: tarfile.TarInfo) -> tarfile.TarInfo: | ||
if user: | ||
f.uid = user | ||
if group: | ||
f.gid = group | ||
return f | ||
|
||
logging.debug(f"Adding {src=} to archive {dst=}") | ||
try: | ||
tar.add(src, arcname=os.path.basename(src), filter=tar_filter) | ||
finally: | ||
tar.close() |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
🛠️ Refactor suggestion
Use context manager for tarfile resource management.
The tarfile should be properly closed using a context manager to ensure resource cleanup.
- fh = io.BytesIO()
- tar = tarfile.open(fileobj=fh, mode="w:gz")
+ fh = io.BytesIO()
+ with tarfile.open(fileobj=fh, mode="w:gz") as tar:
+ tar_filter = None
+ if user or group:
+ def tar_filter(f: tarfile.TarInfo) -> tarfile.TarInfo:
+ if user:
+ f.uid = user
+ if group:
+ f.gid = group
+ return f
- tar_filter = None
- if user or group:
- def tar_filter(f: tarfile.TarInfo) -> tarfile.TarInfo:
- if user:
- f.uid = user
- if group:
- f.gid = group
- return f
-
- logging.debug(f"Adding {src=} to archive {dst=}")
- try:
- tar.add(src, arcname=os.path.basename(src), filter=tar_filter)
- finally:
- tar.close()
+ logging.debug(f"Adding {src=} to archive {dst=}")
+ tar.add(src, arcname=os.path.basename(src), filter=tar_filter)
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
tar = tarfile.open(fileobj=fh, mode="w:gz") | |
tar_filter = None | |
if user or group: | |
def tar_filter(f: tarfile.TarInfo) -> tarfile.TarInfo: | |
if user: | |
f.uid = user | |
if group: | |
f.gid = group | |
return f | |
logging.debug(f"Adding {src=} to archive {dst=}") | |
try: | |
tar.add(src, arcname=os.path.basename(src), filter=tar_filter) | |
finally: | |
tar.close() | |
fh = io.BytesIO() | |
with tarfile.open(fileobj=fh, mode="w:gz") as tar: | |
tar_filter = None | |
if user or group: | |
def tar_filter(f: tarfile.TarInfo) -> tarfile.TarInfo: | |
if user: | |
f.uid = user | |
if group: | |
f.gid = group | |
return f | |
logging.debug(f"Adding {src=} to archive {dst=}") | |
tar.add(src, arcname=os.path.basename(src), filter=tar_filter) |
🧰 Tools
🪛 Ruff (0.11.9)
21-21: Use a context manager for opening files
(SIM115)
🪛 Pylint (3.3.7)
[error] 25-25: function already defined line 23
(E0102)
[refactor] 21-21: Consider using 'with' for resource-allocating operations
(R1732)
🤖 Prompt for AI Agents
In tests/docker_utils.py around lines 21 to 36, the tarfile is opened and closed
manually, which risks resource leaks if exceptions occur. Replace the manual
open and close calls with a context manager (using the "with" statement) to
automatically handle closing the tarfile resource. This ensures proper cleanup
even if errors happen during the tarfile operations.
uid = 123456 | ||
gid = 0 | ||
test_data_dir = str(ROOT_PATH / "jupyter" / notebook_name / f"{ubi_flavor}-{python_kernel}" / "test") | ||
image = "ghcr.io/jiridanek/notebooks/workbench-images:jupyter-minimal-ubi9-python-3.9-jd_helpful_error_751147cd93fed327e940670edbc99c6f44a1ac24" |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
🛠️ Refactor suggestion
Replace hardcoded image reference with parameterized value.
The hardcoded image name should be configurable to support testing different images and versions.
Consider adding a pytest fixture or parameter to make the image configurable:
@pytest.mark.parametrize("image", [
"ghcr.io/jiridanek/notebooks/workbench-images:jupyter-minimal-ubi9-python-3.9-jd_helpful_error_751147cd93fed327e940670edbc99c6f44a1ac24"
])
def test_something_with_papermill(image):
# ... rest of the test
🤖 Prompt for AI Agents
In tests/test_check_dagger.py at line 33, the image reference is hardcoded,
limiting flexibility. Refactor the test to accept the image as a parameter by
adding a pytest fixture or using pytest.mark.parametrize to pass the image
string. Update the test function signature to accept this parameter and replace
the hardcoded image variable with the parameter value to allow testing with
different images and versions.
# # run_kubectl(["wait", "--for=condition=ready", "pod", "runtime-pod", "--timeout=300s"]) | ||
# # LOGGER.error("Usage: make validate-runtime-image image=<container-image-name>") | ||
# # fail = False | ||
image = "ghcr.io/jiridanek/notebooks/workbench-images:runtime-minimal-ubi9-python-3.9-jd_helpful_error_751147cd93fed327e940670edbc99c6f44a1ac24" |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
🛠️ Refactor suggestion
Replace hardcoded image reference with parameterized value.
Similar to the other test, this hardcoded image name should be configurable.
Consider using the same parameterization approach as suggested for the papermill test.
🤖 Prompt for AI Agents
In tests/test_check_dagger.py at line 178, the Docker image reference is
hardcoded, which reduces flexibility. Replace the hardcoded image string with a
parameter or fixture that can be configured externally or passed into the test,
following the parameterization approach used in the papermill test. This will
make the test more adaptable and maintainable.
# def run_kubectl(args: list[str], check=True, background=False, stdout=None, stderr=None) -> subprocess.Popen | subprocess.CompletedProcess: | ||
# return run_command([str(ROOT_PATH / 'bin/kubectl')] + args, check=check, background=background, stdout=stdout, stderr=stderr) | ||
# | ||
# | ||
# def run_command(args: list[str], check=True, background=False, stdout=None, stderr=None): | ||
# p = subprocess.Popen(args, text=True, stdout=stdout, stderr=stderr) | ||
# LOGGER.info(f"Running command: {shlex.join(args)}") | ||
# if background: | ||
# return p | ||
# stdout, stderr = p.communicate(timeout=COMMAND_TIMEOUT) | ||
# if stdout: | ||
# LOGGER.debug(f"Command output: {stdout}") | ||
# if check and p.returncode != 0: | ||
# raise subprocess.CalledProcessError(p.returncode, shlex.join(args), stdout, stderr) | ||
# return subprocess.CompletedProcess(args, p.returncode, stdout, stderr) | ||
|
||
|
||
# class Substring(str): | ||
# # """ | ||
# # >>> match Substring("abrakadabra"): | ||
# # ... case "raka": # matches | ||
# # ... pass | ||
# # """ | ||
# __eq__ = str.__contains__ | ||
|
||
# def test_jupyter_minimal_ubi9_python_3_9(): | ||
# test_notebook(notebook_name="jupyter-minimal-ubi9-python-3.9") | ||
# | ||
# def test_jupyter_datascience_ubi9_python_3_9(): | ||
# test_notebook(notebook_name="jupyter-datascience-ubi9-python-3.9") | ||
# | ||
# def test_notebook(notebook_name) -> None: | ||
# notebook_name = notebook_name.replace("cuda-", "").replace(".", "-") | ||
# LOGGER.info("# Running tests for $(NOTEBOOK_NAME) notebook...") | ||
# # Verify the notebook's readiness by pinging the /api endpoint | ||
# run_kubectl(["wait", "--for=condition=ready", "pod", "-l", f"app={notebook_name}", "--timeout=600s"]) | ||
# with run_kubectl(["port-forward", f"svc/{notebook_name}-notebook", "8888:8888"], background=True) as p: | ||
# run_command(["curl", "--retry", "25", "--retry-delay", "1", "--retry-connrefused", | ||
# "http://localhost:8888/notebook/opendatahub/jovyan/api"]) | ||
# p.kill() | ||
# full_notebook_name = run_kubectl(["get", "pods", "-l", f"app={notebook_name}", "-o", "custom-columns=:metadata.name"], stdout=subprocess.PIPE).stdout.strip() | ||
# | ||
# match Substring(full_notebook_name): | ||
# case "minimal-ubi9": | ||
# test_with_papermill(full_notebook_name, "minimal", "ubi9", "python-3.9") | ||
# case "datascience-ubi9": | ||
# validate_ubi9_datascience(full_notebook_name) | ||
# case "pytorch-ubi9": | ||
# validate_ubi9_datascience(full_notebook_name) | ||
# test_with_papermill("pytorch", "ubi9", "python-3.9") | ||
# case "tensorflow-ubi9": | ||
# validate_ubi9_datascience(full_notebook_name) | ||
# test_with_papermill("tensorflow", "ubi9", "python-3.9") | ||
# case "trustyai-ubi9": | ||
# validate_ubi9_datascience(full_notebook_name) | ||
# test_with_papermill("trustyai", "ubi9", "python-3.9") | ||
# case "minimal-ubi8": | ||
# test_with_papermill("minimal", "ubi8", "python-3.8") | ||
# case "datascience-ubi8": | ||
# validate_ubi8_datascience(full_notebook_name) | ||
# case "trustyai-ubi8": | ||
# validate_ubi8_datascience(full_notebook_name) | ||
# test_with_papermill("trustyai", "ubi8", "python-3.8") | ||
# case "anaconda": | ||
# print("There is no test notebook implemented yet for Anaconda Notebook....") | ||
# case _: | ||
# print(f"No matching condition found for {full_notebook_name}.") | ||
|
||
|
||
# def test_with_tenacity() -> None: | ||
|
||
# NOTEBOOK_REPO_BRANCH_BASE = os.environ.get("NOTEBOOK_REPO_BRANCH_BASE") or "https://raw.githubusercontent.com/opendatahub-io/notebooks/main" | ||
# NOTEBOOK_REPO_BRANCH_BASE = os.environ.get("NOTEBOOK_REPO_BRANCH_BASE") or "https://raw.githubusercontent.com/jiridanek/notebooks/jd_update_nbformat" | ||
# | ||
# | ||
|
||
REQUIRED_RUNTIME_IMAGE_COMMANDS = ["curl", "python3"] | ||
REQUIRED_CODE_SERVER_IMAGE_COMMANDS = ["curl", "python", "oc", "code-server"] | ||
REQUIRED_R_STUDIO_IMAGE_COMMANDS = ["curl", "python", "oc", "/usr/lib/rstudio-server/bin/rserver"] | ||
|
||
|
||
# # Function for testing a notebook with papermill | ||
# # ARG 1: Notebook name | ||
# # ARG 1: UBI flavor | ||
# # ARG 1: Python kernel | ||
# def test_with_papermill(full_notebook_name, notebook_name, ubi_flavor, python_kernel): | ||
# run_kubectl(['exec', full_notebook_name, '--', '/bin/sh', "-c", "python3 -m pip install papermill"]) | ||
# r = run_kubectl(['exec', full_notebook_name, '--', '/bin/sh', "-c", | ||
# f"wget {NOTEBOOK_REPO_BRANCH_BASE}/jupyter/{notebook_name}/{ubi_flavor}-{python_kernel}/test/test_notebook.ipynb -O test_notebook.ipynb" | ||
# f" && python3 -m papermill test_notebook.ipynb {notebook_name}_{ubi_flavor}_output.ipynb --kernel python3 --stderr-file {notebook_name}_{ubi_flavor}_error.txt"], check=False) | ||
# if r.returncode != 0: | ||
# LOGGER.error(f"ERROR: The {notebook_name} {ubi_flavor} notebook encountered a failure." | ||
# f" To investigate the issue, you can review the logs located in the ocp-ci cluster on 'artifacts/notebooks-e2e-tests/jupyter-$(1)-$(2)-$(3)-test-e2e' directory or run 'cat $(1)_$(2)_error.txt' within your container." | ||
# f" The make process has been aborted.") | ||
# assert False | ||
# else: | ||
# r = run_kubectl(["exec", full_notebook_name, "--", "/bin/sh", "-c", f"cat {notebook_name}_{ubi_flavor}_error.txt | grep --quiet FAILED"], check=False) | ||
# if r.returncode == 0: | ||
# LOGGER.error(f"ERROR: The {notebook_name} {ubi_flavor} notebook encountered a failure. The make process has been aborted.") | ||
# run_kubectl(["exec", full_notebook_name, "--", "/bin/sh", "-c", f"cat {notebook_name}_{ubi_flavor}_error.txt"]) | ||
# assert False | ||
|
||
|
||
# def validate_ubi9_datascience(full_notebook_name): | ||
# test_with_papermill(full_notebook_name, "minimal", "ubi9", "python-3.9") | ||
# test_with_papermill(full_notebook_name, "datascience", "ubi9", "python-3.9") | ||
# | ||
# def validate_ubi8_datascience(full_notebook_name): | ||
# test_with_papermill(full_notebook_name,"minimal","ubi8","python-3.8") | ||
# test_with_papermill(full_notebook_name,"datascience","ubi8","python-3.8") | ||
# | ||
@pytest.mark.parametrize("cmd", REQUIRED_RUNTIME_IMAGE_COMMANDS) | ||
def test_validate_runtime_image(cmd: str): | ||
LOGGER.info("# Running tests for $(NOTEBOOK_NAME) runtime...") | ||
# # run_kubectl(["wait", "--for=condition=ready", "pod", "runtime-pod", "--timeout=300s"]) | ||
# # LOGGER.error("Usage: make validate-runtime-image image=<container-image-name>") | ||
# # fail = False | ||
image = "ghcr.io/jiridanek/notebooks/workbench-images:runtime-minimal-ubi9-python-3.9-jd_helpful_error_751147cd93fed327e940670edbc99c6f44a1ac24" | ||
# async with dagger.Connection(dagger.Config(log_output=sys.stderr)) as client: | ||
container = testcontainers.core.container.DockerContainer(image) | ||
container.with_command("tail -f /dev/null") | ||
with container.start(): | ||
# c = (client.container().from_(image)) | ||
for cmd in REQUIRED_RUNTIME_IMAGE_COMMANDS: | ||
LOGGER.info(f"=> Checking container image {image} for {cmd} command...") | ||
# # r = run_kubectl(["exec", f"runtime-pod", "which {cmd} > /dev/null 2>&1"], check=False) | ||
r = docker_utils.container_exec(container.get_wrapped_container(), | ||
cmd=["/bin/bash", "-c", f"which {cmd} > /dev/null 2>&1"], | ||
stream=True) | ||
# # if r.returncode != 0: | ||
# # LOGGER.error("ERROR: Container image $$image does not meet criteria for command: $$cmd") | ||
# # fail = True | ||
# # continue | ||
assert r.communicate() == 0 | ||
if cmd == "python3": | ||
LOGGER.info("=> Checking notebook execution...") | ||
|
||
|
||
# # await c.with_exec(use_entrypoint=True, args=[]) | ||
# # print("default artgs", await c.default_args()) | ||
# # TODO: I don't see elyra/ directory on the image | ||
# # await c.with_exec(["/bin/bash", "-c", "python3 -m pip install -r /opt/app-root/elyra/requirements-elyra.txt" | ||
# # " && curl https://raw.githubusercontent.com/nteract/papermill/main/papermill/tests/notebooks/simple_execute.ipynb --output simple_execute.ipynb" | ||
# # " && python3 -m papermill simple_execute.ipynb output.ipynb > /dev/null"]) | ||
# # r = run_kubectl(["exec", "runtime-pod", "/bin/sh", "-c", , check=False) | ||
# # if r.returncode != 0: | ||
# # LOGGER.error("ERROR: Image does not meet Python requirements criteria in requirements-elyra.txt") | ||
# # fail = True | ||
# # assert not fail, "=> ERROR: Container image $$image is not a suitable Elyra runtime image" | ||
# # LOGGER.info(f"=> Container image {image} is a suitable Elyra runtime image") | ||
# | ||
# | ||
# async def test_validate_codeserver_image(): | ||
# # codeserver_pod_ready = run_kubectl( | ||
# # ["wait", "--for=condition=ready", "pod", "codeserver-pod", "--timeout=300s"], check=False) | ||
# # assert codeserver_pod_ready.returncode == 0, "Code-server pod did not become ready within expected time" | ||
# | ||
# # assert image, "Usage: make validate-codeserver-image image=<container-image-name>" | ||
# | ||
# image = "ghcr.io/jiridanek/notebooks/workbench-images:codeserver-ubi9-python-3.9-jd_helpful_error_751147cd93fed327e940670edbc99c6f44a1ac24" | ||
# async with dagger.Connection(dagger.Config(log_output=sys.stderr)) as client: | ||
# c = (client.container().from_(image)) | ||
# for cmd in REQUIRED_CODE_SERVER_IMAGE_COMMANDS: | ||
# await c.with_exec(["/bin/bash", "-c", f"which {cmd} > /dev/null 2>&1"]) | ||
# # result = run_kubectl(["exec", "codeserver-pod", f"which {cmd} > /dev/null 2>&1"], check=False) | ||
# # assert result.returncode == 0, f"ERROR: Container image {image} does not meet criteria for command: {cmd}" | ||
# | ||
# | ||
# # async def validate_rstudio_image(client: dagger.Client, c: dagger.Container): | ||
# async def test_validate_rstudio_image(): | ||
# image = "ghcr.io/jiridanek/notebooks/workbench-images:rstudio-c9s-python-3.9-jd_helpful_error_751147cd93fed327e940670edbc99c6f44a1ac24" | ||
# | ||
# notebook_name = "" | ||
# ubi_flavor = "c9s" | ||
# python_kernel = "python-3.9" | ||
# | ||
# async with (dagger.Connection(dagger.Config(log_output=sys.stderr)) as client): | ||
# c = (client.container() | ||
# .from_(image)) | ||
# | ||
# # $(eval NOTEBOOK_NAME := $(subst .,-,$(subst cuda-,,$*))) | ||
# LOGGER.info("# Running tests for $(NOTEBOOK_NAME) RStudio Server image...") | ||
# # rstudo_pod_ready = run_kubectl(["wait", "--for=condition=ready", "pod", "rstudio-pod", "--timeout=300s"], check=False) | ||
# # assert rstudo_pod_ready.returncode == 0, "Code-server pod did not become ready within expected time" | ||
# # assert image, "Usage: make validate-rstudio-image image=<container-image-name>" | ||
# | ||
# LOGGER.info("=> Checking container image $$image for package intallation...") | ||
# c = c.with_exec(["/bin/bash", "-c", "mkdir -p /opt/app-root/src/R/temp-library > /dev/null 2>&1"]) | ||
# c = c.with_exec(["/bin/bash", "-c", | ||
# '''R -e "install.packages('tinytex', lib='/opt/app-root/src/R/temp-library')" > /dev/null 2>&1''']) | ||
# await c | ||
# | ||
# for cmd in REQUIRED_R_STUDIO_IMAGE_COMMANDS: | ||
# LOGGER.info(f"=> Checking container image {image} for {cmd}...") | ||
# # which_cmd = run_kubectl(["exec", "rstudio-pod", f"which {cmd} > /dev/null 2>&1"], check=False) | ||
# await c.with_exec(["/bin/bash", "-c", f"which {cmd} > /dev/null 2>&1"]) | ||
# # if which_cmd.returncode == 0: | ||
# # LOGGER.info(f"{cmd} executed successfully!") | ||
# # else: | ||
# # LOGGER.error("ERROR: Container image {image} does not meet criteria for command: {cmd}") | ||
# # fail = True | ||
# # continue | ||
# | ||
# LOGGER.info("=> Fetching R script from URL and executing on the container...") | ||
# # run_command(["curl", "-sSL", "-o", "test_script.R" f"{NOTEBOOK_REPO_BRANCH_BASE}/rstudio/c9s-python-3.9/test/test_script.R"], stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL) | ||
# # run_kubectl(["cp", "test_script.R", "rstudio-pod:/opt/app-root/src/test_script.R"], stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL) | ||
# # test_script = run_kubectl(["exec", "rstudio-pod", "--", "Rscript /opt/app-root/src/test_script.R > /dev/null 2>&1"]) | ||
# r = client.host().directory(str(ROOT_PATH / "rstudio" / f"{ubi_flavor}-{python_kernel}" / "test")) | ||
# d = (c | ||
# .with_directory("/test", r) | ||
# .with_workdir("/opt/app-data") | ||
# .with_exec(["/bin/sh", "-c", "Rscript /test/test_script.R > /dev/null 2>&1"]) | ||
# ) | ||
# await d | ||
# | ||
# # if test_script.returncode == 0: | ||
# # LOGGER.info("R script executed successfully!") | ||
# # os.unlink("test_script.R") | ||
# # else: | ||
# # LOGGER.error("Error: R script failed.") | ||
# # fail = True | ||
# # | ||
# # assert not fail | ||
|
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
🛠️ Refactor suggestion
Clean up extensive commented code.
There's a large amount of commented-out code that makes the file difficult to navigate and maintain. Consider either:
- Implementing the functionality if it's needed
- Moving it to a separate development/draft file
- Removing it entirely if it's obsolete
The commented code appears to contain valuable implementations for Kubernetes-based testing and Dagger async patterns that could be useful for future development.
🧰 Tools
🪛 Pylint (3.3.7)
[refactor] 184-184: Redefining argument with the local name 'cmd'
(R1704)
🤖 Prompt for AI Agents
In tests/test_check_dagger.py from lines 61 to 284, there is a large block of
commented-out code that clutters the file and reduces maintainability. Review
this commented code and decide whether to implement the needed functionality,
move it to a separate draft or development file for future reference, or remove
it entirely if it is obsolete. This cleanup will improve readability and
maintainability of the test file.
with open("/config.txt", "wt") as fp: | ||
fp.writelines(lines) |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Fix hardcoded filename in blockinfile function.
The function writes to a hardcoded /config.txt
instead of the provided filename
parameter, which appears to be a bug.
- with open("/config.txt", "wt") as fp:
+ with open(filename, "wt") as fp:
fp.writelines(lines)
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
with open("/config.txt", "wt") as fp: | |
fp.writelines(lines) | |
with open(filename, "wt") as fp: | |
fp.writelines(lines) |
🤖 Prompt for AI Agents
In tests/test_check_dagger.py around lines 310 to 311, the code writes to a
hardcoded filename "/config.txt" instead of using the provided filename
parameter. Replace the hardcoded string "/config.txt" with the variable filename
to ensure the function writes to the intended file.
""" | ||
begin = end = -1 | ||
|
||
lines = open(filename, "rt").readlines() |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
🛠️ Refactor suggestion
Use context manager for file operations.
The file should be opened using a context manager to ensure proper resource cleanup.
- lines = open(filename, "rt").readlines()
+ with open(filename, "rt") as f:
+ lines = f.readlines()
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
lines = open(filename, "rt").readlines() | |
with open(filename, "rt") as f: | |
lines = f.readlines() |
🧰 Tools
🪛 Ruff (0.11.9)
295-295: Use a context manager for opening files
(SIM115)
🪛 Pylint (3.3.7)
[refactor] 295-295: Consider using 'with' for resource-allocating operations
(R1732)
🤖 Prompt for AI Agents
In tests/test_check_dagger.py at line 295, the file is opened without a context
manager, which risks resource leaks. Replace the open call with a with statement
to open the file, read the lines inside the block, and ensure the file is
properly closed after reading.
for cmd in REQUIRED_RUNTIME_IMAGE_COMMANDS: | ||
LOGGER.info(f"=> Checking container image {image} for {cmd} command...") | ||
# # r = run_kubectl(["exec", f"runtime-pod", "which {cmd} > /dev/null 2>&1"], check=False) | ||
r = docker_utils.container_exec(container.get_wrapped_container(), | ||
cmd=["/bin/bash", "-c", f"which {cmd} > /dev/null 2>&1"], | ||
stream=True) | ||
# # if r.returncode != 0: | ||
# # LOGGER.error("ERROR: Container image $$image does not meet criteria for command: $$cmd") | ||
# # fail = True | ||
# # continue | ||
assert r.communicate() == 0 |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Fix variable name conflict in loop.
The loop variable cmd
shadows the function parameter cmd
, which is confusing and flagged by static analysis.
- for cmd in REQUIRED_RUNTIME_IMAGE_COMMANDS:
- LOGGER.info(f"=> Checking container image {image} for {cmd} command...")
+ for required_cmd in REQUIRED_RUNTIME_IMAGE_COMMANDS:
+ LOGGER.info(f"=> Checking container image {image} for {required_cmd} command...")
r = docker_utils.container_exec(container.get_wrapped_container(),
- cmd=["/bin/bash", "-c", f"which {cmd} > /dev/null 2>&1"],
+ cmd=["/bin/bash", "-c", f"which {required_cmd} > /dev/null 2>&1"],
stream=True)
assert r.communicate() == 0
- if cmd == "python3":
+ if required_cmd == "python3":
LOGGER.info("=> Checking notebook execution...")
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
for cmd in REQUIRED_RUNTIME_IMAGE_COMMANDS: | |
LOGGER.info(f"=> Checking container image {image} for {cmd} command...") | |
# # r = run_kubectl(["exec", f"runtime-pod", "which {cmd} > /dev/null 2>&1"], check=False) | |
r = docker_utils.container_exec(container.get_wrapped_container(), | |
cmd=["/bin/bash", "-c", f"which {cmd} > /dev/null 2>&1"], | |
stream=True) | |
# # if r.returncode != 0: | |
# # LOGGER.error("ERROR: Container image $$image does not meet criteria for command: $$cmd") | |
# # fail = True | |
# # continue | |
assert r.communicate() == 0 | |
for required_cmd in REQUIRED_RUNTIME_IMAGE_COMMANDS: | |
LOGGER.info(f"=> Checking container image {image} for {required_cmd} command...") | |
# # r = run_kubectl(["exec", f"runtime-pod", "which {cmd} > /dev/null 2>&1"], check=False) | |
r = docker_utils.container_exec(container.get_wrapped_container(), | |
cmd=["/bin/bash", "-c", f"which {required_cmd} > /dev/null 2>&1"], | |
stream=True) | |
# # if r.returncode != 0: | |
# # LOGGER.error("ERROR: Container image $$image does not meet criteria for command: $$cmd") | |
# # fail = True | |
# # continue | |
assert r.communicate() == 0 | |
if required_cmd == "python3": | |
LOGGER.info("=> Checking notebook execution...") |
🧰 Tools
🪛 Pylint (3.3.7)
[refactor] 184-184: Redefining argument with the local name 'cmd'
(R1704)
🤖 Prompt for AI Agents
In tests/test_check_dagger.py around lines 184 to 194, the loop variable named
'cmd' conflicts with a function parameter of the same name, causing confusion
and static analysis warnings. Rename the loop variable to a distinct name such
as 'command' to avoid shadowing the function parameter. Update all references
inside the loop accordingly to use the new variable name.
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
@coderabbitai let's think about this one.
We have the testcontainer tests in the tests/containers
directory. Will they somehow benefit from using dagger?
We cannot use dagger for final builds because that does not work with Konflux (konflux runs buildah builds from Dockerfiles). So if we wanted to use dagger for CI, we could in github actions. But, if you look into our .github/workflows, you'd see we're running Kubernetes; a bit of a problem with Dagger. As well as dealing with huge images, copying them into dagger engine container will be overhead that cannot be afforded.
Dagger does not have some self-hosting features but they are stalled and the commercial prong of the product focuses on saas that we're not gonna use
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
The combination of testcontainers + your existing Kubernetes infrastructure + Konflux compatibility makes this a much more sustainable path forward than Dagger.
✏️ Learnings added
Learnt from: jiridanek
PR: opendatahub-io/notebooks#629
File: tests/test_check_dagger.py:1-1
Timestamp: 2025-09-04T15:02:29.290Z
Learning: jiridanek identified key constraints preventing Dagger adoption in opendatahub-io/notebooks: incompatibility with Konflux buildah builds, problems with Kubernetes-based GitHub Actions CI, overhead from copying large images to Dagger engine containers, and stalled self-hosting features with commercial focus on SaaS. This led to the decision to stick with testcontainers for container image testing instead of Dagger.
Note: Learnings are effective only in the context of similar code segments. To apply general coding guidelines, please configure review instructions. You can manage existing learnings in the app.
🧠 Learnings used
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2185
File: jupyter/pytorch/ubi9-python-3.12/Dockerfile.cuda:109-117
Timestamp: 2025-08-29T08:48:55.985Z
Learning: jiridanek prefers to implement systematic cleanup improvements through dedicated GitHub issues (like #2076) rather than applying individual point fixes during PR reviews, maintaining consistency with their established pattern of comprehensive code quality improvements.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#1259
File: jupyter/rocm/tensorflow/ubi9-python-3.12/kustomize/base/service.yaml:5-15
Timestamp: 2025-07-02T18:59:15.788Z
Learning: jiridanek creates targeted GitHub issues for specific test quality improvements identified during PR reviews in opendatahub-io/notebooks. Issue #1268 demonstrates this by converting a review comment about insufficient tf2onnx conversion test validation into a comprehensive improvement plan with clear acceptance criteria, code examples, and ROCm-specific context.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#0
File: :0-0
Timestamp: 2025-08-05T17:24:08.616Z
Learning: jiridanek requested PR review for #1521 covering s390x architecture support improvements, demonstrating continued focus on systematic multi-architecture compatibility enhancements in the opendatahub-io/notebooks repository through clean implementation with centralized configuration, proper CI integration, and architecture-aware testing patterns.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#0
File: :0-0
Timestamp: 2025-08-22T12:15:38.859Z
Learning: jiridanek requested automation for digest resolution using Registry v2 API during PR #2103 review. A comprehensive GitHub issue #2104 was successfully created covering Python and bash implementation approaches for resolving image tags to sha256 digests in manifests/base/params-latest.env, with detailed acceptance criteria, CI/CD integration options, authentication handling for quay.io registry, and proper context linking, continuing the established pattern of systematic code quality improvements through detailed issue tracking.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#0
File: :0-0
Timestamp: 2025-08-01T14:46:03.215Z
Learning: jiridanek requested GitHub issue creation for two nitpicks during PR #1588 review: comment wording improvement in ROCm TensorFlow Dockerfile and typo fix in Jupyter DataScience Dockerfile stage header. Issues #1589 and #1590 were successfully created with comprehensive problem descriptions, specific file locations and line numbers, clear before/after solutions, detailed acceptance criteria, and proper context linking, continuing the established pattern of systematic code quality improvements through detailed issue tracking.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#1259
File: jupyter/rocm/tensorflow/ubi9-python-3.12/Pipfile:31-36
Timestamp: 2025-08-20T17:51:39.525Z
Learning: jiridanek consistently escalates point issues identified during PR reviews into systematic repository-wide improvements. When CodeRabbit flagged missing libxcrypt-compat in a single ROCm TensorFlow Python 3.12 image during PR #1259 review, jiridanek requested creation of issue #2075 for systematic review and fixing of all Python 3.12 UBI9 images, demonstrating his comprehensive approach to infrastructure consistency and code quality.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#0
File: :0-0
Timestamp: 2025-07-11T11:16:05.131Z
Learning: jiridanek requested GitHub issue creation for RStudio py311 Tekton push pipelines during PR #1379 review. Issue #1384 was successfully created covering two RStudio variants (CPU and CUDA) found in manifests/base/params-latest.env, with comprehensive problem description, implementation requirements following the same pattern as other workbench pipelines, clear acceptance criteria, and proper context linking, continuing the established pattern of systematic code quality improvements through detailed issue tracking.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#0
File: :0-0
Timestamp: 2025-08-22T12:15:38.859Z
Learning: jiridanek requested automation for digest resolution using Registry v2 API during PR #2103 review. A comprehensive GitHub issue was created covering Python and bash implementation approaches for resolving image tags to sha256 digests in manifests/base/params-latest.env, with detailed acceptance criteria, CI/CD integration options, and proper context linking, continuing the established pattern of systematic code quality improvements through detailed issue tracking.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#1995
File: jupyter/rocm/tensorflow/ubi9-python-3.11/Dockerfile.rocm:0-0
Timestamp: 2025-08-19T06:31:05.424Z
Learning: jiridanek requested GitHub issue creation for ROCm device detection test during PR #1995 review. The issue covers comprehensive test requirements for validating that device_lib.list_local_devices() successfully detects ROCm/HIP devices after ROCm package optimizations, including multiple solution options (pytest, shell script, notebook-based), detailed acceptance criteria for core requirements and test coverage, implementation guidance for test framework selection and CI integration, example test code for TensorFlow device detection validation, and proper context linking to PR #1995 and review comment, continuing the established pattern of systematic code quality improvements through detailed issue tracking.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#1320
File: jupyter/pytorch/ubi9-python-3.12/Dockerfile.cuda:42-52
Timestamp: 2025-07-09T12:29:56.162Z
Learning: jiridanek requested GitHub issue creation for OpenShift client architecture mapping problem affecting 29 Dockerfiles during PR #1320 review. Issue was created with comprehensive analysis covering all affected files using $(uname -m) returning 'aarch64' but OpenShift mirror expecting 'arm64', systematic solution using BuildKit TARGETARCH mapping with proper amd64→x86_64 and arm64→arm64 conversion, detailed acceptance criteria, and implementation guidance, continuing the established pattern of systematic code quality improvements through detailed issue tracking.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#1269
File: codeserver/ubi9-python-3.12/run-code-server.sh:7-10
Timestamp: 2025-07-03T14:01:22.819Z
Learning: jiridanek requested GitHub issue creation for container startup robustness and lifecycle management improvements in codeserver/ubi9-python-3.12/run-code-server.sh during PR #1269 review. Issue #1298 was successfully created with comprehensive problem description covering race conditions, failure detection, orphaned processes, and signal handling, along with multiple solution options, phased acceptance criteria, testing approach, and proper context linking, continuing the established pattern of systematic code quality improvements.
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
@coderabbitai help me think of ways to use dagger after all, and overcome constraints to achieve success
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
to address performance concerns.
Recommended Implementation Path
- Phase 1: Deploy Dagger Engine on your Kubernetes cluster for local development testing
- Phase 2: Create Dagger modules for your most complex testing scenarios (notebook validation, multi-architecture testing)
- Phase 3: Implement hybrid approach - Dagger for development/testing, Konflux for production builds
- Phase 4: Explore advanced caching and performance optimizations
The key insight is that Dagger has evolved significantly in 2024, with mature Kubernetes integration, advanced caching, and enterprise-ready features that directly address your original constraints. The hybrid approach lets you capture Dagger's benefits while respecting your existing infrastructure investments.
🧠 Learnings used
Learnt from: jiridanek
PR: opendatahub-io/notebooks#629
File: tests/test_check_dagger.py:1-1
Timestamp: 2025-09-04T15:02:29.312Z
Learning: jiridanek identified key constraints preventing Dagger adoption in opendatahub-io/notebooks: incompatibility with Konflux buildah builds, problems with Kubernetes-based GitHub Actions CI, overhead from copying large images to Dagger engine containers, and stalled self-hosting features with commercial focus on SaaS. This led to the decision to stick with testcontainers for container image testing instead of Dagger.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#1909
File: runtimes/pytorch+llmcompressor/ubi9-python-3.11/Dockerfile.cuda:11-15
Timestamp: 2025-08-12T08:40:55.286Z
Learning: jiridanek requested GitHub issue creation for redundant CUDA upgrade optimization during PR #1909 review. Analysis revealed all 14 CUDA Dockerfiles contain redundant `yum upgrade -y` commands in cuda-base stages that execute after base stages already performed comprehensive `dnf upgrade` via pre-upgrade blocks, causing unnecessary CI latency and build inefficiency. Issue includes complete scope analysis with specific line numbers, investigation framework requiring NVIDIA upstream documentation review, multiple solution options, comprehensive acceptance criteria covering systematic testing and performance measurement, and proper context linking to PR #1909 review comment.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#1909
File: runtimes/pytorch+llmcompressor/ubi9-python-3.11/Dockerfile.cuda:11-15
Timestamp: 2025-08-12T08:40:55.286Z
Learning: jiridanek requested GitHub issue creation for redundant CUDA upgrade optimization during PR #1909 review. Issue covers duplicate yum/dnf upgrade commands in cuda-base stages that execute after base stages already performed comprehensive upgrades, causing unnecessary CI latency and build inefficiency across multiple CUDA Dockerfiles. The solution requires investigating NVIDIA upstream documentation requirements before removing redundant upgrades, with systematic testing of all CUDA variants and performance measurement. Issue follows established pattern of comprehensive problem analysis, multiple solution options, detailed acceptance criteria, and proper context linking.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#1521
File: tests/containers/workbenches/jupyterlab/jupyterlab_test.py:61-68
Timestamp: 2025-08-01T09:33:23.033Z
Learning: jiridanek requested GitHub issue creation for improving architecture detection efficiency and centralization in JupyterLab tests during PR #1521 review. Issue #1576 was successfully created covering performance inefficiency of current container start/stop approach for architecture detection, need for centralized configuration of architecture-specific limitations, comprehensive solution options including session caching and helper functions, detailed acceptance criteria covering performance and maintainability requirements, phased implementation guidance, and proper context linking, continuing the established pattern of systematic code quality improvements through detailed issue tracking.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#1259
File: jupyter/rocm/tensorflow/ubi9-python-3.12/Pipfile:31-36
Timestamp: 2025-08-20T17:51:39.525Z
Learning: jiridanek consistently escalates point issues identified during PR reviews into systematic repository-wide improvements. When CodeRabbit flagged missing libxcrypt-compat in a single ROCm TensorFlow Python 3.12 image during PR #1259 review, jiridanek requested creation of issue #2075 for systematic review and fixing of all Python 3.12 UBI9 images, demonstrating his comprehensive approach to infrastructure consistency and code quality.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#1320
File: jupyter/pytorch/ubi9-python-3.12/Dockerfile.cuda:42-52
Timestamp: 2025-07-09T12:29:56.162Z
Learning: jiridanek requested GitHub issue creation for OpenShift client architecture mapping problem affecting 29 Dockerfiles during PR #1320 review. Issue was created with comprehensive analysis covering all affected files using $(uname -m) returning 'aarch64' but OpenShift mirror expecting 'arm64', systematic solution using BuildKit TARGETARCH mapping with proper amd64→x86_64 and arm64→arm64 conversion, detailed acceptance criteria, and implementation guidance, continuing the established pattern of systematic code quality improvements through detailed issue tracking.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#0
File: :0-0
Timestamp: 2025-08-19T15:45:23.201Z
Learning: jiridanek demonstrated an effective methodology for reviewing large-scale refactoring PRs involving repetitive changes across many files: (1) Focus on potential copy-paste errors by identifying what should remain unique vs. what should be consistent, (2) Systematically verify serviceAccountName uniqueness preservation in Tekton PipelineRun files, (3) Confirm pipelineRef consistency across all modified files, (4) Validate that extracted shared components contain the correct content. This approach is particularly valuable for Tekton pipeline refactoring in the opendatahub-io/notebooks repository.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#0
File: :0-0
Timestamp: 2025-08-22T12:15:38.859Z
Learning: jiridanek requested automation for digest resolution using Registry v2 API during PR #2103 review. A comprehensive GitHub issue #2104 was successfully created covering Python and bash implementation approaches for resolving image tags to sha256 digests in manifests/base/params-latest.env, with detailed acceptance criteria, CI/CD integration options, authentication handling for quay.io registry, and proper context linking, continuing the established pattern of systematic code quality improvements through detailed issue tracking.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2185
File: rstudio/c9s-python-3.11/Dockerfile.cuda:68-73
Timestamp: 2025-08-29T08:46:40.297Z
Learning: jiridanek prioritized implementing Docker/Podman build caching (issue #2076) over cleanup command standardization in PR #2185, confirming their preference for high-impact performance improvements over micro-optimizations in Dockerfile cleanup patterns.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#0
File: :0-0
Timestamp: 2025-08-27T15:20:35.737Z
Learning: jiridanek requested GitHub issue creation for DNF consistency during PR #2145 review, identifying inconsistent yum vs dnf usage across UBI9 Dockerfiles. Issue #2157 was created with comprehensive repository-wide audit strategy, systematic replacement approach, benefits analysis, and clear acceptance criteria for standardizing package manager usage across all UBI9 images, continuing the established pattern of systematic code quality improvements through detailed issue tracking.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2185
File: jupyter/pytorch/ubi9-python-3.12/Dockerfile.cuda:109-117
Timestamp: 2025-08-29T08:48:55.985Z
Learning: jiridanek prefers to implement systematic cleanup improvements through dedicated GitHub issues (like #2076) rather than applying individual point fixes during PR reviews, maintaining consistency with their established pattern of comprehensive code quality improvements.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#0
File: :0-0
Timestamp: 2025-08-05T17:24:08.616Z
Learning: jiridanek requested PR review for #1521 covering s390x architecture support improvements, demonstrating continued focus on systematic multi-architecture compatibility enhancements in the opendatahub-io/notebooks repository through clean implementation with centralized configuration, proper CI integration, and architecture-aware testing patterns.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#1259
File: jupyter/rocm/tensorflow/ubi9-python-3.12/kustomize/base/service.yaml:5-15
Timestamp: 2025-07-02T18:59:15.788Z
Learning: jiridanek creates targeted GitHub issues for specific test quality improvements identified during PR reviews in opendatahub-io/notebooks. Issue #1268 demonstrates this by converting a review comment about insufficient tf2onnx conversion test validation into a comprehensive improvement plan with clear acceptance criteria, code examples, and ROCm-specific context.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#0
File: :0-0
Timestamp: 2025-08-01T14:46:03.215Z
Learning: jiridanek requested GitHub issue creation for two nitpicks during PR #1588 review: comment wording improvement in ROCm TensorFlow Dockerfile and typo fix in Jupyter DataScience Dockerfile stage header. Issues #1589 and #1590 were successfully created with comprehensive problem descriptions, specific file locations and line numbers, clear before/after solutions, detailed acceptance criteria, and proper context linking, continuing the established pattern of systematic code quality improvements through detailed issue tracking.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#0
File: :0-0
Timestamp: 2025-07-11T11:16:05.131Z
Learning: jiridanek requested GitHub issue creation for RStudio py311 Tekton push pipelines during PR #1379 review. Issue #1384 was successfully created covering two RStudio variants (CPU and CUDA) found in manifests/base/params-latest.env, with comprehensive problem description, implementation requirements following the same pattern as other workbench pipelines, clear acceptance criteria, and proper context linking, continuing the established pattern of systematic code quality improvements through detailed issue tracking.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#0
File: :0-0
Timestamp: 2025-08-22T12:15:38.859Z
Learning: jiridanek requested automation for digest resolution using Registry v2 API during PR #2103 review. A comprehensive GitHub issue was created covering Python and bash implementation approaches for resolving image tags to sha256 digests in manifests/base/params-latest.env, with detailed acceptance criteria, CI/CD integration options, and proper context linking, continuing the established pattern of systematic code quality improvements through detailed issue tracking.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#1995
File: jupyter/rocm/tensorflow/ubi9-python-3.11/Dockerfile.rocm:0-0
Timestamp: 2025-08-19T06:31:05.424Z
Learning: jiridanek requested GitHub issue creation for ROCm device detection test during PR #1995 review. The issue covers comprehensive test requirements for validating that device_lib.list_local_devices() successfully detects ROCm/HIP devices after ROCm package optimizations, including multiple solution options (pytest, shell script, notebook-based), detailed acceptance criteria for core requirements and test coverage, implementation guidance for test framework selection and CI integration, example test code for TensorFlow device detection validation, and proper context linking to PR #1995 and review comment, continuing the established pattern of systematic code quality improvements through detailed issue tracking.
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
@coderabbitai talk about each of the concerns in turn and describe what has changed
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
.
Your original constraints have been systematically addressed through enterprise features, Kubernetes-native deployment patterns, revolutionary caching improvements, and robust self-hosting capabilities. The hybrid approach lets you capture Dagger's benefits while respecting your existing infrastructure investments.
🐰✨
🧠 Learnings used
Learnt from: jiridanek
PR: opendatahub-io/notebooks#629
File: tests/test_check_dagger.py:1-1
Timestamp: 2025-09-04T15:02:29.312Z
Learning: jiridanek identified key constraints preventing Dagger adoption in opendatahub-io/notebooks: incompatibility with Konflux buildah builds, problems with Kubernetes-based GitHub Actions CI, overhead from copying large images to Dagger engine containers, and stalled self-hosting features with commercial focus on SaaS. This led to the decision to stick with testcontainers for container image testing instead of Dagger.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#1909
File: runtimes/pytorch+llmcompressor/ubi9-python-3.11/Dockerfile.cuda:11-15
Timestamp: 2025-08-12T08:40:55.286Z
Learning: jiridanek requested GitHub issue creation for redundant CUDA upgrade optimization during PR #1909 review. Analysis revealed all 14 CUDA Dockerfiles contain redundant `yum upgrade -y` commands in cuda-base stages that execute after base stages already performed comprehensive `dnf upgrade` via pre-upgrade blocks, causing unnecessary CI latency and build inefficiency. Issue includes complete scope analysis with specific line numbers, investigation framework requiring NVIDIA upstream documentation review, multiple solution options, comprehensive acceptance criteria covering systematic testing and performance measurement, and proper context linking to PR #1909 review comment.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#1909
File: runtimes/pytorch+llmcompressor/ubi9-python-3.11/Dockerfile.cuda:11-15
Timestamp: 2025-08-12T08:40:55.286Z
Learning: jiridanek requested GitHub issue creation for redundant CUDA upgrade optimization during PR #1909 review. Issue covers duplicate yum/dnf upgrade commands in cuda-base stages that execute after base stages already performed comprehensive upgrades, causing unnecessary CI latency and build inefficiency across multiple CUDA Dockerfiles. The solution requires investigating NVIDIA upstream documentation requirements before removing redundant upgrades, with systematic testing of all CUDA variants and performance measurement. Issue follows established pattern of comprehensive problem analysis, multiple solution options, detailed acceptance criteria, and proper context linking.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#1521
File: tests/containers/workbenches/jupyterlab/jupyterlab_test.py:61-68
Timestamp: 2025-08-01T09:33:23.033Z
Learning: jiridanek requested GitHub issue creation for improving architecture detection efficiency and centralization in JupyterLab tests during PR #1521 review. Issue #1576 was successfully created covering performance inefficiency of current container start/stop approach for architecture detection, need for centralized configuration of architecture-specific limitations, comprehensive solution options including session caching and helper functions, detailed acceptance criteria covering performance and maintainability requirements, phased implementation guidance, and proper context linking, continuing the established pattern of systematic code quality improvements through detailed issue tracking.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#1259
File: jupyter/rocm/tensorflow/ubi9-python-3.12/Pipfile:31-36
Timestamp: 2025-08-20T17:51:39.525Z
Learning: jiridanek consistently escalates point issues identified during PR reviews into systematic repository-wide improvements. When CodeRabbit flagged missing libxcrypt-compat in a single ROCm TensorFlow Python 3.12 image during PR #1259 review, jiridanek requested creation of issue #2075 for systematic review and fixing of all Python 3.12 UBI9 images, demonstrating his comprehensive approach to infrastructure consistency and code quality.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#0
File: :0-0
Timestamp: 2025-08-19T15:45:23.201Z
Learning: jiridanek demonstrated an effective methodology for reviewing large-scale refactoring PRs involving repetitive changes across many files: (1) Focus on potential copy-paste errors by identifying what should remain unique vs. what should be consistent, (2) Systematically verify serviceAccountName uniqueness preservation in Tekton PipelineRun files, (3) Confirm pipelineRef consistency across all modified files, (4) Validate that extracted shared components contain the correct content. This approach is particularly valuable for Tekton pipeline refactoring in the opendatahub-io/notebooks repository.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#1320
File: jupyter/pytorch/ubi9-python-3.12/Dockerfile.cuda:42-52
Timestamp: 2025-07-09T12:29:56.162Z
Learning: jiridanek requested GitHub issue creation for OpenShift client architecture mapping problem affecting 29 Dockerfiles during PR #1320 review. Issue was created with comprehensive analysis covering all affected files using $(uname -m) returning 'aarch64' but OpenShift mirror expecting 'arm64', systematic solution using BuildKit TARGETARCH mapping with proper amd64→x86_64 and arm64→arm64 conversion, detailed acceptance criteria, and implementation guidance, continuing the established pattern of systematic code quality improvements through detailed issue tracking.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#0
File: :0-0
Timestamp: 2025-08-22T12:15:38.859Z
Learning: jiridanek requested automation for digest resolution using Registry v2 API during PR #2103 review. A comprehensive GitHub issue #2104 was successfully created covering Python and bash implementation approaches for resolving image tags to sha256 digests in manifests/base/params-latest.env, with detailed acceptance criteria, CI/CD integration options, authentication handling for quay.io registry, and proper context linking, continuing the established pattern of systematic code quality improvements through detailed issue tracking.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2185
File: rstudio/c9s-python-3.11/Dockerfile.cuda:68-73
Timestamp: 2025-08-29T08:46:40.297Z
Learning: jiridanek prioritized implementing Docker/Podman build caching (issue #2076) over cleanup command standardization in PR #2185, confirming their preference for high-impact performance improvements over micro-optimizations in Dockerfile cleanup patterns.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2185
File: jupyter/pytorch/ubi9-python-3.12/Dockerfile.cuda:109-117
Timestamp: 2025-08-29T08:48:55.985Z
Learning: jiridanek prefers to implement systematic cleanup improvements through dedicated GitHub issues (like #2076) rather than applying individual point fixes during PR reviews, maintaining consistency with their established pattern of comprehensive code quality improvements.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#0
File: :0-0
Timestamp: 2025-08-05T17:24:08.616Z
Learning: jiridanek requested PR review for #1521 covering s390x architecture support improvements, demonstrating continued focus on systematic multi-architecture compatibility enhancements in the opendatahub-io/notebooks repository through clean implementation with centralized configuration, proper CI integration, and architecture-aware testing patterns.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#1259
File: jupyter/rocm/tensorflow/ubi9-python-3.12/kustomize/base/service.yaml:5-15
Timestamp: 2025-07-02T18:59:15.788Z
Learning: jiridanek creates targeted GitHub issues for specific test quality improvements identified during PR reviews in opendatahub-io/notebooks. Issue #1268 demonstrates this by converting a review comment about insufficient tf2onnx conversion test validation into a comprehensive improvement plan with clear acceptance criteria, code examples, and ROCm-specific context.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#0
File: :0-0
Timestamp: 2025-08-01T14:46:03.215Z
Learning: jiridanek requested GitHub issue creation for two nitpicks during PR #1588 review: comment wording improvement in ROCm TensorFlow Dockerfile and typo fix in Jupyter DataScience Dockerfile stage header. Issues #1589 and #1590 were successfully created with comprehensive problem descriptions, specific file locations and line numbers, clear before/after solutions, detailed acceptance criteria, and proper context linking, continuing the established pattern of systematic code quality improvements through detailed issue tracking.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#0
File: :0-0
Timestamp: 2025-07-11T11:16:05.131Z
Learning: jiridanek requested GitHub issue creation for RStudio py311 Tekton push pipelines during PR #1379 review. Issue #1384 was successfully created covering two RStudio variants (CPU and CUDA) found in manifests/base/params-latest.env, with comprehensive problem description, implementation requirements following the same pattern as other workbench pipelines, clear acceptance criteria, and proper context linking, continuing the established pattern of systematic code quality improvements through detailed issue tracking.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#0
File: :0-0
Timestamp: 2025-08-22T12:15:38.859Z
Learning: jiridanek requested automation for digest resolution using Registry v2 API during PR #2103 review. A comprehensive GitHub issue was created covering Python and bash implementation approaches for resolving image tags to sha256 digests in manifests/base/params-latest.env, with detailed acceptance criteria, CI/CD integration options, and proper context linking, continuing the established pattern of systematic code quality improvements through detailed issue tracking.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#1995
File: jupyter/rocm/tensorflow/ubi9-python-3.11/Dockerfile.rocm:0-0
Timestamp: 2025-08-19T06:31:05.424Z
Learning: jiridanek requested GitHub issue creation for ROCm device detection test during PR #1995 review. The issue covers comprehensive test requirements for validating that device_lib.list_local_devices() successfully detects ROCm/HIP devices after ROCm package optimizations, including multiple solution options (pytest, shell script, notebook-based), detailed acceptance criteria for core requirements and test coverage, implementation guidance for test framework selection and CI integration, example test code for TensorFlow device detection validation, and proper context linking to PR #1995 and review comment, continuing the established pattern of systematic code quality improvements through detailed issue tracking.
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
@coderabbitai elaborate and provide links to document the improvements
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Key Resources
- Kubernetes Integration: https://docs.dagger.io/ci/integrations/kubernetes/
- Production Patterns: https://dagger.io/blog/run-dagger-on-kubernetes
- Performance Optimizations: https://dagger.io/blog/2024-highlights
- Enterprise Features: https://dagger.io/cloud
- Self-Hosting Guide: https://dagger.io/blog/puzzle-case-study
- Production Scaling Discussion: How to scale Dagger in production? dagger/dagger#6486
The transformation has been remarkable - your original constraints around Konflux compatibility, Kubernetes integration challenges, performance overhead, and self-hosting limitations have all been systematically addressed through enterprise-grade solutions and architectural improvements.
🧠 Learnings used
Learnt from: jiridanek
PR: opendatahub-io/notebooks#629
File: tests/test_check_dagger.py:1-1
Timestamp: 2025-09-04T15:02:29.312Z
Learning: jiridanek identified key constraints preventing Dagger adoption in opendatahub-io/notebooks: incompatibility with Konflux buildah builds, problems with Kubernetes-based GitHub Actions CI, overhead from copying large images to Dagger engine containers, and stalled self-hosting features with commercial focus on SaaS. This led to the decision to stick with testcontainers for container image testing instead of Dagger.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#1521
File: tests/containers/workbenches/jupyterlab/jupyterlab_test.py:61-68
Timestamp: 2025-08-01T09:33:23.033Z
Learning: jiridanek requested GitHub issue creation for improving architecture detection efficiency and centralization in JupyterLab tests during PR #1521 review. Issue #1576 was successfully created covering performance inefficiency of current container start/stop approach for architecture detection, need for centralized configuration of architecture-specific limitations, comprehensive solution options including session caching and helper functions, detailed acceptance criteria covering performance and maintainability requirements, phased implementation guidance, and proper context linking, continuing the established pattern of systematic code quality improvements through detailed issue tracking.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#1909
File: runtimes/pytorch+llmcompressor/ubi9-python-3.11/Dockerfile.cuda:11-15
Timestamp: 2025-08-12T08:40:55.286Z
Learning: jiridanek requested GitHub issue creation for redundant CUDA upgrade optimization during PR #1909 review. Analysis revealed all 14 CUDA Dockerfiles contain redundant `yum upgrade -y` commands in cuda-base stages that execute after base stages already performed comprehensive `dnf upgrade` via pre-upgrade blocks, causing unnecessary CI latency and build inefficiency. Issue includes complete scope analysis with specific line numbers, investigation framework requiring NVIDIA upstream documentation review, multiple solution options, comprehensive acceptance criteria covering systematic testing and performance measurement, and proper context linking to PR #1909 review comment.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#1909
File: runtimes/pytorch+llmcompressor/ubi9-python-3.11/Dockerfile.cuda:11-15
Timestamp: 2025-08-12T08:40:55.286Z
Learning: jiridanek requested GitHub issue creation for redundant CUDA upgrade optimization during PR #1909 review. Issue covers duplicate yum/dnf upgrade commands in cuda-base stages that execute after base stages already performed comprehensive upgrades, causing unnecessary CI latency and build inefficiency across multiple CUDA Dockerfiles. The solution requires investigating NVIDIA upstream documentation requirements before removing redundant upgrades, with systematic testing of all CUDA variants and performance measurement. Issue follows established pattern of comprehensive problem analysis, multiple solution options, detailed acceptance criteria, and proper context linking.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#1259
File: jupyter/rocm/tensorflow/ubi9-python-3.12/Pipfile:31-36
Timestamp: 2025-08-20T17:51:39.525Z
Learning: jiridanek consistently escalates point issues identified during PR reviews into systematic repository-wide improvements. When CodeRabbit flagged missing libxcrypt-compat in a single ROCm TensorFlow Python 3.12 image during PR #1259 review, jiridanek requested creation of issue #2075 for systematic review and fixing of all Python 3.12 UBI9 images, demonstrating his comprehensive approach to infrastructure consistency and code quality.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#0
File: :0-0
Timestamp: 2025-08-01T14:46:03.215Z
Learning: jiridanek requested GitHub issue creation for two nitpicks during PR #1588 review: comment wording improvement in ROCm TensorFlow Dockerfile and typo fix in Jupyter DataScience Dockerfile stage header. Issues #1589 and #1590 were successfully created with comprehensive problem descriptions, specific file locations and line numbers, clear before/after solutions, detailed acceptance criteria, and proper context linking, continuing the established pattern of systematic code quality improvements through detailed issue tracking.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#1259
File: jupyter/rocm/tensorflow/ubi9-python-3.12/test/test_notebook.ipynb:22-29
Timestamp: 2025-07-02T18:27:51.097Z
Learning: jiridanek consistently creates comprehensive follow-up GitHub issues from PR review comments in opendatahub-io/notebooks, turning specific code quality concerns into systematic improvements tracked with proper context, acceptance criteria, and cross-references. Issue #1266 demonstrates this pattern by expanding a specific error handling concern in load_expected_versions() into a repository-wide improvement initiative.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#1259
File: jupyter/rocm/tensorflow/ubi9-python-3.12/kustomize/base/service.yaml:5-15
Timestamp: 2025-07-02T18:59:15.788Z
Learning: jiridanek creates targeted GitHub issues for specific test quality improvements identified during PR reviews in opendatahub-io/notebooks. Issue #1268 demonstrates this by converting a review comment about insufficient tf2onnx conversion test validation into a comprehensive improvement plan with clear acceptance criteria, code examples, and ROCm-specific context.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#0
File: :0-0
Timestamp: 2025-07-11T11:16:05.131Z
Learning: jiridanek requested GitHub issue creation for RStudio py311 Tekton push pipelines during PR #1379 review. Issue #1384 was successfully created covering two RStudio variants (CPU and CUDA) found in manifests/base/params-latest.env, with comprehensive problem description, implementation requirements following the same pattern as other workbench pipelines, clear acceptance criteria, and proper context linking, continuing the established pattern of systematic code quality improvements through detailed issue tracking.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#1306
File: jupyter/trustyai/ubi9-python-3.12/kustomize/base/kustomization.yaml:8-12
Timestamp: 2025-07-08T19:09:48.746Z
Learning: jiridanek requested GitHub issue creation for misleading CUDA prefix in TrustyAI image tags during PR #1306 review. Issue was created with comprehensive problem description covering both Python 3.11 and 3.12 versions, repository pattern analysis showing correct vs incorrect naming, clear solution with code examples, detailed acceptance criteria, and proper context linking, continuing the established pattern of systematic code quality improvements through detailed issue tracking.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#1333
File: runtimes/rocm-tensorflow/ubi9-python-3.12/Dockerfile.rocm:50-50
Timestamp: 2025-07-08T19:30:01.738Z
Learning: jiridanek requested GitHub issue creation for multi-architecture support in ROCm TensorFlow image during PR #1333 review. Issue #1346 was created with comprehensive problem description covering hardcoded x86_64 architecture breaking multi-arch support, detailed impact analysis, three solution options (runtime detection, BuildKit TARGETARCH integration, hybrid approach) with pros/cons analysis, comprehensive acceptance criteria covering core requirements and testing, phased implementation guidance, related files identification, and proper context linking, continuing the established pattern of systematic code quality improvements through detailed issue tracking.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#1269
File: codeserver/ubi9-python-3.12/nginx/root/usr/share/container-scripts/nginx/common.sh:1-3
Timestamp: 2025-07-03T12:07:19.365Z
Learning: jiridanek consistently requests GitHub issue creation for technical improvements identified during code reviews in opendatahub-io/notebooks, ensuring systematic tracking of code quality enhancements like shell script portability issues with comprehensive descriptions, solution options, and acceptance criteria.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#1320
File: jupyter/pytorch/ubi9-python-3.12/Dockerfile.cuda:65-66
Timestamp: 2025-07-09T12:31:02.033Z
Learning: jiridanek requested GitHub issue creation for MSSQL repo file hardcoding problem during PR #1320 review. Issue #1363 was created and updated with comprehensive problem description covering hardcoded x86_64 MSSQL repo files breaking multi-architecture builds across 10 affected Dockerfiles (including datascience, CUDA, ROCm, and TrustyAI variants), detailed root cause analysis, three solution options with code examples, clear acceptance criteria for all image types, implementation guidance following established multi-architecture patterns, and proper context linking, continuing the established pattern of systematic code quality improvements through detailed issue tracking.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2185
File: jupyter/pytorch/ubi9-python-3.12/Dockerfile.cuda:109-117
Timestamp: 2025-08-29T08:48:55.985Z
Learning: jiridanek prefers to implement systematic cleanup improvements through dedicated GitHub issues (like #2076) rather than applying individual point fixes during PR reviews, maintaining consistency with their established pattern of comprehensive code quality improvements.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#0
File: :0-0
Timestamp: 2025-08-05T17:24:08.616Z
Learning: jiridanek requested PR review for #1521 covering s390x architecture support improvements, demonstrating continued focus on systematic multi-architecture compatibility enhancements in the opendatahub-io/notebooks repository through clean implementation with centralized configuration, proper CI integration, and architecture-aware testing patterns.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#0
File: :0-0
Timestamp: 2025-08-22T12:15:38.859Z
Learning: jiridanek requested automation for digest resolution using Registry v2 API during PR #2103 review. A comprehensive GitHub issue #2104 was successfully created covering Python and bash implementation approaches for resolving image tags to sha256 digests in manifests/base/params-latest.env, with detailed acceptance criteria, CI/CD integration options, authentication handling for quay.io registry, and proper context linking, continuing the established pattern of systematic code quality improvements through detailed issue tracking.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#0
File: :0-0
Timestamp: 2025-08-22T12:15:38.859Z
Learning: jiridanek requested automation for digest resolution using Registry v2 API during PR #2103 review. A comprehensive GitHub issue was created covering Python and bash implementation approaches for resolving image tags to sha256 digests in manifests/base/params-latest.env, with detailed acceptance criteria, CI/CD integration options, and proper context linking, continuing the established pattern of systematic code quality improvements through detailed issue tracking.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#1995
File: jupyter/rocm/tensorflow/ubi9-python-3.11/Dockerfile.rocm:0-0
Timestamp: 2025-08-19T06:31:05.424Z
Learning: jiridanek requested GitHub issue creation for ROCm device detection test during PR #1995 review. The issue covers comprehensive test requirements for validating that device_lib.list_local_devices() successfully detects ROCm/HIP devices after ROCm package optimizations, including multiple solution options (pytest, shell script, notebook-based), detailed acceptance criteria for core requirements and test coverage, implementation guidance for test framework selection and CI integration, example test code for TensorFlow device detection validation, and proper context linking to PR #1995 and review comment, continuing the established pattern of systematic code quality improvements through detailed issue tracking.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#1320
File: jupyter/pytorch/ubi9-python-3.12/Dockerfile.cuda:42-52
Timestamp: 2025-07-09T12:29:56.162Z
Learning: jiridanek requested GitHub issue creation for OpenShift client architecture mapping problem affecting 29 Dockerfiles during PR #1320 review. Issue was created with comprehensive analysis covering all affected files using $(uname -m) returning 'aarch64' but OpenShift mirror expecting 'arm64', systematic solution using BuildKit TARGETARCH mapping with proper amd64→x86_64 and arm64→arm64 conversion, detailed acceptance criteria, and implementation guidance, continuing the established pattern of systematic code quality improvements through detailed issue tracking.
Rebooted in
oc version
in the test, which can be run with ci testing #829This is a followup to
https://issues.redhat.com/browse/RHOAIENG-9707
Description
Example run
https://github.com/jiridanek/notebooks/actions/runs/10040438261/job/27746490803#step:9:19
Things to check or do
For now it uses images from
https://github.com/jiridanek/notebooks/actions/runs/10006075164/job/27658080202
and not from the current build.
How Has This Been Tested?
Merge criteria:
Summary by CodeRabbit
New Features
Documentation
Chores
.gitignore
to exclude test log files from version control.