RHOAIENG-28660: Updates to T&V runtimes for IBM Z Triton #879


Open · wants to merge 2 commits into base: main

Conversation

eturner24 (Contributor) commented Jul 28, 2025

Description

  • Added instructions and YAML configurations for using the IBM Z Accelerated for NVIDIA Triton Inference Server as a supported runtime.
  • Updated tables and lists to include the new IBM Z Accelerated runtime, its features, deployment requirements, and supported model formats.
  • Included new prerequisite details and links to external resources for the IBM Z Accelerated runtime.
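
For orientation, below is a minimal sketch of the kind of ServingRuntime manifest these instructions introduce. It is illustrative only: the name, model format, and resource values are assumptions rather than the PR's exact content, and the image tag is a placeholder (see the review notes below about pinning it).

```yaml
# Minimal illustrative ServingRuntime for the IBM Z Accelerated Triton image.
# Names, formats, and resource values are assumptions, not the PR's exact content.
apiVersion: serving.kserve.io/v1alpha1
kind: ServingRuntime
metadata:
  name: ibmz-triton-rest          # hypothetical name
spec:
  supportedModelFormats:
    - name: onnx                  # assumed format; see the runtime docs for the full list
      autoSelect: true
  containers:
    - name: kserve-container
      image: icr.io/ibmz/ibmz-accelerated-for-nvidia-triton-inference-server:<tag>  # placeholder tag
      args:
        - tritonserver
        - --model-repository=/mnt/models
      resources:
        requests:
          cpu: "2"
          memory: 4Gi
        limits:
          cpu: "5"
          memory: 4Gi
```

A reader would apply one such manifest with `oc apply -f <file>.yaml` in the target project.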
(Two screenshots of the rendered documentation attached; truncated.)

How Has This Been Tested?

Merge criteria:

  • The commits are squashed in a cohesive manner and have meaningful messages.
  • Testing instructions have been added in the PR body (for PRs involving changes that are not immediately obvious).
  • The developer has manually tested the changes and verified that they work.

Summary by CodeRabbit

  • Documentation
    • Added instructions and YAML configuration examples for deploying IBM Z Accelerated for NVIDIA Triton Inference Server as a model-serving runtime.
    • Updated tables and lists to include IBM Z Accelerated for NVIDIA Triton Inference Server, with details on supported model formats, protocols, and deployment requirements.
    • Included new prerequisite information and external resource links for the new runtime.
    • Made minor formatting and spacing improvements.


coderabbitai bot commented Jul 28, 2025

"""

Walkthrough

The documentation was updated to add support for the "IBM Z Accelerated for NVIDIA Triton Inference Server" as a tested and verified model serving runtime. This includes new prerequisite details, YAML configuration examples, updated runtime tables, deployment requirements, and resource links. Existing NVIDIA Triton references were generalized and reorganized.

Changes

| Cohort / File(s) | Change Summary |
|---|---|
| **Add IBM Z Accelerated runtime instructions**<br>`modules/adding-a-tested-and-verified-runtime-for-the-single-model-serving-platform.adoc` | Generalized runtime references, added IBM Z Accelerated runtime prerequisites, introduced YAML configuration examples for REST and gRPC, reorganized NVIDIA Triton instructions, and made minor formatting adjustments. |
| **Update supported runtimes table**<br>`modules/customizable-model-serving-runtime-parameters.adoc` | Inserted an "IBM Z Accelerated for NVIDIA Triton Inference Server" entry, with a documentation link, into the supported runtimes table. |
| **Expand tested/verified runtimes and requirements**<br>`modules/ref-tested-verified-runtimes.adoc` | Added the IBM Z Accelerated runtime to the tested/verified runtimes list, described supported model formats and deployment requirements, and appended a new external resource link. |

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~8 minutes
"""


coderabbitai bot left a comment

Actionable comments posted: 3

🧹 Nitpick comments (1)
modules/adding-a-tested-and-verified-runtime-for-the-single-model-serving-platform.adoc (1)

32-33: Step wording still assumes single API selection but offers two YAML blocks.

Consider clarifying that only one of the two manifests (REST or gRPC) should be applied, otherwise duplicate metadata.name errors may occur.
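
If the docs instead want to allow applying both manifests side by side, one option is to give each a distinct metadata.name; a sketch (names here are hypothetical):

```yaml
# REST variant
metadata:
  name: ibmz-triton-rest
---
# gRPC variant
metadata:
  name: ibmz-triton-grpc
```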

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 3c16a36 and 710a900.

📒 Files selected for processing (3)
  • modules/adding-a-tested-and-verified-runtime-for-the-single-model-serving-platform.adoc (4 hunks)
  • modules/customizable-model-serving-runtime-parameters.adoc (1 hunks)
  • modules/ref-tested-verified-runtimes.adoc (3 hunks)
🧰 Additional context used
🧠 Learnings (2)
📓 Common learnings
Learnt from: bredamc
PR: opendatahub-io/opendatahub-documentation#860
File: modules/using-llm-as-a-judge-metrics-with-lmeval.adoc:62-65
Timestamp: 2025-07-16T09:05:50.422Z
Learning: In the opendatahub-documentation project, bredamc prefers comprehensive error detection in code reviews. Continue to point out potential errors whether they are related to the code changes in the PR or not, but clearly distinguish between errors introduced in the PR versus errors in existing code.
modules/adding-a-tested-and-verified-runtime-for-the-single-model-serving-platform.adoc (1)

Learnt from: jiridanek
PR: #801
File: modules/running-a-pipeline-in-jupyterlab.adoc:8-8
Timestamp: 2025-06-17T12:18:47.675Z
Learning: PR #607 in opendatahub-io/kubeflow repository introduced automatic generation of ds-pipeline-config secret for Elyra runtime configuration when DSPA objects are present in the namespace, eliminating the need for manual runtime configuration creation in basic workbenches that have access to pipeline servers.

🔇 Additional comments (3)
modules/customizable-model-serving-runtime-parameters.adoc (1)

19-20: Table entry added correctly – no blocking issues.

The new IBM Z Triton runtime row is well-formed and the link resolves to the correct public docs.
No further action required.

modules/adding-a-tested-and-verified-runtime-for-the-single-model-serving-platform.adoc (1)

55-56: Placeholder image tag :xxx – remind authors to pin a digest.

Before publishing, replace :xxx with a concrete tag or preferably a digest (@sha256:…) to ensure reproducible deployments.

Also applies to: 114-115
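
A pinned reference might look like the following (the digest is a placeholder, not a real published value):

```yaml
# Pin by digest so pulls are reproducible; <digest> is a placeholder.
image: icr.io/ibmz/ibmz-accelerated-for-nvidia-triton-inference-server@sha256:<digest>
```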

modules/ref-tested-verified-runtimes.adoc (1)

26-27: Good addition, but double-check terminology consistency.

Elsewhere the runtime is referred to as “IBM Z Accelerated for NVIDIA Triton Inference Server”; ensure the same phrasing is used across all docs (currently consistent here). No functional issues detected.

Also applies to: 37-38

Comment on lines +65 to +70
```yaml
limits:
  cpu: "2"
  memory: 4Gi
requests:
  cpu: "5"
  memory: 4Gi
```
⚠️ Potential issue

requests.cpu greater than limits.cpu – the manifest will be rejected.

Kubernetes requires requests <= limits for every resource.
Swap the values (or raise the limits) to avoid an "Invalid value" validation error on the container resources.

```diff
-  limits:
-    cpu: "2"
+  limits:
+    cpu: "5"
   requests:
-    cpu: "5"
+    cpu: "2"
```

Apply to both REST (lines 65-70) and gRPC (124-129) variants.

Also applies to: 124-129
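
After the swap, the corrected block would read as follows (raising the limits instead would also satisfy the constraint):

```yaml
resources:
  limits:
    cpu: "5"
    memory: 4Gi
  requests:
    cpu: "2"
    memory: 4Gi
```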


Comment on lines +154 to +157
```yaml
- emptyDir: null
  medium: Memory
  sizeLimit: 2Gi
  name: shm
```

🛠️ Refactor suggestion

Volume definition malformed – medium and sizeLimit must nest under emptyDir.

Current structure:

```yaml
- emptyDir: null
  medium: Memory
  sizeLimit: 2Gi
  name: shm
```

Valid structure:

```diff
-- emptyDir: null
-  medium: Memory
-  sizeLimit: 2Gi
+- emptyDir:
+    medium: Memory
+    sizeLimit: 2Gi
   name: shm
```

Without this fix the manifest parses as YAML but is rejected by Kubernetes, which does not recognize medium and sizeLimit as fields of a volume.
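
For reference, the repaired volume entry in full, assuming the rest of the volume is unchanged:

```yaml
volumes:
  - name: shm
    emptyDir:
      medium: Memory
      sizeLimit: 2Gi
```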


coderabbitai bot left a comment

Actionable comments posted: 0

♻️ Duplicate comments (2)
modules/adding-a-tested-and-verified-runtime-for-the-single-model-serving-platform.adoc (2)

64-69: requests.cpu exceeds limits.cpu – Kubernetes will reject the manifest

spec.containers[].resources.requests.cpu ("5") must be ≤ the corresponding limits.cpu ("2").
The same mis-ordering appears in both the REST and gRPC variants.

```diff
-  limits:
-    cpu: "2"
+  limits:
+    cpu: "5"
   requests:
-    cpu: "5"
+    cpu: "2"
```

Apply this swap (or raise the limits) to both blocks to avoid a requests must not exceed limits validation error at admission time.

Also applies to: 121-127


152-155: Volume definition still malformed – medium & sizeLimit must nest under emptyDir

The YAML places medium and sizeLimit alongside emptyDir: null, yielding an invalid object:

```yaml
- emptyDir: null
  medium: Memory
  sizeLimit: 2Gi
```

Correct structure:

```diff
-- emptyDir: null
-  medium: Memory
-  sizeLimit: 2Gi
+- emptyDir:
+    medium: Memory
+    sizeLimit: 2Gi
```

This change is required; otherwise Kubernetes rejects the volume definition.

🧹 Nitpick comments (1)
modules/adding-a-tested-and-verified-runtime-for-the-single-model-serving-platform.adoc (1)

54-55: Placeholder tag :xxx should be replaced with a concrete, published image reference

icr.io/ibmz/ibmz-accelerated-for-nvidia-triton-inference-server:xxx is not a resolvable image and will cause ImagePullBackOff. Replace xxx with an explicit version/tag or digest before publishing the documentation.

Also applies to: 112-113

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 710a900 and c2b9255.

📒 Files selected for processing (1)
  • modules/adding-a-tested-and-verified-runtime-for-the-single-model-serving-platform.adoc (4 hunks)
🧰 Additional context used
🧠 Learnings (1)
modules/adding-a-tested-and-verified-runtime-for-the-single-model-serving-platform.adoc (1)

Learnt from: jiridanek
PR: #801
File: modules/running-a-pipeline-in-jupyterlab.adoc:8-8
Timestamp: 2025-06-17T12:18:47.675Z
Learning: PR #607 in opendatahub-io/kubeflow repository introduced automatic generation of ds-pipeline-config secret for Elyra runtime configuration when DSPA objects are present in the namespace, eliminating the need for manual runtime configuration creation in basic workbenches that have access to pipeline servers.
