RHAI-ENG-312 - Add a prereq for the version of llama stack client required in a workbench environment #899


Open
chtyler wants to merge 2 commits into main from RHAI-ENG-312-document-llamastack-client-version

Conversation

@chtyler (Contributor) commented Aug 8, 2025

Description

Added a prerequisite to a number of modules stating the required version of the llama stack client.

How Has This Been Tested?

Merge criteria:

  • The commits are squashed in a cohesive manner and have meaningful messages.
  • Testing instructions have been added in the PR body (for PRs involving changes that are not immediately obvious).
  • The developer has manually tested the changes and verified that the changes work.

Summary by CodeRabbit

  • Documentation
    • Updated prerequisites to require llama_stack_client version 0.2.14 or later in the workbench environment across relevant guides.
    • Expanded querying instructions with detailed steps and example code for installing the client, importing classes, initializing the client, listing models, and performing queries using both low-level and high-level APIs (a sketch follows below).
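
For orientation, here is a minimal sketch of the high-level (Agent) flow those instructions describe. It is not code from the PR: the endpoint URL, vector DB ID, and question below are placeholders, and it assumes a running Llama Stack server with content already ingested.

from llama_stack_client import Agent, AgentEventLogger, LlamaStackClient

# Connect to the Llama Stack server (placeholder endpoint).
client = LlamaStackClient(base_url="http://localhost:8321")

# List the models the server exposes and pick the first LLM.
models = client.models.list()
model_id = next(m.identifier for m in models if m.model_type == "llm")

# High-level API: an Agent with the built-in RAG tool pointed at a vector DB.
agent = Agent(
    client,
    model=model_id,
    instructions="You are a helpful assistant.",
    tools=[
        {
            "name": "builtin::rag/knowledge_search",
            "args": {"vector_db_ids": ["my-vector-db"]},  # placeholder ID
        }
    ],
)

session_id = agent.create_session("rag-demo-session")
response = agent.create_turn(
    messages=[{"role": "user", "content": "What do the ingested documents say about X?"}],
    session_id=session_id,
    stream=True,
)
for log in AgentEventLogger().log(response):
    log.print()

The low-level path queries the RAG tool runtime directly; a matching sketch appears after the sequence diagram below.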


coderabbitai bot commented Aug 8, 2025

Walkthrough

The documentation was updated to explicitly require the installation of llama_stack_client version 0.2.14 or later as a prerequisite for several Llama Stack workflows. Step-by-step instructions and code examples for installing and using the client in Jupyter notebooks were added, particularly for querying ingested content.

Changes

  • Prerequisite Addition: Llama Stack Client (modules/ingesting-content-into-a-llama-model.adoc, modules/preparing-documents-with-docling-for-llama-stack-retrieval.adoc): Added a prerequisite specifying that llama_stack_client version 0.2.14 or later must be installed in the workbench or Jupyter notebook environment before proceeding with Llama Stack ingestion or preparation procedures. This is reflected in the prerequisites section of each document, with conditional logic for upstream environments where applicable.
  • Expanded Querying Instructions (modules/querying-ingested-content-in-a-llama-model.adoc): Updated the documentation to add the llama_stack_client installation prerequisite and detailed procedural steps for installing, importing, and using the client to query ingested content. New code snippets demonstrate client setup, model selection, and querying using both the low-level and high-level APIs. Documentation only; no changes to code entities.
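
As a quick sanity check that this prerequisite is met, a notebook cell along the following lines could be run first. The check itself is illustrative, not from the PR; only the package name and the 0.2.14 floor come from the changes above.

from importlib.metadata import PackageNotFoundError, version

# Confirm the client package is installed and meets the documented floor.
try:
    installed = version("llama-stack-client")
except PackageNotFoundError:
    raise SystemExit("llama-stack-client is not installed")

assert tuple(int(p) for p in installed.split(".")[:3]) >= (0, 2, 14), (
    f"Found {installed}, need 0.2.14 or later"
)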

Sequence Diagram(s)

sequenceDiagram
    participant User
    participant JupyterNotebook
    participant llama_stack_client
    participant LlamaStackServer

    User->>JupyterNotebook: Open notebook and install llama_stack_client (v0.2.14+)
    JupyterNotebook->>llama_stack_client: Import Agent, AgentEventLogger, LlamaStackClient
    User->>JupyterNotebook: Instantiate LlamaStackClient with endpoint
    JupyterNotebook->>LlamaStackServer: Connect to endpoint
    User->>JupyterNotebook: List available models
    JupyterNotebook->>LlamaStackServer: Request model list
    LlamaStackServer-->>JupyterNotebook: Return model list
    User->>JupyterNotebook: Select model and define vector DB ID
    User->>JupyterNotebook: Query ingested content (low-level RAG or Agent API)
    JupyterNotebook->>LlamaStackServer: Send query
    LlamaStackServer-->>JupyterNotebook: Return query results
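For the "low-level RAG" branch in the diagram, a corresponding sketch that queries the RAG tool runtime directly rather than going through an Agent (same placeholder endpoint and vector DB ID as in the sketch above):

from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:8321")  # placeholder endpoint

# Low-level API: query the RAG tool runtime against a vector DB directly.
result = client.tool_runtime.rag_tool.query(
    content="What do the ingested documents say about X?",
    vector_db_ids=["my-vector-db"],  # placeholder ID
)
print(result.content)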

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~8 minutes

Suggested reviewers

  • jiripetrlik
  • eturner24


@chtyler chtyler force-pushed the RHAI-ENG-312-document-llamastack-client-version branch from 0f21069 to 55c107c on August 8, 2025 at 15:33
@chtyler chtyler changed the title from "Added llama stack client version number prereq" to "RHAI-ENG-312 - Add a prereq for the version of llama stack client required in a workbench environment" on Aug 8, 2025

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🔭 Outside diff range comments (1)
modules/querying-ingested-content-in-a-llama-model.adoc (1)

33-38: Align the install step with the stated minimum version and fix package-name inconsistency.

  • The prose says “llama_stack” but the command installs “llama_stack_client”. Use “llama_stack_client” consistently.
  • Enforce the minimum version in the %pip command and add --upgrade so existing environments are bumped as needed.

Apply these diffs:

-. In a new notebook cell, install the `llama_stack` client package:
+. In a new notebook cell, install the `llama_stack_client` package (version 0.2.14 or later):
-%pip install llama_stack_client
+%pip install "llama_stack_client>=0.2.14" --upgrade
🧹 Nitpick comments (1)
modules/querying-ingested-content-in-a-llama-model.adoc (1)

19-20: Typo: inconsistent model identifier (“llama-32-3b-instruct” vs “llama-3.2-3b-instruct”).

Line 19 references “llama-3.2-3b-instruct”, but the example list shows “llama-32-3b-instruct”. Use the same, correct identifier in both places.

Proposed fix in the example list:

-[Model(identifier='llama-32-3b-instruct', metadata={}, api_model_type='llm', provider_id='vllm-inference', provider_resource_id='llama-32-3b-instruct', type='model', model_type='llm'),
+[Model(identifier='llama-3.2-3b-instruct', metadata={}, api_model_type='llm', provider_id='vllm-inference', provider_resource_id='llama-3.2-3b-instruct', type='model', model_type='llm'),

Also applies to: 65-66

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 0f21069 and 55c107c.

📒 Files selected for processing (3)
  • modules/ingesting-content-into-a-llama-model.adoc (1 hunks)
  • modules/preparing-documents-with-docling-for-llama-stack-retrieval.adoc (1 hunks)
  • modules/querying-ingested-content-in-a-llama-model.adoc (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
  • modules/ingesting-content-into-a-llama-model.adoc
  • modules/preparing-documents-with-docling-for-llama-stack-retrieval.adoc

@@ -20,6 +20,7 @@ endif::[]
* You have configured a Llama Stack deployment by creating a `LlamaStackDistribution` instance to enable RAG functionality.
* You have created a project workbench within a data science project.
* You have opened a Jupyter notebook and it is running in your workbench environment.
* You have installed the `llama_stack_client` version 0.2.14 or later in your workbench environment.

💡 Verification agent

🧩 Analysis chain

Make the prereq unambiguous and verifiable (name + version).

To avoid confusion and ensure users meet the minimum required client, clarify the wording and align with the install step.

Apply this diff:

-* You have installed the `llama_stack_client` version 0.2.14 or later in your workbench environment. 
+* You have installed the `llama_stack_client` package (version 0.2.14 or later) in your workbench environment.

Run the following scripts to (1) confirm the package name/version exists on PyPI and (2) verify other modules consistently declare the same prerequisite.


🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "Checking PyPI metadata for llama_stack_client / llama-stack-client..."
for pkg in llama_stack_client llama-stack-client; do
  url="https://pypi.org/pypi/$pkg/json"
  code=$(curl -s -o /dev/null -w "%{http_code}" "$url")
  if [[ "$code" == "200" ]]; then
    echo "Found $pkg on PyPI"
    python - "$pkg" <<'PY'
import json, sys, urllib.request
pkg = sys.argv[1]
with urllib.request.urlopen(f"https://pypi.org/pypi/{pkg}/json") as r:
    data = json.load(r)
print("Latest:", data["info"]["version"])
print("Has 0.2.14:", "0.2.14" in data["releases"])
PY
  else
    echo "Not found on PyPI: $pkg (HTTP $code)"
  fi
done

Length of output: 587


🏁 Script executed:

#!/bin/bash
# Verify other modules declare the same prereq consistently
rg -n --glob '*.adoc' -e 'llama[_-]stack[_-]client' -A 2

Length of output: 2981


To ensure we correctly verify both the PyPI package/version and the model-ID inconsistency, let's re-run with fixed scripts.

These will confirm:

  • The correct PyPI package name(s) and whether 0.2.14 exists.
  • All occurrences of the model identifier to pinpoint any mismatches.

🏁 Script executed:

#!/usr/bin/env bash
set -euo pipefail

echo "Verifying PyPI package versions for llama_stack_client..."
for pkg in llama_stack_client llama-stack-client; do
  echo "Checking $pkg on PyPI..."
  data=$(curl -s "https://pypi.org/pypi/${pkg}/json" || echo "")
  if [[ -z "$data" ]]; then
    echo "$pkg: not found or no data"
    continue
  fi
  latest=$(echo "$data" | jq -r '.info.version')
  has014=$(echo "$data" | jq '.releases | has("0.2.14")')
  echo "  → latest version: $latest"
  echo "  → includes 0.2.14 release? $has014"
done

Length of output: 272684


🏁 Script executed:

#!/usr/bin/env bash
set -euo pipefail

echo "Checking for model ID typos across AsciiDoc modules..."
rg -n --glob '*.adoc' -e 'llama-3\.2-3b-instruct' -e 'llama-32-3b-instruct'

Length of output: 3035


Clarify package prerequisite and fix model-ID typos

Confirmed on PyPI: the package llama-stack-client exists and includes a 0.2.14 release.

• Update prerequisite wording (line 23) to reference the package and version range unambiguously.
• Enforce the version floor in the install snippet (lines 37–38).
• Correct all occurrences of llama-32-3b-instruct to llama-3.2-3b-instruct.

Apply this diff:

 modules/querying-ingested-content-in-a-llama-model.adoc
 23c23
-* You have installed the `llama_stack_client` version 0.2.14 or later in your workbench environment.
+* You have installed the llama-stack-client package (version 0.2.14 or later) in your workbench environment.
 37,38c37,38
-%pip install llama_stack_client
+%pip install --upgrade "llama-stack-client>=0.2.14"
 ...
 65c65
-[Model(identifier='llama-32-3b-instruct', …
+[Model(identifier='llama-3.2-3b-instruct', …

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In modules/querying-ingested-content-in-a-llama-model.adoc at line 23, update
the prerequisite to clearly reference the package name as `llama-stack-client`
with version 0.2.14 or later. Then, in the install snippet around lines 37–38,
modify the installation command to enforce installing version 0.2.14 or higher.
Finally, search the entire document for all instances of the model ID
`llama-32-3b-instruct` and replace them with the correct
`llama-3.2-3b-instruct`.
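
As one illustrative way to carry out the bulk replacement the prompt describes (the modules/ path comes from this PR; the script itself is hypothetical, and sed or an editor would do just as well):

from pathlib import Path

# Replace the typo'd model ID across the AsciiDoc modules in the repository.
for adoc in Path("modules").glob("*.adoc"):
    text = adoc.read_text()
    if "llama-32-3b-instruct" in text:
        adoc.write_text(text.replace("llama-32-3b-instruct", "llama-3.2-3b-instruct"))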
