RHAI-ENG-312 - Add a prereq for the version of llama stack client required in a workbench environment #899
base: main
Conversation
Walkthrough
The documentation was updated to explicitly require the installation of `llama_stack_client` version 0.2.14 or later in the workbench environment.
Sequence Diagram(s)
sequenceDiagram
participant User
participant JupyterNotebook
participant llama_stack_client
participant LlamaStackServer
User->>JupyterNotebook: Open notebook and install llama_stack_client (v0.2.14+)
JupyterNotebook->>llama_stack_client: Import Agent, AgentEventLogger, LlamaStackClient
User->>JupyterNotebook: Instantiate LlamaStackClient with endpoint
JupyterNotebook->>LlamaStackServer: Connect to endpoint
User->>JupyterNotebook: List available models
JupyterNotebook->>LlamaStackServer: Request model list
LlamaStackServer-->>JupyterNotebook: Return model list
User->>JupyterNotebook: Select model and define vector DB ID
User->>JupyterNotebook: Query ingested content (low-level RAG or Agent API)
JupyterNotebook->>LlamaStackServer: Send query
LlamaStackServer-->>JupyterNotebook: Return query results
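For orientation, the flow in the diagram corresponds roughly to the following notebook code. This is a minimal sketch, assuming a reachable Llama Stack endpoint; the base URL, model identifier, and vector DB name are illustrative placeholders, not values taken from this PR.

# Notebook cell (after installing llama_stack_client >= 0.2.14).
from llama_stack_client import Agent, AgentEventLogger, LlamaStackClient

# Connect to the Llama Stack server endpoint (placeholder URL).
client = LlamaStackClient(base_url="http://llamastack-server:8321")

# List the models registered on the server and pick one for inference.
for model in client.models.list():
    print(model.identifier)
model_id = "llama-3.2-3b-instruct"  # example identifier used in the docs

# The vector DB ID identifies previously ingested content (placeholder name).
vector_db_id = "my-vector-db"

# From here the notebook queries the ingested content either through the
# low-level RAG API or through the Agent API (Agent / AgentEventLogger above).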
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~8 minutes
0f21069 to 55c107c (Compare)
Actionable comments posted: 1
🔭 Outside diff range comments (1)
modules/querying-ingested-content-in-a-llama-model.adoc (1)
Lines 33-38: Align the install step with the stated minimum version and fix the package-name inconsistency.
- The prose says “llama_stack” but the command installs “llama_stack_client”. Use “llama_stack_client” consistently.
- Enforce the minimum version in the %pip command and add --upgrade so existing environments are bumped as needed.
Apply these diffs:
-. In a new notebook cell, install the `llama_stack` client package:
+. In a new notebook cell, install the `llama_stack_client` package (version 0.2.14 or later):

-%pip install llama_stack_client
+%pip install "llama_stack_client>=0.2.14" --upgrade
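As a usage note, with that change applied the notebook cell pins the version floor at install time; the check below is an illustrative (hypothetical) way to confirm it afterwards, not part of the documented procedure.

# Notebook cell: install or upgrade the client to at least 0.2.14.
%pip install --upgrade "llama_stack_client>=0.2.14"

# Optional sanity check that the installed distribution meets the floor.
from importlib.metadata import version
print(version("llama_stack_client"))  # expect 0.2.14 or later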
🧹 Nitpick comments (1)
modules/querying-ingested-content-in-a-llama-model.adoc (1)
Lines 19-20: Typo: inconsistent model identifier (“llama-32-3b-instruct” vs “llama-3.2-3b-instruct”).
Line 19 references “llama-3.2-3b-instruct”, but the example list shows “llama-32-3b-instruct”. Use the same, correct identifier in both places.
Proposed fix in the example list:
-[Model(identifier='llama-32-3b-instruct', metadata={}, api_model_type='llm', provider_id='vllm-inference', provider_resource_id='llama-32-3b-instruct', type='model', model_type='llm'),
+[Model(identifier='llama-3.2-3b-instruct', metadata={}, api_model_type='llm', provider_id='vllm-inference', provider_resource_id='llama-3.2-3b-instruct', type='model', model_type='llm'),

Also applies to: 65-66
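To show why the identifier matters, a short hypothetical check like the one below would surface the mismatch at runtime; it assumes the placeholder endpoint from the earlier sketch.

# Confirm that the identifier used in the docs is actually registered on the server.
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://llamastack-server:8321")  # placeholder URL
expected = "llama-3.2-3b-instruct"
available = [m.identifier for m in client.models.list()]
if expected not in available:
    raise ValueError(f"{expected} not found; available models: {available}")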
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- modules/ingesting-content-into-a-llama-model.adoc (1 hunks)
- modules/preparing-documents-with-docling-for-llama-stack-retrieval.adoc (1 hunks)
- modules/querying-ingested-content-in-a-llama-model.adoc (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
- modules/ingesting-content-into-a-llama-model.adoc
- modules/preparing-documents-with-docling-for-llama-stack-retrieval.adoc
@@ -20,6 +20,7 @@ endif::[]
 * You have configured a Llama Stack deployment by creating a `LlamaStackDistribution` instance to enable RAG functionality.
 * You have created a project workbench within a data science project.
 * You have opened a Jupyter notebook and it is running in your workbench environment.
+* You have installed the `llama_stack_client` version 0.2.14 or later in your workbench environment.
💡 Verification agent
🧩 Analysis chain
Make the prereq unambiguous and verifiable (name + version).
To avoid confusion and ensure users meet the minimum required client, clarify the wording and align with the install step.
Apply this diff:
-* You have installed the `llama_stack_client` version 0.2.14 or later in your workbench environment.
+* You have installed the `llama_stack_client` package (version 0.2.14 or later) in your workbench environment.
Run the following scripts to (1) confirm the package name/version exists on PyPI and (2) verify other modules consistently declare the same prerequisite.
🏁 Script executed:
#!/bin/bash
set -euo pipefail

echo "Checking PyPI metadata for llama_stack_client / llama-stack-client..."
for pkg in llama_stack_client llama-stack-client; do
  url="https://pypi.org/pypi/$pkg/json"
  code=$(curl -s -o /dev/null -w "%{http_code}" "$url")
  if [[ "$code" == "200" ]]; then
    echo "Found $pkg on PyPI"
    # Pass the package name as an argument to the inline Python script.
    python - "$pkg" <<'PY'
import json, sys, urllib.request

pkg = sys.argv[1]
with urllib.request.urlopen(f"https://pypi.org/pypi/{pkg}/json") as r:
    data = json.load(r)
print("Latest:", data["info"]["version"])
print("Has 0.2.14:", "0.2.14" in data["releases"])
PY
  else
    echo "Not found on PyPI: $pkg (HTTP $code)"
  fi
done
Length of output: 587
🏁 Script executed:
#!/bin/bash
# Verify other modules declare the same prereq consistently
rg -n --glob '*.adoc' -e 'llama[_-]stack[_-]client' -A 2
Length of output: 2981
To ensure we correctly verify both the PyPI package/version and the model‐ID inconsistency, let’s re-run with fixed scripts.
These will confirm:
- The correct PyPI package name(s) and whether 0.2.14 exists.
- All occurrences of the model identifier to pinpoint any mismatches.
🏁 Script executed:
#!/usr/bin/env bash
set -euo pipefail
echo "Verifying PyPI package versions for llama_stack_client..."
for pkg in llama_stack_client llama-stack-client; do
echo "Checking $pkg on PyPI..."
data=$(curl -s "https://pypi.org/pypi/${pkg}/json" || echo "")
if [[ -z "$data" ]]; then
echo " → $pkg: not found or no data"
continue
fi
latest=$(echo "$data" | jq -r '.info.version')
has014=$(echo "$data" | jq '.releases | has("0.2.14")')
echo " → latest version: $latest"
echo " → includes 0.2.14 release? $has014"
done
Length of output: 272684
🏁 Script executed:
#!/usr/bin/env bash
set -euo pipefail
echo "Checking for model ID typos across AsciiDoc modules..."
rg -n --glob '*.adoc' -e 'llama-3\.2-3b-instruct' -e 'llama-32-3b-instruct'
Length of output: 3035
Clarify package prerequisite and fix model-ID typos
Confirmed on PyPI: the package llama-stack-client exists and includes a 0.2.14 release.
• Update the prerequisite wording (line 23) to reference the package and version range unambiguously.
• Enforce the version floor in the install snippet (lines 37–38).
• Correct all occurrences of llama-32-3b-instruct to llama-3.2-3b-instruct.
Apply this diff:
modules/querying-ingested-content-in-a-llama-model.adoc
23c23
-* You have installed the `llama_stack_client` version 0.2.14 or later in your workbench environment.
+* You have installed the llama-stack-client package (version 0.2.14 or later) in your workbench environment.
37,38c37,38
-%pip install llama_stack_client
+%pip install --upgrade "llama-stack-client>=0.2.14"
...
19c19
-* …for example, the llama-3.2-3b-instruct model.
+* …for example, the llama-3.2-3b-instruct model.
65c65
-[Model(identifier='llama-32-3b-instruct', …
+[Model(identifier='llama-3.2-3b-instruct', …
Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In modules/querying-ingested-content-in-a-llama-model.adoc at line 23, update
the prerequisite to clearly reference the package name as `llama-stack-client`
with version 0.2.14 or later. Then, in the install snippet around lines 37–38,
modify the installation command to enforce installing version 0.2.14 or higher.
Finally, search the entire document for all instances of the model ID
`llama-32-3b-instruct` and replace them with the correct
`llama-3.2-3b-instruct`.
Description
Added a prerequisite to several modules stating the required version of the Llama Stack client for the workbench environment.
How Has This Been Tested?
Merge criteria:
Summary by CodeRabbit
Documentation now requires installation of `llama_stack_client` version 0.2.14 or later in the workbench environment across relevant guides.