
Commit 02039b1

Added llama stack client version number prereq
1 parent 106e1e2 commit 02039b1

3 files changed: 3 additions & 0 deletions

modules/ingesting-content-into-a-llama-model.adoc

Lines changed: 1 addition & 0 deletions
@@ -10,6 +10,7 @@ You can quickly customize and prototype your retrievable content by ingesting ra
 * You have deployed a Llama 3.2 model with a vLLM model server and you have integrated LlamaStack.
 * You have created a project workbench within a data science project.
 * You have opened a Jupyter notebook and it is running in your workbench environment.
+* You have installed the `llama_stack_client` version 0.2.8 in your workbench environment.
 * You have created and configured a vector database instance and you know its identifier.
 ifdef::self-managed[]
 * Your environment has network access to the vector database service through {openshift-platform}.
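The prerequisite added here (and in the two modules below) pins the Python client to version 0.2.8. As a minimal sketch, one way to satisfy and check that prerequisite from a notebook cell in the workbench, assuming `pip` is available in the workbench image:

[source,python]
----
# Notebook cell in the workbench: install the pinned client version
%pip install llama_stack_client==0.2.8

# Confirm what was installed (pip normalizes the distribution name)
%pip show llama_stack_client
----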

modules/preparing-documents-with-docling-for-llama-stack-retrieval.adoc

Lines changed: 1 addition & 0 deletions
@@ -27,6 +27,7 @@ ifdef::upstream[]
 * You have installed local object storage buckets and created connections, as described in link:{odhdocshome}/working-on-data-science-projects/#adding-a-connection-to-your-data-science-project_projects[Adding a connection to your data science project].
 endif::[]
 ifndef::upstream[]
+* You have installed the `llama_stack_client` version 0.2.8 in your workbench environment.
 * You have installed local object storage buckets and created connections, as described in link:{rhoaidocshome}{default-format-url}/working_on_data_science_projects/using-connections_projects#adding-a-connection-to-your-data-science-project_projects[Adding a connection to your data science project].
 endif::[]
 * You have compiled to YAML a data science pipeline that includes a Docling transform, either one of the RAG demo samples or your own custom pipeline.
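The last prerequisite in this module assumes a pipeline already compiled to YAML. A minimal sketch of one way to produce that YAML with the Kubeflow Pipelines SDK, where `docling_pipeline` and its single placeholder component are hypothetical stand-ins for a real Docling transform pipeline:

[source,python]
----
from kfp import compiler, dsl


@dsl.component
def docling_transform() -> str:
    # Placeholder standing in for a real Docling document-conversion step
    return "converted"


@dsl.pipeline(name="docling-rag-demo")
def docling_pipeline():
    docling_transform()


# Write the pipeline definition to YAML so it can be imported as a data science pipeline
compiler.Compiler().compile(
    pipeline_func=docling_pipeline,
    package_path="docling_pipeline.yaml",
)
----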

modules/querying-ingested-content-in-a-llama-model.adoc

Lines changed: 1 addition & 0 deletions
@@ -20,6 +20,7 @@ endif::[]
 * You have configured a Llama Stack deployment by creating a `LlamaStackDistribution` instance to enable RAG functionality.
 * You have created a project workbench within a data science project.
 * You have opened a Jupyter notebook and it is running in your workbench environment.
+* You have installed the `llama_stack_client` version 0.2.8 in your workbench environment.
 * You have ingested content into your model.

 [NOTE]
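The prerequisites in this module combine a `LlamaStackDistribution` deployment, a running workbench notebook, and the 0.2.8 client. A minimal sketch of a sanity check that the notebook can reach the deployment before querying ingested content; the `base_url` value is a hypothetical example and must point at your own Llama Stack service:

[source,python]
----
from llama_stack_client import LlamaStackClient

# Hypothetical in-cluster endpoint; replace with your LlamaStackDistribution service URL
client = LlamaStackClient(base_url="http://llamastack-service:8321")

# Listing the available models confirms connectivity and that the deployment is serving
for model in client.models.list():
    print(model.identifier)
----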
