diff --git a/modules/ingesting-content-into-a-llama-model.adoc b/modules/ingesting-content-into-a-llama-model.adoc
index 05c664df..5922ca15 100644
--- a/modules/ingesting-content-into-a-llama-model.adoc
+++ b/modules/ingesting-content-into-a-llama-model.adoc
@@ -10,6 +10,7 @@ You can quickly customize and prototype your retrievable content by ingesting ra
 * You have deployed a Llama 3.2 model with a vLLM model server and you have integrated LlamaStack.
 * You have created a project workbench within a data science project.
 * You have opened a Jupyter notebook and it is running in your workbench environment.
+* You have installed the `llama_stack_client` version 0.2.14 or later in your workbench environment.
-* You have a created and configured a vector database instance and you know its identifier.
+* You have created and configured a vector database instance and you know its identifier.
 ifdef::self-managed[]
 * Your environment has network access to the vector database service through {openshift-platform}.
diff --git a/modules/preparing-documents-with-docling-for-llama-stack-retrieval.adoc b/modules/preparing-documents-with-docling-for-llama-stack-retrieval.adoc
index f4c28ef0..7e2e161a 100644
--- a/modules/preparing-documents-with-docling-for-llama-stack-retrieval.adoc
+++ b/modules/preparing-documents-with-docling-for-llama-stack-retrieval.adoc
@@ -27,6 +27,7 @@ ifdef::upstream[]
 * You have installed local object storage buckets and created connections, as described in link:{odhdocshome}/working-on-data-science-projects/#adding-a-connection-to-your-data-science-project_projects[Adding a connection to your data science project].
 endif::[]
 ifndef::upstream[]
+* You have installed the `llama_stack_client` version 0.2.14 or later in your workbench environment.
 * You have installed local object storage buckets and created connections, as described in link:{rhoaidocshome}{default-format-url}/working_on_data_science_projects/using-connections_projects#adding-a-connection-to-your-data-science-project_projects[Adding a connection to your data science project].
 endif::[]
 * You have compiled to YAML a data science pipeline that includes a Docling transform, either one of the RAG demo samples or your own custom pipeline.
diff --git a/modules/querying-ingested-content-in-a-llama-model.adoc b/modules/querying-ingested-content-in-a-llama-model.adoc
index 06df526e..bc7781b1 100644
--- a/modules/querying-ingested-content-in-a-llama-model.adoc
+++ b/modules/querying-ingested-content-in-a-llama-model.adoc
@@ -20,6 +20,7 @@ endif::[]
 * You have configured a Llama Stack deployment by creating a `LlamaStackDistribution` instance to enable RAG functionality.
 * You have created a project workbench within a data science project.
 * You have opened a Jupyter notebook and it is running in your workbench environment.
+* You have installed the `llama_stack_client` version 0.2.14 or later in your workbench environment.
 * You have ingested content into your model.
 
 [NOTE]
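
The prerequisite added to each module above (`llama_stack_client` 0.2.14 or later) can be checked from inside the workbench notebook. The sketch below is illustrative, not part of the patched modules: the helper names are hypothetical, it relies only on the standard-library `importlib.metadata`, and it assumes a plain dotted version string (no pre-release suffixes).

```python
# Hedged sketch: confirm the workbench meets the llama_stack_client
# version prerequisite before running the RAG notebooks.
from importlib.metadata import PackageNotFoundError, version

MIN_VERSION = (0, 2, 14)  # minimum stated in the prerequisites


def parse_version(v: str) -> tuple:
    """Turn a dotted version string like '0.2.14' into a comparable tuple.

    Tuple comparison avoids the string-comparison pitfall where
    '0.2.9' would sort after '0.2.14' lexicographically.
    """
    return tuple(int(part) for part in v.split(".")[:3])


def meets_minimum(package: str = "llama_stack_client") -> bool:
    """Return True if the package is installed at MIN_VERSION or later."""
    try:
        return parse_version(version(package)) >= MIN_VERSION
    except PackageNotFoundError:
        return False
```

If `meets_minimum()` returns `False`, installing or upgrading with `pip install "llama_stack_client>=0.2.14"` in the workbench environment satisfies the prerequisite.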