From 02039b10680d758e5648a9a3066fb511c2bf504d Mon Sep 17 00:00:00 2001
From: Chris Tyler
Date: Fri, 8 Aug 2025 10:43:12 +0100
Subject: [PATCH 1/2] Added llama stack client version number prereq

---
 modules/ingesting-content-into-a-llama-model.adoc                | 1 +
 ...eparing-documents-with-docling-for-llama-stack-retrieval.adoc | 1 +
 modules/querying-ingested-content-in-a-llama-model.adoc          | 1 +
 3 files changed, 3 insertions(+)

diff --git a/modules/ingesting-content-into-a-llama-model.adoc b/modules/ingesting-content-into-a-llama-model.adoc
index 05c664df9..6d0d082f0 100644
--- a/modules/ingesting-content-into-a-llama-model.adoc
+++ b/modules/ingesting-content-into-a-llama-model.adoc
@@ -10,6 +10,7 @@ You can quickly customize and prototype your retrievable content by ingesting ra
 * You have deployed a Llama 3.2 model with a vLLM model server and you have integrated LlamaStack.
 * You have created a project workbench within a data science project.
 * You have opened a Jupyter notebook and it is running in your workbench environment.
+* You have installed the `llama_stack_client` version 0.2.8 in your workbench environment.
 * You have a created and configured a vector database instance and you know its identifier.
 ifdef::self-managed[]
 * Your environment has network access to the vector database service through {openshift-platform}.
diff --git a/modules/preparing-documents-with-docling-for-llama-stack-retrieval.adoc b/modules/preparing-documents-with-docling-for-llama-stack-retrieval.adoc
index f4c28ef0b..a48cc8bf2 100644
--- a/modules/preparing-documents-with-docling-for-llama-stack-retrieval.adoc
+++ b/modules/preparing-documents-with-docling-for-llama-stack-retrieval.adoc
@@ -27,6 +27,7 @@ ifdef::upstream[]
 * You have installed local object storage buckets and created connections, as described in link:{odhdocshome}/working-on-data-science-projects/#adding-a-connection-to-your-data-science-project_projects[Adding a connection to your data science project].
 endif::[]
 ifndef::upstream[]
+* You have installed the `llama_stack_client` version 0.2.8 in your workbench environment.
 * You have installed local object storage buckets and created connections, as described in link:{rhoaidocshome}{default-format-url}/working_on_data_science_projects/using-connections_projects#adding-a-connection-to-your-data-science-project_projects[Adding a connection to your data science project].
 endif::[]
 * You have compiled to YAML a data science pipeline that includes a Docling transform, either one of the RAG demo samples or your own custom pipeline.
diff --git a/modules/querying-ingested-content-in-a-llama-model.adoc b/modules/querying-ingested-content-in-a-llama-model.adoc
index 06df526ef..2df0cc50d 100644
--- a/modules/querying-ingested-content-in-a-llama-model.adoc
+++ b/modules/querying-ingested-content-in-a-llama-model.adoc
@@ -20,6 +20,7 @@ endif::[]
 * You have configured a Llama Stack deployment by creating a `LlamaStackDistribution` instance to enable RAG functionality.
 * You have created a project workbench within a data science project.
 * You have opened a Jupyter notebook and it is running in your workbench environment.
+* You have installed the `llama_stack_client` version 0.2.8 in your workbench environment.
 * You have ingested content into your model.

 [NOTE]

From 55c107c6347999d9b3a83c34d96275afb5193a2b Mon Sep 17 00:00:00 2001
From: Chris Tyler
Date: Fri, 8 Aug 2025 16:33:07 +0100
Subject: [PATCH 2/2] Changing the llama stack client version number after
 peer review feedback

---
 modules/ingesting-content-into-a-llama-model.adoc               | 2 +-
 ...paring-documents-with-docling-for-llama-stack-retrieval.adoc | 2 +-
 modules/querying-ingested-content-in-a-llama-model.adoc         | 2 +-
 3 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/modules/ingesting-content-into-a-llama-model.adoc b/modules/ingesting-content-into-a-llama-model.adoc
index 6d0d082f0..5922ca152 100644
--- a/modules/ingesting-content-into-a-llama-model.adoc
+++ b/modules/ingesting-content-into-a-llama-model.adoc
@@ -10,7 +10,7 @@ You can quickly customize and prototype your retrievable content by ingesting ra
 * You have deployed a Llama 3.2 model with a vLLM model server and you have integrated LlamaStack.
 * You have created a project workbench within a data science project.
 * You have opened a Jupyter notebook and it is running in your workbench environment.
-* You have installed the `llama_stack_client` version 0.2.8 in your workbench environment.
+* You have installed the `llama_stack_client` version 0.2.14 or later in your workbench environment.
 * You have a created and configured a vector database instance and you know its identifier.
 ifdef::self-managed[]
 * Your environment has network access to the vector database service through {openshift-platform}.
diff --git a/modules/preparing-documents-with-docling-for-llama-stack-retrieval.adoc b/modules/preparing-documents-with-docling-for-llama-stack-retrieval.adoc
index a48cc8bf2..7e2e161a8 100644
--- a/modules/preparing-documents-with-docling-for-llama-stack-retrieval.adoc
+++ b/modules/preparing-documents-with-docling-for-llama-stack-retrieval.adoc
@@ -27,7 +27,7 @@ ifdef::upstream[]
 * You have installed local object storage buckets and created connections, as described in link:{odhdocshome}/working-on-data-science-projects/#adding-a-connection-to-your-data-science-project_projects[Adding a connection to your data science project].
 endif::[]
 ifndef::upstream[]
-* You have installed the `llama_stack_client` version 0.2.8 in your workbench environment.
+* You have installed the `llama_stack_client` version 0.2.14 or later in your workbench environment.
 * You have installed local object storage buckets and created connections, as described in link:{rhoaidocshome}{default-format-url}/working_on_data_science_projects/using-connections_projects#adding-a-connection-to-your-data-science-project_projects[Adding a connection to your data science project].
 endif::[]
 * You have compiled to YAML a data science pipeline that includes a Docling transform, either one of the RAG demo samples or your own custom pipeline.
diff --git a/modules/querying-ingested-content-in-a-llama-model.adoc b/modules/querying-ingested-content-in-a-llama-model.adoc
index 2df0cc50d..bc7781b17 100644
--- a/modules/querying-ingested-content-in-a-llama-model.adoc
+++ b/modules/querying-ingested-content-in-a-llama-model.adoc
@@ -20,7 +20,7 @@ endif::[]
 * You have configured a Llama Stack deployment by creating a `LlamaStackDistribution` instance to enable RAG functionality.
 * You have created a project workbench within a data science project.
 * You have opened a Jupyter notebook and it is running in your workbench environment.
-* You have installed the `llama_stack_client` version 0.2.8 in your workbench environment.
+* You have installed the `llama_stack_client` version 0.2.14 or later in your workbench environment.
 * You have ingested content into your model.

 [NOTE]
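Both patches add the same prerequisite to the three modules: `llama_stack_client` at version 0.2.14 or later (raised from 0.2.8 after peer review) must be installed in the workbench environment. As a minimal sketch of how a reader might verify that prerequisite from a notebook cell, the hypothetical helper below (not part of the patch, and not an official `llama_stack_client` API) compares the installed version against the documented minimum:

```python
# Hypothetical pre-flight check for the documented prerequisite:
# `llama_stack_client` version 0.2.14 or later installed in the workbench.
from importlib import metadata

MINIMUM = (0, 2, 14)  # minimum version required after PATCH 2/2

def parse_version(version: str) -> tuple:
    """Split a simple dotted version string into comparable integers."""
    return tuple(int(part) for part in version.split(".")[:3])

def meets_minimum(installed: str, minimum: tuple = MINIMUM) -> bool:
    """Return True when the installed version satisfies the prerequisite."""
    return parse_version(installed) >= minimum

def check_llama_stack_client() -> bool:
    """Look up the installed distribution and compare it to the minimum."""
    try:
        installed = metadata.version("llama_stack_client")
    except metadata.PackageNotFoundError:
        return False
    return meets_minimum(installed)
```

If the check fails, the package can typically be installed with `pip install "llama_stack_client>=0.2.14"`; note this sketch assumes simple `X.Y.Z` version strings and does not handle pre-release suffixes.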