assemblies/assembly-customizing-developer-lightspeed.adoc (1 addition, 1 deletion)

@@ -4,7 +4,7 @@
 [id="{context}"]
 = Customizing {ls-short}

-You can customize {ls-short} functionalities such as gathering feedback, storing chat history in PostgreSQL, and configuring Model Context Protocol (MCP) tools.
+You can customize {ls-short} functionalities such as gathering feedback, storing chat history in PostgreSQL, and xref:proc-configure-mcp-tools-for-developer-lightspeed[configuring Model Context Protocol (MCP) tools].
-The {lcs-name} and Llama Stack deploy together as sidecar containers to augment {rhdh-short} functionality.
+The {lcs-name} and Llama Stack deploy together as sidecar containers to augment {rhdh-very-short} functionality.
-The {lcs-name} serves as the Llama Stack service intermediary, managing configurations for key components. These components include the Large Language Model (LLM) inference providers, Model Context Protocol (MCP) or Retrieval Augmented Generation (RAG) tool runtime providers, safety providers, and vector database settings.
+The {lcs-name} serves as the Llama Stack service intermediary, managing configurations for key components. These components include the large language model (LLM) inference providers, Model Context Protocol (MCP) or retrieval augmented generation (RAG) tool runtime providers, safety providers, and vector database settings.

 * {lcs-name} manages authentication, user feedback collection, MCP server configuration, and caching.
modules/developer-lightspeed/con-llm-requirements.adoc (1 addition, 1 deletion)

@@ -5,7 +5,7 @@

 {ls-short} follows a _Bring Your Own Model_ approach. This model means that to function, {ls-short} requires access to a large language model (LLM) which you must provide. An LLM is a type of generative AI that interprets natural language and generates human-like text or audio responses. When an LLM is used as a virtual assistant, the LLM can interpret questions and provide answers in a conversational manner.

-LLMs are usually provided by a service or server. Since {ls-short} does not provide an LLM for you, you must configure your preferred LLM provider during installation. You can configure the underlying Llama Stack server to integrate with a number of LLM `providers`` that offer compatibility with the OpenAI API including the following LLMs:
+LLMs are usually provided by a service or server. Because {ls-short} does not provide an LLM for you, you must configure your preferred LLM provider during installation. You can configure the underlying Llama Stack server to integrate with a number of LLM `providers` that offer compatibility with the OpenAI API, including the following LLMs:

 * OpenAI (cloud-based inference service)
 * {rhoai-brand-name} (enterprise model builder & inference server)
modules/developer-lightspeed/proc-changing-your-llm-provider.adoc (2 additions, 2 deletions)

@@ -3,11 +3,11 @@
 [id="proc-changing-your-llm-provider_{context}"]
 = Changing your LLM provider in {ls-short}

-{ls-short} operates on a {developer-lightspeed-link}#con-about-bring-your-own-model_appendix-about-user-data-security[_Bring Your Own Model_] approach, meaning you must provide and configure access to your preferred Large Language Model (LLM) provider for the service to function. Llama Stack acts as an intermediary layer that handles the configuration and setup of these LLM providers.
+{ls-short} operates on a {developer-lightspeed-link}#con-about-bring-your-own-model_appendix-about-user-data-security[_Bring Your Own Model_] approach, meaning you must provide and configure access to your preferred large language model (LLM) provider for the service to function. Llama Stack acts as an intermediary layer that handles the configuration and setup of these LLM providers.

 .Procedure

-You can define additional LLM providers by updating your Llama Stack app config (`llama-stack`) file. In the `inference` section within your `llama-stack.yaml` file, add your new provider configuration as shown in the following code:
+* You can define additional LLM providers by updating your Llama Stack app config (`llama-stack`) file. In the `inference` section within your `llama-stack.yaml` file, add your new provider configuration as shown in the following example:
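For context, an `inference` provider entry in `llama-stack.yaml` typically follows the upstream Llama Stack configuration shape sketched below. The provider IDs, provider types, and environment variable names here are illustrative assumptions, not values taken from this change:

```yaml
# Illustrative sketch only: provider IDs, provider types, and environment
# variable names are assumptions based on upstream Llama Stack conventions.
providers:
  inference:
  - provider_id: openai
    provider_type: remote::openai
    config:
      api_key: ${env.OPENAI_API_KEY}
  - provider_id: vllm
    provider_type: remote::vllm
    config:
      url: ${env.VLLM_URL}
```

Each entry pairs a unique `provider_id` with a `provider_type` that selects the adapter, and a `config` block whose keys depend on that adapter.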
modules/developer-lightspeed/proc-installing-and-configuring-lightspeed.adoc (7 additions, 7 deletions)

@@ -5,9 +5,9 @@

 {ls-short} includes three main components that work together to provide virtual assistant (chat) functionality to your developers.

-* Llama Stack server (container sidecar):: This server, based on open source Llama Stack, operates as the main gateway to your LLM inferencing provider for chat services. Its modular nature allows you to integrate other services, such as the Model Context Protocol (MCP). You must integrate your LLM provider with the Llama Stack server to support the chat functionality. This dependency on external LLM providers is called *Bring Your Own Model* (BYOM).
+* Llama Stack server (container sidecar):: This service (based on open source https://github.com/llamastack/llama-stack[Llama Stack]) operates as the main gateway to your LLM inferencing provider for chat services. Its modular nature allows you to integrate other services, such as the Model Context Protocol (MCP). You must integrate your LLM provider with the Llama Stack server to support the chat functionality. This dependency on external LLM providers is called *Bring Your Own Model* (BYOM).
-* {lcs-name} (container sidecar):: This service, based on the open source Lightspeed Core, enables features that complement the Llama Stack server, including maintaining chat history and gathering user feedback.
+* {lcs-name} (container sidecar):: This service (based on the open source https://github.com/lightspeed-core[Lightspeed Core]) enables features that complement the Llama Stack server, including maintaining chat history and gathering user feedback.

 * {ls-short} (dynamic plugins):: These plugins are required to enable the {ls-short} user interface within your {product-very-short} instance.

@@ -16,7 +16,7 @@ Configuring these components to initialise correctly and communicate with each o
 [NOTE]
 ====
 If you have already installed the previous {ls-short} (Developer Preview) with Road-Core Service (RCS), you must remove the previous {ls-short} configurations and settings and reinstall.
-This step is necessary as {ls-short} has a new architecture. In the previous release, {ls-short} required the use of the {rcs-name} as a sidecar container for interfacing with LLM providers. The updated architecture removes and replaces RCS with the new {lcs-name} and Llama Stack server, and requires new configurations for the plugins, volumes, containers, and secrets.
+This step is necessary as {ls-short} has a new architecture. In the previous release, {ls-short} required the use of the Road-Core Service as a sidecar container for interfacing with LLM providers. The updated architecture removes and replaces RCS with the new {lcs-name} and Llama Stack server, and requires new configurations for the plugins, volumes, containers, and secrets.