Commit 4d49c70

Incorporated Tim's suggestions

1 parent a134806 commit 4d49c70

6 files changed: +14 −14 lines changed

assemblies/assembly-customizing-developer-lightspeed.adoc

Lines changed: 1 addition & 1 deletion

@@ -4,7 +4,7 @@
 [id="{context}"]
 = Customizing {ls-short}
 
-You can customize {ls-short} functionalities such as gathering feedback, storing chat history in PostgreSQL, and configuring Model Context Protocol (MCP) tools.
+You can customize {ls-short} functionalities such as gathering feedback, storing chat history in PostgreSQL, and xref:proc-configure-mcp-tools-for-developer-lightspeed[configuring Model Context Protocol (MCP) tools].
 
 include::modules/developer-lightspeed/proc-gathering-feedback.adoc[leveloffset=+1]

modules/developer-lightspeed/con-about-lightspeed-stack-and-llama-stack.adoc

Lines changed: 2 additions & 2 deletions

@@ -3,9 +3,9 @@
 [id="con-about-lightspeed-stack-and-llama-stack_{context}"]
 = About {lcs-name} and Llama Stack
 
-The {lcs-name} and Llama Stack deploy together as sidecar containers to augment {rhdh-short} functionality.
+The {lcs-name} and Llama Stack deploy together as sidecar containers to augment {rhdh-very-short} functionality.
 
-The {lcs-name} serves as the Llama Stack service intermediary, managing configurations for key components. These components include the Large Language Model (LLM) inference providers, Model Context Protocol (MCP) or Retrieval Augmented Generation (RAG) tool runtime providers, safety providers, and vector database settings.
+The {lcs-name} serves as the Llama Stack service intermediary, managing configurations for key components. These components include the large language model (LLM) inference providers, Model Context Protocol (MCP) or retrieval augmented generation (RAG) tool runtime providers, safety providers, and vector database settings.
 
 * {lcs-name} manages authentication, user feedback collection, MCP server configuration, and caching.

modules/developer-lightspeed/con-llm-requirements.adoc

Lines changed: 1 addition & 1 deletion

@@ -5,7 +5,7 @@
 
 {ls-short} follows a _Bring Your Own Model_ approach. This model means that to function, {ls-short} requires access to a large language model (LLM) which you must provide. An LLM is a type of generative AI that interprets natural language and generates human-like text or audio responses. When an LLM is used as a virtual assistant, the LLM can interpret questions and provide answers in a conversational manner.
 
-LLMs are usually provided by a service or server. Since {ls-short} does not provide an LLM for you, you must configure your preferred LLM provider during installation. You can configure the underlying Llama Stack server to integrate with a number of LLM `providers` that offer compatibility with the OpenAI API, including the following LLMs:
+LLMs are usually provided by a service or server. Because {ls-short} does not provide an LLM for you, you must configure your preferred LLM provider during installation. You can configure the underlying Llama Stack server to integrate with a number of LLM `providers` that offer compatibility with the OpenAI API, including the following LLMs:
 
 * OpenAI (cloud-based inference service)
 * {rhoai-brand-name} (enterprise model builder & inference server)

modules/developer-lightspeed/con-rag-embeddings.adoc

Lines changed: 1 addition & 1 deletion

@@ -1,7 +1,7 @@
 :_mod-docs-content-type: CONCEPT
 
 [id="con-rag-embeddings_{context}"]
-= Retrieval Augmented Generation embeddings
+= Retrieval augmented generation (RAG) embeddings
 
 The {product} documentation serves as the Retrieval-Augmented Generation (RAG) data source.

modules/developer-lightspeed/proc-changing-your-llm-provider.adoc

Lines changed: 2 additions & 2 deletions

@@ -3,11 +3,11 @@
 [id="proc-changing-your-llm-provider_{context}"]
 = Changing your LLM provider in {ls-short}
 
-{ls-short} operates on a {developer-lightspeed-link}#con-about-bring-your-own-model_appendix-about-user-data-security[_Bring Your Own Model_] approach, meaning you must provide and configure access to your preferred Large Language Model (LLM) provider for the service to function. Llama Stack acts as an intermediary layer that handles the configuration and setup of these LLM providers.
+{ls-short} operates on a {developer-lightspeed-link}#con-about-bring-your-own-model_appendix-about-user-data-security[_Bring Your Own Model_] approach, meaning you must provide and configure access to your preferred large language model (LLM) provider for the service to function. Llama Stack acts as an intermediary layer that handles the configuration and setup of these LLM providers.
 
 .Procedure
 
-You can define additional LLM providers by updating your Llama Stack app config (`llama-stack`) file. In the `inference` section within your `llama-stack.yaml` file, add your new provider configuration as shown in the following code:
+* You can define additional LLM providers by updating your Llama Stack app config (`llama-stack`) file. In the `inference` section within your `llama-stack.yaml` file, add your new provider configuration as shown in the following example:
 +
 [source,yaml]
 ----
modules/developer-lightspeed/proc-installing-and-configuring-lightspeed.adoc

Lines changed: 7 additions & 7 deletions

@@ -5,9 +5,9 @@
 
 {ls-short} includes three main components that work together to provide virtual assistant (chat) functionality to your developers.
 
-* Llama Stack server (container sidecar):: This server, based on open source Llama Stack, operates as the main gateway to your LLM inferencing provider for chat services. Its modular nature allows you to integrate other services, such as the Model Context Protocol (MCP). You must integrate your LLM provider with the Llama Stack server to support the chat functionality. This dependency on external LLM providers is called *Bring Your Own Model* (BYOM).
+* Llama Stack server (container sidecar):: This service (based on open source https://github.com/llamastack/llama-stack[Llama Stack]) operates as the main gateway to your LLM inferencing provider for chat services. Its modular nature allows you to integrate other services, such as the Model Context Protocol (MCP). You must integrate your LLM provider with the Llama Stack server to support the chat functionality. This dependency on external LLM providers is called *Bring Your Own Model* (BYOM).
 
-* {lcs-name} (container sidecar):: This service, based on the open source Lightspeed Core, enables features that complement the Llama Stack server, including maintaining chat history and gathering user feedback.
+* {lcs-name} (container sidecar):: This service (based on the open source https://github.com/lightspeed-core[Lightspeed Core]) enables features that complement the Llama Stack server, including maintaining chat history and gathering user feedback.
 
 * {ls-short} (dynamic plugins):: These plugins are required to enable the {ls-short} user interface within your {product-very-short} instance.
 
@@ -16,7 +16,7 @@ Configuring these components to initialise correctly and communicate with each o
 [NOTE]
 ====
 If you have already installed the previous {ls-short} (Developer Preview) with Road-Core Service (RCS), you must remove the previous {ls-short} configurations and settings and reinstall.
-This step is necessary as {ls-short} has a new architecture. In the previous release, {ls-short} required the use of the {rcs-name} as a sidecar container for interfacing with LLM providers. The updated architecture removes and replaces RCS with the new {lcs-name} and Llama Stack server, and requires new configurations for the plugins, volumes, containers, and secrets.
+This step is necessary as {ls-short} has a new architecture. In the previous release, {ls-short} required the use of the Road-Core Service as a sidecar container for interfacing with LLM providers. The updated architecture removes and replaces RCS with the new {lcs-name} and Llama Stack server, and requires new configurations for the plugins, volumes, containers, and secrets.
 ====
 
 .Prerequisites
@@ -413,7 +413,7 @@ stringData:
 includes:
 - dynamic-plugins.default.yaml
 plugins:
-- package: oci://ghcr.io/redhat-developer/rhdh-plugin-export-overlays/red-hat-developer-hub-backstage-plugin-lightspeed:bs_1.39.1__0.5.7!red-hat-developer-hub-backstage-plugin-lightspeed
+- package: oci://ghcr.io/redhat-developer/rhdh-plugin-export-overlays/red-hat-developer-hub-backstage-plugin-lightspeed:next__1.0.1!red-hat-developer-hub-backstage-plugin-lightspeed
 disabled: false
 pluginConfig:
 lightspeed:
@@ -438,7 +438,7 @@ includes:
 menuItem:
 icon: LightspeedIcon
 text: Lightspeed
-- package: oci://ghcr.io/redhat-developer/rhdh-plugin-export-overlays/red-hat-developer-hub-backstage-plugin-lightspeed-backend:bs_1.39.1__0.5.7!red-hat-developer-hub-backstage-plugin-lightspeed-backend
+- package: oci://ghcr.io/redhat-developer/rhdh-plugin-export-overlays/red-hat-developer-hub-backstage-plugin-lightspeed-backend:next__1.0.1!red-hat-developer-hub-backstage-plugin-lightspeed-backend
 disabled: false
 pluginConfig:
 lightspeed:
@@ -582,7 +582,7 @@ dynamic:
 includes:
 - dynamic-plugins.default.yaml
 plugins:
-- package: oci://ghcr.io/redhat-developer/rhdh-plugin-export-overlays/red-hat-developer-hub-backstage-plugin-lightspeed:bs_1.39.1__0.5.7!red-hat-developer-hub-backstage-plugin-lightspeed
+- package: oci://ghcr.io/redhat-developer/rhdh-plugin-export-overlays/red-hat-developer-hub-backstage-plugin-lightspeed:next__1.0.1!red-hat-developer-hub-backstage-plugin-lightspeed
 disabled: false
 pluginConfig:
 lightspeed:
@@ -607,7 +607,7 @@ dynamic:
 menuItem:
 icon: LightspeedIcon
 text: Lightspeed
-- package: oci://ghcr.io/redhat-developer/rhdh-plugin-export-overlays/red-hat-developer-hub-backstage-plugin-lightspeed-backend:bs_1.39.1__0.5.7!red-hat-developer-hub-backstage-plugin-lightspeed-backend
+- package: oci://ghcr.io/redhat-developer/rhdh-plugin-export-overlays/red-hat-developer-hub-backstage-plugin-lightspeed-backend:next__1.0.1!red-hat-developer-hub-backstage-plugin-lightspeed-backend
 disabled: false
 pluginConfig:
 lightspeed:
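The four plugin hunks in this file all make the same substitution: the frontend and backend Lightspeed plugin package tags move from `bs_1.39.1__0.5.7` to `next__1.0.1`. Consolidated, the updated `plugins:` entries take roughly this shape; the indentation is an assumption, since the diff view flattens it, and the `pluginConfig` bodies are left elided as they are in the diff:

```yaml
# Sketch of the updated dynamic-plugins entries; indentation is assumed.
plugins:
  - package: oci://ghcr.io/redhat-developer/rhdh-plugin-export-overlays/red-hat-developer-hub-backstage-plugin-lightspeed:next__1.0.1!red-hat-developer-hub-backstage-plugin-lightspeed
    disabled: false
    # pluginConfig for the frontend plugin continues as shown in the diff context
  - package: oci://ghcr.io/redhat-developer/rhdh-plugin-export-overlays/red-hat-developer-hub-backstage-plugin-lightspeed-backend:next__1.0.1!red-hat-developer-hub-backstage-plugin-lightspeed-backend
    disabled: false
    # pluginConfig for the backend plugin continues as shown in the diff context
```

Only the image tag changes; the `disabled` flags and `pluginConfig` blocks are untouched by this commit.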
