
Commit 9d0b8d9

Minor changes
1 parent fa24e4a commit 9d0b8d9


2 files changed: +19 −5 lines changed


modules/developer-lightspeed/con-about-lightspeed-stack-and-llama-stack.adoc

Lines changed: 18 additions & 4 deletions
@@ -5,13 +5,27 @@
 
 The {lcs-name} and Llama Stack deploy together as sidecar containers to augment {product-very-short} functionality.
 
-The {lcs-name} serves as the Llama Stack service intermediary, managing configurations for key components. These components include the large language model (LLM) inference providers, Model Context Protocol (MCP) or retrieval augmented generation (RAG) tool runtime providers, safety providers, and vector database settings.
+The Llama Stack delivers the augmented functionality by integrating and managing core components, which include:
 
-* {lcs-name} manages user feedback collection, MCP server configuration, and conversation history.
+* Large language model (LLM) inference providers
 
-* Llama Stack provides the inference functionality that {lcs-short} uses to process requests. For more information, see https://llamastack.github.io/docs#what-is-llama-stack[What is Llama Stack].
+* Model Context Protocol (MCP) or Retrieval Augmented Generation (RAG) tool runtime providers
 
-* The {ls-brand-name} plugin in {product-very-short} sends prompts and receives LLM responses through the {lcs-short} sidecar. {lcs-short} then uses the Llama Stack sidecar service to perform inference and MCP or RAG tool calling.
+* Safety providers
+
+* Vector database settings
+
+The {lcs-name} serves as the Llama Stack service intermediary. It manages the operational configuration and key data, specifically:
+
+* User feedback collection
+
+* MCP server configuration
+
+* Conversation history
+
+Llama Stack provides the inference functionality that {lcs-short} uses to process requests. For more information, see https://llamastack.github.io/docs#what-is-llama-stack[What is Llama Stack].
+
+The {ls-brand-name} plugin in {product-very-short} sends prompts and receives LLM responses through the {lcs-short} sidecar. {lcs-short} then uses the Llama Stack sidecar service to perform inference and MCP or RAG tool calling.
 
 [NOTE]
 ====
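The component categories named in the updated text correspond to provider sections in a Llama Stack run configuration. A minimal, hypothetical sketch is shown below; the provider IDs, types, and layout are illustrative assumptions, not values taken from this commit:

```yaml
# Hypothetical Llama Stack run-configuration sketch (illustrative only).
# Each key mirrors a component category listed above.
providers:
  inference:                # LLM inference providers
    - provider_id: my-llm-provider        # assumed name
      provider_type: remote::openai       # assumed provider type
  tool_runtime:             # MCP or RAG tool runtime providers
    - provider_id: my-mcp-runtime         # assumed name
      provider_type: remote::model-context-protocol
  safety:                   # safety providers
    - provider_id: my-safety-shield       # assumed name
      provider_type: inline::llama-guard
  vector_io:                # vector database settings
    - provider_id: my-vector-db           # assumed name
      provider_type: inline::faiss
```

This is a sketch of how the categories group together, not a supported configuration; consult the Llama Stack documentation linked in the text for the actual schema.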

modules/developer-lightspeed/proc-using-developer-lightspeed-to-start-a-chat-for-the-first-time.adoc

Lines changed: 1 addition & 1 deletion
@@ -23,7 +23,7 @@ You can start a chat with {ls-short} for quick answers on a number of topics dep
 +
 [NOTE]
 ====
-The following file types are supported: `yaml`, `json`, `txt`, and `xml`.
+The following file types are supported: `yaml`, `json`, and `txt`.
 ====
 *** Click *Open*.
 ** To start a chat using the existing prompts, in the {ls-short} virtual assistant interface, click any of the relevant prompt tiles.
