modules/developer-lightspeed/con-llm-requirements.adoc (1 addition, 1 deletion)
@@ -5,7 +5,7 @@
{ls-short} follows a _Bring Your Own Model_ approach. This approach means that, to function, {ls-short} requires access to a large language model (LLM), which you must provide. An LLM is a type of generative AI that interprets natural language and generates human-like text or audio responses. When an LLM is used as a virtual assistant, the LLM can interpret questions and provide answers in a conversational manner.
- LLMs are usually provided by a service or server. Because {ls-short} does not provide an LLM for you, you must configure your preferred LLM provider during installation. You can configure the underlying Llama Stack server to integrate with a number of LLM `providers` that offer compatibility with the OpenAI API including the following LLMs:
+ LLMs are usually provided by a service or server. Because {ls-short} does not provide an LLM for you, you must configure your preferred LLM provider during installation. You can configure the underlying Llama Stack server to integrate with a number of LLM `providers` that offer compatibility with the OpenAI API, including the following inference providers:
* OpenAI (cloud-based inference service)
* {rhoai-brand-name} (enterprise model builder & inference server)
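
As a concrete illustration, the following is a minimal sketch of how an OpenAI-compatible inference provider might be declared in the Llama Stack `run.yaml` configuration. The `remote::openai` provider type and the `OPENAI_API_KEY` environment variable are assumptions for illustration only; consult the Llama Stack and {ls-short} installation documentation for the exact schema and the provider types your version supports.

[source,yaml]
----
# Illustrative sketch only: declares one OpenAI-compatible inference
# provider in a Llama Stack run.yaml. The provider type and key names
# are assumptions; verify them against your Llama Stack version.
providers:
  inference:
    - provider_id: openai
      provider_type: remote::openai        # assumed remote provider type
      config:
        api_key: ${env.OPENAI_API_KEY}     # assumed env var holding the API key
----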