:_mod-docs-content-type: PROCEDURE

[id="proc-changing-your-llm-provider_{context}"]
= Changing your LLM provider in {ls-short}

{ls-short} operates on a link:{developer-lightspeed-link}#con-about-bring-your-own-model_appendix-about-user-data-security[_Bring Your Own Model_] approach, meaning you must provide and configure access to your preferred Large Language Model (LLM) provider for the service to function. The Road-Core Service (RCS) acts as an intermediary layer that handles the configuration and setup of these LLM providers.

[IMPORTANT]
====
The LLM provider configuration section includes a mandatory dummy provider block. Due to limitations of Road Core, this dummy provider must remain present when working with Lightspeed. This block is marked with the comments `# Start: Do not remove this block` and `# End: Do not remove this block` and must not be removed from the configuration file.
====

.Prerequisites

* The file that contains your API token must be mounted into the RCS container so that the token file path is accessible to the service.

.Procedure

You can define additional LLM providers by using either of the following methods:

* Recommended: In your Developer Lightspeed plugin configuration (the `lightspeed` section within the `lightspeed-app-config.yaml` file), define the new provider or providers under the `lightspeed.servers` key as shown in the following example:
+
[source,yaml]
----
lightspeed:
  servers:
    - id: _<my_new_provider>_
      url: _<my_new_url>_
      token: _<my_new_token>_
----
** Optional: You can set the `id`, `url`, and `token` values in a Kubernetes Secret and reference them as environment variables by using the `envFrom` section:
+
[source,yaml]
----
containers:
  - name: my-container
    image: my-image
    envFrom:
      - secretRef:
          name: my-secret
----
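+
The `envFrom` reference assumes that a Kubernetes Secret already exists in the same namespace. As an illustrative sketch (the Secret name `my-secret` and the key names are examples, not product-defined values), such a Secret could look like the following:
+
[source,yaml]
----
apiVersion: v1
kind: Secret
metadata:
  name: my-secret   # Must match the secretRef name in the container spec
type: Opaque
stringData:
  # Example keys: use the environment variable names that your configuration expects
  PROVIDER_ID: my-new-provider
  PROVIDER_URL: https://api.example.com/v1
  PROVIDER_TOKEN: changeme
----
+
Each key under `stringData` becomes an environment variable in the container through `envFrom`.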

* You can add new LLM providers by updating the `rcsconfig.yaml` file.
.. In the `llm_providers` section of your `rcsconfig.yaml` file, add your new provider configuration below the mandatory dummy provider block as shown in the following example:
+
[source,yaml]
----
llm_providers:
  # Start: Do not remove this block
  - name: dummy
    type: openai
    url: https://dummy.com
    models:
      - name: dummymodel
  # End: Do not remove this block
  - name: _<my_new_provider>_
    type: openai
    url: _<my_provider_url>_
    credentials_path: path/to/token
    disable_model_check: true
----
.. When you define a new provider in `rcsconfig.yaml`, configure the following parameters:
** `credentials_path`: Specifies the path to a `.txt` file that contains your API token. This file must be mounted into the RCS container and be accessible at that path.
** `disable_model_check`: Set this field to `true` so that the RCS discovers models through the `/v1/models` endpoint of the provider. When you set this field to `true`, you do not need to define model names explicitly in the configuration.
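
The prerequisite above requires the token file to be mounted into the RCS container. As a sketch under assumed names (the container name `rcs`, the volume name `llm-token`, and the Secret name `my-llm-token` are illustrative, not product-defined), the mount could be declared as follows:

[source,yaml]
----
# Illustrative: mount a Secret containing the API token into the RCS container
containers:
  - name: rcs
    volumeMounts:
      - name: llm-token
        mountPath: /etc/llm-credentials   # Directory where the token file appears
        readOnly: true
volumes:
  - name: llm-token
    secret:
      secretName: my-llm-token            # Secret with a key such as token.txt
----

With a mount like this in place, `credentials_path` in `rcsconfig.yaml` would point at the mounted file, for example `/etc/llm-credentials/token.txt`.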