
Commit 2434244

Update link

1 parent 8f26cb5 · commit 2434244

File tree: 1 file changed (+7 −7 lines)


solutions/observability/connect-to-own-local-llm.md

Lines changed: 7 additions & 7 deletions
```diff
@@ -21,11 +21,11 @@ If your Elastic deployment is not on the same network, you would need to configu
 
 This example uses a server hosted in GCP to configure LM Studio with the [Mistral-Nemo-Instruct-2407](https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407) model.
 
-### Already running LM Studio? [_skip_if_already_running]
+### Already running LM Studio? [skip-if-already-running]
 
-If LM Studio is already installed, the server is running, and you have a model loaded (with a context window of at least 64K tokens), you can skip directly to [Configure the connector in your Elastic deployment](#configure-the-connector-in-your-elastic-deployment-_configure_the_connector_in_your_elastic_deployment).
+If LM Studio is already installed, the server is running, and you have a model loaded (with a context window of at least 64K tokens), you can skip directly to [Configure the connector in your Elastic deployment](#configure-the-connector-in-your-elastic-deployment).
 
-## Configure LM Studio and download a model [_configure_lm_studio_and_download_a_model]
+## Configure LM Studio and download a model [configure-lm-studio-and-download-a-model]
 
 LM Studio supports the OpenAI SDK, which makes it compatible with Elastic’s OpenAI connector, allowing you to connect to any model available in the LM Studio marketplace.
 
```
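Because LM Studio exposes an OpenAI-compatible REST API, you can sanity-check the server before wiring up the connector. A minimal sketch, assuming LM Studio's default port (1234) and the model named in this doc; adjust host, port, and model ID to your setup:

```shell
# List the models the LM Studio server exposes (OpenAI-compatible endpoint).
curl http://localhost:1234/v1/models

# Send a minimal chat completion, exactly as an OpenAI SDK client would.
curl http://localhost:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "mistralai/mistral-nemo-instruct-2407",
        "messages": [{"role": "user", "content": "Reply with one word: ready"}]
      }'
```

If both calls return JSON, Elastic's OpenAI connector can reach the server the same way.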

```diff
@@ -68,11 +68,11 @@ This [`mistralai/mistral-nemo-instruct-2407`](https://lmstudio.ai/models/mistral
 The {{obs-ai-assistant}} requires a model with at least 64,000 token context window.
 ::::
 
-## Load a model in LM Studio [_load_a_model_in_lm_studio]
+## Load a model in LM Studio [load-a-model-in-lm-studio]
 
 After downloading a model, load it in LM Studio using the GUI or LM Studio’s [CLI tool](https://lmstudio.ai/docs/cli/load).
 
-### Option 1: Load a model using the CLI (Recommended) [_option_1_load_a_model_using_the_cli_recommended]
+### Option 1: Load a model using the CLI (Recommended) [option-1-load-a-model-using-the-cli-recommended]
 
 Once you’ve downloaded a model, use the following commands in your CLI:
 
```
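The commands themselves sit outside this hunk, so they are not reproduced here. As a rough sketch only, a typical sequence with LM Studio's `lms` CLI might look like the following; the 65,536-token context length is an assumption drawn from the 64K minimum stated above:

```shell
# Start the local inference server (OpenAI-compatible, default port 1234).
lms server start

# Load the downloaded model with a context window that meets the 64K-token minimum.
lms load mistralai/mistral-nemo-instruct-2407 --context-length 65536

# Confirm the model is loaded.
lms ps
```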

```diff
@@ -104,7 +104,7 @@ To verify which model is loaded, use the `lms ps` command.
 
 If your model uses NVIDIA drivers, you can check the GPU performance with the `sudo nvidia-smi` command.
 
-### Option 2: Load a model using the GUI [_option_2_load_a_model_using_the_gui]
+### Option 2: Load a model using the GUI [option-2-load-a-model-using-the-gui]
 
 Once the model is downloaded, it will appear in the "My Models" window in LM Studio.
 
```

```diff
@@ -121,7 +121,7 @@ Once the model is downloaded, it will appear in the "My Models" window in LM Stu
 :alt: Loading a model in LM studio developer tab
 :::
 
-## Configure the connector in your Elastic deployment [_configure_the_connector_in_your_elastic_deployment]
+## Configure the connector in your Elastic deployment [configure-the-connector-in-your-elastic-deployment]
 
 Finally, configure the connector:
 
```
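The connector steps continue in the file beyond this hunk. As a hedged sketch of the end state only, the equivalent call to Kibana's Connectors API would create an OpenAI connector pointed at the LM Studio server; the host names, credentials, and connector name below are placeholders, not values from this doc:

```shell
# Create an OpenAI connector (connector type ".gen-ai") that targets LM Studio.
curl -X POST "https://<your-kibana-host>/api/actions/connector" \
  -H "kbn-xsrf: true" \
  -H "Content-Type: application/json" \
  -u "<username>:<password>" \
  -d '{
        "name": "LM Studio (local)",
        "connector_type_id": ".gen-ai",
        "config": {
          "apiProvider": "OpenAI",
          "apiUrl": "http://<lm-studio-host>:1234/v1/chat/completions",
          "defaultModel": "mistralai/mistral-nemo-instruct-2407"
        },
        "secrets": { "apiKey": "placeholder-lm-studio-ignores-this" }
      }'
```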
