solutions/observability/connect-to-own-local-llm.md (+7 −7: 7 additions & 7 deletions)
@@ -21,11 +21,11 @@ If your Elastic deployment is not on the same network, you would need to configu
 This example uses a server hosted in GCP to configure LM Studio with the [Mistral-Nemo-Instruct-2407](https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407) model.
-If LM Studio is already installed, the server is running, and you have a model loaded (with a context window of at least 64K tokens), you can skip directly to [Configure the connector in your Elastic deployment](#configure-the-connector-in-your-elastic-deployment-_configure_the_connector_in_your_elastic_deployment).
+If LM Studio is already installed, the server is running, and you have a model loaded (with a context window of at least 64K tokens), you can skip directly to [Configure the connector in your Elastic deployment](#configure-the-connector-in-your-elastic-deployment).
 
-## Configure LM Studio and download a model [_configure_lm_studio_and_download_a_model]
+## Configure LM Studio and download a model [configure-lm-studio-and-download-a-model]
 
 LM Studio supports the OpenAI SDK, which makes it compatible with Elastic’s OpenAI connector, allowing you to connect to any model available in the LM Studio marketplace.
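Because LM Studio exposes an OpenAI-compatible API, the connector traffic is just an ordinary chat-completions request. A minimal sketch of that request body follows; the host, port `1234` (LM Studio's default), and endpoint path are assumptions based on LM Studio's defaults, not part of this change:

```python
import json

# Sketch of the OpenAI-compatible request that a connector would send to
# LM Studio's local server. Assumptions (not from this doc): the server
# listens on LM Studio's default port 1234, and the model identifier
# matches the model loaded in LM Studio.
LM_STUDIO_URL = "http://localhost:1234/v1/chat/completions"

payload = {
    "model": "mistralai/mistral-nemo-instruct-2407",
    "messages": [
        {"role": "user", "content": "Summarize the last hour of error logs."}
    ],
}

body = json.dumps(payload)
print(body)
```

Any OpenAI-SDK client can produce the same request by pointing its base URL at the local server instead of api.openai.com.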
@@ -68,11 +68,11 @@ This [`mistralai/mistral-nemo-instruct-2407`](https://lmstudio.ai/models/mistral
 The {{obs-ai-assistant}} requires a model with at least a 64,000-token context window.
 ::::
 
-## Load a model in LM Studio [_load_a_model_in_lm_studio]
+## Load a model in LM Studio [load-a-model-in-lm-studio]
 
 After downloading a model, load it in LM Studio using the GUI or LM Studio’s [CLI tool](https://lmstudio.ai/docs/cli/load).
 
-### Option 1: Load a model using the CLI (Recommended) [_option_1_load_a_model_using_the_cli_recommended]
+### Option 1: Load a model using the CLI (Recommended) [option-1-load-a-model-using-the-cli-recommended]
 
 Once you’ve downloaded a model, use the following commands in your CLI:
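The CLI commands themselves are outside this hunk; as a rough sketch (assuming the `lms` CLI is installed and on your PATH, and using the model name from this example — the exact flags the doc uses may differ):

```shell
# Illustrative only, not copied from the doc.
lms ls                                          # list downloaded models
lms load mistralai/mistral-nemo-instruct-2407   # load the model
lms ps                                          # verify which model is loaded
```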
@@ -104,7 +104,7 @@ To verify which model is loaded, use the `lms ps` command.
 If your model uses NVIDIA drivers, you can check the GPU performance with the `sudo nvidia-smi` command.
 
-### Option 2: Load a model using the GUI [_option_2_load_a_model_using_the_gui]
+### Option 2: Load a model using the GUI [option-2-load-a-model-using-the-gui]
 
 Once the model is downloaded, it will appear in the "My Models" window in LM Studio.
@@ -121,7 +121,7 @@ Once the model is downloaded, it will appear in the "My Models" window in LM Stu
 :alt: Loading a model in LM studio developer tab
 :::
 
-## Configure the connector in your Elastic deployment [_configure_the_connector_in_your_elastic_deployment]
+## Configure the connector in your Elastic deployment [configure-the-connector-in-your-elastic-deployment]
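The connector settings themselves are outside this hunk; as a hedged sketch of what the OpenAI connector would point at (field names, host, and port are illustrative assumptions — LM Studio's default port is 1234, and its local server does not validate the API key, so any placeholder value should satisfy the form):

```
# Illustrative connector settings, not copied from the doc.
url:           http://<your-server-ip>:1234/v1/chat/completions
default model: mistralai/mistral-nemo-instruct-2407
api key:       placeholder   # required by the form, ignored by LM Studio
```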