
Commit d7aaf46

Update model to Llama

1 parent 44ba9ef

File tree

1 file changed (+1 −1 lines)


solutions/observability/connect-to-own-local-llm.md

Lines changed: 1 addition & 1 deletion
@@ -19,7 +19,7 @@ If your Elastic deployment is not on the same network, you must configure an Ngi
 You do not have to set up a proxy if LM studio is running locally, or on the same network as your Elastic deployment.
 
 ::::
 
-This example uses a server hosted in GCP to configure LM Studio with the [Mistral-Nemo-Instruct-2407](https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407) model.
+This example uses a server hosted in GCP to configure LM Studio with the [Llama-3.3-70B-Instruct](https://huggingface.co/lmstudio-community/Llama-3.3-70B-Instruct-GGUF) model.
 
 ### Already running LM Studio? [skip-if-already-running]
 

0 commit comments
