
Commit b4f3816

Update connect-to-own-local-llm.md
Co-authored-by: Mike Birnstiehl <[email protected]>
1 parent: 2ed08b5

File tree

1 file changed: +1 −1


solutions/observability/connect-to-own-local-llm.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -20,7 +20,7 @@ You do not have to set up a proxy if LM Studio is running locally, or on the sam
 ::::

 ::::{note}
-For information about the performance of open-source models on tasks within the {{obs-ai-assistant}}, refer to the [LLM performance matrix](/solutions/observability/llm-performance-matrix.md).
+For information about the performance of open-source models on {{obs-ai-assistant}} tasks, refer to the [LLM performance matrix](/solutions/observability/llm-performance-matrix.md).
 ::::

 This example uses a server hosted in GCP to configure LM Studio with the [Llama-3.3-70B-Instruct](https://huggingface.co/lmstudio-community/Llama-3.3-70B-Instruct-GGUF) model.
```
