Commit 3f3ae83

Remove duplicate note
1 parent 75fa51f commit 3f3ae83

1 file changed (+1, −6)

solutions/observability/connect-to-own-local-llm.md

Lines changed: 1 addition & 6 deletions
```diff
@@ -41,7 +41,6 @@ Once you’ve launched LM Studio:
 For security reasons, before downloading a model, verify that it is from a trusted source or by a verified author. It can be helpful to review community feedback on the model (for example using a site like Hugging Face).
 ::::
 
-
 :::{image} /solutions/images/observability-ai-assistant-lms-model-selection.png
 :alt: The LM Studio model selection interface with download options
 :::
@@ -54,11 +53,7 @@ This [`mistralai/mistral-nemo-instruct-2407`](https://lmstudio.ai/models/mistral
 | Examples: Llama, Mistral. | The number of parameters is a measure of the size and the complexity of the model. The more parameters a model has, the more data it can process, learn from, generate, and predict. | The context window defines how much information the model can process at once. If the number of input tokens exceeds this limit, input gets truncated. | Specific formats for quantization vary, most models now support GPU rather than CPU offloading. |
 
 ::::{important}
-For security reasons, before downloading a model, verify that it is from a trusted source or by a verified author. It can be helpful to review community feedback on the model (for example using a site like Hugging Face).
-::::
-
-::::{important}
-The {{obs-ai-assistant}} requires a model with a minimum 64K token context window.
+The {{obs-ai-assistant}} requires a model with at least 64,000 token context window.
 ::::
 
 ## Load a model in LM Studio [_load_a_model_in_lm_studio]
```
