
Commit ba6053d

Address review comments

1 parent 03c1992 commit ba6053d

File tree

2 files changed: 1 addition, 5 deletions

−167 KB
Binary file not shown.

solutions/observability/connect-to-own-local-llm.md

Lines changed: 1 addition & 5 deletions
@@ -54,7 +54,7 @@ For security reasons, before downloading a model, verify that it is from a trust
 :alt: The LM Studio model selection interface with download options
 :::
 
-In this example we used [`llama-3.3-70b-instruct`](https://lmstudio.ai/models/meta/llama-3.3-70b). It has 70B total parameters, a 128,000 token context window, and uses GGUF [quantization](https://huggingface.co/docs/transformers/main/en/quantization/overview). For more information about model names and format information, refer to the following table.
+Throughout this documentation, we used [`llama-3.3-70b-instruct`](https://lmstudio.ai/models/meta/llama-3.3-70b). It has 70B total parameters, a 128,000 token context window, and uses GGUF [quantization](https://huggingface.co/docs/transformers/main/en/quantization/overview). For more information about model names and format information, refer to the following table.
 
 | Attribute | Description |
 | --- | --- |
@@ -85,10 +85,6 @@ When loading a model, use the `--context-length` flag with a context window of 6
 Optionally, you can set how much to offload to the GPU by using the `--gpu` flag. `--gpu max` will offload all layers to GPU.
 ::::
 
-:::{image} /solutions/images/observability-ai-assistant-lms-commands.png
-:alt: The CLI interface during execution of initial LM Studio commands
-:::
-
 After the model loads, you should see the message `Model loaded successfully` in the CLI.
 
 :::{image} /solutions/images/observability-ai-assistant-model-loaded.png
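As a sketch of the commands the changed section describes: the doc confirms the `--context-length` and `--gpu` flags of the LM Studio `lms load` command, but the model identifier and context-window value shown here are illustrative assumptions, not taken from the diff.

```shell
# Load the model with an explicit context window and full GPU offload.
# Model name and context-length value are illustrative; substitute your own.
lms load llama-3.3-70b-instruct --context-length 64000 --gpu max

# On success the CLI reports: Model loaded successfully
```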
