
Commit 891a776

improve running locally docs
1 parent cdaf2f9 commit 891a776

1 file changed: +39 −12 lines changed

docs/guides/running-locally.mdx

Lines changed: 39 additions & 12 deletions
@@ -4,25 +4,52 @@ title: Running Locally
 
 Open Interpreter can be run fully locally.
 
-Users need to install software to run local LLMs. Open Interpreter supports multiple local model providers such as [Ollama](https://www.ollama.com/), [Llamafile](https://github.com/Mozilla-Ocho/llamafile), [LM Studio](https://lmstudio.ai/), and [Jan](https://jan.ai/).
+Users need to install software to run local LLMs. Open Interpreter supports multiple local model providers such as [Ollama](https://www.ollama.com/), [Llamafile](https://github.com/Mozilla-Ocho/llamafile), [Jan](https://jan.ai/), and [LM Studio](https://lmstudio.ai/).
 
-## Local Setup Menu
+<Tip>
+Local models perform better with extra guidance and direction. You can improve
+performance for your use-case by creating a new [Profile](/guides/profiles).
+</Tip>
 
-A Local Setup Menu was created to simplify the process of using OI locally. To access this menu, run the command `interpreter --local`.
+## Terminal Usage
 
-### Provider
+### Local Setup Menu
+
+A Local Setup Menu was created to simplify the process of using OI locally. To access this menu, run the command `interpreter --local`.
 
 Select your chosen local model provider from the list of options.
 
-It is possible to use a provider other than the ones listed. Instead of running `--local` you will set the `--api_base` flag to set a [custom endpoint](/language-models/local-models/custom-endpoint).
+Most providers will require the user to state the model they are using. Provider-specific instructions are shown to the user in the menu.
 
-### Model
+### Custom Local
 
-Most providers will require the user to state the model they are using. There are Provider specific instructions shown to the user.
+If you want to use a provider other than the ones listed, set the `--api_base` flag to a [custom endpoint](/language-models/local-models/custom-endpoint).
 
-It is possible to set the model without going through the Local Setup Menu by setting the `--model` flag to select a [model](/settings/all-settings#model-selection).
+You will also need to set the model by passing the `--model` flag to select a [model](/settings/all-settings#model-selection).
 
-<Tip>
-Local models perform better with extra guidance and direction. You can improve
-performance for your use-case by creating a new [Profile](/guides/profiles).
-</Tip>
+```bash
+interpreter --api_base "http://localhost:11434" --model ollama/codestral
+```
+
+<Info>
+Other terminal flags are explained in [Settings](/settings/all-settings).
+</Info>
+
+## Python Usage
+
+To use Open Interpreter locally from a Python script, a few fields need to be set:
+
+```python
+from interpreter import interpreter
+
+interpreter.offline = True
+interpreter.llm.model = "ollama/codestral"
+interpreter.llm.api_base = "http://localhost:11434"
+
+interpreter.chat("how many files are on my desktop?")
+```
+
+<Info>
+Other configuration settings are explained in
+[Settings](/settings/all-settings).
+</Info>
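
The Tip added in this change points readers to [Profiles](/guides/profiles) for getting better results from local models. As a rough illustration only (not part of this commit), a Python profile might pin the same local Ollama model used in the examples above and add extra guidance via the `custom_instructions` setting described in [Settings](/settings/all-settings); the filename and instruction text below are hypothetical.

```python
# local-codestral.py: illustrative profile sketch (not part of this commit)
from interpreter import interpreter

# Point Open Interpreter at a locally served Ollama model, as in the examples above
interpreter.offline = True
interpreter.llm.model = "ollama/codestral"
interpreter.llm.api_base = "http://localhost:11434"

# Extra guidance tends to help smaller local models stay on task
interpreter.custom_instructions = (
    "You are running fully offline on the user's machine. "
    "Prefer short, verifiable shell or Python snippets."
)
```

See the [Profiles](/guides/profiles) guide for where to save a profile file and how to load it when starting Open Interpreter.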
