Commit ece9ca9

Merge branch 'main' of github.com:lorenzejay/open-interpreter into 1076

2 parents: 6f3e59f + 6d12384

8 files changed: +374 −70 lines

docs/language-models/local-models/janai.mdx

Lines changed: 4 additions & 4 deletions
@@ -14,19 +14,19 @@ To run Open Interpreter with Jan.ai, follow these steps:
 
 4. Click the 'Advanced' button under the GENERAL section, and toggle on the "Enable API Server" option. This will start a local server that you can use to interact with your model.
 
-5. Now we fire up Open Interpreter with this custom model. To do so, run this command, but replace `<model_name>` with the name of the model you downloaded:
+5. Now we fire up Open Interpreter with this custom model. Either run `interpreter --local` in the terminal to set it up interactively, or run this command, but replace `<model_id>` with the id of the model you downloaded:
 
 <CodeGroup>
 
 ```bash Terminal
-interpreter --api_base http://localhost:1337/v1 --model <model_name>
+interpreter --api_base http://localhost:1337/v1 --model <model_id>
 ```
 
 ```python Python
 from interpreter import interpreter
 
 interpreter.offline = True # Disables online features like Open Procedures
-interpreter.llm.model = "<model-name>"
+interpreter.llm.model = "<model_id>"
 interpreter.llm.api_base = "http://localhost:1337/v1"
 
 interpreter.chat()

@@ -39,7 +39,7 @@ If your model can handle a longer context window than the default 3000, you can
 <CodeGroup>
 
 ```bash Terminal
-interpreter --api_base http://localhost:1337/v1 --model <model_name> --context_window 5000
+interpreter --api_base http://localhost:1337/v1 --model <model_id> --context_window 5000
 ```
 
 ```python Python
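
Put together, the updated Jan.ai instructions amount to roughly the following. This is a minimal sketch; `mistral-ins-7b-q4` is a hypothetical model id standing in for whatever id Jan shows for the model you downloaded:

```python
# Sketch of the updated janai.mdx flow. The model id is hypothetical;
# use the id Jan displays for your downloaded model.
from interpreter import interpreter

interpreter.offline = True  # disables online features like Open Procedures
interpreter.llm.model = "mistral-ins-7b-q4"  # hypothetical model id
interpreter.llm.api_base = "http://localhost:1337/v1"  # Jan's local API server
interpreter.llm.context_window = 5000  # optional: only if your model handles more than the default 3000

interpreter.chat()
```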

docs/language-models/local-models/llamafile.mdx

Lines changed: 4 additions & 2 deletions
@@ -2,7 +2,9 @@
 title: LlamaFile
 ---
 
-To use LlamaFile with Open Interpreter, you'll need to download the model and start the server by running the file in the terminal. You can do this with the following commands:
+The easiest way to get started with local models in Open Interpreter is to run `interpreter --local` in the terminal, select LlamaFile, then go through the interactive set up process. This will download the model and start the server for you. If you choose to do it manually, you can follow the instructions below.
+
+To use LlamaFile manually with Open Interpreter, you'll need to download the model and start the server by running the file in the terminal. You can do this with the following commands:
 
 ```bash
 # Download Mixtral

@@ -22,4 +24,4 @@ chmod +x mixtral-8x7b-instruct-v0.1.Q5_K_M-server.llamafile
 interpreter --api_base https://localhost:8080/v1
 ```
 
-Please note that if you are using a Mac with Apple Silicon, you'll need to have Xcode installed.
+Please note that if you are using a Mac with Apple Silicon, you'll need to have Xcode installed.
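
For reference, the manual route above maps onto the Python API like this; a sketch assuming the llamafile server is already running at the address the doc uses:

```python
# Sketch: point Open Interpreter at an already-running llamafile server.
# The api_base matches the URL shown in the doc above.
from interpreter import interpreter

interpreter.offline = True  # local server, so disable online features
interpreter.llm.api_base = "https://localhost:8080/v1"

interpreter.chat()
```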

docs/language-models/local-models/lm-studio.mdx

Lines changed: 6 additions & 2 deletions
@@ -27,9 +27,13 @@ for a more detailed guide check out [this video by Mike Bird](https://www.youtub
 
 Once the server is running, you can begin your conversation with Open Interpreter.
 
-(When you run the command `interpreter --local`, the steps above will be displayed.)
+(When you run the command `interpreter --local` and select LMStudio, these steps will be displayed.)
 
-<Info>Local mode sets your `context_window` to 3000, and your `max_tokens` to 1000. If your model has different requirements, [set these parameters manually.](/settings#language-model)</Info>
+<Info>
+  Local mode sets your `context_window` to 3000, and your `max_tokens` to 1000.
+  If your model has different requirements, [set these parameters
+  manually.](/settings#language-model)
+</Info>
 
 # Python
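
The `<Info>` note is the part worth illustrating: a sketch of overriding the local-mode defaults in Python, assuming LM Studio's usual `http://localhost:1234/v1` server address (an assumption, not from the diff) and illustrative values:

```python
# Sketch: override local-mode defaults when your model supports more.
# The server address is an assumption; the values are illustrative.
from interpreter import interpreter

interpreter.offline = True
interpreter.llm.api_base = "http://localhost:1234/v1"  # assumed LM Studio server address
interpreter.llm.context_window = 8000  # local-mode default is 3000
interpreter.llm.max_tokens = 2000      # local-mode default is 1000

interpreter.chat()
```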

docs/language-models/local-models/ollama.mdx

Lines changed: 1 addition & 1 deletion
@@ -16,7 +16,7 @@ To run Ollama with Open interpreter:
 ollama run <model-name>
 ```
 
-4. It will likely take a while to download, but once it does, we are ready to use it with Open Interpreter.
+4. It will likely take a while to download, but once it does, we are ready to use it with Open Interpreter. You can either run `interpreter --local` to set it up interactively in the terminal, or do it manually:
 
 <CodeGroup>
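
A sketch of the manual route the new sentence points at, assuming litellm's `ollama/` model prefix and Ollama's default port; `llama2` is a stand-in for whatever `<model-name>` you pulled:

```python
# Sketch: use an Ollama-served model manually. "llama2" is a stand-in;
# match it to the model you started with `ollama run <model-name>`.
from interpreter import interpreter

interpreter.offline = True
interpreter.llm.model = "ollama/llama2"  # litellm's ollama/ prefix (assumption for this sketch)
interpreter.llm.api_base = "http://localhost:11434"  # Ollama's default server address

interpreter.chat()
```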

docs/settings/all-settings.mdx

Lines changed: 13 additions & 5 deletions
@@ -4,11 +4,7 @@ title: All Settings
 
 <CardGroup cols={3}>
 
-  <Card
-    title="Language Model Settings"
-    icon="microchip"
-    href="#language-model"
-  >
+  <Card title="Language Model Settings" icon="microchip" href="#language-model">
     Set your `model`, `api_key`, `temperature`, etc.
   </Card>
 

@@ -304,6 +300,18 @@ interpreter --version
 
 </CodeGroup>
 
+### Open Local Models Directory
+
+Opens the models directory. All downloaded Llamafiles are saved here.
+
+<CodeGroup>
+
+```bash Terminal
+interpreter --local_models
+```
+
+</CodeGroup>
+
 ### Open Profiles Directory
 
 Opens the profiles directory. New yaml profile files can be added to this directory.
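
Since `--local_models` just opens a directory, it can also be inspected programmatically. A sketch only: the path below is a guess via `platformdirs`, and the real location varies by OS and by Open Interpreter version:

```python
# Sketch: list downloaded llamafiles in a models directory. The path is a
# hypothetical layout; `interpreter --local_models` opens the real one.
from pathlib import Path

from platformdirs import user_data_dir  # third-party: pip install platformdirs

models_dir = Path(user_data_dir("open-interpreter")) / "models"  # hypothetical
for f in sorted(models_dir.glob("*.llamafile")):
    print(f"{f.name}  ({f.stat().st_size / 1e9:.1f} GB)")
```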
