
Commit fb48591: updated running locally
1 parent c3bf845

2 files changed: 18 additions, 32 deletions


docs/guides/profiles.mdx

Lines changed: 3 additions & 4 deletions
@@ -4,11 +4,11 @@ title: Profiles
 
 Profiles are a powerful way to customize your instance of Open Interpreter.
 
-Everything from the model to the context window to the message templates can be configured in a profile. This allows you to save multiple variations of Open Interpreter to optimize your specific use-cases.
+Profiles are Python files that configure Open Interpreter. A wide range of fields from the [model](/settings/all-settings#model-selection) to the [context window](/settings/all-settings#context-window) to the [message templates](/settings/all-settings#user-message-template) can be configured in a Profile. This allows you to save multiple variations of Open Interpreter to optimize for your specific use-cases.
 
-You can access Profiles with `interpreter --profiles`. This will open up the directory where all of your profiles are stored.
+You can access your Profiles by running `interpreter --profiles`. This will open the directory where all of your Profiles are stored.
 
-To apply a Profile to a session of Open Interpreter, you can run `interpreter --profile <name>`
+To apply a Profile to an Open Interpreter session, you can run `interpreter --profile <name>`
 
 # Example Profile
 
@@ -17,7 +17,6 @@ from interpreter import interpreter
 
 interpreter.os = True
 interpreter.llm.supports_vision = True
-# interpreter.shrink_images = True # Faster but less accurate
 
 interpreter.llm.model = "gpt-4-vision-preview"
 
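For reference, the example Profile these hunks touch boils down to a short Python file. The sketch below is assembled only from the fields visible in this diff; the filename `vision.py` and the comments are illustrative, not part of the commit.

```python
# vision.py - a minimal Profile sketch, assembled from the fields in this diff.
# Save it in the directory opened by `interpreter --profiles`.
from interpreter import interpreter

interpreter.os = True                           # enable OS mode
interpreter.llm.supports_vision = True          # the chosen model accepts image input
interpreter.llm.model = "gpt-4-vision-preview"  # model selection
```

You would then apply it with `interpreter --profile vision.py` (filename hypothetical).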

docs/guides/running-locally.mdx

Lines changed: 15 additions & 28 deletions
@@ -2,40 +2,27 @@
 title: Running Locally
 ---
 
-In this video, Mike Bird goes over three different methods for running Open Interpreter with a local language model:
+Open Interpreter can be run fully locally.
 
-<iframe width="560" height="315" src="https://www.youtube.com/embed/CEs51hGWuGU?si=cN7f6QhfT4edfG5H" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>
+Users need to install software to run local LLMs. Open Interpreter supports multiple local model providers such as [Ollama](https://www.ollama.com/), [Llamafile](https://github.com/Mozilla-Ocho/llamafile), [LM Studio](https://lmstudio.ai/), and [Jan](https://jan.ai/).
 
-## How to Use Open Interpreter Locally
+## Local Setup Menu
 
-### Ollama
+A Local Setup Menu simplifies the process of using Open Interpreter locally. To access this menu, run the command `interpreter --local`.
 
-1. Download Ollama from https://ollama.ai/download
-2. Run the command:
-`ollama run dolphin-mixtral:8x7b-v2.6`
-3. Execute the Open Interpreter:
-`interpreter --model ollama/dolphin-mixtral:8x7b-v2.6`
+### Provider
 
-### Jan.ai
+Select your chosen local model provider from the list of options.
 
-1. Download Jan from http://jan.ai
-2. Download the model from the Hub
-3. Enable API server:
-   1. Go to Settings
-   2. Navigate to Advanced
-   3. Enable API server
-4. Select the model to use
-5. Run the Open Interpreter with the specified API base:
-`interpreter --api_base http://localhost:1337/v1 --model mixtral-8x7b-instruct`
+It is possible to use a provider other than the ones listed. Instead of running `--local`, set the `--api_base` flag to point to a [custom endpoint](/language-models/local-models/custom-endpoint).
 
-### Llamafile
+### Model
 
-⚠ Ensure that Xcode is installed for Apple Silicon
+Most providers require the user to state the model they are using. Provider-specific instructions are shown to the user.
 
-1. Download or create a llamafile from https://github.com/Mozilla-Ocho/llamafile
-2. Make the llamafile executable:
-`chmod +x mixtral-8x7b-instruct-v0.1.Q5_K_M.llamafile`
-3. Execute the llamafile:
-`./mixtral-8x7b-instruct-v0.1.Q5_K_M.llamafile`
-4. Run the interpreter with the specified API base:
-`interpreter --api_base https://localhost:8080/v1`
+It is possible to set the model without going through the Local Setup Menu by setting the `--model` flag to select a [model](/settings/all-settings#model-selection).
+
+<Tip>
+  Local models perform better with extra guidance and direction. You can improve
+  performance for your use-case by creating a new [Profile](/guides/profiles).
+</Tip>
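Taken together with the Tip above, the removed Jan walkthrough can be recreated as a Profile instead of per-session flags. The sketch below reuses the endpoint and model name from the deleted CLI example; the field names `interpreter.llm.api_base` and `interpreter.custom_instructions` are assumptions meant to mirror the documented `--api_base` flag and settings page, and the instruction text is illustrative.

```python
# local.py - a hypothetical Profile for a custom local endpoint (e.g. Jan's
# API server), so neither the Local Setup Menu nor per-session flags are needed.
from interpreter import interpreter

# Endpoint and model reused from the removed Jan example
# (`interpreter --api_base http://localhost:1337/v1 --model mixtral-8x7b-instruct`).
interpreter.llm.api_base = "http://localhost:1337/v1"  # assumed to mirror --api_base
interpreter.llm.model = "mixtral-8x7b-instruct"        # assumed to mirror --model

# Per the Tip: local models benefit from extra guidance (field name assumed).
interpreter.custom_instructions = "Think step by step and verify each command before running it."
```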
