title: Running Locally
---

-In this video, Mike Bird goes over three different methods for running Open Interpreter with a local language model:
+Open Interpreter can be run fully locally.

-<iframe width="560" height="315" src="https://www.youtube.com/embed/CEs51hGWuGU?si=cN7f6QhfT4edfG5H" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>
+Users need to install software to run local LLMs. Open Interpreter supports multiple local model providers such as [Ollama](https://www.ollama.com/), [Llamafile](https://github.com/Mozilla-Ocho/llamafile), [LM Studio](https://lmstudio.ai/), and [Jan](https://jan.ai/).
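For example, with Ollama a model can be pulled and served before Open Interpreter connects to it. The model name below is only illustrative; any model your hardware can run will do.

```bash
# Illustrative: download and serve a local model with Ollama.
# Swap in any model your machine can handle.
ollama run dolphin-mixtral:8x7b-v2.6
```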

-## How to Use Open Interpreter Locally
+## Local Setup Menu

-### Ollama
+The Local Setup Menu simplifies the process of running Open Interpreter locally. To access this menu, run the command `interpreter --local`.
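A minimal sketch of reaching the menu, assuming Open Interpreter is installed from PyPI as `open-interpreter`:

```bash
# Install Open Interpreter, then open the interactive Local Setup Menu.
pip install open-interpreter
interpreter --local
```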

-1. Download Ollama from https://ollama.ai/download
-2. Run the command:
-`ollama run dolphin-mixtral:8x7b-v2.6`
-3. Execute the Open Interpreter:
-`interpreter --model ollama/dolphin-mixtral:8x7b-v2.6`
+### Provider

-### Jan.ai
+Select your local model provider from the list of options.

-1. Download Jan from http://jan.ai
-2. Download the model from the Hub
-3. Enable API server:
-   1. Go to Settings
-   2. Navigate to Advanced
-   3. Enable API server
-4. Select the model to use
-5. Run the Open Interpreter with the specified API base:
-`interpreter --api_base http://localhost:1337/v1 --model mixtral-8x7b-instruct`
+It is also possible to use a provider other than the ones listed. Instead of running `--local`, set the `--api_base` flag to point at a [custom endpoint](/language-models/local-models/custom-endpoint).
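As a sketch, connecting to an OpenAI-compatible server that is not in the menu might look like the following; the port and model name are placeholders and should be replaced with whatever your endpoint exposes.

```bash
# Sketch: point Open Interpreter at any OpenAI-compatible endpoint.
# The port and model name are placeholders for your own server's values.
interpreter --api_base http://localhost:1337/v1 --model mixtral-8x7b-instruct
```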

-### Llamafile
+### Model

-⚠ Ensure that Xcode is installed for Apple Silicon
+Most providers require the user to specify the model they are using. Provider-specific instructions are shown to the user.

-1. Download or create a llamafile from https://github.com/Mozilla-Ocho/llamafile
-2. Make the llamafile executable:
-`chmod +x mixtral-8x7b-instruct-v0.1.Q5_K_M.llamafile`
-3. Execute the llamafile:
-`./mixtral-8x7b-instruct-v0.1.Q5_K_M.llamafile`
-4. Run the interpreter with the specified API base:
-`interpreter --api_base https://localhost:8080/v1`
+It is also possible to set the model without going through the Local Setup Menu by passing the `--model` flag to select a [model](/settings/all-settings#model-selection) directly.
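For example, an Ollama model can be selected in a single command:

```bash
# Skip the menu by naming the provider and model directly.
interpreter --model ollama/dolphin-mixtral:8x7b-v2.6
```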
+
+<Tip>
+  Local models perform better with extra guidance and direction. You can improve
+  performance for your use-case by creating a new [Profile](/guides/profiles).
+</Tip>
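As a rough sketch, a saved profile can pin the local provider, model, and any extra instructions so they do not need to be retyped; the file name and the `--profile` flag shown here are assumptions based on the [Profiles](/guides/profiles) guide rather than something this page defines.

```bash
# Assumed usage: load a saved profile (e.g. local.py) that configures a
# local provider, model, and custom instructions for your use-case.
interpreter --profile local.py
```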