
Commit fceaeef

Merge pull request #1212 from RateteApple/main
Refine documentation formatting and style for clarity
2 parents 0271e51 + ee40e7e


docs/guides/running-locally.mdx

Lines changed: 24 additions & 18 deletions
@@ -6,30 +6,36 @@ In this video, Mike Bird goes over three different methods for running Open Interpreter

 <iframe width="560" height="315" src="https://www.youtube.com/embed/CEs51hGWuGU?si=cN7f6QhfT4edfG5H" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>

-## How to use Open Interpreter locally
+## How to Use Open Interpreter Locally

 ### Ollama

-1. Download Ollama - https://ollama.ai/download
-2. `ollama run dolphin-mixtral:8x7b-v2.6`
-3. `interpreter --model ollama/dolphin-mixtral:8x7b-v2.6`
+1. Download Ollama from https://ollama.ai/download
+2. Pull and run the model:
+   `ollama run dolphin-mixtral:8x7b-v2.6`
+3. Start Open Interpreter with that model:
+   `interpreter --model ollama/dolphin-mixtral:8x7b-v2.6`
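
As a quick sanity check, one might confirm the Ollama server and model are available before launching interpreter. A minimal sketch, assuming Ollama's default local API on port 11434:

```sh
# Confirm the model was pulled and the local server responds
# (11434 is Ollama's default port; adjust if OLLAMA_HOST was changed).
ollama list
curl http://localhost:11434/api/tags

# Then point Open Interpreter at the Ollama-served model.
interpreter --model ollama/dolphin-mixtral:8x7b-v2.6
```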

-# Jan.ai
+### Jan.ai

-1. Download Jan - [Jan.ai](http://jan.ai/)
-2. Download model from Hub
-3. Enable API server
-   1. Settings
-   2. Advanced
+1. Download Jan from http://jan.ai
+2. Download the model from the Hub
+3. Enable the API server:
+   1. Go to Settings
+   2. Navigate to Advanced
    3. Enable API server
-4. Select Model to use
-5. `interpreter --api_base http://localhost:1337/v1 --model mixtral-8x7b-instruct`
+4. Select the model to use
+5. Start Open Interpreter with Jan's API base:
+   `interpreter --api_base http://localhost:1337/v1 --model mixtral-8x7b-instruct`
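
Since Jan exposes an OpenAI-compatible server, one could verify it is listening before starting interpreter. A minimal sketch, assuming the default port 1337 and the standard /v1/models route:

```sh
# List the models Jan's local server reports; a JSON response
# confirms the API server toggle took effect.
curl http://localhost:1337/v1/models

interpreter --api_base http://localhost:1337/v1 --model mixtral-8x7b-instruct
```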

-# llamafile
+### Llamafile

+⚠ Ensure that Xcode is installed for Apple Silicon.

-1. Download or make a llamafile - https://github.com/Mozilla-Ocho/llamafile
-2. `chmod +x mixtral-8x7b-instruct-v0.1.Q5_K_M.llamafile`
-3. `./mixtral-8x7b-instruct-v0.1.Q5_K_M.llamafile`
-4. `interpreter --api_base https://localhost:8080/v1`
-
-Make sure that Xcode is installed for Apple Silicon
+1. Download or create a llamafile from https://github.com/Mozilla-Ocho/llamafile
+2. Make the llamafile executable:
+   `chmod +x mixtral-8x7b-instruct-v0.1.Q5_K_M.llamafile`
+3. Start the llamafile, which serves the model locally:
+   `./mixtral-8x7b-instruct-v0.1.Q5_K_M.llamafile`
+4. Start Open Interpreter with the llamafile's API base:
+   `interpreter --api_base http://localhost:8080/v1`
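
Put together, the whole flow might look like the following sketch, assuming the llamafile has already been downloaded into the current directory and serves on its default port 8080:

```sh
# Make the llamafile executable and start it in the background;
# it exposes an OpenAI-compatible endpoint under /v1.
chmod +x mixtral-8x7b-instruct-v0.1.Q5_K_M.llamafile
./mixtral-8x7b-instruct-v0.1.Q5_K_M.llamafile &

# Optional: confirm the server is up, then connect interpreter to it.
curl http://localhost:8080/v1/models
interpreter --api_base http://localhost:8080/v1
```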
