In this video, Mike Bird goes over three different methods for running Open Interpreter locally.

<iframe width="560" height="315" src="https://www.youtube.com/embed/CEs51hGWuGU?si=cN7f6QhfT4edfG5H" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>

## How to Use Open Interpreter Locally

### Ollama

1. Download Ollama from https://ollama.ai/download
2. Pull and run the model:
   `ollama run dolphin-mixtral:8x7b-v2.6`
3. Run Open Interpreter against it (see the check after this list):
   `interpreter --model ollama/dolphin-mixtral:8x7b-v2.6`
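
If the interpreter can't reach the model, it helps to confirm Ollama's local server first. A minimal check, assuming Ollama's default port of 11434 and that the model has already been pulled:

```bash
# List the models Ollama has downloaded; dolphin-mixtral should appear
ollama list

# Ollama's HTTP API listens on localhost:11434 by default;
# this returns the same model list as JSON
curl http://localhost:11434/api/tags
```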

### Jan.ai

1. Download Jan from http://jan.ai
2. Download the model from the Hub
3. Enable the API server:
   1. Go to Settings
   2. Navigate to Advanced
   3. Enable API server
4. Select the model to use
5. Run Open Interpreter against Jan's API base (see the check after this list):
   `interpreter --api_base http://localhost:1337/v1 --model mixtral-8x7b-instruct`
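
Since Jan serves an OpenAI-compatible API, a quick way to confirm the server is listening before launching the interpreter is to hit the standard models route. A minimal sketch, assuming Jan exposes `/v1/models` on the port shown above:

```bash
# Should return JSON listing mixtral-8x7b-instruct once the model
# is downloaded and the API server is enabled
curl http://localhost:1337/v1/models
```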

### Llamafile

⚠️ On Apple Silicon, make sure Xcode is installed first.

1. Download or create a llamafile from https://github.com/Mozilla-Ocho/llamafile
2. Make the llamafile executable:
   `chmod +x mixtral-8x7b-instruct-v0.1.Q5_K_M.llamafile`
3. Run the llamafile, which starts a local server:
   `./mixtral-8x7b-instruct-v0.1.Q5_K_M.llamafile`
4. Point the interpreter at the llamafile's API base (see the sketch after this list):
   `interpreter --api_base http://localhost:8080/v1`
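
The llamafile binary embeds a llama.cpp server that speaks the OpenAI-compatible API on port 8080 by default, which is what the `--api_base` flag above points at. A quick smoke test before starting the interpreter (the model name and prompt here are placeholders; a llamafile serves whichever model it embeds):

```bash
# Send a minimal chat request to the llamafile server;
# a JSON completion in response means the endpoint is ready
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "mixtral-8x7b-instruct", "messages": [{"role": "user", "content": "Say hello"}]}'
```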