interpreter/terminal_interface/profiles/defaults — 1 file changed, +25 −0 lines
"""
This is an Open Interpreter profile for using Llama 3.2:3b served locally by Cortex.

This profile configures Open Interpreter to use a locally hosted Llama 3.2 model through Cortex.

Run `cortex start` before running Open Interpreter.

More information about Cortex: https://cortex.so/docs/
"""

from interpreter import interpreter


# Update the model to match the model you are serving with Cortex
interpreter.llm.model = "llama3.2:3b-gguf-q8-0"
interpreter.llm.context_window = 8192
interpreter.llm.max_tokens = 4096
interpreter.llm.api_base = "http://127.0.0.1:39281/v1"
interpreter.llm.supports_functions = False
interpreter.llm.supports_vision = False

interpreter.offline = True
interpreter.loop = True
interpreter.auto_run = False
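Since the profile assumes `cortex start` has already been run, it can be useful to check that the local server is actually answering before launching Open Interpreter. The sketch below is one way to do that, assuming Cortex exposes an OpenAI-compatible `/models` route under the same `api_base` used in the profile (the endpoint name and response shape are assumptions, not taken from the diff):

```python
import json
import urllib.error
import urllib.request

# Same base URL as in the profile above.
API_BASE = "http://127.0.0.1:39281/v1"


def cortex_is_running(api_base: str = API_BASE, timeout: float = 2.0) -> bool:
    """Return True if a server answers on the OpenAI-style /models route.

    Any connection failure, timeout, or non-JSON body is treated as
    "not running" rather than raised, so this is safe to call at startup.
    """
    try:
        with urllib.request.urlopen(f"{api_base}/models", timeout=timeout) as resp:
            payload = json.load(resp)
            # OpenAI-compatible servers list models under a "data" key.
            return "data" in payload
    except (urllib.error.URLError, OSError, json.JSONDecodeError, ValueError):
        return False


if __name__ == "__main__":
    if cortex_is_running():
        print("Cortex is reachable; starting Open Interpreter.")
    else:
        print("Cortex not reachable; run `cortex start` first.")
```

A check like this could be added to the top of the profile so users get a clear message instead of a connection error from the first model call.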