`docs/guides/profiles.mdx`
Profiles are a powerful way to customize your instance of Open Interpreter.
Everything from the model to the context window to the message templates can be configured in a profile. This allows you to save multiple variations of Open Interpreter, each optimized for a specific use case.
You can access Profiles with `interpreter --profiles`. This will open up the directory where all of your profiles are stored.
To apply a Profile to a session of Open Interpreter, you can run `interpreter --profile <name>`
# Example Profile

```Python
from interpreter import interpreter

interpreter.os = True
interpreter.llm.supports_vision = True
# interpreter.shrink_images = True # Faster but less accurate

interpreter.llm.model = "gpt-4-vision-preview"

interpreter.llm.supports_functions = False
interpreter.llm.context_window = 110000
interpreter.llm.max_tokens = 4096
interpreter.auto_run = True
interpreter.force_task_completion = True
```
## Helpful fields for local models
Local models benefit from more coercion and guidance, but asking the user to add that extra context to every message hurts the conversational feel of Open Interpreter. These fields apply templates to messages, improving how the language model is steered while keeping the conversation natural.
`interpreter.user_message_template` wraps the user's message in a template. This can be helpful for steering a language model toward a desired behaviour without requiring the user to add extra context to their message.
`interpreter.always_apply_user_message_template` wraps every user message in the template. If `False`, only the last user message will be wrapped.
`interpreter.code_output_template` wraps the output from the computer after code is run. This can help nudge the language model to continue working or to explain its outputs.
`interpreter.empty_code_output_template` is the message that is sent to the language model if code execution results in no output.
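
As a rough sketch of how these fields fit together in a profile for a local model: the model name and template strings below are made up for illustration (they are not shipped defaults), and it assumes `{content}` is the placeholder that gets replaced with the original message or code output.

```Python
from interpreter import interpreter

# Illustrative local-model profile -- model name and templates are examples only.
interpreter.llm.model = "ollama/mistral"  # assumed local model identifier
interpreter.llm.supports_functions = False

# Steer the model by wrapping user messages (assumes {content} is the placeholder).
interpreter.user_message_template = "{content} Reply with a short plan, then a single code block."
interpreter.always_apply_user_message_template = False  # wrap only the last user message

# Wrap code output so the model explains results and keeps working.
interpreter.code_output_template = "Code output: {content}\nExplain this output and continue with the task."
interpreter.empty_code_output_template = "The code ran and produced no output. Continue, or print something to verify the result."
```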
`docs/settings/all-settings.mdx`
### Execution Instructions
If `llm.supports_functions` is `False`, this value is added to the system message. It tells the language model how to execute code. Set it to an empty string or to `False` if you don't want to give the LLM these instructions.
<CodeGroup>

````python Python
interpreter.llm.execution_instructions = "To execute code on the user's machine, write a markdown code block. Specify the language after the ```. You will receive the output. Use any programming language."
````

````python Profile
interpreter.llm.execution_instructions = "To execute code on the user's machine, write a markdown code block. Specify the language after the ```. You will receive the output. Use any programming language."
````

</CodeGroup>
### LLM Supports Vision
Inform Open Interpreter that the language model you're using supports vision. Defaults to `False`.
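
For example, to declare vision support in a script or profile (a minimal sketch following the same pattern as the other settings on this page):

<CodeGroup>

```python Python
interpreter.llm.supports_vision = True
```

```python Profile
interpreter.llm.supports_vision = True
```

</CodeGroup>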
### Code Output Sender
This field determines whether computer / code output messages are sent to the language model as the assistant or as the user. The default is `user`.
<CodeGroup>

```python Python
interpreter.code_output_sender = "user"
```

```python Profile
interpreter.code_output_sender = "assistant"
```

</CodeGroup>
# Computer
The `computer` object in `interpreter.computer` is a virtual computer that the AI controls. Its primary interface/function is to execute code and return the output in real-time.
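
As a rough sketch of that interface (assuming a `computer.run(language, code)` method, which may differ slightly between versions):

```python
from interpreter import interpreter

# Assumed usage: execute code in a given language and receive the output messages.
output = interpreter.computer.run("python", "print(2 + 2)")
print(output)  # e.g. a list of output message chunks containing "4"
```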