
Commit 1e8203c

Merge pull request #1427 from OpenInterpreter/development
Documentation and profile updates
2 parents: 0a9002d + a28073b

File tree: 17 files changed, +251 −22 lines


docs/guides/profiles.mdx

Lines changed: 26 additions & 1 deletion

````diff
@@ -21,7 +21,7 @@ If you want to make your own profile, start with the [Template Profile](https://
 
 To apply a Profile to an Open Interpreter session, you can run `interpreter --profile <name>`
 
-# Example Profile
+# Example Python Profile
 
 ```Python
 from interpreter import interpreter
@@ -38,6 +38,31 @@ interpreter.auto_run = True
 interpreter.loop = True
 ```
 
+# Example YAML Profile
+
+<Info> Make sure YAML profile version is set to 0.2.5 </Info>
+
+```YAML
+llm:
+  model: "gpt-4-o"
+  temperature: 0
+  # api_key: ...  # Your API key, if the API requires it
+  # api_base: ...  # The URL where an OpenAI-compatible server is running to handle LLM API requests
+
+# Computer Settings
+computer:
+  import_computer_api: True  # Gives OI a helpful Computer API designed for code interpreting language models
+
+# Custom Instructions
+custom_instructions: ""  # This will be appended to the system message
+
+# General Configuration
+auto_run: False  # If True, code will run without asking for confirmation
+offline: False  # If True, will disable some online features like checking for updates
+
+version: 0.2.5  # Configuration file version (do not modify)
+```
+
 <Tip>
 There are many settings that can be configured. [See them all
 here](/settings/all-settings)
````
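For comparison, here is a minimal Python sketch of the same settings the YAML example sets, using attribute names that appear elsewhere in this commit (the Bedrock profile and local_server.ipynb use `interpreter.llm.model`, `interpreter.computer.import_computer_api`, `interpreter.auto_run`, and `interpreter.offline`); `interpreter.custom_instructions` is assumed to mirror the YAML key, and the model name follows the `gpt-4o` default set elsewhere in this commit:

```python
# A rough Python equivalent of the YAML profile above (a sketch, not the
# documented API surface; custom_instructions is assumed to mirror the YAML key).
from interpreter import interpreter

interpreter.llm.model = "gpt-4o"
interpreter.llm.temperature = 0
interpreter.computer.import_computer_api = True  # Computer API for code-interpreting models
interpreter.custom_instructions = ""             # appended to the system message
interpreter.auto_run = False                     # ask before running code
interpreter.offline = False                      # keep online features like update checks
```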

docs/mint.json

Lines changed: 24 additions & 6 deletions

```diff
@@ -31,7 +31,10 @@
   "navigation": [
     {
       "group": "Getting Started",
-      "pages": ["getting-started/introduction", "getting-started/setup"]
+      "pages": [
+        "getting-started/introduction",
+        "getting-started/setup"
+      ]
     },
     {
       "group": "Guides",
@@ -47,7 +50,9 @@
     },
     {
       "group": "Settings",
-      "pages": ["settings/all-settings"]
+      "pages": [
+        "settings/all-settings"
+      ]
     },
     {
       "group": "Language Models",
@@ -105,11 +110,16 @@
     },
     {
       "group": "Protocols",
-      "pages": ["protocols/lmc-messages"]
+      "pages": [
+        "protocols/lmc-messages"
+      ]
     },
     {
       "group": "Integrations",
-      "pages": ["integrations/e2b", "integrations/docker"]
+      "pages": [
+        "integrations/e2b",
+        "integrations/docker"
+      ]
     },
     {
       "group": "Safety",
@@ -120,9 +130,17 @@
         "safety/best-practices"
       ]
     },
+    {
+      "group": "Troubleshooting",
+      "pages": [
+        "troubleshooting/faq"
+      ]
+    },
     {
       "group": "Telemetry",
-      "pages": ["telemetry/telemetry"]
+      "pages": [
+        "telemetry/telemetry"
+      ]
     }
   ],
   "feedback": {
@@ -133,4 +151,4 @@
     "youtube": "https://www.youtube.com/@OpenInterpreter",
     "linkedin": "https://www.linkedin.com/company/openinterpreter"
   }
-}
+}
```

docs/troubleshooting/faq.mdx (new file)

Lines changed: 16 additions & 0 deletions

```mdx
---
title: "FAQ"
description: "Frequently Asked Questions"
---

<Accordion title="Does Open Interpreter ensure that my data doesn't leave my computer?">
  As long as you're using a local language model, your messages / personal info
  won't leave your computer. If you use a cloud model, we send your messages +
  custom instructions to the model. We also have a basic telemetry
  [function](https://github.com/OpenInterpreter/open-interpreter/blob/main/interpreter/core/core.py#L167)
  (copied over from ChromaDB's telemetry) that anonymously tracks usage. This
  only lets us know that a message was sent, and includes no PII. OI errors will
  also be reported here, which includes the exception string. Detailed docs on
  all of this are [here](/telemetry/telemetry), and you can opt out by running
  with `--local`, `--offline`, or `--disable_telemetry`.
</Accordion>
```
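A hedged sketch of the same opt-outs from Python rather than the CLI: `interpreter.offline` appears in this commit's notebook example, and `disable_telemetry` is assumed to be the attribute behind the `--disable_telemetry` flag.

```python
# Sketch: opting out of telemetry programmatically (attribute names per the
# assumptions above).
from interpreter import interpreter

interpreter.offline = True            # disables online features, per the FAQ
interpreter.disable_telemetry = True  # assumed counterpart of --disable_telemetry

interpreter.chat()
```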

examples/Dockerfile (new file)

Lines changed: 20 additions & 0 deletions

```dockerfile
# This is a Dockerfile for using an isolated instance of Open Interpreter

# Start with Python 3.11
FROM python:3.11

# Replace <your_openai_api_key> with your own key
ENV OPENAI_API_KEY <your_openai_api_key>

# Install Open Interpreter
RUN pip install open-interpreter

# To run the container:
# docker build -t openinterpreter .
# docker run -d -it --name interpreter-instance openinterpreter interpreter
# docker attach interpreter-instance

# To mount a volume:
# docker run -d -it -v /path/on/your/host:/path/in/the/container --name interpreter-instance openinterpreter interpreter
```
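Since `ENV` bakes the key into the image layers, a common alternative is to leave it out of the Dockerfile and supply it at start-up instead, e.g. `docker run -e OPENAI_API_KEY=<your_openai_api_key> ...`.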

examples/interactive_quickstart.py (new file)

Lines changed: 4 additions & 0 deletions

```python
# This is all you need to get started
from interpreter import interpreter

interpreter.chat()
```
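`interpreter.chat()` can also be called with a message for a single programmatic exchange (the notebook below calls it this way with `stream=True`); a sketch, assuming the non-streaming call returns the conversation history:

```python
# Sketch: one programmatic exchange instead of the interactive prompt.
from interpreter import interpreter

messages = interpreter.chat("What operating system am I on?")  # assumed to return the message history
print(messages)
```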

examples/local_server.ipynb (new file)

Lines changed: 119 additions & 0 deletions

A Jupyter notebook (Python 3 kernel, 3.11.9, nbformat 4) with the following cells.

Markdown cell:

```markdown
# Build a local Open Interpreter server for a custom front end
```

Code cell:

```python
from flask import Flask, request, jsonify
from interpreter import interpreter
import json
```

Code cell:

```python
app = Flask(__name__)

# Configure Open Interpreter

## Local Model
# interpreter.offline = True
# interpreter.llm.model = "ollama/llama3.1"
# interpreter.llm.api_base = "http://localhost:11434"
# interpreter.llm.context_window = 4000
# interpreter.llm.max_tokens = 3000
# interpreter.auto_run = True
# interpreter.verbose = True

## Hosted Model
interpreter.llm.model = "gpt-4o"
interpreter.llm.context_window = 10000
interpreter.llm.max_tokens = 4096
interpreter.auto_run = True

# Create an endpoint
@app.route('/chat', methods=['POST'])
def chat():
    # Expected payload: {"prompt": "User's message or question"}
    data = request.json
    prompt = data.get('prompt')

    if not prompt:
        return jsonify({"error": "No prompt provided"}), 400

    full_response = ""
    try:
        for chunk in interpreter.chat(prompt, stream=True, display=False):
            if isinstance(chunk, dict):
                if chunk.get("type") == "message":
                    full_response += chunk.get("content", "")
            elif isinstance(chunk, str):
                # Attempt to parse the string as JSON
                try:
                    json_chunk = json.loads(chunk)
                    full_response += json_chunk.get("response", "")
                except json.JSONDecodeError:
                    # If it's not valid JSON, just add the string
                    full_response += chunk
    except Exception as e:
        return jsonify({"error": str(e)}), 500

    return jsonify({"response": full_response.strip()})

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5001)

print("Open Interpreter server is running on http://0.0.0.0:5001")
```

Markdown cell:

```markdown
## Make a request to the server
```

Markdown cell:

```markdown
curl -X POST http://localhost:5001/chat \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Hello, how are you?"}'
```
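The same request can be made from Python; a minimal sketch against the `/chat` endpoint defined above, assuming the `requests` package is installed:

```python
# Sketch: the curl example above, as a Python client.
import requests

resp = requests.post(
    "http://localhost:5001/chat",
    json={"prompt": "Hello, how are you?"},
    timeout=120,  # code execution can take a while
)
resp.raise_for_status()
print(resp.json()["response"])
```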

interpreter/core/llm/llm.py

Lines changed: 1 addition & 1 deletion

```diff
@@ -51,7 +51,7 @@ def __init__(self, interpreter):
         self.completions = fixed_litellm_completions
 
         # Settings
-        self.model = "gpt-4-turbo"
+        self.model = "gpt-4o"
         self.temperature = 0
 
         self.supports_vision = None  # Will try to auto-detect
```
New Open Interpreter profile (file name not shown)

Lines changed: 26 additions & 0 deletions

```python
"""
This is an Open Interpreter profile. It configures Open Interpreter to run Anthropic's `Claude 3 Sonnet` using Bedrock.
"""

"""
Required pip package:
pip install boto3>=1.28.57

Required environment variables:
os.environ["AWS_ACCESS_KEY_ID"] = ""  # Access key
os.environ["AWS_SECRET_ACCESS_KEY"] = ""  # Secret access key
os.environ["AWS_REGION_NAME"] = ""  # us-east-1, us-east-2, us-west-1, us-west-2

More information can be found here: https://docs.litellm.ai/docs/providers/bedrock
"""

from interpreter import interpreter

interpreter.llm.model = "bedrock/anthropic.claude-3-sonnet-20240229-v1:0"

interpreter.computer.import_computer_api = True

interpreter.llm.supports_functions = True
interpreter.llm.supports_vision = True
interpreter.llm.context_window = 100000
interpreter.llm.max_tokens = 4096
```
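Since the profile enables vision and the Computer API, a session started after these settings could exercise both; a sketch meant to run in the same process as the profile above, assuming AWS credentials are exported as described in its docstring:

```python
# Sketch: start a session against the Bedrock-backed Claude 3 Sonnet
# configured above (already imported if appended to the profile file).
from interpreter import interpreter

interpreter.chat("Take a screenshot and describe what you see.")
```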

interpreter/terminal_interface/profiles/defaults/default.yaml

Lines changed: 2 additions & 2 deletions

```diff
@@ -4,7 +4,7 @@
 
 # LLM Settings
 llm:
-  model: "gpt-4-turbo"
+  model: "gpt-4o"
   temperature: 0
   # api_key: ...  # Your API key, if the API requires it
   # api_base: ...  # The URL where an OpenAI-compatible server is running to handle LLM API requests
@@ -26,7 +26,7 @@ computer:
 
 # To use a separate model for the `wtf` command:
 # wtf:
-#   model: "gpt-3.5-turbo"
+#   model: "gpt-4o-mini"
 
 # Documentation
 # All options: https://docs.openinterpreter.com/settings
```

interpreter/terminal_interface/profiles/defaults/fast.yaml

Lines changed: 2 additions & 2 deletions

```diff
@@ -3,7 +3,7 @@
 # Remove the "#" before the settings below to use them.
 
 llm:
-  model: "gpt-3.5-turbo"
+  model: "gpt-4o-mini"
   temperature: 0
   # api_key: ...  # Your API key, if the API requires it
   # api_base: ...  # The URL where an OpenAI-compatible server is running to handle LLM API requests
@@ -23,4 +23,4 @@ custom_instructions: "The user has set you to FAST mode. **No talk, just code.**
 
 # All options: https://docs.openinterpreter.com/settings
 
-version: 0.2.1 # Configuration file version (do not modify)
+version: 0.2.5 # Configuration file version (do not modify)
```
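Per the profiles guide above, this profile would be applied with something like `interpreter --profile fast.yaml` (assuming the built-in defaults are referenced by file name).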
