
Commit b6a64a4 — Merge pull request #1340 from OpenInterpreter/main: "Update Development Branch"
2 parents: f4bcbef + 7557524

30 files changed: +2538 −764 lines

Dockerfile

Lines changed: 6 additions & 2 deletions

```diff
@@ -8,16 +8,20 @@ FROM python:3.11.8
 # Set environment variables
 # ENV OPENAI_API_KEY ...

+ENV HOST 0.0.0.0
+# ^ Sets the server host to 0.0.0.0, required for the server to be accessible outside the container
+
 # Copy required files into container
-RUN mkdir -p interpreter
+RUN mkdir -p interpreter scripts
 COPY interpreter/ interpreter/
+COPY scripts/ scripts/
 COPY poetry.lock pyproject.toml README.md ./

 # Expose port 8000
 EXPOSE 8000

 # Install server dependencies
-RUN pip install -e ".[server]"
+RUN pip install ".[server]"

 # Start the server
 ENTRYPOINT ["interpreter", "--server"]
```

benchmarks/simple.py

Lines changed: 0 additions & 18 deletions
This file was deleted.

docs/server/usage.mdx

Lines changed: 171 additions & 0 deletions
---
title: Server Usage
---

## Starting the Server

### From Command Line
To start the server from the command line, use:

```bash
interpreter --server
```

### From Python
To start the server from within a Python script:

```python
from interpreter import AsyncInterpreter

async_interpreter = AsyncInterpreter()
async_interpreter.server.run(port=8000)  # Default port is 8000, but you can customize it
```

## WebSocket API

### Establishing a Connection
Connect to the WebSocket server at `ws://localhost:8000/`.

### Message Format
Messages must follow the LMC format with start and end flags. For detailed specifications, see the [LMC messages documentation](https://docs.openinterpreter.com/protocols/lmc-messages).

Basic message structure:
```json
{"role": "user", "type": "message", "start": true}
{"role": "user", "type": "message", "content": "Your message here"}
{"role": "user", "type": "message", "end": true}
```
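Since every user message uses the same start/content/end framing, a client can generate the three frames with a small helper. This is a sketch on our side of the wire; the function name `lmc_frames` is ours, not part of Open Interpreter's API:

```python
import json

def lmc_frames(content, role="user", msg_type="message"):
    """Wrap content in LMC start/content/end frames, each serialized as JSON."""
    return [
        json.dumps({"role": role, "type": msg_type, "start": True}),
        json.dumps({"role": role, "type": msg_type, "content": content}),
        json.dumps({"role": role, "type": msg_type, "end": True}),
    ]

# Each frame would then be sent over the WebSocket in order:
frames = lmc_frames("What's 2 + 2?")
```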
### Control Commands
To control the server's behavior, send the following commands:

1. Stop execution:
```json
{"role": "user", "type": "command", "content": "stop"}
```
This stops all execution and message processing.

2. Execute code block:
```json
{"role": "user", "type": "command", "content": "go"}
```
This executes a generated code block and allows the agent to proceed.

**Important**: If `auto_run` is set to `False`, the agent will pause after generating code blocks. You must send the "go" command to continue execution.

### Completion Status
The server indicates completion with the following message:
```json
{"role": "server", "type": "status", "content": "complete"}
```
Ensure your client watches for this message to determine when the interaction is finished.
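Because completion is signaled by this status message, a client can fold a stream of incoming messages into a single reply string. A minimal sketch (the helper name `collect_until_complete` is ours, and it assumes the messages have already been JSON-decoded):

```python
COMPLETE = {"role": "server", "type": "status", "content": "complete"}

def collect_until_complete(messages):
    """Concatenate message content until the server's completion status arrives.

    `messages` is any iterable of decoded LMC dicts (e.g. from websocket.recv()).
    """
    chunks = []
    for data in messages:
        if data == COMPLETE:
            break
        if data.get("type") == "message":
            # start/end frames carry no content, so the default "" skips them
            chunks.append(data.get("content", ""))
    return "".join(chunks)
```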
### Error Handling
If an error occurs, the server will send an error message in the following format:
```json
{"role": "server", "type": "error", "content": "Error traceback information"}
```
Your client should be prepared to handle these error messages appropriately.

### Example WebSocket Interaction
Here's a simple example demonstrating the WebSocket interaction:

```python
import websockets
import json
import asyncio

async def websocket_interaction():
    async with websockets.connect("ws://localhost:8000/") as websocket:
        # Send a message
        await websocket.send(json.dumps({"role": "user", "type": "message", "start": True}))
        await websocket.send(json.dumps({"role": "user", "type": "message", "content": "What's 2 + 2?"}))
        await websocket.send(json.dumps({"role": "user", "type": "message", "end": True}))

        # Receive and process messages
        while True:
            message = await websocket.recv()
            data = json.loads(message)

            if data.get("type") == "message":
                print(data.get("content", ""), end="", flush=True)
            elif data.get("type") == "error":
                print(f"Error: {data.get('content')}")
            elif data == {"role": "server", "type": "status", "content": "complete"}:
                break

asyncio.run(websocket_interaction())
```
## HTTP API

### Modifying Settings
To change server settings, send a POST request to `http://localhost:8000/settings`. The payload should conform to [the interpreter object's settings](https://docs.openinterpreter.com/settings/all-settings).

Example:
```python
import requests

settings = {
    "llm": {"model": "gpt-4"},
    "custom_instructions": "You only write Python code.",
    "auto_run": True,
}
response = requests.post("http://localhost:8000/settings", json=settings)
print(response.status_code)
```

### Retrieving Settings
To get current settings, send a GET request to `http://localhost:8000/settings/{property}`.

Example:
```python
response = requests.get("http://localhost:8000/settings/custom_instructions")
print(response.json())
# Output: {"custom_instructions": "You only write Python code."}
```
## Advanced Usage: Accessing the FastAPI App Directly

The FastAPI app is exposed at `async_interpreter.server.app`. This allows you to add custom routes or host the app using Uvicorn directly.

Example of adding a custom route and hosting with Uvicorn:

```python
import uvicorn

from interpreter import AsyncInterpreter

async_interpreter = AsyncInterpreter()
app = async_interpreter.server.app

@app.get("/custom")
async def custom_route():
    return {"message": "This is a custom route"}

if __name__ == "__main__":
    uvicorn.run(app, host="0.0.0.0", port=8000)
```
## Using Docker

You can also run the server using Docker. First, build the Docker image from the root of the repository:

```bash
docker build -t open-interpreter .
```

Then, run the container:

```bash
docker run -p 8000:8000 open-interpreter
```

This will expose the server on port 8000 of your host machine.
## Best Practices
1. Always handle the "complete" status message so your client knows when the server has finished processing.
2. If `auto_run` is set to `False`, remember to send the "go" command to execute code blocks and continue the interaction.
3. Implement proper error handling in your client to manage connection issues, unexpected server responses, and server-sent error messages.
4. Use the `AsyncInterpreter` class when working with the server from Python to ensure compatibility with asynchronous operations.
5. When deploying in production, consider using the Docker container for easier setup and a consistent environment across machines.
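For point 3, a common pattern for surviving dropped WebSocket connections is to retry with capped exponential backoff. A minimal sketch of the delay schedule only (the helper name and defaults are ours, not part of Open Interpreter):

```python
def backoff_delays(base=1.0, factor=2.0, max_delay=30.0, attempts=5):
    """Yield capped exponential-backoff delays (in seconds) for reconnect attempts."""
    delay = base
    for _ in range(attempts):
        yield min(delay, max_delay)
        delay *= factor
```

A client would sleep for each yielded delay between `websockets.connect` attempts, giving up after the final one.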
