Connect to the WebSocket server at `ws://localhost:8000/`.
### Message Format
Open Interpreter uses an extended version of OpenAI's message format called [LMC messages](https://docs.openinterpreter.com/protocols/lmc-messages), which allows for rich, multi-part messages. **Messages must be sent between start and end flags.** Here's the basic structure:
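A hedged sketch of that structure (field names are assumed from the LMC messages documentation; each object is sent as a separate WebSocket message):

```json
{"role": "user", "type": "message", "start": true}
{"role": "user", "type": "message", "content": "Hello!"}
{"role": "user", "type": "message", "end": true}
```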
To control the server's behavior, send command messages. For example, the `go` command executes a generated code block and allows the agent to proceed.
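For reference, a `go` command plausibly looks like this as an LMC message (the exact field layout is an assumption, not specified verbatim in this section):

```json
{"role": "user", "type": "command", "content": "go"}
```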
**Note**: If `auto_run` is set to `False`, the agent will pause after generating code blocks. You must send the "go" command to continue execution.
### Completion Status
The server indicates completion with a status message.
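That status message plausibly takes the following shape (the field values are an assumption based on the LMC format):

```json
{"role": "server", "type": "status", "content": "complete"}
```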
If an error occurs, the server will send an error message.
Your client should be prepared to handle these error messages appropriately.
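A minimal sketch of such handling, assuming errors arrive as LMC messages with `"type": "error"` (a hypothetical shape, not specified verbatim above):

```python
import json

def describe_server_message(raw: str) -> str:
    # Hypothetical error shape: {"role": "server", "type": "error", "content": "..."}
    data = json.loads(raw)
    if data.get("type") == "error":
        return f"server error: {data.get('content')}"
    return "ok"

print(describe_server_message('{"role": "server", "type": "error", "content": "Traceback ..."}'))
```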
## Code Execution Review
After code blocks are executed, you'll receive a review message:
```json
{
  "role": "assistant",
  "type": "review",
  "content": "Review of the executed code, including safety assessment and potential irreversible actions."
}
```
This review provides important information about the safety and potential impact of the executed code. Pay close attention to these messages, especially when dealing with operations that might have significant effects on your system.
The `content` field of the review message may have two possible formats:
1. If the code is deemed completely safe, the content will be exactly `"<SAFE>"`.
2. Otherwise, it will contain an explanation of why the code might be unsafe or have irreversible effects.
Example of a safe code review:
```json
{
  "role": "assistant",
  "type": "review",
  "content": "<SAFE>"
}
```
Example of a potentially unsafe code review:
```json
{
  "role": "assistant",
  "type": "review",
  "content": "This code performs file deletion operations which are irreversible. Please review carefully before proceeding."
}
```
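A client can branch on the `<SAFE>` sentinel with a check like this (a sketch; the helper name is ours, not part of the API):

```python
def is_safe_review(message: dict) -> bool:
    # Only an exact "<SAFE>" content marks the code as fully safe;
    # any other content is an explanation that should be shown to the user.
    return message.get("type") == "review" and message.get("content") == "<SAFE>"

print(is_safe_review({"role": "assistant", "type": "review", "content": "<SAFE>"}))         # True
print(is_safe_review({"role": "assistant", "type": "review", "content": "Deletes files."}))  # False
```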
## Example WebSocket Interaction
Here's an example demonstrating the WebSocket interaction:
```python
import websockets
import asyncio

async def websocket_interaction():
    async with websockets.connect("ws://localhost:8000/") as websocket:
        ...  # exchange LMC messages with the server here
        # Output: {"custom_instructions": "You only write Python code."}
```
## OpenAI-Compatible Endpoint
The server provides an OpenAI-compatible endpoint at `/openai`. This allows you to use the server with any tool or library that's designed to work with the OpenAI API.
### Chat Completions Endpoint
The chat completions endpoint is available at:
```
[server_url]/openai/chat/completions
```
To use this endpoint, set the `api_base` in your OpenAI client or configuration to `[server_url]/openai`. For example:
```python
import openai

openai.api_base = "http://localhost:8000/openai"  # Replace with your server URL if different
openai.api_key = "dummy"  # The key is not used but required by the OpenAI library

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # This model name is ignored, but required
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What's the capital of France?"}
    ]
)

print(response.choices[0].message['content'])
```
Note that only the chat completions endpoint (`/chat/completions`) is implemented. Other OpenAI API endpoints are not available.
When using this endpoint:
- The `model` parameter is required but ignored.
- The `api_key` is required by the OpenAI library but not used by the server.
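The request body itself is ordinary OpenAI chat-completions JSON; a small helper that builds one (a sketch, with placeholder values) makes the two points above concrete:

```python
import json

def build_chat_request(user_content: str) -> str:
    """Build a chat-completions body for [server_url]/openai/chat/completions.
    The model name is required by the request schema but ignored by the server."""
    payload = {
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": user_content}],
    }
    return json.dumps(payload)

print(build_chat_request("What's the capital of France?"))
```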
## Using Docker
You can also run the server using Docker. After building the Docker image from the root of the repository, run the container:
```
docker run -p 8000:8000 open-interpreter
```
This will expose the server on port 8000 of your host machine.
## Acknowledgment Feature
When the `INTERPRETER_REQUIRE_ACKNOWLEDGE` environment variable is set to `"True"`, the server requires clients to acknowledge each message received. This feature ensures reliable message delivery in environments where network stability might be a concern.
### How it works
1. When this feature is enabled, each message sent by the server will include an `id` field.
2. The client must send an acknowledgment message back to the server for each received message.
3. The server will wait for this acknowledgment before sending the next message.
### Client Implementation
To implement this on the client side:
1. Check if each received message contains an `id` field.
2. If an `id` is present, send an acknowledgment message back to the server.
Here's an example of how to handle this in your WebSocket client:
```python
import json
import websockets

async def handle_messages(websocket):
    async for message in websocket:
        data = json.loads(message)

        # Process the message as usual
        print(f"Received: {data}")

        # Check if the message has an ID that needs to be acknowledged
        if "id" in data:
            ack_message = {
                "ack": data["id"]
            }
            await websocket.send(json.dumps(ack_message))
            print(f"Sent acknowledgment for message {data['id']}")

async def main():
    uri = "ws://localhost:8000"
    async with websockets.connect(uri) as websocket:
        await handle_messages(websocket)

# Run the async function
import asyncio
asyncio.run(main())
```
### Server Behavior
- If the server doesn't receive an acknowledgment within a certain timeframe, it will attempt to resend the message.
- The server will make multiple attempts to send a message before considering it failed.
### Enabling the Feature
To enable this feature, set the `INTERPRETER_REQUIRE_ACKNOWLEDGE` environment variable to `"True"` before starting the server:
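For example, in a POSIX shell (starting the server itself is unchanged):

```shell
export INTERPRETER_REQUIRE_ACKNOWLEDGE="True"
```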