Connect to the WebSocket server at `ws://localhost:8000/`.
### Message Format

Messages follow the [LMC format](https://docs.openinterpreter.com/protocols/lmc-messages) with start and end flags, extended to allow rich, multi-part messages. Here's the basic structure:
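
As an illustrative sketch (field names follow the LMC format linked above; a streamed message is framed by `start` and `end` flags):

```json
{"role": "user", "type": "message", "start": true}
{"role": "user", "type": "message", "content": "Please list the files in the current directory."}
{"role": "user", "type": "message", "end": true}
```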
Sending the "go" command executes a generated code block and allows the agent to proceed.

**Note**: If `auto_run` is set to `False`, the agent will pause after generating code blocks. You must send the "go" command to continue execution.
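
The "go" command is itself an LMC-style message. A hedged sketch (verify the exact fields against the LMC messages documentation):

```json
{"role": "user", "type": "command", "start": true}
{"role": "user", "type": "command", "content": "go"}
{"role": "user", "type": "command", "end": true}
```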

### Completion Status

The server indicates completion with the following message:
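
Based on the "complete" status referenced throughout this guide, the message presumably has this shape (unverified sketch):

```json
{"role": "server", "type": "status", "content": "complete"}
```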

If an error occurs, the server will send an error message.
Your client should be prepared to handle these error messages appropriately.
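
A plausible shape for such an error message, assuming it mirrors the status message structure (unverified sketch):

```json
{"role": "server", "type": "error", "content": "Traceback (most recent call last): ..."}
```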

## Code Execution Review

After code blocks are executed, you'll receive a review message:

```json
{
  "role": "assistant",
  "type": "review",
  "content": "Review of the executed code, including safety assessment and potential irreversible actions."
}
```

This review provides important information about the safety and potential impact of the executed code. Pay close attention to these messages, especially when dealing with operations that might have significant effects on your system.

The `content` field of the review message may have two possible formats:

1. If the code is deemed completely safe, the content will be exactly `"<SAFE>"`.
2. Otherwise, it will contain an explanation of why the code might be unsafe or have irreversible effects.

Example of a safe code review:

```json
{
  "role": "assistant",
  "type": "review",
  "content": "<SAFE>"
}
```

Example of a potentially unsafe code review:

```json
{
  "role": "assistant",
  "type": "review",
  "content": "This code performs file deletion operations which are irreversible. Please review carefully before proceeding."
}
```
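
Client-side handling of the two formats can be sketched as follows (`handle_review` is a hypothetical helper, not part of the server API):

```python
def is_safe(review_content: str) -> bool:
    """True only when the review marks the code as completely safe."""
    return review_content == "<SAFE>"

def handle_review(message: dict) -> str:
    """Summarize a review message for logging or user display."""
    if message.get("type") != "review":
        return "not a review message"
    if is_safe(message["content"]):
        return "code reviewed as safe"
    # Anything other than "<SAFE>" is an explanation of potential risk
    return f"needs attention: {message['content']}"

print(handle_review({"role": "assistant", "type": "review", "content": "<SAFE>"}))
# → code reviewed as safe
```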

## Example WebSocket Interaction

Here's an example demonstrating the WebSocket interaction:

```python
import websockets
import asyncio
import json

async def websocket_interaction():
    async with websockets.connect("ws://localhost:8000/") as websocket:
        # Stream a user message in LMC format: start flag, content, end flag
        for part in ({"start": True}, {"content": "Hello!"}, {"end": True}):
            await websocket.send(json.dumps({"role": "user", "type": "message", **part}))
        # Print server responses until the "complete" status arrives
        async for raw in websocket:
            message = json.loads(raw)
            print(message)
            if message.get("type") == "status" and message.get("content") == "complete":
                break

asyncio.run(websocket_interaction())
```
To change server settings, send a POST request to `http://localhost:8000/settings`. The payload should conform to [the interpreter object's settings](https://docs.openinterpreter.com/settings/all-settings).
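
As a minimal sketch using the third-party `requests` library (the payload keys here are illustrative; consult the settings reference linked above for the full list):

```python
import requests

# Illustrative settings payload; any of the interpreter object's settings may be sent
payload = {"auto_run": False, "custom_instructions": "You only write Python code."}

def update_settings(base_url: str = "http://localhost:8000") -> int:
    """POST new settings to the running server and return the HTTP status code."""
    response = requests.post(f"{base_url}/settings", json=payload)
    return response.status_code

if __name__ == "__main__":
    print(update_settings())
```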

## OpenAI-Compatible Endpoint

The server provides an OpenAI-compatible endpoint at `/openai`. This allows you to use the server with any tool or library that's designed to work with the OpenAI API.

### Chat Completions Endpoint

The chat completions endpoint is available at:

```
[server_url]/openai/chat/completions
```

To use this endpoint, set the `api_base` in your OpenAI client or configuration to `[server_url]/openai`. For example:

```python
import openai

openai.api_base = "http://localhost:8000/openai"  # Replace with your server URL if different
openai.api_key = "dummy"  # The key is not used but required by the OpenAI library

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # This model name is ignored, but required
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What's the capital of France?"}
    ]
)

print(response.choices[0].message['content'])
```

Note that only the chat completions endpoint (`/chat/completions`) is implemented. Other OpenAI API endpoints are not available.

When using this endpoint:

- The `model` parameter is required but ignored.
- The `api_key` is required by the OpenAI library but not used by the server.

## Best Practices

1. Always handle the "complete" status message to ensure your client knows when the server has finished processing.
2. If `auto_run` is set to `False`, remember to send the "go" command to execute code blocks and continue the interaction.
3. Implement proper error handling in your client to manage potential connection issues, unexpected server responses, or server-sent error messages.
4. Use the `AsyncInterpreter` class when working with the server in Python to ensure compatibility with asynchronous operations.
5. Pay attention to the code execution review messages for important safety and operational information.
6. Utilize the multi-part user message structure for complex inputs, including file paths and images.
7. When sending file paths or image paths, ensure they are accessible to the server.
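
For points 6 and 7, a hedged sketch of a multi-part user turn (the `"format": "path"` field is assumed from LMC message conventions; verify against the LMC messages documentation):

```json
[
  {"role": "user", "type": "message", "content": "What's in this image?"},
  {"role": "user", "type": "image", "format": "path", "content": "/path/to/image.png"}
]
```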

## Advanced Usage: Accessing the FastAPI App Directly

The FastAPI app is exposed at `async_interpreter.server.app`. This allows you to add custom routes or host the app using Uvicorn directly.

```python
import uvicorn
from interpreter import AsyncInterpreter  # assuming AsyncInterpreter is importable from the interpreter package

async_interpreter = AsyncInterpreter()
app = async_interpreter.server.app

if __name__ == "__main__":
    uvicorn.run(app, host="0.0.0.0", port=8000)
```

This guide has covered the server's WebSocket API, HTTP settings API, OpenAI-compatible endpoint, and code execution review, with examples for each way of interacting with the server.