Commit 18728f2

Big server upgrades, documentation edits
2 parents 44192e2 + 72b9723 commit 18728f2

File tree

11 files changed: +531 −173 lines changed

.github/workflows/python-package.yml

Lines changed: 3 additions & 5 deletions

```diff
@@ -2,9 +2,9 @@ name: Build and Test
 
 on:
   push:
-    branches: ["main"]
+    branches: ["main", "development"]
   pull_request:
-    branches: ["main"]
+    branches: ["main", "development"]
 
 jobs:
   build:
@@ -25,11 +25,9 @@ jobs:
         curl -sSL https://install.python-poetry.org | python3 -
     - name: Install dependencies
       run: |
-        # Update poetry to the latest version.
-        poetry self update
         # Ensure dependencies are installed without relying on a lock file.
         poetry update
-        poetry install
+        poetry install -E server
     - name: Test with pytest
       run: |
         poetry run pytest -s -x -k test_
```
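For readers applying the same change locally, the resulting install step looks roughly like the following (a sketch with assumed indentation; it presumes a `server` extra is declared in the project's `pyproject.toml`):

```yaml
- name: Install dependencies
  run: |
    # Ensure dependencies are installed without relying on a lock file.
    poetry update
    # "-E server" additionally installs the optional "server" extra.
    poetry install -E server
```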

docs/server/usage.mdx

Lines changed: 25 additions & 11 deletions
````diff
@@ -25,7 +25,7 @@ async_interpreter.server.run(port=8000) # Default port is 8000, but you can cus
 Connect to the WebSocket server at `ws://localhost:8000/`.
 
 ### Message Format
-The server uses an extended message format that allows for rich, multi-part messages. Here's the basic structure:
+Open Interpreter uses an extended version of OpenAI's message format called [LMC messages](https://docs.openinterpreter.com/protocols/lmc-messages) that allow for rich, multi-part messages. **Messages must be sent between start and end flags.** Here's the basic structure:
 
 ```json
 {"role": "user", "start": true}
````
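The start/end flag convention mentioned in this hunk can be wrapped in a small client-side helper. This is an illustrative sketch only; `lmc_user_message` is not part of the library:

```python
import json


def lmc_user_message(text):
    """Wrap a plain string in the start/end flags the server expects.

    Illustrative helper, not part of Open Interpreter itself.
    """
    return [
        {"role": "user", "start": True},
        {"role": "user", "type": "message", "content": text},
        {"role": "user", "end": True},
    ]


# Each dict would be serialized and sent as one JSON text frame
# over the WebSocket connection.
frames = [json.dumps(m) for m in lmc_user_message("Hello!")]
```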
````diff
@@ -154,7 +154,7 @@ asyncio.run(websocket_interaction())
 ## HTTP API
 
 ### Modifying Settings
-To change server settings, send a POST request to `http://localhost:8000/settings`. The payload should conform to the interpreter object's settings.
+To change server settings, send a POST request to `http://localhost:8000/settings`. The payload should conform to [the interpreter object's settings](https://docs.openinterpreter.com/settings/all-settings).
 
 Example:
 ```python
````
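A minimal client for the settings endpoint described in this hunk could use only the standard library. This is a sketch that assumes a server running on `localhost:8000`; the network call itself is left commented out:

```python
import json
import urllib.request


def update_settings(settings, base_url="http://localhost:8000"):
    """POST a settings payload to a running server (illustrative sketch)."""
    req = urllib.request.Request(
        f"{base_url}/settings",
        data=json.dumps(settings).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())


# Any documented interpreter setting can go in the payload.
payload = {"auto_run": False, "custom_instructions": "Be concise."}
# update_settings(payload)  # requires a running server
```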
````diff
@@ -216,15 +216,21 @@ When using this endpoint:
 - The `model` parameter is required but ignored.
 - The `api_key` is required by the OpenAI library but not used by the server.
 
-## Best Practices
+## Using Docker
 
-1. Always handle the "complete" status message to ensure your client knows when the server has finished processing.
-2. If `auto_run` is set to `False`, remember to send the "go" command to execute code blocks and continue the interaction.
-3. Implement proper error handling in your client to manage potential connection issues, unexpected server responses, or server-sent error messages.
-4. Use the AsyncInterpreter class when working with the server in Python to ensure compatibility with asynchronous operations.
-5. Pay attention to the code execution review messages for important safety and operational information.
-6. Utilize the multi-part user message structure for complex inputs, including file paths and images.
-7. When sending file paths or image paths, ensure they are accessible to the server.
+You can also run the server using Docker. First, build the Docker image from the root of the repository:
+
+```bash
+docker build -t open-interpreter .
+```
+
+Then, run the container:
+
+```bash
+docker run -p 8000:8000 open-interpreter
+```
+
+This will expose the server on port 8000 of your host machine.
 
 ## Advanced Usage: Accessing the FastAPI App Directly
 
````
````diff
@@ -248,4 +254,12 @@ if __name__ == "__main__":
     uvicorn.run(app, host="0.0.0.0", port=8000)
 ```
 
-This guide covers all aspects of using the server, including the WebSocket API, HTTP API, OpenAI-compatible endpoint, code execution review, and various features. It provides clear explanations and examples for users to understand how to interact with the server effectively.
+## Best Practices
+
+1. Always handle the "complete" status message to ensure your client knows when the server has finished processing.
+2. If `auto_run` is set to `False`, remember to send the "go" command to execute code blocks and continue the interaction.
+3. Implement proper error handling in your client to manage potential connection issues, unexpected server responses, or server-sent error messages.
+4. Use the AsyncInterpreter class when working with the server in Python to ensure compatibility with asynchronous operations.
+5. Pay attention to the code execution review messages for important safety and operational information.
+6. Utilize the multi-part user message structure for complex inputs, including file paths and images.
+7. When sending file paths or image paths, ensure they are accessible to the server.
````
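The advice about handling the "complete" status message can be sketched as a small consumer loop. The exact shape of the status event is an assumption based on the documentation in this diff, and `collect_until_complete` is a hypothetical helper:

```python
def collect_until_complete(events):
    """Accumulate assistant message chunks until a "complete" status arrives.

    `events` is any iterable of already-decoded LMC dicts. The status
    message shape here is an assumption, not the library's exact output.
    """
    chunks = []
    for event in events:
        if event.get("type") == "status" and event.get("content") == "complete":
            break
        if event.get("role") == "assistant" and event.get("type") == "message":
            chunks.append(event.get("content", ""))
    return "".join(chunks)


# Simulated server output, standing in for decoded WebSocket frames:
simulated = [
    {"role": "assistant", "type": "message", "content": "Hello, "},
    {"role": "assistant", "type": "message", "content": "world!"},
    {"role": "server", "type": "status", "content": "complete"},
]
```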

docs/usage/python/multiple-instances.mdx

Lines changed: 1 addition & 1 deletion
Original file line numberDiff line numberDiff line change
@@ -24,7 +24,7 @@ def swap_roles(messages):
2424
agents = [agent_1, agent_2]
2525

2626
# Kick off the conversation
27-
messages = [{"role": "user", "message": "Hello!"}]
27+
messages = [{"role": "user", "type": "message", "content": "Hello!"}]
2828

2929
while True:
3030
for agent in agents:
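The `swap_roles` helper named in the hunk header is not shown in this diff; in the two-agent pattern it flips user/assistant roles so each agent reads the other's output as input. A plausible sketch (an assumption, not the docs page's actual implementation):

```python
def swap_roles(messages):
    """Flip user/assistant roles in an LMC message list without
    mutating the originals (illustrative sketch)."""
    swapped = []
    for message in messages:
        role = "user" if message["role"] == "assistant" else "assistant"
        swapped.append({**message, "role": role})
    return swapped


# Using the corrected message format from the diff above:
messages = [{"role": "user", "type": "message", "content": "Hello!"}]
```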
