
Commit ee2e349

jhammarstedt authored and maxkorp committed

Update server.mdx

Fixes some errors in the demo related to how the OpenAI messages are parsed: cast `message_id` to a string to avoid a validation error, and set the OpenAI API key directly in the client.

1 parent 10e842e commit ee2e349

File tree

1 file changed: +119 −5 lines changed

docs/quickstart/server.mdx

Lines changed: 119 additions & 5 deletions
@@ -218,6 +218,77 @@ async def agentic_chat_endpoint(input_data: RunAgentInput, request: Request):
 
         message_id = str(uuid.uuid4())
 
+    return StreamingResponse(
+        event_generator(),
+        media_type="text/event-stream"
+    )
+
+if __name__ == "__main__":
+    import uvicorn
+    uvicorn.run(app, host="0.0.0.0", port=8000)
+```
+
+Awesome! We are already sending `RunStartedEvent` and `RunFinishedEvent` events,
+which gives us a basic AG-UI compliant endpoint. Now let's make it do something
+useful.
+
+## Implementing Basic Chat
+
+Let's enhance our endpoint to call OpenAI's API and stream the responses back as
+AG-UI events:
+
+```python
+from fastapi import FastAPI, Request
+from fastapi.responses import StreamingResponse
+from ag_ui.core import (
+    RunAgentInput,
+    Message,
+    EventType,
+    RunStartedEvent,
+    RunFinishedEvent,
+    TextMessageStartEvent,
+    TextMessageContentEvent,
+    TextMessageEndEvent
+)
+from ag_ui.encoder import EventEncoder
+import uuid
+from openai import OpenAI
+import dotenv
+import os
+
+app = FastAPI(title="AG-UI Endpoint")
+
+@app.post("/awp")
+async def my_endpoint(input_data: RunAgentInput):
+    async def event_generator():
+        # Create an event encoder to properly format SSE events
+        encoder = EventEncoder()
+
+        # Send run started event
+        yield encoder.encode(
+            RunStartedEvent(
+                type=EventType.RUN_STARTED,
+                thread_id=input_data.thread_id,
+                run_id=input_data.run_id
+            )
+        )
+
+        # Initialize OpenAI client
+        client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))
+
+        # Convert AG-UI messages to OpenAI messages format
+        openai_messages = []
+        for msg in input_data.messages:
+            if msg.role in ["user", "system", "assistant"]:
+                openai_messages.append({
+                    "role": msg.role,
+                    "content": msg.content or ""
+                })
+
+        # Generate a message ID for the assistant's response
+        message_id = str(uuid.uuid4())
+
+        # Send text message start event
         yield encoder.encode(
             TextMessageStartEvent(
                 type=EventType.TEXT_MESSAGE_START,
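
The message-conversion loop added in this hunk can be exercised on its own. A minimal sketch, using a hypothetical `to_openai_messages` helper and a stand-in dataclass in place of the real Pydantic `Message` model from `ag_ui.core` (names here are illustrative, not part of the library):

```python
from dataclasses import dataclass
from typing import Optional

# Stand-in for ag_ui.core.Message; the real type is a Pydantic model.
@dataclass
class Message:
    id: str
    role: str
    content: Optional[str] = None

def to_openai_messages(messages):
    """Keep only roles OpenAI's chat API accepts and default missing content to ""."""
    return [
        {"role": m.role, "content": m.content or ""}
        for m in messages
        if m.role in ("user", "system", "assistant")
    ]

msgs = [
    Message(id="msg_1", role="user", content="Hello"),
    Message(id="msg_2", role="tool", content="dropped"),
    Message(id="msg_3", role="assistant", content=None),
]
print(to_openai_messages(msgs))
# → [{'role': 'user', 'content': 'Hello'}, {'role': 'assistant', 'content': ''}]
```

Filtering on role matters because AG-UI threads can carry message kinds (e.g. tool results) that OpenAI's `messages` parameter would reject in this shape.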
@@ -234,6 +305,24 @@ async def agentic_chat_endpoint(input_data: RunAgentInput, request: Request):
             )
         )
 
+        # Process the streaming response and send content events
+        for chunk in stream:
+            if (chunk.choices and
+                len(chunk.choices) > 0 and
+                chunk.choices[0].delta and
+                hasattr(chunk.choices[0].delta, 'content') and
+                chunk.choices[0].delta.content):
+
+                content = chunk.choices[0].delta.content
+                yield encoder.encode(
+                    TextMessageContentEvent(
+                        type=EventType.TEXT_MESSAGE_CONTENT,
+                        message_id=message_id,
+                        delta=content
+                    )
+                )
+
+        # Send text message end event
         yield encoder.encode(
             TextMessageEndEvent(
                 type=EventType.TEXT_MESSAGE_END,
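
The defensive chunk check added above guards against chunks with no choices and deltas with no content. A standalone sketch of the same guard, using `SimpleNamespace` stand-ins for OpenAI's streaming chunk objects (the helper name `extract_delta` is illustrative):

```python
from types import SimpleNamespace

def extract_delta(chunk):
    """Return the text delta from a streaming chunk, or None if absent."""
    if (chunk.choices and
            len(chunk.choices) > 0 and
            chunk.choices[0].delta and
            getattr(chunk.choices[0].delta, "content", None)):
        return chunk.choices[0].delta.content
    return None

# Stand-in chunks mimicking the attributes the endpoint reads.
with_text = SimpleNamespace(choices=[SimpleNamespace(delta=SimpleNamespace(content="Hi"))])
no_choices = SimpleNamespace(choices=[])
no_content = SimpleNamespace(choices=[SimpleNamespace(delta=SimpleNamespace(content=None))])

print(extract_delta(with_text))   # → Hi
print(extract_delta(no_choices))  # → None
```

Skipping empty deltas keeps the stream free of zero-length `TEXT_MESSAGE_CONTENT` events, which some clients treat as malformed.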
@@ -243,11 +332,11 @@ async def agentic_chat_endpoint(input_data: RunAgentInput, request: Request):
243332

244333
# Send run finished event
245334
yield encoder.encode(
246-
RunFinishedEvent(
247-
type=EventType.RUN_FINISHED,
248-
thread_id=input_data.thread_id,
249-
run_id=input_data.run_id
250-
),
335+
RunFinishedEvent(
336+
type=EventType.RUN_FINISHED,
337+
thread_id=input_data.thread_id,
338+
run_id=input_data.run_id
339+
)
251340
)
252341

253342
return StreamingResponse(
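
Each `encoder.encode(...)` call above yields one Server-Sent Events frame. The AG-UI `EventEncoder` handles this framing for you; the sketch below only illustrates the general SSE shape, assuming the common `data: <json>` framing — the encoder's exact serialization and field names may differ:

```python
import json

def encode_sse(event: dict) -> str:
    """Frame a JSON-serializable event as one Server-Sent Events message."""
    # SSE frames are "data: <payload>" followed by a blank line.
    return f"data: {json.dumps(event)}\n\n"

frame = encode_sse({"type": "RUN_FINISHED", "threadId": "thread_123", "runId": "run_456"})
print(frame)
```

The trailing blank line is what delimits events on the wire, which is why the endpoint must use `media_type="text/event-stream"` so clients parse the response incrementally.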
@@ -425,6 +514,31 @@ Let's break down what your server is doing:
    `TOOL_CALL_CHUNK`
 4. **Finish** – We emit `RUN_FINISHED` (or `RUN_ERROR` if something goes wrong)
 
+Test your endpoint with:
+
+```bash
+curl -X POST http://localhost:8000/awp \
+  -H "Content-Type: application/json" \
+  -H "Accept: text/event-stream" \
+  -d '{
+    "threadId": "thread_123",
+    "runId": "run_456",
+    "state": {},
+    "messages": [
+      {
+        "id": "msg_1",
+        "role": "user",
+        "content": "Hello, how are you?"
+      }
+    ],
+    "tools": [],
+    "context": [],
+    "forwardedProps": {}
+  }'
+```
+
+This implementation creates a fully functional AG-UI endpoint that processes
+messages and streams back the responses in real-time.
+
 ## Step 5 – Chat with your server
 
 Reload the dojo page and start typing. You'll see GPT-4o streaming its answer in
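
The curl command in the hunk above prints the raw event stream: a sequence of `data:` lines. A minimal client-side sketch that decodes such output, run here against a canned string rather than a live server (the `parse_sse` helper and the sample payload field names are illustrative):

```python
import json

def parse_sse(text: str):
    """Decode the JSON payload of every `data:` line in raw SSE text."""
    events = []
    for line in text.splitlines():
        if line.startswith("data: "):
            events.append(json.loads(line[len("data: "):]))
    return events

# Canned stream resembling what the /awp endpoint emits.
raw = (
    'data: {"type": "RUN_STARTED", "threadId": "thread_123", "runId": "run_456"}\n\n'
    'data: {"type": "TEXT_MESSAGE_CONTENT", "messageId": "msg", "delta": "Hello"}\n\n'
    'data: {"type": "RUN_FINISHED", "threadId": "thread_123", "runId": "run_456"}\n\n'
)
types = [e["type"] for e in parse_sse(raw)]
print(types)  # → ['RUN_STARTED', 'TEXT_MESSAGE_CONTENT', 'RUN_FINISHED']
```

A real client would read the HTTP response incrementally instead of buffering it, but the framing it has to undo is the same.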
