Commit 1aa133f

Update server.mdx

Fixing some errors in the demo related to how the OpenAI messages are parsed: make `message_id` a string to avoid a validation error, and set the `openai_key` directly in the client.

1 parent 59a09a5 commit 1aa133f

File tree

1 file changed: +119 −5 lines changed

docs/quickstart/server.mdx

Lines changed: 119 additions & 5 deletions
@@ -220,6 +220,77 @@ async def agentic_chat_endpoint(input_data: RunAgentInput, request: Request):
 
         message_id = str(uuid.uuid4())
 
+    return StreamingResponse(
+        event_generator(),
+        media_type="text/event-stream"
+    )
+
+if __name__ == "__main__":
+    import uvicorn
+    uvicorn.run(app, host="0.0.0.0", port=8000)
+```
+
+Awesome! We are already sending `RunStartedEvent` and `RunFinishedEvent` events,
+which gives us a basic AG-UI compliant endpoint. Now let's make it do something
+useful.
+
+## Implementing Basic Chat
+
+Let's enhance our endpoint to call OpenAI's API and stream the responses back as
+AG-UI events:
+
+```python
+from fastapi import FastAPI, Request
+from fastapi.responses import StreamingResponse
+from ag_ui.core import (
+    RunAgentInput,
+    Message,
+    EventType,
+    RunStartedEvent,
+    RunFinishedEvent,
+    TextMessageStartEvent,
+    TextMessageContentEvent,
+    TextMessageEndEvent
+)
+from ag_ui.encoder import EventEncoder
+import uuid
+from openai import OpenAI
+import dotenv
+import os
+
+app = FastAPI(title="AG-UI Endpoint")
+
+@app.post("/awp")
+async def my_endpoint(input_data: RunAgentInput):
+    async def event_generator():
+        # Create an event encoder to properly format SSE events
+        encoder = EventEncoder()
+
+        # Send run started event
+        yield encoder.encode(
+            RunStartedEvent(
+                type=EventType.RUN_STARTED,
+                thread_id=input_data.thread_id,
+                run_id=input_data.run_id
+            )
+        )
+
+        # Initialize OpenAI client
+        client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))
+
+        # Convert AG-UI messages to OpenAI messages format
+        openai_messages = []
+        for msg in input_data.messages:
+            if msg.role in ["user", "system", "assistant"]:
+                openai_messages.append({
+                    "role": msg.role,
+                    "content": msg.content or ""
+                })
+
+        # Generate a message ID for the assistant's response
+        message_id = str(uuid.uuid4())
+
+        # Send text message start event
         yield encoder.encode(
             TextMessageStartEvent(
                 type=EventType.TEXT_MESSAGE_START,
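The message-conversion loop added in this hunk can be exercised on its own. A minimal sketch, using plain dicts in place of AG-UI `Message` objects (the helper name `to_openai_messages` is illustrative, not part of the SDK):

```python
def to_openai_messages(messages):
    """Keep only roles the OpenAI chat API accepts, dropping tool/other messages."""
    return [
        {"role": m["role"], "content": m.get("content") or ""}
        for m in messages
        if m["role"] in ("user", "system", "assistant")
    ]

# Tool messages are filtered out and a missing/None content becomes ""
msgs = [
    {"id": "1", "role": "user", "content": "Hi"},
    {"id": "2", "role": "tool", "content": "ignored"},
    {"id": "3", "role": "assistant", "content": None},
]
print(to_openai_messages(msgs))
# → [{'role': 'user', 'content': 'Hi'}, {'role': 'assistant', 'content': ''}]
```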
@@ -236,6 +307,24 @@ async def agentic_chat_endpoint(input_data: RunAgentInput, request: Request):
             )
         )
 
+        # Process the streaming response and send content events
+        for chunk in stream:
+            if (chunk.choices and
+                len(chunk.choices) > 0 and
+                chunk.choices[0].delta and
+                hasattr(chunk.choices[0].delta, 'content') and
+                chunk.choices[0].delta.content):
+
+                content = chunk.choices[0].delta.content
+                yield encoder.encode(
+                    TextMessageContentEvent(
+                        type=EventType.TEXT_MESSAGE_CONTENT,
+                        message_id=message_id,
+                        delta=content
+                    )
+                )
+
+        # Send text message end event
         yield encoder.encode(
             TextMessageEndEvent(
                 type=EventType.TEXT_MESSAGE_END,
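The chunk guard in this hunk can be tested without calling OpenAI by stubbing the chunk objects. A minimal sketch, assuming the same attribute shape (`choices[0].delta.content`) as the streaming response; `extract_deltas` and `fake_chunk` are illustrative helpers, and the guard is a condensed equivalent of the one in the diff:

```python
from types import SimpleNamespace

def extract_deltas(stream):
    # Skip chunks with no choices, no delta, or an empty/None content field,
    # matching the guard used in the endpoint's streaming loop.
    deltas = []
    for chunk in stream:
        if (chunk.choices and
                chunk.choices[0].delta and
                getattr(chunk.choices[0].delta, "content", None)):
            deltas.append(chunk.choices[0].delta.content)
    return deltas

def fake_chunk(content):
    return SimpleNamespace(
        choices=[SimpleNamespace(delta=SimpleNamespace(content=content))]
    )

stream = [fake_chunk("Hel"), fake_chunk(None), fake_chunk("lo")]
print("".join(extract_deltas(stream)))
# → Hello
```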
@@ -245,11 +334,11 @@ async def agentic_chat_endpoint(input_data: RunAgentInput, request: Request):
 
         # Send run finished event
         yield encoder.encode(
-            RunFinishedEvent(
-                type=EventType.RUN_FINISHED,
-                thread_id=input_data.thread_id,
-                run_id=input_data.run_id
-            ),
+            RunFinishedEvent(
+                type=EventType.RUN_FINISHED,
+                thread_id=input_data.thread_id,
+                run_id=input_data.run_id
+            )
         )
 
     return StreamingResponse(
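Each `encoder.encode(...)` call above yields one Server-Sent Events frame. As a rough illustration of what goes over the wire (a hand-rolled sketch, not the actual AG-UI `EventEncoder` implementation):

```python
import json

def encode_sse(event: dict) -> str:
    # An SSE frame is a "data:" line followed by a blank line;
    # here the payload is the event serialized as JSON.
    return f"data: {json.dumps(event)}\n\n"

frame = encode_sse({"type": "RUN_FINISHED", "threadId": "thread_123", "runId": "run_456"})
print(frame)
```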
@@ -427,6 +516,31 @@ Let's break down what your server is doing:
    `TOOL_CALL_CHUNK`
 4. **Finish** – We emit `RUN_FINISHED` (or `RUN_ERROR` if something goes wrong)
 
+Test your endpoint with:
+```bash
+curl -X POST http://localhost:8000/awp \
+  -H "Content-Type: application/json" \
+  -H "Accept: text/event-stream" \
+  -d '{
+    "threadId": "thread_123",
+    "runId": "run_456",
+    "state": {},
+    "messages": [
+      {
+        "id": "msg_1",
+        "role": "user",
+        "content": "Hello, how are you?"
+      }
+    ],
+    "tools": [],
+    "context": [],
+    "forwardedProps": {}
+  }'
+```
+
+This implementation creates a fully functional AG-UI endpoint that processes
+messages and streams back the responses in real time.
+
 ## Step 5 – Chat with your server
 
 Reload the dojo page and start typing. You'll see GPT-4o streaming its answer in

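The curl call in the diff returns a `text/event-stream` body. A small parser makes the streamed events easy to inspect; this is a minimal sketch (the `parse_sse` helper is illustrative and assumes single-line JSON `data:` payloads):

```python
import json

def parse_sse(body: str):
    # Split the stream into frames on blank lines, then decode each
    # "data:" payload as JSON.
    events = []
    for frame in body.strip().split("\n\n"):
        for line in frame.splitlines():
            if line.startswith("data: "):
                events.append(json.loads(line[len("data: "):]))
    return events

body = (
    'data: {"type": "RUN_STARTED"}\n\n'
    'data: {"type": "TEXT_MESSAGE_CONTENT", "delta": "Hi"}\n\n'
    'data: {"type": "RUN_FINISHED"}\n\n'
)
print([e["type"] for e in parse_sse(body)])
# → ['RUN_STARTED', 'TEXT_MESSAGE_CONTENT', 'RUN_FINISHED']
```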