Commit 5ce34bf

committed: initial support for function approval and minor ui fixes
1 parent 059b2fd commit 5ce34bf

File tree: 21 files changed (+1122, -937 lines)


python/packages/devui/README.md

Lines changed: 42 additions & 5 deletions
@@ -78,7 +78,7 @@ devui ./agents --tracing framework
 
 ## OpenAI-Compatible API
 
-DevUI provides a clean OpenAI-compatible API. Simply use your **agent/workflow name as the model**!
+For convenience, DevUI exposes an OpenAI Responses-compatible backend API: run the backend and connect to it with the OpenAI client SDK. Use your **agent/workflow name as the model**, and enable streaming as needed.
 
 ```bash
 # Simple - use your entity name as the model
@@ -89,7 +89,6 @@ curl -X POST http://localhost:8080/v1/responses \
     "model": "weather_agent",
     "input": "Hello world"
   }
-
 ```
 
 Or use the OpenAI Python SDK:
@@ -102,7 +101,6 @@ client = OpenAI(
     api_key="not-needed"  # API key not required for local DevUI
 )
 
-# Simple - just use your agent/workflow name as the model!
 response = client.responses.create(
     model="weather_agent",  # Your agent/workflow name
     input="What's the weather in Seattle?"
@@ -137,7 +135,7 @@ response2 = client.responses.create(
 )
 ```
 
-**How it works:** OpenAI automatically prepends previous conversation items to each request and appends new items after completion. You don't need to manually pass message history.
+**How it works:** DevUI automatically retrieves the conversation's message history from the stored thread and passes it to the agent. You don't need to manage message history manually; just provide the same `conversation` ID on follow-up requests.
 
 ## CLI Options
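The follow-up flow described above can be sketched as request payloads for the `/v1/responses` endpoint. The `model` and `conversation` fields follow the README; the `build_turn` helper below is hypothetical, not part of DevUI:

```python
# Sketch: build first-turn and follow-up payloads for DevUI's Responses endpoint.
# "model" (the agent name) and "conversation" come from the README above;
# build_turn itself is a hypothetical helper for illustration.

def build_turn(agent_name, user_text, conversation_id=None):
    """Build a /v1/responses payload; pass the same conversation_id on follow-ups."""
    payload = {"model": agent_name, "input": user_text}
    if conversation_id is not None:
        # DevUI loads the stored history for this conversation before running the agent
        payload["conversation"] = conversation_id
    return payload

# First turn: no conversation yet
first = build_turn("weather_agent", "What's the weather in Seattle?")
# Follow-up: reuse the conversation ID returned by POST /v1/conversations
follow_up = build_turn("weather_agent", "And tomorrow?", conversation_id="conv_123")
```

The point of the sketch is that the client never resends prior messages; only the conversation ID travels with each turn.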
@@ -155,26 +153,65 @@ Options:
 
 ## Key Endpoints
 
+## API Mapping
+
+Given that DevUI offers an OpenAI Responses API, it internally maps messages and events from Agent Framework to OpenAI Responses API events (in `_mapper.py`). For transparency, this mapping is shown below:
+
+| Agent Framework Content         | OpenAI Event/Type                           | Status   |
+| ------------------------------- | ------------------------------------------- | -------- |
+| `TextContent`                   | `response.output_text.delta`                | Standard |
+| `TextReasoningContent`          | `response.reasoning.delta`                  | Standard |
+| `FunctionCallContent` (initial) | `response.output_item.added`                | Standard |
+| `FunctionCallContent` (args)    | `response.function_call_arguments.delta`    | Standard |
+| `FunctionResultContent`         | `response.function_result.complete`         | DevUI    |
+| `ErrorContent`                  | `response.error`                            | Standard |
+| `UsageContent`                  | Final `Response.usage` field (not streamed) | Standard |
+| `WorkflowEvent`                 | `response.workflow_event.complete`          | DevUI    |
+| `DataContent`, `UriContent`     | `response.trace.complete`                   | DevUI    |
+
+- **Standard** = OpenAI Responses API spec
+- **DevUI** = Custom extensions for Agent Framework features (workflows, traces, function results)
+
+### OpenAI Responses API Compliance
+
+DevUI follows the OpenAI Responses API specification for maximum compatibility.
+
+**Standard OpenAI Types Used:**
+
+- `ResponseOutputItemAddedEvent` - Output item notifications (function calls)
+- `Response.usage` - Token usage (in final response, not streamed)
+- All standard text, reasoning, and function call events
+
+**Custom DevUI Extensions:**
+
+- `response.function_result.complete` - Function execution results (DevUI executes functions; OpenAI doesn't)
+- `response.workflow_event.complete` - Agent Framework workflow events
+- `response.trace.complete` - Execution traces for debugging
+
+These custom extensions are clearly namespaced and can be safely ignored by standard OpenAI clients.
+
 ### Entity Management
+
 - `GET /v1/entities` - List discovered agents/workflows
 - `GET /v1/entities/{entity_id}/info` - Get detailed entity information
 - `POST /v1/entities/add` - Add entity from URL (for gallery samples)
 - `DELETE /v1/entities/{entity_id}` - Remove remote entity
 
 ### Execution (OpenAI Responses API)
+
 - `POST /v1/responses` - Execute agent/workflow (streaming or sync)
 
 ### Conversations (OpenAI Standard)
+
 - `POST /v1/conversations` - Create conversation
 - `GET /v1/conversations/{id}` - Get conversation
 - `POST /v1/conversations/{id}` - Update conversation metadata
 - `DELETE /v1/conversations/{id}` - Delete conversation
-- `GET /v1/conversations?agent_id={id}` - List conversations *(DevUI extension)*
+- `GET /v1/conversations?agent_id={id}` - List conversations _(DevUI extension)_
 - `POST /v1/conversations/{id}/items` - Add items to conversation
 - `GET /v1/conversations/{id}/items` - List conversation items
 - `GET /v1/conversations/{id}/items/{item_id}` - Get conversation item
 
 ### Health
+
 - `GET /health` - Health check
 
 ## Implementation
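Because the DevUI extensions are namespaced under their own `type` strings, a standard OpenAI client can simply drop them. A minimal sketch: only the `type` values come from the mapping table in the README; the event dicts themselves are hypothetical stand-ins for real stream events:

```python
# Sketch: split a DevUI event stream into standard vs. custom (DevUI-only) events.
# The type strings are from the README's API Mapping table; the dicts are illustrative.

DEVUI_EVENT_TYPES = {
    "response.function_result.complete",
    "response.workflow_event.complete",
    "response.trace.complete",
}

def partition_events(events):
    """Return (standard_events, devui_events) based on each event's type string."""
    standard = [e for e in events if e.get("type") not in DEVUI_EVENT_TYPES]
    custom = [e for e in events if e.get("type") in DEVUI_EVENT_TYPES]
    return standard, custom

stream = [
    {"type": "response.output_text.delta", "delta": "Hel"},
    {"type": "response.trace.complete", "data": {}},
    {"type": "response.output_text.delta", "delta": "lo"},
]
standard, custom = partition_events(stream)
```

A client that only understands the OpenAI spec keeps `standard` and ignores `custom`; a DevUI-aware client can surface the custom events for debugging.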

python/packages/devui/agent_framework_devui/_executor.py

Lines changed: 34 additions & 1 deletion
@@ -232,7 +232,6 @@ async def _execute_agent(
             logger.debug(f"Executing agent with text input: {user_message[:100]}...")
         else:
             logger.debug(f"Executing agent with multimodal ChatMessage: {type(user_message)}")
-
         # Check if agent supports streaming
         if hasattr(agent, "run_stream") and callable(agent.run_stream):
             # Use Agent Framework's native streaming with optional thread
@@ -433,6 +432,40 @@ def _convert_openai_input_to_chat_message(
                 elif file_url:
                     contents.append(DataContent(uri=file_url, media_type=media_type))
 
+                elif content_type == "function_approval_response":
+                    # Handle function approval response (DevUI extension)
+                    try:
+                        from agent_framework import FunctionApprovalResponseContent, FunctionCallContent
+
+                        request_id = content_item.get("request_id", "")
+                        approved = content_item.get("approved", False)
+                        function_call_data = content_item.get("function_call", {})
+
+                        # Create FunctionCallContent from the function_call data
+                        function_call = FunctionCallContent(
+                            call_id=function_call_data.get("id", ""),
+                            name=function_call_data.get("name", ""),
+                            arguments=function_call_data.get("arguments", {}),
+                        )
+
+                        # Create FunctionApprovalResponseContent with correct signature
+                        approval_response = FunctionApprovalResponseContent(
+                            approved,  # positional argument
+                            id=request_id,  # keyword argument 'id', NOT 'request_id'
+                            function_call=function_call,  # FunctionCallContent object
+                        )
+                        contents.append(approval_response)
+                        logger.info(
+                            f"Added FunctionApprovalResponseContent: id={request_id}, "
+                            f"approved={approved}, call_id={function_call.call_id}"
+                        )
+                    except ImportError:
+                        logger.warning("FunctionApprovalResponseContent not available in agent_framework")
+                    except Exception as e:
+                        logger.error(f"Failed to create FunctionApprovalResponseContent: {e}")
+
                 # Handle other OpenAI input item types as needed
                 # (tool calls, function results, etc.)
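The branch above expects a specific input-item shape. A sketch of that payload and the field extraction, where the field names (`request_id`, `approved`, `function_call`) mirror the executor code, but `parse_approval` is a simplified stand-in for the real `FunctionApprovalResponseContent` construction, and the concrete IDs are invented:

```python
# Sketch: the "function_approval_response" input item the executor parses.
# Field names mirror the diff; parse_approval is an illustrative stand-in,
# and the id/name/argument values are made up.

approval_item = {
    "type": "function_approval_response",
    "request_id": "fa_001",
    "approved": True,
    "function_call": {
        "id": "call_abc123",
        "name": "get_weather",
        "arguments": {"location": "Seattle"},
    },
}

def parse_approval(item):
    """Extract the fields the executor feeds to FunctionApprovalResponseContent."""
    call = item.get("function_call", {})
    return {
        "id": item.get("request_id", ""),          # passed as keyword 'id', not 'request_id'
        "approved": item.get("approved", False),   # positional in the real constructor
        "call_id": call.get("id", ""),
        "name": call.get("name", ""),
        "arguments": call.get("arguments", {}),
    }

parsed = parse_approval(approval_item)
```

Every field uses a `.get` with a default, matching the executor's tolerance for partial payloads.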

python/packages/devui/agent_framework_devui/_mapper.py

Lines changed: 83 additions & 58 deletions
@@ -27,7 +27,6 @@
2727
ResponseTextDeltaEvent,
2828
ResponseTraceEventComplete,
2929
ResponseUsage,
30-
ResponseUsageEventComplete,
3130
ResponseWorkflowEventComplete,
3231
)
3332

@@ -37,9 +36,8 @@
 EventType = Union[
     ResponseStreamEvent,
     ResponseWorkflowEventComplete,
-    ResponseFunctionResultComplete,
+    ResponseOutputItemAddedEvent,
     ResponseTraceEventComplete,
-    ResponseUsageEventComplete,
 ]
@@ -56,6 +54,9 @@ def __init__(self, max_contexts: int = 1000) -> None:
         self._conversion_contexts: OrderedDict[int, dict[str, Any]] = OrderedDict()
         self._max_contexts = max_contexts
 
+        # Track usage per request for final Response.usage (OpenAI standard)
+        self._usage_accumulator: dict[str, dict[str, int]] = {}
+
         # Register content type mappers for all 12 Agent Framework content types
         self.content_mappers = {
             "TextContent": self._map_text_content,
@@ -171,17 +172,31 @@ async def aggregate_to_response(self, events: Sequence[Any], request: AgentFrameworkRequest
             status="completed",
         )
 
-        # Create usage object
-        input_token_count = len(str(request.input)) // 4 if request.input else 0
-        output_token_count = len(full_content) // 4
-
-        usage = ResponseUsage(
-            input_tokens=input_token_count,
-            output_tokens=output_token_count,
-            total_tokens=input_token_count + output_token_count,
-            input_tokens_details=InputTokensDetails(cached_tokens=0),
-            output_tokens_details=OutputTokensDetails(reasoning_tokens=0),
-        )
+        # Get usage from accumulator (OpenAI standard)
+        request_id = str(id(request))
+        usage_data = self._usage_accumulator.get(request_id)
+
+        if usage_data:
+            usage = ResponseUsage(
+                input_tokens=usage_data["input_tokens"],
+                output_tokens=usage_data["output_tokens"],
+                total_tokens=usage_data["total_tokens"],
+                input_tokens_details=InputTokensDetails(cached_tokens=0),
+                output_tokens_details=OutputTokensDetails(reasoning_tokens=0),
+            )
+            # Cleanup accumulator
+            del self._usage_accumulator[request_id]
+        else:
+            # Fallback: estimate if no usage was tracked
+            input_token_count = len(str(request.input)) // 4 if request.input else 0
+            output_token_count = len(full_content) // 4
+            usage = ResponseUsage(
+                input_tokens=input_token_count,
+                output_tokens=output_token_count,
+                total_tokens=input_token_count + output_token_count,
+                input_tokens_details=InputTokensDetails(cached_tokens=0),
+                output_tokens_details=OutputTokensDetails(reasoning_tokens=0),
+            )
 
         return OpenAIResponse(
             id=f"resp_{uuid.uuid4().hex[:12]}",
@@ -229,6 +244,7 @@ def _get_or_create_context(self, request: AgentFrameworkRequest) -> dict[str, Any]:
             "item_id": f"msg_{uuid.uuid4().hex[:8]}",
             "content_index": 0,
             "output_index": 0,
+            "request_id": str(request_key),  # For usage accumulation
             # Track active function calls: {call_id: {name, item_id, args_chunks}}
             "active_function_calls": {},
         }
@@ -272,10 +288,11 @@ async def _convert_agent_update(self, update: Any, context: dict[str, Any]) -> S
 
         if content_type in self.content_mappers:
             mapped_events = await self.content_mappers[content_type](content, context)
-            if isinstance(mapped_events, list):
-                events.extend(mapped_events)
-            else:
-                events.append(mapped_events)
+            if mapped_events is not None:  # Handle None returns (e.g., UsageContent)
+                if isinstance(mapped_events, list):
+                    events.extend(mapped_events)
+                else:
+                    events.append(mapped_events)
         else:
             # Graceful fallback for unknown content types
             events.append(await self._create_unknown_content_event(content, context))
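The `None`/list/scalar handling added above is a small normalization step that can be checked in isolation. A stand-alone sketch; the helper name and the event dicts are illustrative, not from the codebase:

```python
# Sketch: normalize a content mapper's return value into zero, one, or many events,
# mirroring the None check added in _convert_agent_update/_convert_agent_response.

def extend_events(events, mapped):
    """Append mapped events to the list, skipping None entirely."""
    if mapped is None:
        return events          # usage-style mappers emit no streamed event
    if isinstance(mapped, list):
        events.extend(mapped)  # a mapper may emit several events at once
    else:
        events.append(mapped)  # common case: a single event
    return events

events = []
extend_events(events, {"type": "response.output_text.delta"})
extend_events(events, None)  # simulates _map_usage_content returning None
extend_events(events, [{"type": "a"}, {"type": "b"}])
```

Without the `None` check, a mapper that returns nothing would push a literal `None` into the event stream, which is exactly the bug the diff guards against.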
@@ -315,10 +332,11 @@ async def _convert_agent_response(self, response: Any, context: dict[str, Any])
 
         if content_type in self.content_mappers:
             mapped_events = await self.content_mappers[content_type](content, context)
-            if isinstance(mapped_events, list):
-                events.extend(mapped_events)
-            else:
-                events.append(mapped_events)
+            if mapped_events is not None:  # Handle None returns (e.g., UsageContent)
+                if isinstance(mapped_events, list):
+                    events.extend(mapped_events)
+                else:
+                    events.append(mapped_events)
         else:
             # Graceful fallback for unknown content types
             events.append(await self._create_unknown_content_event(content, context))
@@ -331,8 +349,8 @@ async def _convert_agent_response(self, response: Any, context: dict[str, Any])
                 from agent_framework import UsageContent
 
                 usage_content = UsageContent(details=usage_details)
-                usage_event = await self._map_usage_content(usage_content, context)
-                events.append(usage_event)
+                await self._map_usage_content(usage_content, context)
+                # Note: _map_usage_content returns None - it accumulates usage for final Response.usage
 
         except Exception as e:
             logger.warning(f"Error converting agent response: {e}")
@@ -506,7 +524,11 @@ def _get_active_function_call(self, content: Any, context: dict[str, Any]) -> di
     async def _map_function_result_content(
         self, content: Any, context: dict[str, Any]
     ) -> ResponseFunctionResultComplete:
-        """Map FunctionResultContent to structured event.
+        """Map FunctionResultContent to custom DevUI event.
+
+        This is a DevUI extension - OpenAI doesn't stream function execution results
+        because in their model, applications execute functions, not the API.
+        Agent Framework executes functions, so we emit this event for debugging visibility.
 
         IMPORTANT: Always use Agent Framework's call_id from the content.
         Do NOT generate a new call_id - it must match the one from the function call event.
@@ -518,16 +540,22 @@
             logger.warning("FunctionResultContent missing call_id - this will break call/result pairing")
             call_id = f"call_{uuid.uuid4().hex[:8]}"  # Fallback only if truly missing
 
+        # Extract result
+        result = getattr(content, "result", None)
+        exception = getattr(content, "exception", None)
+
+        # Convert result to string
+        output = result if isinstance(result, str) else json.dumps(result) if result is not None else ""
+
+        # Determine status
+        status = "incomplete" if exception else "completed"
+
+        # Return custom DevUI event
         return ResponseFunctionResultComplete(
             type="response.function_result.complete",
-            data={
-                "call_id": call_id,
-                "result": getattr(content, "result", None),
-                "status": "completed" if not getattr(content, "exception", None) else "failed",
-                "exception": str(getattr(content, "exception", None)) if getattr(content, "exception", None) else None,
-                "timestamp": datetime.now().isoformat(),
-            },
             call_id=call_id,
+            output=output,
+            status=status,
             item_id=context["item_id"],
             output_index=context["output_index"],
             sequence_number=self._next_sequence(context),
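The result-to-output conversion introduced above is a pure transformation, so its behavior can be checked in isolation. A sketch mirroring the diff's logic; `result_to_output` is an illustrative name, not a function in the codebase:

```python
import json

def result_to_output(result, exception=None):
    """Mirror the mapper: stringify the function result and derive the event status."""
    # Strings pass through unchanged; other values are JSON-encoded; None becomes ""
    output = result if isinstance(result, str) else json.dumps(result) if result is not None else ""
    # Any recorded exception marks the event "incomplete" instead of "completed"
    status = "incomplete" if exception else "completed"
    return output, status
```

Note the diff also changes the failure status from the old `"failed"` to `"incomplete"`, the value the Responses API uses for unfinished items.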
@@ -543,37 +571,34 @@ async def _map_error_content(self, content: Any, context: dict[str, Any]) -> Res
             sequence_number=self._next_sequence(context),
         )
 
-    async def _map_usage_content(self, content: Any, context: dict[str, Any]) -> ResponseUsageEventComplete:
-        """Map UsageContent to structured usage event."""
-        # Store usage data in context for aggregation
-        if "usage_data" not in context:
-            context["usage_data"] = []
-        context["usage_data"].append(content)
+    async def _map_usage_content(self, content: Any, context: dict[str, Any]) -> None:
+        """Accumulate usage data for final Response.usage field.
 
+        OpenAI does NOT stream usage events. Usage appears only in final Response.
+        This method accumulates usage data per request for later inclusion in Response.usage.
+
+        Returns:
+            None - no event emitted (usage goes in final Response.usage)
+        """
         # Extract usage from UsageContent.details (UsageDetails object)
         details = getattr(content, "details", None)
-        total_tokens = 0
-        prompt_tokens = 0
-        completion_tokens = 0
+        total_tokens = getattr(details, "total_token_count", 0) or 0
+        prompt_tokens = getattr(details, "input_token_count", 0) or 0
+        completion_tokens = getattr(details, "output_token_count", 0) or 0
 
-        if details:
-            total_tokens = getattr(details, "total_token_count", 0) or 0
-            prompt_tokens = getattr(details, "input_token_count", 0) or 0
-            completion_tokens = getattr(details, "output_token_count", 0) or 0
+        # Accumulate for final Response.usage
+        request_id = context.get("request_id", "default")
+        if request_id not in self._usage_accumulator:
+            self._usage_accumulator[request_id] = {"input_tokens": 0, "output_tokens": 0, "total_tokens": 0}
 
-        return ResponseUsageEventComplete(
-            type="response.usage.complete",
-            data={
-                "usage_data": details.to_dict() if details and hasattr(details, "to_dict") else {},
-                "total_tokens": total_tokens,
-                "completion_tokens": completion_tokens,
-                "prompt_tokens": prompt_tokens,
-                "timestamp": datetime.now().isoformat(),
-            },
-            item_id=context["item_id"],
-            output_index=context["output_index"],
-            sequence_number=self._next_sequence(context),
-        )
+        self._usage_accumulator[request_id]["input_tokens"] += prompt_tokens
+        self._usage_accumulator[request_id]["output_tokens"] += completion_tokens
+        self._usage_accumulator[request_id]["total_tokens"] += total_tokens
+
+        logger.debug(f"Accumulated usage for {request_id}: {self._usage_accumulator[request_id]}")
+
+        # NO EVENT RETURNED - usage goes in final Response only
+        return
 
     async def _map_data_content(self, content: Any, context: dict[str, Any]) -> ResponseTraceEventComplete:
         """Map DataContent to structured trace event."""

0 commit comments