
Managed Agents: emit ContextTraceEmitter events so langextract/langfuse traces are non-empty #1427

@MervinPraison

Description


Problem

Managed agents emit no trace events. The langextract and langfuse bridges that landed in #1420 listen to ContextTraceEmitter events (agent_start / tool_call_* / llm_response / agent_end), but both AnthropicManagedAgent._execute_sync (src/praisonai/praisonai/integrations/managed_agents.py:377-406) and LocalManagedAgent._execute_sync (src/praisonai/praisonai/integrations/managed_local.py:548-577) bypass this pipeline entirely. As a consequence, a run with praisonai --observe langextract managed run "…" produces an empty HTML report.

Acceptance criteria

  • AnthropicManagedAgent._execute_sync emits agent_start before the SSE stream starts, tool_call_start/end around each agent.tool_use event, llm_response when aggregated text is available, and agent_end on session.status_idle.
  • LocalManagedAgent._execute_sync emits the same five event types around agent.chat(...) (already partially covered via the inner Agent, but the managed-level agent_start/end must be explicit).
  • praisonai --observe langextract managed run "Write a haiku" (or --observe langfuse) produces a non-empty HTML / Langfuse trace with the managed agent visible.
  • Zero overhead when no emitter is installed (get_context_emitter() returns the disabled singleton; see praisonaiagents/trace/context_events.py).
  • Unit tests with a fake ContextTraceSinkProtocol confirm event sequence and counts.
  • Real agentic test (gated): Agent(backend=ManagedAgent()).start("Say hi") with an installed ContextListSink shows at least 2 events.

Implementation plan

  1. In both _execute_sync methods: call emitter = get_context_emitter() at the top (cheap), then emitter.agent_start(self._cfg.get("name","Agent"), {"input": prompt, "goal": ...}) before sending the user message; add a matching emitter.agent_end(...) in the finally: block.
  2. In _process_events of AnthropicManagedAgent: on agent.tool_use, call emitter.tool_call_start(agent_name, tool_name, input); Anthropic sends no direct "end" event, so emit tool_call_end synthetically right after (or on the next event) with duration_ms taken from a local timer.
  3. On agent.message aggregated text: call emitter.llm_response(agent_name, response_content=joined_text) once per turn.
  4. For LocalManagedAgent, the inner Agent.chat() already emits context events via chat_mixin; still add managed-level agent_start/end to mark the boundary.
  5. No core changes are needed; all emission happens from the wrapper.

Files

Modify:

  • src/praisonai/praisonai/integrations/managed_agents.py: _execute_sync, _process_events.
  • src/praisonai/praisonai/integrations/managed_local.py: _execute_sync.

Tests:

  • src/praisonai-agents/tests/managed/test_managed_trace_events.py (new): uses ContextListSink + trace_context.
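A test in that file could pin down the expected contract with a fake sink. ContextListSink and trace_context are real names from this issue, but the FakeListSink class, its emit() method, and the hand-driven event sequence below are assumptions that only illustrate the assertions the real test should make.

```python
class FakeListSink:
    """Minimal stand-in for ContextTraceSinkProtocol: stores every event."""
    def __init__(self):
        self.events = []
    def emit(self, event):
        self.events.append(event)

def test_event_sequence():
    sink = FakeListSink()
    # The real test would install the sink (e.g. via trace_context) and run
    # the managed agent; here the expected sequence is emitted by hand to
    # document the contract the acceptance criteria describe.
    for name in ("agent_start", "tool_call_start", "tool_call_end",
                 "llm_response", "agent_end"):
        sink.emit({"type": name, "agent": "Mgr"})
    types = [e["type"] for e in sink.events]
    assert types[0] == "agent_start" and types[-1] == "agent_end"
    assert len(sink.events) >= 2  # mirrors the gated agentic criterion
```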

Invariants

  • Protocol-driven core untouched.
  • No new deps; emission is opt-in via get_context_emitter().
  • Backward-compatible.

References

cc @claude, per .windsurf/workflows/e2e-analysis-issue-pr-merge.md.
