Context
Follow-up to #1412 / PR #1413. Post-merge local validation (pytest + a real agentic Agent.start() run + praisonai langextract render <yaml> + --observe langextract) surfaced a wiring gap that makes the langextract trace empty for almost all real-world agent flows. Fix posted in #1420.
Observed failure
$ praisonai langextract render simple.yaml -o render.html --no-open
...
Error: Trace was not rendered to render.html
# Python path: same gap
from praisonai.observability import LangextractSink, LangextractSinkConfig
from praisonaiagents.trace.protocol import TraceEmitter, set_default_emitter
from praisonaiagents import Agent

sink = LangextractSink(config=LangextractSinkConfig(output_path="trace.html"))
set_default_emitter(TraceEmitter(sink=sink, enabled=True))
Agent(instructions="...", name="w", llm="gpt-4o-mini").start("...")  # runs fine
sink.close()
# -> sink._events == []  # zero events captured, HTML not written
Root cause
LangextractSink is wired through get_default_emitter() / ActionEvent, but the core runtime emits rich lifecycle events exclusively via ContextTraceEmitter / ContextTraceSinkProtocol (praisonaiagents/trace/context_events.py). A grep of the whole core shows only two ActionEvent producers:

File                                   Line  Event
praisonaiagents/agent/router_agent.py  253   RouterAgent token usage (output)
praisonaiagents/agents/agents.py       2344  PlanningAgent plan_created
All agent_start, agent_end, tool_call_start, tool_call_end, llm_request, llm_response flow through the context emitter only (chat_mixin.py, tool_execution.py, unified_execution_mixin.py).
Same architectural gap affects LangfuseSink (_setup_langfuse_observability uses the identical pattern in cli/app.py).
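The gap can be reproduced in miniature with two independent event buses. Every name below (EventBus, action_bus, context_bus) is invented for illustration and is not the real praisonaiagents API; the point is only that a sink subscribed to one bus sees nothing published on the other:

```python
from typing import Callable, Dict, List, Tuple

class EventBus:
    """Minimal pub/sub bus standing in for an emitter (illustrative only)."""
    def __init__(self) -> None:
        self.subscribers: List[Callable[[str, Dict], None]] = []

    def subscribe(self, fn: Callable[[str, Dict], None]) -> None:
        self.subscribers.append(fn)

    def emit(self, name: str, payload: Dict) -> None:
        for fn in self.subscribers:
            fn(name, payload)

action_bus = EventBus()   # the bus the sink listens to (get_default_emitter path)
context_bus = EventBus()  # the bus the core runtime actually publishes on

captured: List[Tuple[str, Dict]] = []
action_bus.subscribe(lambda name, payload: captured.append((name, payload)))

# The core publishes its lifecycle events on the context bus only:
context_bus.emit("agent_start", {"agent": "w"})
context_bus.emit("llm_response", {"content": "..."})

print(captured)  # [] -- the sink saw nothing, hence the empty trace
```

This is the whole failure mode: nothing is broken inside either emitter, they just never meet.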
Fix shipped in #1420 (wrapper-only)
- A _ContextToActionBridge adapter implementing ContextTraceSinkProtocol forwards ContextEvent -> ActionEvent.
- LangextractSink.context_sink() exposes the bridge.
- _setup_langextract_observability and langextract render install a ContextTraceEmitter(sink=sink.context_sink(), enabled=True) via set_context_emitter.
- Post-fix validation writes trace.html (3036 B) + trace.jsonl (1356 B).
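The wrapper-only fix is essentially the adapter pattern between the two event models. A minimal sketch, assuming nothing about the real signatures (only the _ContextToActionBridge and ContextTraceSinkProtocol names come from the PR; FakeSink, handle, and record are invented here):

```python
from typing import Dict, List, Tuple

class ContextEvent:
    """Stand-in for the rich lifecycle event the core runtime emits."""
    def __init__(self, event_type: str, payload: Dict) -> None:
        self.event_type = event_type
        self.payload = payload

class FakeSink:
    """Stand-in for LangextractSink: collects flat (name, data) action events."""
    def __init__(self) -> None:
        self._events: List[Tuple[str, Dict]] = []

    def record(self, name: str, data: Dict) -> None:
        self._events.append((name, data))

class ContextToActionBridge:
    """Adapter playing the ContextTraceSinkProtocol role: each incoming
    ContextEvent is forwarded to the wrapped sink as a flat action event."""
    def __init__(self, sink: FakeSink) -> None:
        self.sink = sink

    def handle(self, event: ContextEvent) -> None:
        self.sink.record(event.event_type, event.payload)

sink = FakeSink()
bridge = ContextToActionBridge(sink)
bridge.handle(ContextEvent("agent_start", {"agent": "w"}))
bridge.handle(ContextEvent("tool_call_end", {"tool": "search"}))
print(len(sink._events))  # 2 -- lifecycle events now reach the sink
```

Because the bridge lives entirely in the wrapper, the core SDK's context emitter stays untouched, which is what "wrapper-only" means here.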
Follow-ups (out of scope for #1420)
1. chat_mixin llm_response payload quality. The emitter gets response_content=str(final_response), which serialises the entire ChatCompletion object, so the HTML final_output extraction shows this verbose repr instead of the actual message text. A small upstream change in chat_mixin.py to pass final_response.choices[0].message.content (with a fallback) would dramatically improve the visualization. Touches the core SDK, so left for a follow-up.
2. Langfuse mirror fix. Apply the same ContextTraceSinkProtocol bridge pattern for LangfuseSink in _setup_langfuse_observability. Likely broken today for the same reason.
3. First-class LangExtractTool (praisonaiagents/tools/langextract_tools.py). The original #1412 plan ("Integration: Add langextract as a local visual trace layer (observability HTML viewer + CLI)") listed a first-class LangExtractTool wrapping langextract.extract for agents to call. Not yet implemented.
4. os shadowing in agents_generator.py. agents_generator.py:1112 had import os inside a conditional that shadowed the module-level os, causing UnboundLocalError in _run_praisonai whenever acp/lsp were disabled (i.e. any default YAML run). Fixed in #1420 ("fix(langextract): bridge ContextTraceEmitter so real agent events produce a non-empty trace") as part of unblocking this issue, but should be re-reviewed on its own.

Acceptance criteria for this tracking issue
- llm_response content wiring in core chat_mixin.
- LangfuseSink context-emitter bridge.
- langextract_tools.py tool registration.
- PraisonAIDocs/docs/observability/langextract.mdx updated with the --observe langextract flow and the render/view CLI examples.
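For the llm_response payload follow-up, the intended extraction-with-fallback could look like this small helper (the function name is hypothetical; the stub object mimics an OpenAI-style ChatCompletion shape):

```python
from types import SimpleNamespace

def extract_message_text(final_response) -> str:
    """Return the assistant message text from an OpenAI-style ChatCompletion,
    falling back to str(final_response) when the shape is unexpected."""
    try:
        content = final_response.choices[0].message.content
        if content is not None:
            return content
    except (AttributeError, IndexError):
        pass
    return str(final_response)

# Stub mimicking ChatCompletion -> choices[0].message.content
resp = SimpleNamespace(
    choices=[SimpleNamespace(message=SimpleNamespace(content="final answer"))]
)
print(extract_message_text(resp))        # final answer
print(extract_message_text("raw repr"))  # raw repr (fallback path)
```

Keeping the fallback means any non-standard response object still serialises as today, so the change is strictly additive for the HTML final_output rendering.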
References
praisonaiagents/trace/protocol.py, praisonaiagents/trace/context_events.py

@claude please pick this up once #1420 merges and implement follow-ups 1-4 per the AGENTS.md principles (protocol-driven, lazy imports, no core regressions, tests-first).