Support trace_llm_output=False to Disable LLM Output Tracing for Privacy and Multi-Node LLM Agents #5430
Closed
mislam77-git
started this conversation in
Ideas
Replies: 1 comment 2 replies
-
Thanks for raising the issue! I think your premise is false. LangGraph does not trace things unless you opt-in to tracing.
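For context on the opt-in: LangSmith tracing (the mechanism that would persist LLM outputs) is typically enabled via environment variables, so nothing is recorded unless you set them. A minimal sketch, assuming the standard `LANGCHAIN_TRACING_V2` variable (the `tracing_enabled` helper is illustrative, not a LangGraph API):

```python
import os

# Tracing is opt-in: leaving LANGCHAIN_TRACING_V2 unset (the default)
# means no LLM output is persisted to LangSmith.
os.environ["LANGCHAIN_TRACING_V2"] = "false"  # explicit opt-out

def tracing_enabled() -> bool:
    """Return True only if the user has explicitly opted in to tracing."""
    return os.environ.get("LANGCHAIN_TRACING_V2", "").lower() == "true"

print(tracing_enabled())  # False unless the user opted in
```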
-
LangGraph currently collects and stores all AIMessage content from LLM calls as part of its workflow tracing. While this is valuable for debugging and replayability, in some use cases — especially those involving sensitive or regulated data — it's important to prevent LLM outputs from being persisted in the trace, or to redact them.
I’d like to propose a simple flag, e.g., trace_llm_output=False, that can be passed to the Graph or GraphBuilder to suppress storing LLM-generated content in the trace.
Use Case / Motivation:
In privacy-sensitive environments (e.g., finance, healthcare, internal tools), storing natural-language output from LLMs may violate data protection policies or internal compliance controls. Another use case is a workflow with multiple LangGraph nodes where each node makes an LLM call: every node's output is streamed, but I may only want to surface the final node's output, while still using each intermediate output to decide which node executes next. Teams may want to:
Mask, redact, or nullify LLM responses
Avoid logging or storing free-text content entirely
Still keep workflow structure, state transitions, and execution history
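As a workaround available today, the masking described above can be applied at the node boundary, before anything reaches the persisted state. A minimal sketch, assuming a plain dict state; `AIMessage` is stubbed as a dataclass so the snippet runs without LangGraph installed, and `make_private_node` is a hypothetical helper, not a LangGraph API:

```python
from dataclasses import dataclass, field

# Stand-in for langchain_core.messages.AIMessage, so the sketch is self-contained.
@dataclass
class AIMessage:
    content: str
    additional_kwargs: dict = field(default_factory=dict)

REDACTED = "[REDACTED]"

def redact(msg: AIMessage, placeholder: str = REDACTED) -> AIMessage:
    """Mask the free-text content while preserving metadata."""
    return AIMessage(content=placeholder, additional_kwargs=msg.additional_kwargs)

def make_private_node(node_fn, keep_output: bool = False):
    """Wrap a LangGraph-style node: run it, then redact the AIMessage it
    produced unless this is the final node whose output we want to keep."""
    def wrapped(state: dict) -> dict:
        result = node_fn(state)
        if not keep_output and isinstance(result.get("message"), AIMessage):
            result["message"] = redact(result["message"])
        return result
    return wrapped

# Routing logic can still inspect the raw output inside the node, before
# redaction, so "use each LLM output to decide the next node" keeps working.
node = make_private_node(lambda s: {"message": AIMessage("sensitive text")})
print(node({})["message"].content)  # "[REDACTED]"
```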
Proposed Solution:
Add a new config option, e.g.:
```python
graph = Graph(..., trace_llm_output=False)
```
This flag would:
Detect AIMessage objects returned from LLM nodes
Replace the .content field with an empty string ("") or a configurable placeholder
Still preserve node execution, metadata, and other state transitions
Example Patch:
I'm happy to contribute the patch! A minimal version would include logic like:

```python
if not self.trace_llm_output and isinstance(output, AIMessage):
    output = AIMessage(content="", additional_kwargs=output.additional_kwargs)
```
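Expanded into a self-contained sketch of the proposed behavior (the `Tracer` class is a hypothetical illustration of where the flag would act, and `AIMessage` is stubbed so this runs without LangGraph installed):

```python
from dataclasses import dataclass, field

# Stand-in for langchain_core.messages.AIMessage.
@dataclass
class AIMessage:
    content: str
    additional_kwargs: dict = field(default_factory=dict)

class Tracer:
    """Hypothetical trace recorder illustrating the proposed flag."""

    def __init__(self, trace_llm_output: bool = True, placeholder: str = ""):
        self.trace_llm_output = trace_llm_output
        self.placeholder = placeholder
        self.trace: list = []

    def record(self, output):
        # Proposed behavior: scrub the LLM content but keep the trace event
        # itself, so execution history and metadata are preserved.
        if not self.trace_llm_output and isinstance(output, AIMessage):
            output = AIMessage(content=self.placeholder,
                               additional_kwargs=output.additional_kwargs)
        self.trace.append(output)
        return output

tracer = Tracer(trace_llm_output=False)
tracer.record(AIMessage("patient has condition X", {"model": "gpt-4o"}))
print(tracer.trace[0].content)            # "" (content scrubbed)
print(tracer.trace[0].additional_kwargs)  # {'model': 'gpt-4o'} (metadata kept)
```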
This feature would make LangGraph safer and more flexible for enterprise/regulated environments, and enable broader adoption by security-conscious teams.