Guarding tool calls #424
Unanswered · wanyixu199 asked this question in Q&A · 0 replies
Hi everyone, I’m collecting real-world security issues involving tool-using agents (LangGraph/LangChain). If you’ve seen an agent make an unsafe tool call or leak data in a surprising way, I’d love to hear the pattern (even high-level/redacted).
In return, I can share a small test harness and guardrail approach that blocks risky tool calls and captures an evidence trail for debugging.
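To make the idea concrete, here is a minimal sketch of the kind of guardrail I mean: a wrapper that inspects each tool call's arguments against a denylist of risky patterns, records every call (allowed or blocked) to an append-only evidence log, and raises instead of executing when a pattern matches. All names here (`guard_tool`, `RISKY_PATTERNS`, `EVIDENCE_LOG`, `BlockedToolCall`) are illustrative, not from LangChain or LangGraph; in a real agent you would wrap each tool before registering it.

```python
import json
import re
import time

# Illustrative denylist; a real deployment would tune these patterns.
RISKY_PATTERNS = [
    re.compile(r"rm\s+-rf", re.IGNORECASE),       # destructive shell commands
    re.compile(r"drop\s+table", re.IGNORECASE),   # destructive SQL
    re.compile(r"(api[_-]?key|password)\s*[=:]", re.IGNORECASE),  # secret leakage
]

EVIDENCE_LOG = []  # append-only trail of every guarded call


class BlockedToolCall(Exception):
    """Raised instead of executing a tool call that matched a risky pattern."""


def guard_tool(tool_name, tool_fn):
    """Wrap a tool so every call is checked and recorded before it runs."""
    def guarded(*args, **kwargs):
        payload = json.dumps({"args": args, "kwargs": kwargs}, default=str)
        hits = [p.pattern for p in RISKY_PATTERNS if p.search(payload)]
        EVIDENCE_LOG.append({
            "ts": time.time(),
            "tool": tool_name,
            "payload": payload,
            "blocked": bool(hits),
            "matched": hits,
        })
        if hits:
            raise BlockedToolCall(f"{tool_name} blocked: matched {hits}")
        return tool_fn(*args, **kwargs)
    return guarded


# Usage: wrap a (stubbed) shell tool before handing it to the agent.
def run_shell(cmd):
    return f"ran: {cmd}"


safe_shell = guard_tool("shell", run_shell)
print(safe_shell("ls -la"))        # allowed, executes normally
try:
    safe_shell("rm -rf /")         # blocked, never reaches run_shell
except BlockedToolCall as exc:
    print(exc)
```

The evidence log is the part that has been most useful for debugging: because every call is recorded before the allow/block decision, you can reconstruct exactly what the agent attempted even when a guard misfires.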