Replies: 1 comment
Yes, Guardrails can work with LangGraph! Here's how:

**Integration Pattern**

Guardrails operates at the LLM call level, so you can use it within LangGraph nodes:

```python
import openai
from guardrails import Guard
from langgraph.graph import StateGraph

# Define your guard
guard = Guard.from_rail(...)

# Use in a LangGraph node
def validated_llm_node(state):
    # Guardrails validates the LLM output
    result = guard(
        llm_api=openai.chat.completions.create,
        messages=state["messages"],
    )
    return {"validated_output": result.validated_output}

# Build graph
graph = StateGraph(State)
graph.add_node("validated_llm", validated_llm_node)
```

**State-Based Validation**

For more complex flows, you can also validate state transitions:

```python
def validate_state_transition(state):
    # Use Guardrails to validate state before proceeding
    if not guard.validate(state["proposed_action"]):
        return {"action": "rejected", "reason": guard.error}
    return {"action": state["proposed_action"]}
```

**Key Points**

The two frameworks complement each other rather than overlap: LangGraph orchestrates the flow between steps, while Guardrails validates the outputs produced within each step.
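To make the node-wrapping pattern concrete without requiring either library installed, here is a minimal library-free sketch. `StubGuard`, `validated_node`, and the fake LLM call are illustrative stand-ins I made up for this example, not Guardrails or LangGraph APIs; the point is only the shape of the pattern: the node runs the LLM call, the guard checks the output, and only validated output is merged back into state.

```python
class StubGuard:
    """Illustrative stand-in for a guard: accepts short string outputs only."""

    def __init__(self, max_len=100):
        self.max_len = max_len
        self.error = None

    def validate(self, text):
        # Reject non-strings and overly long outputs; record why.
        if not isinstance(text, str) or len(text) > self.max_len:
            self.error = f"output must be a string of <= {self.max_len} chars"
            return False
        self.error = None
        return True


def validated_node(state, guard, llm_call):
    """Wrap an LLM call so its output only enters state when it validates."""
    output = llm_call(state["messages"])
    if guard.validate(output):
        return {"validated_output": output}
    return {"action": "rejected", "reason": guard.error}


# Usage: a fake LLM call and two runs, one passing, one failing.
guard = StubGuard(max_len=20)
ok = validated_node({"messages": ["hi"]}, guard, lambda m: "short reply")
bad = validated_node({"messages": ["hi"]}, guard, lambda m: "x" * 50)
print(ok)             # {'validated_output': 'short reply'}
print(bad["action"])  # rejected
```

With the real libraries, `StubGuard` becomes your `Guard` instance and `llm_call` becomes the provider call you pass to it, as in the snippets above.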
I want to check if this can also be used with LangGraph?