
Commit 25f6d86

mcavdar and lnhsingh authored
Refactor safety response handling in guardrails.mdx (#1452)
## Overview

When the judge LLM returns "UNSAFE", the guardrail example sends the agent into an infinite loop. This change updates both examples in guardrails.mdx to overwrite the flagged message's content in place instead of returning a replacement message with `"jump_to": "end"`.

## Type of change

**Type:** Update existing documentation / bug

## Related issues/PRs

- GitHub issue: closes #1435
- Feature PR:
- Linear issue:
- Slack thread:

## Checklist

- [x] I have read the [contributing guidelines](README.md)
- [x] I have tested my changes locally using `docs dev`
- [x] All code examples have been tested and work correctly
- [x] I have used **root relative** paths for internal links
- [x] I have updated navigation in `src/docs.json` if needed

(Internal team members only / optional): Create a preview deployment as necessary using the [Create Preview Branch workflow](https://github.com/langchain-ai/docs/actions/workflows/create-preview-branch.yml)

## Additional notes

---------

Co-authored-by: Lauren Hirata Singh <[email protected]>
1 parent b3faca4 commit 25f6d86

File tree: 1 file changed (+2, -14 lines)


src/oss/langchain/guardrails.mdx

Lines changed: 2 additions & 14 deletions
@@ -484,13 +484,7 @@ class SafetyGuardrailMiddleware(AgentMiddleware):
         result = self.safety_model.invoke([{"role": "user", "content": safety_prompt}])

         if "UNSAFE" in result.content:
-            return {
-                "messages": [{
-                    "role": "assistant",
-                    "content": "I cannot provide that response. Please rephrase your request."
-                }],
-                "jump_to": "end"
-            }
+            last_message.content = "I cannot provide that response. Please rephrase your request."

         return None

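For context, a minimal sketch of how the patched middleware might read in full. Only the `if "UNSAFE"` handling mirrors the diff above; the `after_model` hook name, import paths, constructor, and prompt wording are illustrative assumptions rather than verbatim excerpts from guardrails.mdx.

```python
# Illustrative sketch of the patched middleware; hook name, imports, and prompt
# wording are assumptions, not copied from guardrails.mdx.
from typing import Any

from langchain.agents.middleware import AgentMiddleware, AgentState
from langchain_core.language_models import BaseChatModel
from langgraph.runtime import Runtime


class SafetyGuardrailMiddleware(AgentMiddleware):
    """Judge the model's last response and redact it when flagged UNSAFE."""

    def __init__(self, safety_model: BaseChatModel) -> None:
        super().__init__()
        self.safety_model = safety_model

    def after_model(self, state: AgentState, runtime: Runtime) -> dict[str, Any] | None:
        last_message = state["messages"][-1]
        safety_prompt = (
            "Reply with SAFE or UNSAFE for the following assistant response:\n\n"
            f"{last_message.content}"
        )
        result = self.safety_model.invoke([{"role": "user", "content": safety_prompt}])

        if "UNSAFE" in result.content:
            # Overwrite the unsafe content in place; returning a replacement
            # message with jump_to="end" is what previously caused the loop.
            last_message.content = (
                "I cannot provide that response. Please rephrase your request."
            )

        return None
```

The in-place update avoids emitting a new assistant message and the `"jump_to": "end"` transition that triggered the loop described in #1435.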
@@ -537,13 +531,7 @@ def safety_guardrail(state: AgentState, runtime: Runtime) -> dict[str, Any] | None:
     result = safety_model.invoke([{"role": "user", "content": safety_prompt}])

     if "UNSAFE" in result.content:
-        return {
-            "messages": [{
-                "role": "assistant",
-                "content": "I cannot provide that response. Please rephrase your request."
-            }],
-            "jump_to": "end"
-        }
+        last_message.content = "I cannot provide that response. Please rephrase your request."

     return None

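And a sketch of the same fix in the function-style variant from the second hunk, wired into an agent. The `@after_model` decorator, `create_agent` call, model names, and prompt text are assumptions for illustration; only the UNSAFE handling follows the patch.

```python
# Illustrative sketch of the function-style guardrail; decorator, imports, and
# model names are assumptions, not copied from guardrails.mdx.
from typing import Any

from langchain.agents import create_agent
from langchain.agents.middleware import AgentState, after_model
from langchain.chat_models import init_chat_model
from langgraph.runtime import Runtime

safety_model = init_chat_model("openai:gpt-4o-mini")  # judge model; name assumed


@after_model
def safety_guardrail(state: AgentState, runtime: Runtime) -> dict[str, Any] | None:
    last_message = state["messages"][-1]
    safety_prompt = (
        "Reply with SAFE or UNSAFE for the following assistant response:\n\n"
        f"{last_message.content}"
    )
    result = safety_model.invoke([{"role": "user", "content": safety_prompt}])

    if "UNSAFE" in result.content:
        # Same fix as the class-based middleware: rewrite the message in place
        # instead of returning {"messages": [...], "jump_to": "end"}.
        last_message.content = "I cannot provide that response. Please rephrase your request."

    return None


agent = create_agent(
    model="openai:gpt-4o-mini",  # main model; name assumed
    tools=[],
    middleware=[safety_guardrail],
)
```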