Adding post-generation guardrails: RARR #27913
opertifelipe announced in Ideas
Feature request
Add a guardrails class implementing RARR (Researching and Revising What Language Models Say, Using Language Models), integrated with other LangChain components, that can be used in virtual assistants and RAG solutions to detect and reduce hallucinations.
Motivation
I believe it is important to start adding post-generation guardrails to LangChain. High reliability of virtual assistant answers is increasingly demanded, and investing in tools that help detect and reduce hallucinations could be a valuable feature.
RARR (Researching and Revising What Language Models Say, Using Language Models) from https://arxiv.org/pdf/2210.08726 could be a valuable option to include.
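For illustration only, here is a minimal sketch of how a RARR-style post-generation guardrail (generate verification questions, retrieve evidence, check agreement, revise) could wrap any LLM and retriever callable. The `RARRGuardrail` class, method names, and prompts below are hypothetical, not an existing LangChain API:

```python
# Hypothetical sketch of a RARR-style post-generation guardrail.
# Class name, prompts, and structure are illustrative assumptions,
# not an existing LangChain component.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class RARRGuardrail:
    llm: Callable[[str], str]              # any text-in/text-out model call
    retriever: Callable[[str], List[str]]  # returns evidence snippets for a query

    def run(self, answer: str) -> str:
        # 1. Research: generate verification questions about the draft answer.
        questions = self.llm(
            f"List questions that would verify the factual claims in:\n{answer}"
        ).splitlines()

        revised = answer
        for question in filter(None, questions):
            # 2. Retrieve evidence for each verification question.
            evidence = "\n".join(self.retriever(question))

            # 3. Agreement check: does the evidence support the current answer?
            verdict = self.llm(
                f"Evidence:\n{evidence}\n\nAnswer:\n{revised}\n\n"
                "Does the evidence support the answer? Reply AGREE or DISAGREE."
            )

            # 4. Revise only when unsupported, editing minimally to stay close
            #    to the original answer (the attribution-preserving idea in RARR).
            if "DISAGREE" in verdict.upper():
                revised = self.llm(
                    f"Evidence:\n{evidence}\n\nAnswer:\n{revised}\n\n"
                    "Minimally edit the answer so it is consistent with the evidence."
                )
        return revised
```

Such a class could then be composed after any generation step in a RAG chain, taking the draft answer as input and returning the revised, evidence-grounded answer.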
Proposal (If applicable)
No response