Utilize Cleanlab's trustworthy language model (TLM) to catch LLM hallucinations #26721
AshishSardana announced in Ideas
Feature request
Cleanlab provides a Trustworthy Language Model (TLM) that attaches a trustworthiness score to every LLM response, indicating which outputs are reliable and which need extra scrutiny.
An integration of Cleanlab's TLM with LangChain would let developers take advantage of these trustworthiness scores in agentic and RAG applications.
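For context, here is a minimal sketch of how TLM is used standalone today, based on Cleanlab's Python client as documented at the time of this proposal (exact method and key names may differ across client versions):

```python
from cleanlab_studio import Studio

# Initialize the Cleanlab Studio client and its TLM interface.
studio = Studio("<YOUR_CLEANLAB_API_KEY>")
tlm = studio.TLM()

# TLM returns both the response text and a trustworthiness score in [0, 1],
# where lower scores flag answers that deserve extra scrutiny.
out = tlm.prompt("What year was LangChain first released?")
print(out["response"])
print(out["trustworthiness_score"])
```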
Motivation
As with most LLM-based applications, the most popular enterprise use cases are built on RAG. The trustworthiness_score is useful for enhancing reliability: routing requests appropriately, tracking unchecked hallucinations, and more. Cleanlab's TLM has gained traction, proving its usefulness.
Proposal (If applicable)
I have raised a working basic PR #26412 that integrates Cleanlab into the langchain-community package, along with an example notebook. A usage sketch follows below.
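As a rough illustration of how the integration could be used inside LangChain, here is a hypothetical sketch: the CleanlabTLM class name, its import path, the quality_preset parameter, and the trustworthiness_score key in generation_info are placeholders, and the actual names are defined in PR #26412.

```python
# Hypothetical usage sketch; class name, import path, and metadata key are
# illustrative placeholders, not the final API from PR #26412.
from langchain_community.llms import CleanlabTLM  # hypothetical import

llm = CleanlabTLM(quality_preset="medium")  # hypothetical parameter

result = llm.generate(["Summarize the attached contract in one sentence."])
generation = result.generations[0][0]

# If the trustworthiness score is surfaced via generation_info, downstream
# chains can branch on it, e.g. routing low-confidence answers to a reviewer.
score = (generation.generation_info or {}).get("trustworthiness_score", 0.0)
if score < 0.7:  # threshold is application-specific
    print("Low-confidence answer; route to a human reviewer or fallback chain.")
else:
    print(generation.text)
```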