Support evaluation and guardrails LM providers #25305
Closed
rafaelsandroni
announced in
Ideas
Replies: 0 comments
Feature request
After running into a couple of hallucination problems while implementing AI agents, we decided to use language models to filter and evaluate the content exchanged between the app and the LLM provider. Where could we integrate this into LangChain? Is there a folder in the community integrations that would be appropriate for it?
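The idea above amounts to an intermediate layer that checks prompts before they reach the provider and checks completions before they reach the app. A minimal sketch of that pattern is below; the `check_input`/`check_output` functions and the blocked-term list are hypothetical stand-ins for calls to a real evaluation or guardrails provider, and `guarded_call` is not an existing LangChain API.

```python
# Sketch of a guardrail layer sitting between an app and an LLM provider.
# check_input, check_output, and BLOCKED_TERMS are hypothetical stand-ins;
# a real integration would call an evaluation/guardrails provider instead.

BLOCKED_TERMS = {"ssn", "password"}

def check_input(prompt: str) -> bool:
    """Reject prompts containing blocked terms (stand-in for a real classifier)."""
    return not any(term in prompt.lower() for term in BLOCKED_TERMS)

def check_output(completion: str) -> bool:
    """Flag suspect completions (stand-in for a hallucination/content check)."""
    return "fabricated" not in completion.lower()

def guarded_call(llm, prompt: str) -> str:
    """Wrap any LLM callable with input and output guardrails."""
    if not check_input(prompt):
        return "[blocked: input failed guardrail]"
    completion = llm(prompt)
    if not check_output(completion):
        return "[blocked: output failed guardrail]"
    return completion

# Usage with a stubbed LLM callable:
fake_llm = lambda p: f"echo: {p}"
result = guarded_call(fake_llm, "What is the capital of France?")
```

Because `guarded_call` only assumes a plain callable, the same wrapper could be applied to a LangChain chat model via a small adapter, which is one reason a community-integration folder for guardrail providers would fit naturally.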
Motivation
This is a common problem for large companies using LangChain to deploy generative AI apps.
Proposal (If applicable)
Integrate LangChain with evaluation and guardrails providers (e.g., https://metatext.ai/docs).