Model hallucinations and guardrails #174
IlamaranMagesh started this conversation in General
Replies: 2 comments 1 reply
-
This is a problem. There is so much work to be done all around.
-
Definitely! There are techniques to enforce factuality in these kinds of summaries (used, among others, by popular AI chat products), but the priority while building was mostly to produce a first functional prototype of a parent memory class and a few implementations. Of course, it would be awesome to have something more scientific if you're willing to work on this :)
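One lightweight shape such a guardrail could take is a validate-and-retry loop around the summarisation call. This is only a sketch: `call_llm` and `is_grounded` are hypothetical stand-ins supplied by the caller, not anything that exists in the codebase.

```python
# Hedged sketch of a validate-and-retry guardrail around the summarisation
# step. `call_llm` and `is_grounded` are hypothetical stand-ins supplied by
# the caller, not part of any existing memory module.
from typing import Callable


def summarise_with_guardrail(
    memories: list[str],
    call_llm: Callable[[str], str],
    is_grounded: Callable[[str, list[str]], bool],
    max_retries: int = 2,
) -> str:
    prompt = "Summarise ONLY the facts below. Do not add anything.\n" + "\n".join(memories)
    for _ in range(max_retries + 1):
        summary = call_llm(prompt)
        if is_grounded(summary, memories):
            return summary
        # nudge the model on the next attempt
        prompt += "\nYour previous summary contained unsupported claims; stick strictly to the facts above."
    # fail safe: fall back to the raw memories rather than trusting
    # a summary that repeatedly failed the grounding check
    return "\n".join(memories)
```

The fallback at the end is a design choice worth discussing: returning the raw memories is lossless but verbose, whereas returning the last (unverified) summary would keep memory compact at the cost of possible hallucinations.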
-
Looking to discuss in detail the model responses and how we handle them.
Right now, I see we are asking the model itself to summarise the short-term memories and add the information to the long-term memory (here), or to summarise the long-term memory in the memory modules.
We do not have any implementation of fact-checking against hallucinations, or any guardrails. LLMs may sometimes (well, usually) forget or fabricate details, which can degrade the simulation's performance or keep it from running as expected at all. This is not an architectural flaw, but relying entirely on the LLM will cause issues.
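Even a cheap lexical grounding check could catch some of this before it reaches long-term memory. A minimal sketch (all names here are hypothetical, nothing below is existing API): score each summary sentence by how many of its content words actually appear in the source memories, and flag poorly supported sentences.

```python
# Minimal sketch of a lexical grounding check for LLM-generated memory
# summaries. All function names are hypothetical, not existing API.
import re


def grounding_score(sentence: str, source_memories: list[str]) -> float:
    """Fraction of a summary sentence's content words (length > 3) that
    appear somewhere in the source memories it was summarised from."""
    words = {w for w in re.findall(r"[a-z0-9']+", sentence.lower()) if len(w) > 3}
    if not words:
        return 1.0  # nothing substantive to check
    source_text = " ".join(source_memories).lower()
    grounded = {w for w in words if w in source_text}
    return len(grounded) / len(words)


def flag_hallucinations(summary: str, source_memories: list[str],
                        threshold: float = 0.5) -> list[str]:
    """Return summary sentences whose content is poorly supported by the
    source memories (candidate hallucinations)."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", summary) if s.strip()]
    return [s for s in sentences if grounding_score(s, source_memories) < threshold]
```

A word-overlap heuristic like this is obviously crude (paraphrases will score low, fluent hallucinations reusing source vocabulary will score high); an NLI entailment model or a second LLM-as-judge pass would be the more scientific version, but this shows where such a check would slot in.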
What are the plans around this, if any? I'm willing to work on it, but since this is a big part/feature, I would like to discuss with the maintainers and owners first.
@jackiekazil @colinfrisch @wang-boyu