Q&A and LLM Caching #10171
Replies: 1 comment
-
🤖 Based on the information available in the LangChain repository, it seems that there isn't a built-in mechanism for handling user feedback in the caching process. However, you can certainly implement this feature in your application using LangChain. You can introduce a feedback mechanism where users can flag unsatisfactory responses. Once a response is flagged, you can remove it from the cache with a method like the following:

```python
def handle_feedback(self, feedback: dict):
    """
    Handle user feedback. If the feedback indicates the response is
    unsatisfactory, delete the response from the cache.
    """
    if not feedback['satisfactory']:
        # Remove the cached entry associated with the flagged response.
        document_id = feedback['document_id']
        self.delete_by_document_id(document_id)
```

In this code, the cached response associated with the flagged `document_id` is deleted so it will not be served again. Please note that this is a simple implementation and you might need to adjust it according to your specific needs and application design. For example, you might want to add additional checks or logging, or handle feedback in a different way.

If you need further assistance or have more specific questions about implementing this feature, feel free to ask.

This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research. Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant. If you want to continue the conversation, start your reply with @dosu-bot.
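To make the idea above concrete against LangChain's LLM cache, here is a minimal sketch of a feedback-aware cache. It implements the `lookup`/`update`/`clear` interface of LangChain's `BaseCache` and adds a hypothetical `evict` helper (not part of any LangChain API) that your application could call when a user gives a thumbs-down. Import paths vary across LangChain versions (in newer releases `BaseCache` lives in `langchain_core.caches` and the cache is set with `set_llm_cache`), so treat this as an assumption-laden starting point rather than the library's own mechanism:

```python
# A minimal sketch, assuming the classic BaseCache interface and the
# `langchain.llm_cache` global. Adjust imports for your LangChain version.
from typing import Any, Dict, Optional, Sequence, Tuple

import langchain
from langchain.schema import BaseCache, Generation


class FeedbackAwareCache(BaseCache):
    """In-memory LLM cache whose entries can be evicted on negative feedback."""

    def __init__(self) -> None:
        # Entries are keyed by (prompt, llm_string), the same key LangChain uses.
        self._cache: Dict[Tuple[str, str], Sequence[Generation]] = {}

    def lookup(self, prompt: str, llm_string: str) -> Optional[Sequence[Generation]]:
        # Called by LangChain before hitting the LLM.
        return self._cache.get((prompt, llm_string))

    def update(self, prompt: str, llm_string: str,
               return_val: Sequence[Generation]) -> None:
        # Called by LangChain after a fresh LLM call.
        self._cache[(prompt, llm_string)] = return_val

    def clear(self, **kwargs: Any) -> None:
        self._cache.clear()

    # Hypothetical helper, not part of BaseCache: wire this to your
    # application's "response was unsatisfactory" handler.
    def evict(self, prompt: str, llm_string: str) -> None:
        self._cache.pop((prompt, llm_string), None)


# Enable the cache globally (older-style LangChain pattern).
langchain.llm_cache = FeedbackAwareCache()
```

In practice you would also need to record the `(prompt, llm_string)` key alongside each response you show the user, so that a negative-feedback event can be mapped back to the exact cache entry to evict.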
-
Assuming LLM caching (using GPTCache or another backend) is turned on, how does one ensure that wrong answers don't end up getting cached? For example, a question might be answered unsatisfactorily; if that response is cached, the same poor answer may be returned again later.
In other words, can user feedback be used to control whether a response is cached or not?