Description
This issue is for a: (mark with an x)
- [ ] bug report -> please search issues before submitting
- [x] feature request
- [ ] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)
This is not really a bug, so I am marking it as a feature request.
We are productionizing this PoC for corporate usage and found a few things that make the bot work better / more smoothly and generate more predictable queries to Cognitive Search, at least in our tests.
Right now in code, the optimized search query in chatreadretrieveread.py is generated by gluing together:
prompt -> few shots -> whole chat history -> user query prefixed with "Generate search query for: "
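A rough sketch of that assembly, assuming a chat-completions-style message list (the prompt text, few-shot content, and function name here are illustrative, not the repo's actual code):

```python
# Hypothetical sketch of the current query-generation message assembly.
QUERY_PROMPT = (
    "Generate a search query for Azure Cognitive Search based on the conversation."
)
FEW_SHOTS = [
    {"role": "user", "content": "What does a Product Manager do?"},
    {"role": "assistant", "content": "product manager responsibilities"},
]

def build_query_messages(chat_history, user_question):
    """Glue together: prompt -> few shots -> whole chat history -> prefixed user query."""
    messages = [{"role": "system", "content": QUERY_PROMPT}]
    messages += FEW_SHOTS
    messages += chat_history  # full Q&A history, bot answers included
    messages.append(
        {"role": "user", "content": "Generate search query for: " + user_question}
    )
    return messages

chat_history = [
    {"role": "user", "content": "Does my plan cover dental?"},
    {"role": "assistant", "content": "Yes, the standard plan covers dental..."},
]
msgs = build_query_messages(chat_history, "What about vision?")
```

Note that the long assistant answers from `chat_history` end up in the middle of the few-shot-style prompt, which is the mismatch described below.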
This works pretty well; however, on longer conversation chains we found that the query can get messy, because after the few-shot examples comes the real conversation history, questions and answers included, which seems out of place there.
We came up with a simple idea: keep the history of user questions and the queries generated by the bot as a separate field in the request and response, which lets us bounce this history back and forth between client and server and keep the backend stateless.
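The stateless round-trip could look something like this (field names and the handler shape are assumptions for illustration, not the proposed API):

```python
def generate_search_query(query_history, user_question):
    # Placeholder: the real implementation would call OpenAI with the
    # few-shot prompt plus the query history.
    return user_question.rstrip("?").lower()

def handle_chat_request(request):
    """Hypothetical handler: the client echoes back the query history it
    received in earlier responses, so the server keeps no state."""
    query_history = list(request.get("query_history", []))
    user_question = request["question"]
    optimized_query = generate_search_query(query_history, user_question)
    query_history.append({"user": user_question, "query": optimized_query})
    # The updated history is returned so the client can send it next turn.
    return {"answer": "...", "query_history": query_history}

# First turn: no history yet; second turn: client echoes the history back.
resp1 = handle_chat_request({"question": "Does my plan cover dental?"})
resp2 = handle_chat_request(
    {"question": "What about vision?", "query_history": resp1["query_history"]}
)
```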
So in the end, after implementation, the messages for generating the optimized search query would look like this:
prompt -> few shots -> query message history (user questions along with the optimized query responses from OpenAI) -> user query prefixed with "Generate search query for: "
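A minimal sketch of the proposed assembly, under the same illustrative names as above (again, an assumption of the shape, not the final code): the full Q&A history is replaced by (question, optimized query) pairs, so every history entry looks just like another few-shot example.

```python
QUERY_PROMPT = (
    "Generate a search query for Azure Cognitive Search based on the conversation."
)
FEW_SHOTS = [
    {"role": "user", "content": "What does a Product Manager do?"},
    {"role": "assistant", "content": "product manager responsibilities"},
]

def build_query_messages_v2(query_history, user_question):
    """Proposed: prompt -> few shots -> (question, optimized query) pairs
    -> prefixed user query. No long bot answers in the middle."""
    messages = [{"role": "system", "content": QUERY_PROMPT}] + FEW_SHOTS
    for turn in query_history:
        messages.append({"role": "user", "content": turn["user"]})
        messages.append({"role": "assistant", "content": turn["query"]})
    messages.append(
        {"role": "user", "content": "Generate search query for: " + user_question}
    )
    return messages

query_history = [{"user": "Does my plan cover dental?", "query": "dental coverage"}]
msgs = build_query_messages_v2(query_history, "What about vision?")
```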
If you think this approach sounds good, I can open a PR with the proposed changes. Thanks!