This issue is for a: (mark with an `x`)
- [ ] bug report -> please search issues before submitting
- [x] feature request
- [ ] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)
Expected/desired behavior
Description:
I'm currently exploring different search scenarios that rely exclusively on vector search. A couple of questions arise:
- Does increasing the query size (i.e., the number of results retrieved per query) improve the accuracy of the answers?
- Would using a faster, though possibly less capable, model such as gpt-3.5-turbo for the query step and the smarter gpt-4 for the answer step be effective? A rough sketch of the split I have in mind follows below.
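For the second question, this is roughly the split I mean, purely as an illustrative sketch: the model names, prompts, and the way the OpenAI client is called here are assumptions on my side, not how this repo currently wires things up.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def generate_search_query(user_question: str) -> str:
    """Use the cheaper/faster model only to rewrite the question into a search query."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # hypothetical choice for the query step
        messages=[
            {"role": "system", "content": "Rewrite the user question as a concise search query."},
            {"role": "user", "content": user_question},
        ],
    )
    return response.choices[0].message.content

def generate_answer(user_question: str, sources: str) -> str:
    """Use the stronger model to compose the final answer from the retrieved sources."""
    response = client.chat.completions.create(
        model="gpt-4",  # hypothetical choice for the answer step
        messages=[
            {"role": "system", "content": "Answer the question using only the provided sources."},
            {"role": "user", "content": f"Sources:\n{sources}\n\nQuestion: {user_question}"},
        ],
    )
    return response.choices[0].message.content
```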
To implement this approach, there's a need for the following (a minimal configuration sketch follows the list):
- Configurability of the GPT model used for query generation separately from the one used for answer generation.
- The ability to set query (result) limits tailored to different scenarios, such as text-based versus vector-based search.
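To make those two points concrete, here is a minimal sketch of what such settings could look like; the names and defaults are made up for illustration and don't correspond to existing options in this repo.

```python
from dataclasses import dataclass, field

@dataclass
class SearchSettings:
    """Hypothetical per-scenario settings for models and retrieval limits."""
    query_model: str = "gpt-3.5-turbo"   # model used to generate the search query
    answer_model: str = "gpt-4"          # model used to compose the final answer
    top_k: dict = field(default_factory=lambda: {
        "text": 3,     # result limit for text (keyword) search
        "vector": 10,  # result limit for pure vector search
    })

# Example: a vector-only scenario with a larger result limit
settings = SearchSettings(top_k={"text": 3, "vector": 50})
print(settings.query_model, settings.answer_model, settings.top_k["vector"])
```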
Any insights or suggestions on these matters would be greatly appreciated.
Thanks! We'll be in touch soon.