Exploration of Query Size and Model Configuration in Vector-Based Searches #818

@zhongshuai-cao

Description

This issue is for a: (mark with an x)

- [ ] bug report -> please search issues before submitting
- [x] feature request
- [ ] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)

Expected/desired behavior

Description:
I'm currently exploring search scenarios that rely exclusively on vector-based search, and two questions have come up:

1. Does increasing the query size (the number of retrieved results) improve the accuracy of answers?
2. Would it be effective to use a faster, though possibly less capable, model such as gpt-3.5-turbo for query generation and the smarter gpt-4 for answers?

Implementing this approach would require:

- Separate configurability of the GPT model used for queries and the one used for answers.
- The ability to set query limits tailored to different scenarios, such as text-based versus vector-based search.

Any insights or suggestions on these matters would be greatly appreciated.
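To make the request concrete, here is a minimal sketch of what such a configuration could look like. All names here (`SearchConfig`, `query_model`, `top_k`, the per-scenario defaults) are hypothetical illustrations, not existing settings in this repo:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SearchConfig:
    query_model: str   # model used to generate/rewrite the search query
    answer_model: str  # model used to compose the final answer
    top_k: int         # query limit: number of results to retrieve

# Per-scenario defaults (illustrative values only): a cheaper model
# handles query rewriting in both cases, while the query limit differs
# between text-based and vector-based search.
CONFIGS = {
    "text": SearchConfig(query_model="gpt-3.5-turbo", answer_model="gpt-4", top_k=3),
    "vector": SearchConfig(query_model="gpt-3.5-turbo", answer_model="gpt-4", top_k=10),
}

def config_for(scenario: str) -> SearchConfig:
    """Look up the configuration for a given search scenario."""
    return CONFIGS[scenario]
```

With something like this in place, the app could pick `config_for("vector")` or `config_for("text")` at request time and route the query step and the answer step to different model deployments.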


Thanks! We'll be in touch soon.
