How to influence the OpenAIFunctionsAgent on creating the initial search query #9623
🤖

Hello,

From your description, it seems like the issue lies in the way the initial search query is being generated rather than in how the retrieved documents are handled. One way to improve this is by modifying the `DEFAULT_QUERY_PROMPT` that the `MultiQueryRetriever` uses to generate query variants:

```python
DEFAULT_QUERY_PROMPT = PromptTemplate(
    input_variables=["question"],
    template="""You are an AI language model assistant. Your task is
    to generate 5 different versions of the given user
    question to retrieve relevant documents from a vector database.
    By generating multiple perspectives on the user question,
    your goal is to help the user overcome some of the limitations
    of distance-based similarity search. Provide these alternative
    questions separated by newlines. Original question: {question}""",
)
```
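Since `MultiQueryRetriever.from_llm` accepts a `prompt` argument, you can pass a custom prompt instead of editing the library source. Here is a minimal sketch, assuming an existing `vectorstore` (e.g. Chroma or FAISS); the prompt wording below is an illustration, not from this thread:

```python
from langchain.chat_models import ChatOpenAI
from langchain.prompts import PromptTemplate
from langchain.retrievers.multi_query import MultiQueryRetriever

llm = ChatOpenAI(temperature=0)

# Hypothetical replacement prompt that tells the model to keep the full
# context of the question instead of collapsing it to a keyword.
custom_prompt = PromptTemplate(
    input_variables=["question"],
    template="""Generate 5 alternative phrasings of the user question below,
one per line, suitable for retrieving documents from a vector database.
Preserve the full meaning of the question; never reduce it to a single
keyword. Original question: {question}""",
)

retriever = MultiQueryRetriever.from_llm(
    retriever=vectorstore.as_retriever(),  # `vectorstore` is assumed to exist
    llm=llm,
    prompt=custom_prompt,  # overrides DEFAULT_QUERY_PROMPT
)
```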
Another way to improve the search query creation is to modify the description of the tool that wraps the retriever, so the agent has clearer guidance on what to pass as the query.

I hope this helps! Let me know if you have any other questions.
This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research. Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant. If you want to continue the conversation, start your reply with @dosu-bot.
I've built an agent with a multi-query retriever with sources, among other things. I use an AgentExecutor in which an OpenAIFunctionsAgent is loaded. In debug mode I can see that the user input is transformed into a search query for one of the tools, but it does this very poorly, so the retrieval doesn't return useful documents.
For example, when the input is "What kind of insurances does company X has?", the generated search query is just "X". Naturally, no results that answer the question are returned, so the model responds that it doesn't know.
Looking at the OpenAIFunctionsAgent code, I found the lines where the function call for the tool is constructed, and I have the feeling that this part is the 'problem'. Does anybody know how to solve this issue?
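One common mitigation (a sketch; the tool name, description, and the `retriever` variable below are assumptions, not from this thread) is to make the tool description explicitly tell the model to pass the complete question, since the OpenAIFunctionsAgent derives the function-call arguments from the tool's name and description:

```python
from langchain.agents import AgentType, initialize_agent
from langchain.chat_models import ChatOpenAI
from langchain.tools import Tool

llm = ChatOpenAI(temperature=0)

# Hypothetical tool wrapping the multi-query retriever; the description
# explicitly instructs the model not to strip the query down to a keyword.
search_tool = Tool(
    name="insurance_docs_search",
    description=(
        "Searches the company's insurance documents. "
        "Pass the user's complete question as the query, e.g. "
        "'What kinds of insurance does company X have?' -- "
        "do not shorten it to a single keyword."
    ),
    func=lambda q: retriever.get_relevant_documents(q),  # `retriever` assumed to exist
)

# initialize_agent returns an AgentExecutor running an OpenAIFunctionsAgent.
agent = initialize_agent(
    tools=[search_tool],
    llm=llm,
    agent=AgentType.OPENAI_FUNCTIONS,
    verbose=True,
)
```

With `verbose=True` you can then check in the debug output whether the generated search query now carries the full question instead of a bare keyword.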