What is the proper way to implement follow-up queries natively? #1079
-
To implement native follow-up queries, especially when running LLMs locally, use the `PQASession` and `PaperQAEnvironment` abstractions to manage session state and context. These classes track and expose prior answers and contexts, enabling seamless chaining of queries without relying on remote API state.

A recent update (see PR #1032) introduced the ability to group contexts by question, which is particularly useful for follow-up queries. You can enable this by setting `answer.group_contexts_by_question` to `True` in your `AnswerSettings`. This organizes the answer context by originating question, making it easier for the model to reference prior answers and context in subsequent queries. The `test_aquery_groups_contexts_by_question` test demonstrates this pattern in practice, showing how `PQASession` can be used to tag and group contexts for follow-up queries.

This approach works the same way for both local and remote LLMs, since the session and environment abstractions handle context management internally regardless of where the model is hosted. For local LLMs, configure your environment to use a local backend (like Ollama), then use `PQASession` to persist and chain context and answers as needed for your workflow. For more details, see the documentation and code changes in PR #1032.
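A minimal sketch of this setup, assuming paper-qa's `Docs`/`Settings` API. The model names, file path, and questions are illustrative, and exactly how you thread the returned `PQASession` into your next query depends on your workflow:

```python
from paperqa import Docs, Settings

# Point all model roles at a locally served Ollama backend.
# "ollama/llama3" is an illustrative model name, not a requirement.
settings = Settings(
    llm="ollama/llama3",
    summary_llm="ollama/llama3",
)

# Group gathered contexts by the question that produced them
# (the behavior introduced in PR #1032), so a follow-up query
# can cleanly reference material from earlier questions.
settings.answer.group_contexts_by_question = True

docs = Docs()
docs.add("my_paper.pdf")  # illustrative local document

# Each query returns a PQASession carrying the answer and its
# (now question-tagged) contexts, which you can persist and
# reuse when issuing a follow-up.
session = docs.query("What method does the paper propose?", settings=settings)
follow_up = docs.query(
    "What are the limitations of that method?", settings=settings
)
print(follow_up.answer)
```

Because the grouping happens in the session's context bookkeeping rather than in any remote API, the same sketch applies unchanged to hosted models; only the `llm`/`summary_llm` values differ.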
-
PR #911 exposes the prior answer, enabling follow-up queries.
I wonder what the proper way is to implement this when running LLMs locally? (`PQASession` and `PaperQAEnvironment`, perhaps?)