Fix search query generation when model generates thinking tags#88

Open
William John Shipman (williamjshipman) wants to merge 2 commits into langchain-ai:main from williamjshipman:main
Conversation

@williamjshipman

I ran into an issue using LMStudio with the qwq-32b, deepseek-r1-distill-qwen-7b and deepseek-r1-llama-8b models. In the generate_query and reflect_on_summary steps, these models return text with thinking tags before the actual JSON. For the DeepSeek models, I also saw that DeepSeek recommends wrapping an example of the desired output in tags in the prompt. This pull request updates the reflect_on_summary prompt with example tags and moves the stripping of think tags earlier in both reflect_on_summary and generate_query, so the JSON part of the response can be parsed.
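For reference, the core of the fix is to strip the thinking block before handing the response to the JSON parser. A minimal sketch of that idea (the function name and regex are illustrative, not the PR's exact code):

```python
import json
import re

def strip_think_tags(text: str) -> str:
    """Remove <think>...</think> blocks that reasoning models
    (qwq, deepseek-r1, etc.) emit before their JSON answer."""
    return re.sub(r"<think>.*?</think>", "", text, flags=re.DOTALL).strip()

# Example of the kind of raw output these models produce:
raw = '<think>I should search for the basics first.</think>\n{"query": "quantum computing basics"}'
parsed = json.loads(strip_think_tags(raw))
```

Stripping first means `json.loads` only ever sees the structured part, regardless of how long the model's reasoning preamble is.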

@pcuci

Came across: https://ollama.com/MFDoom/deepseek-r1-tool-calling:32b

I wonder if this would also solve the structured output expectations(?)

@williamjshipman

I gave this model a quick test, though only the 1.5b parameter version. It still fails to generate JSON.
