Qwen3 <think> label not rendered #9215
Replies: 3 comments
-
What are you using as the endpoint? I have tried Qwen3 with both local Ollama and remote OpenRouter, and the thinking process is rendered correctly.
-
I'm running it on my own server with vLLM. The LibreChat configuration is close to the default one; I didn't change much. The vLLM command is also similar to the one in the documentation (I've configured a few more parameters, but it's pretty close to this):

```
vllm serve Qwen/Qwen3-30B-A3B-Thinking-2507 --max-model-len 262144 --enable-reasoning --reasoning-parser deepseek_r1
```
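One thing worth checking: with `--reasoning-parser deepseek_r1`, vLLM strips the think tags server-side and returns the reasoning in a separate `reasoning_content` field of the OpenAI-compatible response, rather than inline in `content`. A minimal sketch to verify what your server actually returns (the base URL and API key are placeholders for your deployment):

```python
from openai import OpenAI

# Point the client at the vLLM OpenAI-compatible server (placeholder URL/key).
client = OpenAI(base_url="http://localhost:8000/v1", api_key="dummy")

resp = client.chat.completions.create(
    model="Qwen/Qwen3-30B-A3B-Thinking-2507",
    messages=[{"role": "user", "content": "What is 2 + 2?"}],
)

msg = resp.choices[0].message
# With a reasoning parser enabled, vLLM moves the thinking text here...
print("reasoning_content:", getattr(msg, "reasoning_content", None))
# ...and `content` holds only the final answer, with no <think> tags at all.
print("content:", msg.content)
```

If `reasoning_content` is populated, there are no `<think>` tags left in `content` for the client to detect; the thinking text has to be read from that separate field.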
-
The LLM is not including the opening `<think>` tag, only the closing `</think>`. This is right from the HF link you shared: "it is normal for the model's output to contain only `</think>` without an explicit opening `<think>` tag."
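If the renderer expects a matched pair of tags, a simple client-side workaround is to restore the opening tag before parsing. A minimal sketch (the `normalize_think` helper is hypothetical, not part of LibreChat):

```python
def normalize_think(text: str) -> str:
    """Prepend the opening <think> tag when the model emits only </think>.

    The Qwen3-Thinking chat template opens the think block itself, so the
    generation can legitimately start mid-thought and close with </think>.
    """
    if "</think>" in text and "<think>" not in text:
        return "<think>" + text
    return text


print(normalize_think("The user asks...</think>The answer is 4."))
# -> "<think>The user asks...</think>The answer is 4."
```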
-
What happened?
Using Qwen3 reasoning models such as Qwen/Qwen3-30B-A3B-Thinking-2507 (and even models like DeepSeek R1 Distilled Qwen), the `<think>` label is not rendered, so the chat does not format the output correctly.
Reading the docs (https://huggingface.co/Qwen/Qwen3-30B-A3B-Thinking-2507), they say: "Additionally, to enforce model thinking, the default chat template automatically includes `<think>`. Therefore, it is normal for the model's output to contain only `</think>` without an explicit opening `<think>` tag."
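That behavior can be confirmed from the chat template itself. A small sketch, assuming the tokenizer is available locally; if the template matches the model card's description, the rendered generation prompt should already end with an opening `<think>` tag:

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("Qwen/Qwen3-30B-A3B-Thinking-2507")

prompt = tok.apply_chat_template(
    [{"role": "user", "content": "hi"}],
    tokenize=False,
    add_generation_prompt=True,
)

# The template itself opens the think block, so the model never has to
# emit <think> on its own and the raw output contains only </think>.
print(prompt.endswith("<think>\n"))
```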
Version Information
```
docker images | grep librechat
ghcr.io/danny-avila/librechat-dev               latest  8d938efaacda  28 hours ago  1.56GB
ghcr.io/danny-avila/librechat-rag-api-dev-lite  latest  2f812738600a  4 days ago    1.67GB
```
Steps to Reproduce
What browsers are you seeing the problem on?
No response
Relevant log output
Screenshots
No response
Code of Conduct