Which version of LM Studio?
0.3.33 (build 1)
Which operating system?
Windows 11
What is the bug?
It always fails to remove stale tokens from the cache and instead clears the entire cache, forcing the model to decode the whole prompt again from the beginning.
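For context, a minimal sketch of the prefix-reuse behavior being described (purely illustrative, not LM Studio's actual implementation): the expected behavior is to keep the shared prefix of the cached prompt and only decode the new suffix, whereas the observed behavior is equivalent to the shared prefix being treated as empty.

```python
# Hypothetical illustration of prompt-cache prefix reuse.
# This is NOT LM Studio's code; it only shows expected vs. observed behavior.

def common_prefix_len(cached: list[int], new: list[int]) -> int:
    """Number of leading tokens shared by the cached and the new prompt."""
    n = 0
    for a, b in zip(cached, new):
        if a != b:
            break
        n += 1
    return n

def tokens_to_decode(cached: list[int], new: list[int]) -> int:
    """Expected: decode only the suffix after the shared prefix.
    Observed bug: behaves as if the prefix length were 0 (full cache clear)."""
    keep = common_prefix_len(cached, new)
    return len(new) - keep
```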
Screenshots
Logs
Add any relevant logs.
To Reproduce
- Download Qwen3 Next and set the config shown in the screenshot
- Open VS Code and use any open-source agent (e.g. Continue.dev) to send a message to LM Studio (see the API sketch below for an equivalent request)
- Check the logs: the entire cache is cleared
NOTE: This does not happen when using LM Studio's built-in chat.
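For reference, the same request path can be exercised without VS Code by calling LM Studio's OpenAI-compatible local server directly. A minimal sketch, assuming the server runs at the default http://localhost:1234/v1 and the model identifier is "qwen3-next" (both assumptions; adjust to your setup):

```python
import requests

BASE_URL = "http://localhost:1234/v1"  # default LM Studio local server; adjust if changed
MODEL = "qwen3-next"                   # assumed model identifier; use the name shown in LM Studio

def chat(messages):
    """Send a chat completion request, the same way an external agent would."""
    resp = requests.post(
        f"{BASE_URL}/chat/completions",
        json={"model": MODEL, "messages": messages, "temperature": 0.7},
        timeout=300,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

history = [{"role": "user", "content": "Summarize the repository structure."}]
history.append({"role": "assistant", "content": chat(history)})

# The second turn shares the entire first turn as a prefix; with working
# prompt caching only the new suffix should be decoded, but the server logs
# instead show the whole cache being cleared.
history.append({"role": "user", "content": "Now list the build steps."})
print(chat(history))
```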