**Which version of LM Studio?**

LM Studio 0.3.23 (Build 3)

**Which operating system?**

Windows 11

**What is the bug?**

Model: openai/gpt-oss-120b

Settings after loading GPT-OSS-120B:

- GPU Offload: 36/36
- Context Length: 8192
- Offload KV Cache to GPU Memory: OFF

The model answers my first prompt correctly, as expected. When I send a second prompt in the same chat, the model outputs "GGGGGGGGG" endlessly. After I clear all messages and send the second prompt again, the model answers correctly as expected.

**Screenshots**

<img width="938" height="122" alt="Image" src="https://github.com/user-attachments/assets/cbf16744-3082-4362-8ab5-19d6d5a18e37" />

**Logs**

Unexpected empty grammar stack after accepting piece: G

**To Reproduce**

Follow the steps described in the "What is the bug?" section above: load the model with the settings listed there, send a first prompt, then send a second prompt in the same chat. A programmatic sketch of the same two-turn flow is included below.
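
For convenience, here is a minimal sketch of exercising the same two-turn conversation programmatically, assuming the LM Studio local server is enabled at its default address (http://localhost:1234/v1) and the `openai` Python package is installed. The bug above was observed in the chat UI, so whether it also reproduces through the OpenAI-compatible API is an assumption I have not verified; the prompt strings are placeholders.

```python
# Hypothetical reproduction sketch: send two prompts in the same conversation
# through LM Studio's OpenAI-compatible local server (assumed default address).
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

MODEL = "openai/gpt-oss-120b"
messages = []

for prompt in ["First query: say hello.", "Second query: summarize your previous answer."]:
    messages.append({"role": "user", "content": prompt})
    response = client.chat.completions.create(model=MODEL, messages=messages)
    reply = response.choices[0].message.content
    messages.append({"role": "assistant", "content": reply})
    print(f"> {prompt}\n{reply}\n")
    # In the chat UI, the second turn degenerates into an endless run of "G";
    # check whether the second reply here shows the same behavior.
```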