🔗 arXiv | 📄 PDF | 🌐 Project Page
Preprint. Under review.
*Equal Contribution, †Corresponding Author

Overview of Self-Braking Tuning: Through a specialized data construction method and training strategy, our self-braking model is able to spontaneously halt overthinking.
Self-Braking Tuning is a novel framework that unlocks the potential of large reasoning models to autonomously identify and terminate redundant reasoning, enabling the models to regulate their own reasoning processes without relying on external control mechanisms.
For fine-tuning, we use the Megatron-LM framework; the related parameters are specified in configs/train.yaml. For evaluation, we employ the vLLM framework as the inference engine; the corresponding parameters are located in configs/evaluation.yaml.
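As a rough illustration of the evaluation setup, the sketch below runs inference with vLLM; the model path, prompt, and sampling parameters are placeholders, and the values actually used are the ones in configs/evaluation.yaml.

```python
# Minimal vLLM inference sketch (illustrative only).
# The model path, prompt, and sampling parameters are placeholders;
# the real evaluation settings live in configs/evaluation.yaml.
from vllm import LLM, SamplingParams

llm = LLM(model="path/to/self-braking-model")            # fine-tuned SBT checkpoint (placeholder)
params = SamplingParams(temperature=0.6, top_p=0.95, max_tokens=32768)

prompts = ["Solve: 12 * 7 + 5 = ?"]                       # example benchmark-style prompt
outputs = llm.generate(prompts, params)
for out in outputs:
    print(out.outputs[0].text)
```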
Here, we provide a complete data construction framework that can be applied to nearly any long-chain tuning dataset to generate the corresponding self-braking data.
In Let LLMs Break Free from Overthinking via Self-Braking Tuning, we performed self-braking tuning on the OpenR1-Math dataset. In fact, this approach is applicable to any long-chain reasoning dataset, as long as the reasoning segments are wrapped with <think> and </think> tags. Note that before training, we recommend keeping the model's max_position_embeddings at 32,768; in addition, to extend the context length from 4k to 32k, we increase the RoPE base frequency to 300,000.
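As a hedged sketch, the snippet below shows one way to apply these two settings to a checkpoint before training; it assumes a Llama/Qwen-style Hugging Face config that exposes max_position_embeddings and rope_theta, and the model path is a placeholder.

```python
# Sketch: extend the context window before training (assumes a Llama/Qwen-style config).
from transformers import AutoConfig

model_path = "path/to/base-model"        # placeholder checkpoint path
config = AutoConfig.from_pretrained(model_path)

config.max_position_embeddings = 32768   # keep the 32k position limit during training
config.rope_theta = 300000               # raise the RoPE base frequency (4k -> 32k context)

config.save_pretrained(model_path)       # overwrite config.json in place
```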
Our method requires access to an LLM, and the recommended way to provide this is by setting:
export APIKEY=<your_key>
Tip: For convenience, the default option uses the OpenAI API. However, for large-scale datasets, we recommend deploying open-source models locally with vLLM or another framework and leveraging efficient methods such as batch processing for better scalability and cost efficiency.
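For reference, a minimal sketch of how the key can be consumed is shown below; the model name and prompt are illustrative placeholders, not the prompts used by the data construction scripts.

```python
# Sketch: call the OpenAI API using the APIKEY environment variable.
# The model name and prompt are placeholders for illustration only.
import os
from openai import OpenAI

client = OpenAI(api_key=os.environ["APIKEY"])

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model
    messages=[{"role": "user", "content": "Is this reasoning step redundant? ..."}],
)
print(response.choices[0].message.content)
```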
pip install -r requirements.txt
python models/model_download.py
python data/datasets/download_benchmarks.py
python data/datasets/download_OpenR1-Math.py
python data/preprocessing/build_sbt-e.py
python data/preprocessing/build_sbt-d.py
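Since the pipeline assumes reasoning segments wrapped in <think> and </think> tags, a quick sanity check over the constructed data can catch formatting issues before training; the file path and the field name below are hypothetical and should be adapted to the actual output of the build scripts.

```python
# Sanity check: verify that every constructed sample wraps its reasoning in <think>...</think>.
# The file path and the "output" field name are hypothetical placeholders.
import json

with open("data/sbt_e.jsonl") as f:            # hypothetical output file of build_sbt-e.py
    samples = [json.loads(line) for line in f]

bad = [i for i, s in enumerate(samples)
       if "<think>" not in s["output"] or "</think>" not in s["output"]]
print(f"{len(bad)} of {len(samples)} samples are missing <think>/</think> tags")
```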
Refer to the configuration settings in the following files:
configs/train.yaml: Training settings
configs/evaluation.yaml: Evaluation settings
If you find our work helpful, please consider citing us.
@misc{zhao2025letllmsbreakfree,
title={Let LLMs Break Free from Overthinking via Self-Braking Tuning},
author={Haoran Zhao and Yuchen Yan and Yongliang Shen and Haolei Xu and Wenqi Zhang and Kaitao Song and Jian Shao and Weiming Lu and Jun Xiao and Yueting Zhuang},
year={2025},
eprint={2505.14604},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2505.14604},
}
If you have any questions, please contact us by email: [email protected]