
CyclicReflex: Improving Large Reasoning Models via Cyclical Reflection Token Scheduling


[Teaser figure]
Figure 1: Schematic overview of CyclicReflex. The rightmost subfigure compares the final-answer accuracy of CyclicReflex, the original LRM, and decoding variants using TIP and S1.

This is the official code repository for the paper CyclicReflex: Improving Large Reasoning Models via Cyclical Reflection Token Scheduling.

Abstract

Large reasoning models (LRMs), such as OpenAI's o1 and DeepSeek-R1, harness test-time scaling to perform multi-step reasoning for complex problem-solving. This reasoning process, executed before producing final answers, is often guided by special juncture tokens or textual segments that prompt self-evaluative reflection. We refer to these transition markers and reflective cues as "reflection tokens" (e.g., "wait", "but", "alternatively"). In this work, we treat reflection tokens as a "resource" and introduce the problem of resource allocation, aimed at improving the test-time compute performance of LRMs by adaptively regulating the frequency and placement of reflection tokens. Through empirical analysis, we show that both excessive and insufficient use of reflection tokens, referred to as over-reflection and under-reflection, can degrade model performance. To better understand and manage this trade-off, we draw an analogy between reflection token usage and learning rate scheduling in optimization. Building on this insight, we propose cyclical reflection token scheduling (termed CyclicReflex), a decoding strategy that dynamically modulates reflection token logits using a position-dependent triangular waveform. Experiments on MATH500, AIME2024/2025, and AMC2023 demonstrate that CyclicReflex consistently improves performance across model sizes (1.5B-8B), outperforming standard decoding and more recent approaches such as TIP (thought switching penalty) and S1.
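To make the scheduling idea concrete, here is a minimal PyTorch sketch of triangular-waveform logit biasing as the abstract describes it. It is illustrative only: the function names and the period and amplitude parameters are our assumptions, not this repository's actual API; see the code in this repository for the real implementation.

import torch

# Illustrative sketch (not the repository's API): bias the logits of
# reflection tokens (e.g., ids for "wait", "but", "alternatively") with a
# position-dependent triangular waveform, analogous to a cyclical
# learning-rate schedule.

def triangular_bias(step: int, period: int, amplitude: float) -> float:
    # Phase within the current cycle, in [0, 1).
    phase = (step % period) / period
    # Triangle wave in [0, 1]: 0 at the cycle boundaries, 1 at mid-cycle.
    tri = 1.0 - abs(2.0 * phase - 1.0)
    # Rescale to [-amplitude, +amplitude]: suppress reflection tokens at
    # the start of a cycle, encourage them at mid-cycle.
    return amplitude * (2.0 * tri - 1.0)

def cyclicreflex_logits(logits: torch.Tensor,
                        reflection_token_ids: list[int],
                        step: int,
                        period: int = 512,
                        amplitude: float = 2.0) -> torch.Tensor:
    # Add the scheduled bias to the reflection-token logits before
    # sampling the next token; all other logits are left unchanged.
    biased = logits.clone()
    biased[..., reflection_token_ids] += triangular_bias(step, period, amplitude)
    return biased

At each decoding step, cyclicreflex_logits would be applied to the model's next-token logits before softmax/sampling, so the likelihood of emitting a reflection token rises and falls cyclically with position rather than staying fixed, which is how the schedule mediates between over-reflection and under-reflection.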

Getting Started

Contributors

Cite This Work

@article{fan2025cyclicreflex,
  title={CyclicReflex: Improving Large Reasoning Models via Cyclical Reflection Token Scheduling},
  author={Fan, Chongyu and Zhang, Yihua and Jia, Jinghan and Hero, Alfred and Liu, Sijia},
  journal={arXiv preprint arXiv:2506.11077},
  year={2025}
}
