Question Regarding On-Policy Distillation in Qwen3 #1798
Unanswered
caiyuchen-ustc asked this question in Q&A
Replies: 0 comments
Dear Qwen Team,
In the Qwen3 technical report, Table 21 mentions the use of on-policy distillation. During my reproduction experiments, I found that when no sentence-level importance sampling or clipping is applied, the student model’s errors are progressively amplified, eventually leading to training collapse.
Could you clarify whether your on-policy distillation implementation includes any stabilization mechanisms, such as sentence-level importance weighting, ratio clipping, or other constraints? If so, I would appreciate it if you could share which techniques were used in practice.
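For context on the stabilization the question asks about, here is a minimal, dependency-free sketch of one such mechanism: a sequence-level importance ratio between the student and the behavior (sampling) policy, clipped PPO-style, used to scale a per-token reverse-KL distillation surrogate on the sampled tokens. This is purely illustrative and is not the Qwen team's implementation; the function names, the clipping range `eps`, and the choice of sequence-level (rather than token-level) weighting are all assumptions.

```python
import math

def sequence_clip_weight(student_lp, behavior_lp, eps=0.2):
    """Clipped sequence-level importance ratio (illustrative, not Qwen's code).

    student_lp / behavior_lp: per-token log-probs of the sampled tokens
    under the current student and the behavior policy that generated them.
    """
    log_ratio = sum(s - b for s, b in zip(student_lp, behavior_lp))
    ratio = math.exp(log_ratio)
    # Clip the ratio to [1 - eps, 1 + eps] to bound how far off-policy
    # updates can push the student, preventing error amplification.
    return max(1.0 - eps, min(1.0 + eps, ratio))

def distill_loss(student_lp, teacher_lp, behavior_lp, eps=0.2):
    """Per-token reverse-KL surrogate on the sampled tokens,
    scaled by the clipped sequence-level importance weight."""
    w = sequence_clip_weight(student_lp, behavior_lp, eps)
    kl = sum(s - t for s, t in zip(student_lp, teacher_lp)) / len(student_lp)
    return w * kl
```

When the student and behavior policy coincide, the weight is exactly 1 and the loss reduces to the plain on-policy reverse-KL surrogate; the clipping only activates once the sampled sequences become stale relative to the current student.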
Thank you for your help.
Best regards,
Cai Yuchen