Conversation

@slimfrkha commented Jun 13, 2025

Issue

The current code cannot reproduce the official LCB scores for open-source models.
Example: Qwen3-14B in thinking mode on LCB v5.

  • Reported in the paper (temperature=0.6, top_p=0.95, top_k=20, max_new_tokens=32768): 63.5
  • Current code (temperature=0.6, top_p=0.95, max_new_tokens=32768, n_repeat=2): 56.15

The score trails the reported number by roughly 7 points, which cannot be explained by seeding or randomness in the generations. A minimal sketch of these sampling settings is shown below.
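For reference, this is how the paper's settings would map onto a generation call. The use of vLLM here is an assumption on my part (the PR does not name the backend); only the parameter values come from the numbers above.

```python
# Hedged sketch: assumes vLLM as the generation backend (not stated in the PR).
from vllm import LLM, SamplingParams

sampling = SamplingParams(
    temperature=0.6,   # reported in the paper
    top_p=0.95,        # reported in the paper
    top_k=20,          # reported in the paper; absent from the current code's run
    max_tokens=32768,  # max_new_tokens in the report
    n=2,               # n_repeat=2 as used in the runs above
)

llm = LLM(model="Qwen/Qwen3-14B")  # thinking mode is the model's default chat template
outputs = llm.chat([{"role": "user", "content": "..."}], sampling_params=sampling)
```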

Problem

  • The prompt is not formatted the same way as in the LCB paper / official GitHub repo
  • The current evaluator (originally from NovaSky/SkyThought) differs from the official LCB GitHub repo

Solution

Apply the prompt formatting and evaluator changes from the official LCB GitHub repo. A rough illustration of one evaluator-side step follows.
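One concrete point where evaluators commonly diverge is how code is parsed out of the model response. The function below is a simplified, hypothetical stand-in for that step, not the official LCB logic; the real extraction rules should be copied from the official repo.

```python
import re

def extract_code(response: str) -> str:
    """Pull the last fenced code block out of a model response.

    Hypothetical simplification: the official LCB evaluator also parses
    fenced blocks, but its exact rules should be taken from its repo.
    """
    blocks = re.findall(r"```(?:python)?\n(.*?)```", response, re.DOTALL)
    return blocks[-1].strip() if blocks else response.strip()
```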

Results

Results after the fix (temperature=0.6, top_p=0.95, max_new_tokens=32768, n_repeat=2): 61.00

This is more in line with the official results. The remaining gap is small and is probably due to seeding or a different n_repeat value; the sketch below shows why n_repeat affects the score.
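For context on why n_repeat matters, here is a minimal sketch of pass@1 averaged over repeated generations. The function name and input layout are hypothetical, not taken from the PR or the LCB repo.

```python
from statistics import mean

def mean_pass_at_1(results: list[list[bool]]) -> float:
    """results[i][j]: did generation j for problem i pass all tests?

    Hypothetical helper: pass@1 estimated by averaging the per-problem
    pass rate over n_repeat samples. With small n_repeat (e.g. 2), the
    estimate is noisy, which is one plausible source of the remaining gap.
    """
    return 100 * mean(mean(gens) for gens in results)

# Example: 2 problems, n_repeat=2 each
print(mean_pass_at_1([[True, False], [True, True]]))  # 75.0
```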

@slimfrkha (Author)

Hi @neginraoof,
Could you take a quick look at this PR when you get a chance? Just need your feedback to decide whether to keep it open or close it. Thanks!
