
Fix (brevitas_examples/llm): correct batch size for lm_eval #1430

Merged: Giuseppe5 merged 2 commits into Xilinx:dev from Giuseppe5:fix_lm_eval on Dec 15, 2025


Conversation

@Giuseppe5 (Collaborator)

Reason for this PR

The lm_eval test was extremely slow, mostly because the batch size was not handled properly.

Changes Made in this PR

Now we pass the batch size to the model, which speeds up the computation considerably.

Passing the batch size to the evaluator has no effect, because we pass a torch.nn.Module to the evaluator, which means that many of the evaluator's args/kwargs are ignored.
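To illustrate why the evaluator-level kwarg is silently ignored, here is a minimal self-contained sketch of that dispatch pattern. `FakeLM` and `simple_evaluate_sketch` are hypothetical stand-ins for illustration only, not lm_eval's actual API:

```python
class FakeLM:
    """Hypothetical stand-in for an lm_eval model wrapper (e.g. one built
    around a torch.nn.Module)."""

    def __init__(self, batch_size=1):
        self.batch_size = batch_size


def simple_evaluate_sketch(model, batch_size=1):
    # Sketch of the dispatch behaviour: when `model` is a string, the
    # evaluator constructs the wrapper itself and honours its batch_size
    # kwarg; a pre-built model object is used as-is, so the
    # evaluator-level batch_size is silently ignored.
    if isinstance(model, str):
        model = FakeLM(batch_size=batch_size)
    return model.batch_size


# Evaluator-level batch_size is ignored for a pre-built model:
print(simple_evaluate_sketch(FakeLM(), batch_size=64))   # prints 1
# The fix in this PR: set the batch size on the model wrapper itself:
print(simple_evaluate_sketch(FakeLM(batch_size=64)))     # prints 64
```

This is why the fix sets the batch size on the model wrapper rather than on the evaluator call.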

Testing Summary

N/A, as lm_eval is an optional dependency.
Do we want any?
@nickfraser @pablomlago

@pablomlago (Collaborator) left a comment:


LGTM. Consider adding a test similar to the one we have for lighteval, or otherwise open an issue to do it in the future. It might also be worth considering a future refactoring of this evaluation pipeline, similar to the one done in #1379.

@Giuseppe5 (Collaborator, Author)

I am merging this and opening another PR for tests.

@Giuseppe5 Giuseppe5 merged commit 6ffd10c into Xilinx:dev Dec 15, 2025
28 of 29 checks passed