Fix (brevitas_examples/llm): correct batch size for lm_eval #1430
Merged
Giuseppe5 merged 2 commits into Xilinx:dev on Dec 15, 2025
Conversation
pablomlago (Collaborator) approved these changes on Dec 15, 2025
LGTM. Consider adding a test similar to the one we have for lighteval, or otherwise open an issue to do it in the future. Also, it might be worth considering a future refactoring of this evaluation pipeline, similar to the one done in #1379.
Author (Collaborator)
I am merging this and opening another PR for tests.
Reason for this PR
The lm_eval test was incredibly slow, mostly because the batch size was not handled properly.
Changes Made in this PR
We now pass the batch size to the model wrapper, which speeds up the computation considerably.
Passing the batch size to the evaluator has no effect, mostly because we pass a torch.nn.Module to the evaluator, which means that many of the evaluator's args/kwargs are ignored.
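The behaviour described above can be sketched with a small toy model of the two code paths. This is an illustrative stand-in, not Brevitas or lm_eval code: `build_eval_model` and the stubbed `simple_evaluate` are hypothetical names that mimic how an evaluator's own `batch_size` kwarg is ignored once it receives a pre-built model object, so the batch size must be set on the wrapper itself.

```python
# Toy illustration (hypothetical names, not the real lm_eval API):
# when the evaluator receives an already-constructed model wrapper,
# its own batch_size kwarg is silently ignored.

def build_eval_model(model, batch_size):
    """Attach the batch size to the model wrapper, analogous to
    constructing the wrapper with batch_size before evaluation."""
    return {"model": model, "batch_size": batch_size}

def simple_evaluate(model, batch_size=1):
    """Mimics the observed behaviour: a pre-built wrapper carries
    its own batch size, and the evaluator kwarg has no effect."""
    if isinstance(model, dict):  # pre-built wrapper: kwarg ignored
        return model["batch_size"]
    return batch_size  # only used when no wrapper is supplied

wrapper = build_eval_model(object(), batch_size=8)
effective = simple_evaluate(wrapper, batch_size=32)  # → 8, not 32
```

In this sketch the fix corresponds to setting `batch_size` in `build_eval_model` (the model side), since the value passed to `simple_evaluate` never reaches the forward passes.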
Testing Summary
N/A, as lm_eval is an optional dependency.
Do we want any?
@nickfraser @pablomlago