I'm benchmarking SVD-LLM to compare it with related approaches, specifically on the Llama-2-7b-hf model. Below are the results I obtained after fine-tuning a 20%-compressed model. Is this in line with what any of you have observed? Perhaps the authors can confirm?
{
"eval_harness_shot=0/boolq": 0.6419,
"eval_harness_shot=0/mmlu": 0.25075,
"eval_harness_shot=0/openbookqa": 0.262,
"eval_harness_shot=5/nq_open": 0.09861
}
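For reference, the scores above can be parsed programmatically from the result keys (the `eval_harness_shot=N/task` naming is taken directly from the JSON above; the helper below is just a small sketch for tabulating them, not part of SVD-LLM itself):

```python
import json

# Reported results for the 20%-compressed Llama-2-7b-hf (copied from above).
results_json = """{
  "eval_harness_shot=0/boolq": 0.6419,
  "eval_harness_shot=0/mmlu": 0.25075,
  "eval_harness_shot=0/openbookqa": 0.262,
  "eval_harness_shot=5/nq_open": 0.09861
}"""

def parse_key(key):
    """Split 'eval_harness_shot=N/task' into (shots, task)."""
    prefix, task = key.split("/", 1)
    shots = int(prefix.split("=", 1)[1])
    return shots, task

# Print one row per task: name, shot count, accuracy.
for key, score in json.loads(results_json).items():
    shots, task = parse_key(key)
    print(f"{task:12s} {shots}-shot  acc={score:.4f}")
```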