Hi, I'm trying to reproduce the Llama-2-7B results from your paper (zero-shot ACC and F1), but the values I get are very different. How did you run the tasks with Llama 2? In my case I only ran this:
```shell
!python main.py \
  --model_name llama2_7b \
  --path_model meta-llama/Llama-2-7b-hf \
  --task {task_type} \
  --data_name {dataset_name} \
  --num_train {num_train}
```
in a Kaggle notebook with the 2 x Tesla T4 accelerator.
For example, I'm getting an F1 score of 18.04 on IAM claims instead of 60.14. What am I doing wrong?
P.S. I also changed the tensor_parallel_size argument in modeling.py from 1 to 2.