Self-trained zephyr-7b-dpo-qlora MT-bench score dropped to 1.88 #188

@jltchiu

Description

Hi, I followed recipes/zephyr-7b-beta/dpo/config_qlora.yaml hoping to replicate the experiments. I trained on a single A10G GPU, and the only modification I made was reducing the train batch size from 4 to 1 (due to memory constraints). However, my output model zephyr-7b-dpo-qlora scores only 1.88 on MT-bench. I also ran the MT-bench benchmark on the downloaded zephyr-7b-sft-qlora, which scored 6.37 (which seems normal). Has anyone else had difficulty replicating this DPO experiment with QLoRA? Or is the batch size a critical factor for training?
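One possible factor worth checking: in configs built on the standard Hugging Face TrainingArguments, the effective batch size is per_device_train_batch_size × gradient_accumulation_steps × num_gpus, so dropping the per-device batch from 4 to 1 on one GPU shrinks the effective batch by 4× unless accumulation is raised to compensate. A minimal sketch of such an adjustment (assuming the recipe exposes these standard keys; the exact original accumulation value in config_qlora.yaml is not verified here):

```yaml
# Hypothetical adjustment to recipes/zephyr-7b-beta/dpo/config_qlora.yaml.
# Effective batch = per_device_train_batch_size x gradient_accumulation_steps x num_gpus.
per_device_train_batch_size: 1   # reduced from 4 to fit A10G memory
gradient_accumulation_steps: 8   # raised 4x from its original value (whatever that is)
                                 # to keep the effective batch size comparable
```

Gradient accumulation trades wall-clock time for memory, so this keeps the optimization dynamics closer to the published recipe without increasing peak GPU memory.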
