
batch_size=18 in run_scripts/full_finetuning.sh #2

@ddgoede

Description


Hi there, is it correct that the batch size is set to 18 and the number of epochs to 400 in the full_finetuning script? 18 seems like an unusual value for a batch size, and fine-tuning with the hyperparameter setup in full_finetuning.sh also appears computationally infeasible; extrapolating from my measurements, training like this would take more than 25 days on 8× H100 GPUs.
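For reference, here is a rough sketch of the extrapolation; the dataset size and measured step time are hypothetical placeholders, not values from this repository:

```python
# Back-of-envelope training-time estimate.
# dataset_size and seconds_per_step are hypothetical placeholders;
# batch_size and epochs are taken from run_scripts/full_finetuning.sh.
dataset_size = 1_000_000      # hypothetical number of training samples
batch_size = 18               # from full_finetuning.sh
epochs = 400                  # from full_finetuning.sh
seconds_per_step = 0.1        # hypothetical measured step time on 8x H100

steps_per_epoch = dataset_size // batch_size
total_seconds = steps_per_epoch * epochs * seconds_per_step
days = total_seconds / 86_400
print(f"estimated training time: {days:.1f} days")
```

With placeholder numbers in this ballpark, the estimate already lands near a month of wall-clock time, which is what makes the configuration look infeasible.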

Second, the README mentions that the evaluation code will be uploaded soon. Could you provide any indication as to when you expect to release this code?

Thanks!
