
mixed precision training #18

@nicolas-dufour


Hey,
I've worked on reimplementing RIN based on the authors' repo and this one, but I can't manage to make it work with mixed precision, and I see you do use mixed precision here.
When naively switching to bfloat16 or float16, my model gets stuck in a weird state:
[image: generated samples; left: bfloat16, right: float32]
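
To be concrete, here's roughly what I mean by the naive switch, next to the standard PyTorch AMP recipe (a minimal sketch with a placeholder model, not my actual RIN training loop):

```python
import torch
from torch.cuda.amp import GradScaler, autocast

# Placeholder model and loop standing in for the actual RIN training code.
model = torch.nn.Linear(64, 64).cuda()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

# Naive switch (what produces the stuck state for me):
# model = model.half()      # everything, weights included, runs in float16
# model = model.bfloat16()  # same idea for bfloat16

# AMP recipe: weights and optimizer state stay in float32; autocast runs
# only selected ops in half precision.
scaler = GradScaler()  # dynamic loss scaling; needed for float16
for _ in range(1000):
    x = torch.randn(8, 64, device="cuda")
    optimizer.zero_grad(set_to_none=True)
    with autocast(dtype=torch.float16):  # or torch.bfloat16 (scaler optional then)
        loss = model(x).pow(2).mean()
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
```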

Did you encounter such issues in your implementation? If so, do you have any pointers to make it work?

Thanks!
