User story
As a user, I want to use bfloat16 data types in lava-dl, with compatibility for PyTorch's torch.amp (Automatic Mixed Precision), to accelerate inference and training processes while maintaining numerical accuracy. This will allow for efficient computation and memory savings, leveraging the mixed precision capabilities of PyTorch to optimize performance for large-scale spiking neural networks (SNNs).
Conditions of satisfaction
- The software should support `bfloat16` data types for all relevant operations, including both training and inference.
- Integration with `torch.amp` should be seamless, allowing users to easily switch between `float32` and `bfloat16`, or use automatic mixed precision, without significant code changes.
- The numerical stability and accuracy of operations with `bfloat16` should be validated, ensuring compatibility with PyTorch's mixed precision training workflows.
- Documentation should include guidelines on using `bfloat16` with `torch.amp`, any limitations, and best practices for users.
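For reference, the kind of usage this feature request targets can be sketched with plain PyTorch. The model below is a hypothetical stand-in (lava-dl network blocks would be used in practice); the point is that `torch.autocast` runs eligible ops in `bfloat16` while the parameters stay in `float32`, which is the integration pattern being requested.

```python
import torch
import torch.nn as nn

# Placeholder network; in lava-dl this would be an SNN built from its blocks.
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))
x = torch.randn(32, 128)

# Ops inside the autocast context run in bfloat16 where it is safe to do so;
# model parameters remain float32, so no manual casting is needed.
with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    out = model(x)

print(out.dtype)            # activations computed in bfloat16
print(model[0].weight.dtype)  # parameters remain float32
```

Note that, unlike `float16`, training with `bfloat16` under `torch.amp` typically does not require `torch.cuda.amp.GradScaler`, since `bfloat16` has the same exponent range as `float32`.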