feat: implement mini-batch training support #11
Open
Labels: enhancement (new feature or request), optimization (this issue optimizes some aspect of the library), priority: high
Description
Currently, the training loop in `SheafNN::train()` processes cochain data one sample at a time, which is both inefficient and numerically unstable. We need to implement mini-batch training support to address this.
Details
- Add a `batch_size` parameter to the `train()` and `train_debug()` methods
- Create a `DataLoader` utility struct for batching supervised data pairs
- Modify the forward pass to handle batched inputs
- Update gradient computation to accumulate gradients across batch
- Add batch-wise loss computation and averaging
- Update optimizer step to handle batched gradients properly