feat: implement mini-batch training support #11

@FiberedSkies

Description

Currently, the training loop in SheafNN::train() processes cochain data one sample at a time, which is both inefficient and numerically unstable. We need to implement mini-batch training support to address this.

Details

  • Add batch_size parameter to train() and train_debug() methods
  • Create DataLoader utility struct for batching supervised data pairs
  • Modify forward pass to handle batched inputs
  • Update gradient computation to accumulate gradients across batch
  • Add batch-wise loss computation and averaging
  • Update optimizer step to handle batched gradients properly
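A rough sketch of what the DataLoader and the batch-wise gradient accumulation could look like. The actual SheafNN cochain and gradient types aren't shown in this issue, so they're modeled here as plain `Vec<f64>`; names like `DataLoader::batches` and `average_gradients` are placeholders, not existing API.

```rust
/// Hypothetical batching utility over supervised (input, target) cochain pairs.
struct DataLoader {
    data: Vec<(Vec<f64>, Vec<f64>)>,
    batch_size: usize,
}

impl DataLoader {
    fn new(data: Vec<(Vec<f64>, Vec<f64>)>, batch_size: usize) -> Self {
        assert!(batch_size > 0, "batch_size must be positive");
        Self { data, batch_size }
    }

    /// Yields non-overlapping batches; the last batch may be smaller.
    fn batches(&self) -> std::slice::Chunks<'_, (Vec<f64>, Vec<f64>)> {
        self.data.chunks(self.batch_size)
    }
}

/// Accumulates per-sample gradients and averages them over the batch,
/// mirroring the "accumulate gradients across batch" step above.
fn average_gradients(per_sample: &[Vec<f64>]) -> Vec<f64> {
    let n = per_sample.len() as f64;
    let dim = per_sample[0].len();
    let mut acc = vec![0.0; dim];
    for grad in per_sample {
        for (a, g) in acc.iter_mut().zip(grad) {
            *a += g;
        }
    }
    for a in acc.iter_mut() {
        *a /= n;
    }
    acc
}

fn main() {
    let data: Vec<(Vec<f64>, Vec<f64>)> =
        (0..5).map(|i| (vec![i as f64], vec![i as f64 * 2.0])).collect();
    let loader = DataLoader::new(data, 2);
    for batch in loader.batches() {
        // 5 samples with batch_size 2 -> batches of 2, 2, 1
        println!("batch of {}", batch.len());
    }
    let avg = average_gradients(&[vec![1.0, 3.0], vec![3.0, 5.0]]);
    println!("{:?}", avg); // [2.0, 4.0]
}
```

Batch-wise loss would be averaged the same way before the optimizer step, so the effective learning rate stays independent of `batch_size`.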
