The Rust rewrite and extension have not started yet; only the legacy Python implementation is finished.
Vision: To build a simple, transparent, and educative deep learning library from scratch in Rust.
Mission: To provide a clear and performant implementation of modern deep learning models (MLPs, CNNs, RNNs, Transformers), empowering developers and learners to understand the inner workings of neural networks.
- Educative: Written for clarity and learning, not just for use.
- Transparent: No black boxes. Every line of code will be understandable.
- Performant & Safe: Leveraging Rust's strengths for a fast and reliable library.
This project started back in May 2025, right after I binge-watched Andrej Karpathy's incredible "Neural Networks: Zero to Hero" series on YouTube. To really solidify my understanding, I built a simple, pure Python implementation of a vectorized autograd engine and a basic MLP, extending micrograd. You can find the original Python prototype here.
Now, after some time away and with a few other projects under my belt, I'm diving back in. My goal is to deepen my own understanding of deep learning by rewriting this library in Rust and extending it significantly. I'll be documenting my journey as I build a simple yet performant library that prioritizes interpretability.
A minimal autograd engine and MLP with vectorized operations.
- Goal: Rewrite the core autograd engine and MLP in idiomatic Rust.
- Status: Currently in the architectural planning stage (a rough sketch of the kind of engine involved appears after the roadmap below).
- Implement Convolutional Neural Networks (CNNs).
- Implement Recurrent Neural Networks (RNNs) / LSTMs.
- Implement a complete Transformer architecture.
- Add support for GPU operations.
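To make the Phase 1 goal concrete, here is a minimal, hypothetical sketch of the kind of reverse-mode autograd engine the rewrite is aiming for. It is a scalar, micrograd-style `Value` type; the real `tensor.rs` will be vectorized and its API may look quite different, but the backward pass follows the same idea: record local derivatives, order the graph topologically, and apply the chain rule from the output back to the leaves.

```rust
use std::cell::RefCell;
use std::collections::HashSet;
use std::rc::Rc;

// Hypothetical scalar autograd node (micrograd-style), for illustration only.
#[derive(Clone)]
struct Value(Rc<RefCell<Inner>>);

struct Inner {
    data: f64,
    grad: f64,
    // Each parent is stored with the local derivative d(out)/d(parent).
    parents: Vec<(Value, f64)>,
}

impl Value {
    fn new(data: f64) -> Self {
        Value(Rc::new(RefCell::new(Inner { data, grad: 0.0, parents: Vec::new() })))
    }

    fn data(&self) -> f64 { self.0.borrow().data }
    fn grad(&self) -> f64 { self.0.borrow().grad }

    fn add(&self, other: &Value) -> Value {
        let out = Value::new(self.data() + other.data());
        out.0.borrow_mut().parents = vec![(self.clone(), 1.0), (other.clone(), 1.0)];
        out
    }

    fn mul(&self, other: &Value) -> Value {
        let out = Value::new(self.data() * other.data());
        out.0.borrow_mut().parents =
            vec![(self.clone(), other.data()), (other.clone(), self.data())];
        out
    }

    // Reverse-mode backprop: topologically sort the graph, then walk it in
    // reverse, accumulating gradients into every ancestor.
    fn backward(&self) {
        let mut order = Vec::new();
        let mut visited = HashSet::new();
        build_topo(self, &mut visited, &mut order);
        self.0.borrow_mut().grad = 1.0;
        for node in order.iter().rev() {
            let (grad, parents) = {
                let inner = node.0.borrow();
                (inner.grad, inner.parents.clone())
            };
            for (parent, local) in parents {
                parent.0.borrow_mut().grad += grad * local;
            }
        }
    }
}

fn build_topo(node: &Value, visited: &mut HashSet<*const ()>, order: &mut Vec<Value>) {
    let key = Rc::as_ptr(&node.0) as *const ();
    if visited.insert(key) {
        for (parent, _) in node.0.borrow().parents.iter() {
            build_topo(parent, visited, order);
        }
        order.push(node.clone());
    }
}

fn main() {
    // f(x, y) = x * y + x  =>  df/dx = y + 1, df/dy = x
    let x = Value::new(2.0);
    let y = Value::new(3.0);
    let f = x.mul(&y).add(&x);
    f.backward();
    println!("f = {}", f.data());      // 8
    println!("df/dx = {}", x.grad());  // 4
    println!("df/dy = {}", y.grad());  // 2
}
```

The `Rc<RefCell<...>>` sharing and explicit topological sort are one common way to express a dynamic computation graph in safe Rust; an arena of nodes indexed by IDs is another option worth weighing during the planning stage.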
Instructions on how to build and run the Rust version will be added once the foundation is complete.
neuron/
├── Cargo.toml
├── src/
│   ├── lib.rs          // Main library crate
│   ├── tensor.rs       // Core tensor and autograd engine
│   ├── layers.rs       // Neural network layers (Linear, Conv2d, etc.)
│   ├── models/         // Model implementations
│   │   ├── mlp.rs
│   │   ├── cnn.rs
│   │   └── transformer.rs
│   ├── optim.rs        // Optimizers (AdamW, SGD)
│   └── losses.rs       // Loss functions
└── examples/
    └── train_mlp.rs    // Example usage
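For illustration only, here is a rough sketch of the kind of code `layers.rs` and `models/mlp.rs` could contain: a fully connected layer and a tiny two-layer MLP forward pass. It uses plain `Vec<f64>` with placeholder weights instead of the eventual `Tensor` type, so it shows the shape of the API rather than the real implementation.

```rust
// Hypothetical fully connected layer; the real one would hold Tensor
// parameters registered with the autograd engine.
struct Linear {
    weights: Vec<Vec<f64>>, // [out_features][in_features]
    bias: Vec<f64>,         // [out_features]
}

impl Linear {
    fn new(in_features: usize, out_features: usize) -> Self {
        Linear {
            weights: vec![vec![0.01; in_features]; out_features], // placeholder init
            bias: vec![0.0; out_features],
        }
    }

    // y = W x + b
    fn forward(&self, x: &[f64]) -> Vec<f64> {
        self.weights
            .iter()
            .zip(&self.bias)
            .map(|(row, b)| row.iter().zip(x).map(|(w, xi)| w * xi).sum::<f64>() + b)
            .collect()
    }
}

fn relu(x: &[f64]) -> Vec<f64> {
    x.iter().map(|v| v.max(0.0)).collect()
}

fn main() {
    // Two-layer MLP: 3 -> 4 -> 2
    let l1 = Linear::new(3, 4);
    let l2 = Linear::new(4, 2);
    let input = [1.0, -2.0, 0.5];
    let hidden = relu(&l1.forward(&input));
    let output = l2.forward(&hidden);
    println!("{:?}", output);
}
```

In the actual library, the layer's parameters would presumably live in the `Tensor` type from `tensor.rs` so that a single `backward()` call can reach and update them through the optimizers in `optim.rs`.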
This project thrives on community contributions. If you are interested in helping build a transparent and educational deep learning library, please feel free to open an issue or submit a pull request. All contributions are welcome!
This repository is released under the MIT License.