
🧠 Zero to Neural Networks

Neural Network Animation

The Ultimate Beginner's Guide to Understanding & Building Neural Networks from Scratch

Made with NumPy · Python · License · From Scratch

📚 Why This? • 🚀 Quick Start • 📖 Content • 📑 Index • 💡 Projects


🎯 What Makes This THE BEST Resource?

✨ For Pure Beginners

This isn't just another neural network tutorial. Here's why this is THE definitive resource for learning neural networks from scratch:

🔥 1. Zero to Hero Approach

  • 📌 No Prerequisites: Start with basic math, end with deep learning
  • 🧮 Math Made Simple: Every equation explained in plain English with proper LaTeX formatting
  • 💻 Code from Scratch: Build everything using only NumPy (no black boxes!)
  • 🎓 Learn by Doing: Hands-on Jupyter notebooks for every concept

🎯 2. Complete Learning System

📖 Theory → 🧮 Math → 💻 Code → 🧪 Practice → 🚀 Projects

🌟 3. What You Get

  • ✅ 8 Comprehensive Modules covering everything from neurons to optimizers
  • ✅ Interactive Jupyter Notebooks with live code examples
  • ✅ Visual Explanations with diagrams and animations
  • ✅ Real Implementation - build actual working neural networks
  • ✅ 11+ Reference Books curated for deep learning
  • ✅ 10+ Cheat Sheets for quick reference
  • ✅ Research Papers to understand the foundations
  • ✅ Micrograd Tutorial by Andrej Karpathy included

💎 4. Why "From Scratch" Matters

| 🚫 Using Libraries Only | ✅ Building from Scratch |
|---|---|
| Black-box understanding | Crystal-clear intuition |
| Copy-paste coding | Deep comprehension |
| Stuck when things break | Debug like a pro |
| Surface-level knowledge | Master-level expertise |

🎓 5. Perfect For

  • 🎯 Complete Beginners wanting to understand AI/ML
  • 💻 Developers transitioning to machine learning
  • 🎓 Students preparing for AI/ML courses or interviews
  • 🔬 Researchers needing solid fundamentals
  • 🧠 Curious Minds who want to know how AI really works

🚀 Quick Start

📋 Prerequisites

```shell
# Just Python and NumPy!
pip install numpy jupyter matplotlib
```

🏃 Get Started in 3 Steps

```shell
# 1. Clone this repository
git clone <your-repo-url>
cd "Neural Networks"

# 2. Start with the basics: read Intro.md, then open the first notebook
#    (Intro.md is a markdown file, so read it in your editor, not in Jupyter)
jupyter notebook "01.Neural Network Introduction/NeuralNetworks_Coding_From_Scratch_Part1.ipynb"

# 3. Follow the learning path below!
```

📚 Complete Learning Path

🌱 Phase 1: Foundations (Start Here!)

📘 01. Neural Network Introduction

What you'll learn:

  • 🧠 What is a neural network?
  • 🔢 The fundamental formula: x₁w₁ + x₂w₂ + b
  • ⚡ Why activation functions matter
  • 🎯 Your first neuron from scratch

Files:

  • 📄 Intro.md - Conceptual foundation
  • 📓 NeuralNetworks_Coding_From_Scratch_Part1.ipynb - Hands-on coding
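As a taste of what the first notebook covers, here is a minimal sketch of the fundamental formula above. The specific numbers are illustrative, not taken from the notebook:

```python
import numpy as np

# A single neuron: weighted sum of the inputs plus a bias.
# Toy values chosen for illustration.
inputs = np.array([1.0, 2.0])      # x1, x2
weights = np.array([0.5, -0.3])    # w1, w2
bias = 2.0                         # b

# x1*w1 + x2*w2 + b as a dot product
output = np.dot(inputs, weights) + bias
print(output)  # 1.0*0.5 + 2.0*(-0.3) + 2.0 = 1.9
```

The dot product is exactly the x₁w₁ + x₂w₂ sum written compactly, which is why NumPy shows up everywhere in the later modules.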

๐Ÿ—๏ธ 02. Coding a Dense Layer

What you'll learn:

  • ๐Ÿ”— How neurons connect in layers
  • ๐Ÿงฎ Matrix operations for efficiency
  • ๐Ÿ’ป Building your first dense layer
  • ๐Ÿ“Š Forward propagation implementation

Files:

  • ๐Ÿ““ Dense_layer.ipynb - Complete implementation
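The idea of a dense layer can be sketched in a few lines. This is an illustrative outline, not the exact class from `Dense_layer.ipynb`:

```python
import numpy as np

rng = np.random.default_rng(0)

class DenseLayer:
    """Fully connected layer: output = X @ W + b (one row per sample)."""
    def __init__(self, n_inputs, n_neurons):
        # Small random weights, zero biases - a common starting point.
        self.weights = 0.01 * rng.standard_normal((n_inputs, n_neurons))
        self.biases = np.zeros((1, n_neurons))

    def forward(self, inputs):
        # One matrix multiply computes every neuron for every sample at once.
        self.output = inputs @ self.weights + self.biases
        return self.output

X = rng.standard_normal((4, 3))   # batch of 4 samples, 3 features each
layer = DenseLayer(3, 5)          # 3 inputs -> 5 neurons
print(layer.forward(X).shape)     # (4, 5)
```

A batch of inputs goes in as a matrix and the whole layer is one matrix product - this is the efficiency point the module makes.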

⚡ 03. Activation Functions

What you'll learn:

  • 🟢 Sigmoid - For probabilities (0 to 1)
  • 🔵 Tanh - Zero-centered outputs (-1 to 1)
  • 🔥 ReLU - The modern default (fast & effective)
  • ⚡ Leaky ReLU - Fixing dying neurons
  • 🔢 Softmax - Multi-class classification

Files:

  • 📄 Explanation_of_activation_layers.md - Theory & use cases
  • 📓 activation_functions.ipynb - All activations coded from scratch

Visual Guide:

| Function | Range | Best For |
|---|---|---|
| Sigmoid | (0, 1) | Binary classification output |
| Tanh | (-1, 1) | Hidden layers (older networks) |
| ReLU | [0, ∞) | Hidden layers (default choice) |
| Softmax | (0, 1), rows sum to 1 | Multi-class output |
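The activations in the table above all fit on a few lines each. A minimal sketch (the notebook's versions may differ in detail):

```python
import numpy as np

def sigmoid(z):
    # Squashes any value into (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

def relu(z):
    # Zeroes out negatives, passes positives unchanged
    return np.maximum(0.0, z)

def leaky_relu(z, alpha=0.01):
    # Small slope for negatives instead of a hard zero
    return np.where(z > 0, z, alpha * z)

def softmax(z):
    # Row-wise probabilities that sum to 1; subtract the row max
    # before exponentiating for numerical stability.
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

z = np.array([[-1.0, 0.0, 2.0]])
print(np.tanh(z))        # values in (-1, 1); NumPy provides tanh directly
print(softmax(z).sum())  # 1.0
```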

🔥 Phase 2: Training Neural Networks

🧮 04. Partial Derivatives

What you'll learn:

  • 📐 Calculus for neural networks
  • 🔗 Chain rule explained simply
  • 📊 Computing gradients
  • 🎯 Why derivatives matter for learning

Files:

  • 📄 partial_derivatives_explantion.md - Math foundations
  • 📄 gradient_derivative.md - Gradient computation
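A quick way to build confidence in the chain rule is to check an analytic partial derivative against a finite-difference estimate. This toy function is an illustration, not one from the module:

```python
# f(x, y) = (x*y + 3)^2. With u = x*y + 3 and f = u^2, the chain rule gives
# df/dx = 2u * y and df/dy = 2u * x.
def f(x, y):
    return (x * y + 3.0) ** 2

x, y = 2.0, -1.0
u = x * y + 3.0
analytic_dx = 2 * u * y   # chain rule: outer derivative times inner derivative
analytic_dy = 2 * u * x

# Central finite difference as a numerical sanity check
h = 1e-6
numeric_dx = (f(x + h, y) - f(x - h, y)) / (2 * h)

print(analytic_dx, numeric_dx)  # both approximately -2.0
```

This "gradient check" trick is also how backpropagation implementations are commonly debugged later on.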

🔄 05. Backpropagation ⭐ CRITICAL

What you'll learn:

  • 🧠 The backbone of neural network training
  • 🔄 How networks learn from mistakes
  • 🧮 Computing gradients efficiently
  • 💻 Full implementation from scratch
  • 🎯 Training on real data (spiral dataset)
  • 🔥 Cross-entropy loss implementation

Files:

  • 📄 01.Backpropogation_explanation.md - Complete theory
  • 📄 02.backpropogation_manual_calculation.md - Step-by-step math
  • 📓 03.backpropogation.ipynb - Interactive tutorial
  • 📓 04.Spiral_data_backpropogation.ipynb - Real-world example
  • 📁 Implemention_backpropogation_crossentropyloss/ - Advanced implementation
    • 📄 01.Implemention_backpropogation_crossentropyloss.md - Theory
    • 📓 code.ipynb - Complete code

Why This is Essential:

Without backpropagation, neural networks cannot learn. It is the most important algorithm in deep learning!
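To make the idea concrete before diving into the notebooks, here is a minimal end-to-end sketch: a tiny dense-ReLU-dense network trained with hand-written backprop on a toy regression task. The data and sizes are invented for illustration; the module's own implementation is more complete:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy task: predict the mean of two inputs.
X = rng.standard_normal((16, 2))
y = (X[:, :1] + X[:, 1:]) * 0.5

W1 = 0.5 * rng.standard_normal((2, 4)); b1 = np.zeros((1, 4))
W2 = 0.5 * rng.standard_normal((4, 1)); b2 = np.zeros((1, 1))
lr = 0.1
losses = []

for step in range(200):
    # Forward pass
    z1 = X @ W1 + b1
    a1 = np.maximum(0, z1)               # ReLU
    y_hat = a1 @ W2 + b2
    loss = np.mean((y_hat - y) ** 2)     # MSE
    losses.append(loss)

    # Backward pass: apply the chain rule layer by layer
    d_yhat = 2 * (y_hat - y) / len(X)            # dL/dy_hat
    dW2 = a1.T @ d_yhat                          # gradient w.r.t. W2
    db2 = d_yhat.sum(axis=0, keepdims=True)
    d_a1 = d_yhat @ W2.T                         # error flows back through W2
    d_z1 = d_a1 * (z1 > 0)                       # ReLU gradient: 1 where z1 > 0
    dW1 = X.T @ d_z1
    db1 = d_z1.sum(axis=0, keepdims=True)

    # Gradient descent update
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(losses[0], losses[-1])  # the loss should shrink substantially
```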


🔢 06. Matrix Mathematics for Backpropagation

What you'll learn:

  • 🔢 Why we use matrix operations in neural networks
  • 🧮 How the input transpose appears in gradient computation
  • 📊 Shape reasoning for weight gradients
  • 💻 Matrix-based backpropagation implementation
  • 🎯 Forward and backward pass with matrices

Files:

  • 📄 explanation.md - Complete matrix mathematics
  • 📓 manual_cal_coding.ipynb - Manual calculations with code

Key Insight:

The forward pass distributes inputs through the weights; the backward pass distributes error through the transposed weights.
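The shape reasoning behind that insight can be verified in a few lines. A sketch with arbitrary sizes, purely to show where the transposes land:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((8, 3))      # (batch, n_in)
W = rng.standard_normal((3, 5))      # (n_in, n_out)

Z = X @ W                            # forward pass: (8, 5)
dZ = rng.standard_normal(Z.shape)    # upstream gradient, same shape as Z

# The only way the shapes work out is with transposes:
dW = X.T @ dZ    # (3, 8) @ (8, 5) -> (3, 5), matches W's shape
dX = dZ @ W.T    # (8, 5) @ (5, 3) -> (8, 3), matches X's shape

print(dW.shape, dX.shape)
```

Checking that each gradient has the same shape as the thing it is a gradient of is a reliable way to catch backprop bugs.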


📉 07. Gradient Descent

What you'll learn:

  • 📉 Batch Gradient Descent
  • 🎲 Stochastic Gradient Descent (SGD)
  • 📊 Mini-batch Gradient Descent
  • ⚡ When to use each variant

Files:

  • 📄 Types_of_GD.md - Explanation with code examples
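The three variants differ only in how much data each update sees. A minimal mini-batch sketch on a one-parameter linear model (toy data, not from `Types_of_GD.md`):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 1))
y = 3.0 * X[:, 0] + rng.normal(0, 0.1, 100)   # true slope is 3

def grad(w, xb, yb):
    # d/dw of the mean squared error for the model y = w*x
    return np.mean(2 * (w * xb[:, 0] - yb) * xb[:, 0])

w, lr, batch_size = 0.0, 0.1, 16
for epoch in range(50):
    idx = rng.permutation(len(X))               # shuffle each epoch
    for start in range(0, len(X), batch_size):  # mini-batch GD
        b = idx[start:start + batch_size]
        w -= lr * grad(w, X[b], y[b])
        # Batch GD: one update per epoch with b = all indices.
        # Stochastic GD: batch_size = 1.

print(w)  # close to the true slope of 3
```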

🚀 08. Optimizers

What you'll learn:

  • 📉 Gradient Descent basics
  • 🏃 Momentum - Accelerated learning with velocity
  • 📊 Adagrad - Adaptive learning rates per parameter
  • 🔥 RMSProp - Root Mean Square Propagation
  • ⚡ Adam - The industry standard (Adaptive Moment Estimation)

Files:

  • 📄 explantion.md - Overview of all optimizers
  • 📁 1.Momentum/ - Momentum optimizer
    • 📄 explanation.md - Theory
    • 📓 code.ipynb - Implementation
  • 📁 2.Adagrad/ - Adagrad optimizer
    • 📄 explanation.md - Complete guide
  • 📁 3.Rmsprop/ - RMSProp optimizer
    • 📄 explanation.md - Detailed explanation
  • 📁 4.Adam_Optimiser/ - Adam optimizer
    • 📄 explanation.md - Industry-standard guide

Optimizer Comparison:

| Optimizer | Learning Rate | Best For |
|---|---|---|
| SGD | Fixed | Simple problems |
| Momentum | Fixed + velocity | Escaping local minima |
| Adagrad | Adaptive per parameter | Sparse data |
| RMSProp | Adaptive with decay | RNNs, non-stationary problems |
| Adam | Adaptive + momentum | Default choice (most cases) |
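The update rules behind that table are short enough to sketch directly. These are the standard textbook forms, written for scalars to keep shapes out of the way (the module's notebooks work with full parameter arrays):

```python
import numpy as np

def sgd_step(w, g, lr=0.01):
    # Plain gradient descent: step against the gradient
    return w - lr * g

def momentum_step(w, v, g, lr=0.01, beta=0.9):
    # Velocity accumulates an exponentially decaying sum of past gradients
    v = beta * v - lr * g
    return w + v, v

def adam_step(w, m, v, g, t, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    m = b1 * m + (1 - b1) * g        # first moment (running mean of g)
    v = b2 * v + (1 - b2) * g * g    # second moment (running mean of g^2)
    m_hat = m / (1 - b1 ** t)        # bias correction for the zero init
    v_hat = v / (1 - b2 ** t)
    return w - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

# Minimize f(w) = w^2 (gradient 2w) with Adam:
w, m, v = 5.0, 0.0, 0.0
for t in range(1, 1001):
    w, m, v = adam_step(w, m, v, g=2 * w, t=t, lr=0.1)
print(w)  # approaches the minimum at w = 0
```

Note how Adam's effective step is roughly lr in magnitude regardless of the raw gradient scale - that normalization is why it is such a robust default.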

🚀 Phase 3: Advanced Topics & Bonus Content

🎁 Bonus Resources

A comprehensive collection of premium learning materials to deepen your understanding.

The full lists of books, cheat sheets, and research papers are catalogued in the Learning Resources Included section below.

🎨 Building Micrograd (Andrej Karpathy)

What you'll learn:

  • 🔧 Build an autograd engine from scratch
  • 🧠 Understand PyTorch internals
  • 🎓 Learn from Andrej Karpathy's legendary tutorial

Files:

  • 📓 01.Intro.ipynb - Autograd implementation
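To preview what the micrograd notebook builds, here is a stripped-down scalar autograd node in the spirit of Karpathy's micrograd. It supports only `+` and `*`, which is enough to see reverse-mode differentiation work; the real tutorial adds many more operations:

```python
class Value:
    """Minimal scalar autograd node (illustrative sketch, not full micrograd)."""
    def __init__(self, data, _children=()):
        self.data = data
        self.grad = 0.0
        self._backward = lambda: None
        self._prev = set(_children)

    def __add__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data + other.data, (self, other))
        def _backward():
            # d(a+b)/da = d(a+b)/db = 1; accumulate the upstream gradient
            self.grad += out.grad
            other.grad += out.grad
        out._backward = _backward
        return out

    def __mul__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data * other.data, (self, other))
        def _backward():
            # d(a*b)/da = b, d(a*b)/db = a
            self.grad += other.data * out.grad
            other.grad += self.data * out.grad
        out._backward = _backward
        return out

    def backward(self):
        # Topologically sort the graph, then apply the chain rule backward
        topo, visited = [], set()
        def build(v):
            if v not in visited:
                visited.add(v)
                for child in v._prev:
                    build(child)
                topo.append(v)
        build(self)
        self.grad = 1.0
        for node in reversed(topo):
            node._backward()

a, b = Value(2.0), Value(-3.0)
c = a * b + a          # dc/da = b + 1 = -2, dc/db = a = 2
c.backward()
print(a.grad, b.grad)  # -2.0 2.0
```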


📖 Learning Resources Included

All premium resources are organized in the Bonus/ folder for easy access!

📚 Books (11 Premium Resources)

Located in Bonus/Book_for_Deep_Learning/

  • 📕 Neural Networks and Deep Learning - Michael Nielsen
  • 📗 Deep Learning From Scratch - Practical implementation
  • 📘 Fundamentals of Deep Learning - Comprehensive guide
  • 📙 Applied Deep Learning - Real-world applications
  • 📓 Deep Learning with Python - François Chollet
  • 📔 Programming PyTorch - Framework mastery
  • 📖 Generative Deep Learning - Creative AI
  • 📚 NN from Scratch (Reference Book) - Your main companion
  • 📝 Deep Learning Course Notes - Condensed wisdom
  • 📋 DL Notes - Quick reference

📊 Cheat Sheets (10 Essential Guides)

Located in Bonus/Cheat_Sheet/

  • 🧠 Convolutional Neural Networks
  • 🔄 Recurrent Neural Networks
  • 🤖 Transformers & Large Language Models
  • 💡 Deep Learning Tips & Tricks
  • 🎯 Reflex Models
  • 📊 States Models
  • 🔢 Variables Models
  • 🧮 Logic Models
  • 🌟 Super Cheatsheet: Deep Learning
  • 🚀 Super Cheatsheet: Artificial Intelligence

📄 Research Papers

Located in Bonus/Research_paper_Deep_Learning/

Foundational papers that shaped modern AI


🎓 Learning Roadmap

🗺️ Recommended Path

```mermaid
graph TD
    A[01. Neural Network Intro] --> B[02. Dense Layers]
    B --> C[03. Activation Functions]
    C --> D[04. Partial Derivatives]
    D --> E[05. Backpropagation]
    E --> F[06. Matrix Mathematics]
    F --> G[07. Gradient Descent]
    G --> H[08. Optimizers]
    H --> I[Building Micrograd]
    I --> J[Real Projects]
```

⏱️ Time Commitment

| Phase | Topics | Estimated Time |
|---|---|---|
| 🌱 Foundations | 01-03 | 1-2 weeks |
| 🔥 Training | 04-08 | 3-4 weeks |
| 🚀 Advanced | Micrograd + Projects | 2-4 weeks |

Total: 6-10 weeks to master neural networks from scratch!


💡 Hands-on Projects

🎯 What You'll Build

  1. 🔢 Single Neuron - Understand the basics
  2. 🏗️ Dense Neural Network - Multi-layer architecture
  3. 🌀 Spiral Dataset Classifier - Non-linear decision boundaries
  4. ✍️ MNIST Digit Recognition - Classic computer vision
  5. 🤖 Autograd Engine - Build your own PyTorch

📑 Complete Content Index

📂 Core Modules (8 Chapters)

📘 01. Neural Network Introduction - Foundation Concepts
  • Intro.md - What is a neural network?
  • NeuralNetworks_Coding_From_Scratch_Part1.ipynb - First neuron implementation
  • Key Topics: Neurons, weights, biases, basic formula
🏗️ 02. Coding a Dense Layer - Building Blocks
  • Dense_layer.ipynb - Complete dense layer from scratch
  • Key Topics: Matrix operations, layer connections, forward pass
⚡ 03. Activation Functions - Non-linearity
  • Explanation_of_activation_layers.md - Theory and use cases
  • activation_functions.ipynb - All activations coded
  • Key Topics: Sigmoid, Tanh, ReLU, Leaky ReLU, Softmax
🧮 04. Partial Derivatives - Calculus Foundations
  • partial_derivatives_explantion.md - Math foundations
  • gradient_derivative.md - Gradient computation
  • Key Topics: Chain rule, derivatives, gradient computation
🔄 05. Backpropagation - The Learning Algorithm ⭐
  • 01.Backpropogation_explanation.md - Complete theory
  • 02.backpropogation_manual_calculation.md - Step-by-step math
  • 03.backpropogation.ipynb - Interactive tutorial
  • 04.Spiral_data_backpropogation.ipynb - Real dataset
  • Implemention_backpropogation_crossentropyloss/
    • 01.Implemention_backpropogation_crossentropyloss.md - Advanced theory
    • code.ipynb - Full implementation
  • Key Topics: Gradient flow, chain rule, weight updates, cross-entropy
🔢 06. Matrix Mathematics for Backpropagation - Deep Understanding
  • explanation.md - Why matrices matter
  • manual_cal_coding.ipynb - Manual calculations
  • Key Topics: Transpose operations, shape reasoning, efficient computation
📉 07. Gradient Descent - Optimization Basics
  • Types_of_GD.md - All gradient descent variants
  • Key Topics: Batch GD, Stochastic GD, Mini-batch GD
🚀 08. Optimizers - Advanced Training
  • explantion.md - Overview of all optimizers
  • 1.Momentum/
    • explanation.md - Momentum theory
    • code.ipynb - Implementation
  • 2.Adagrad/
    • explanation.md - Adaptive learning rates
  • 3.Rmsprop/
    • explanation.md - RMSProp explained
  • 4.Adam_Optimiser/
    • explanation.md - Industry standard
  • Key Topics: SGD, Momentum, Adagrad, RMSProp, Adam

🎁 Bonus Content

📚 Books for Deep Learning (11 Premium Books)
  1. Neural Networks and Deep Learning - Michael Nielsen
  2. Deep Learning From Scratch
  3. Fundamentals of Deep Learning
  4. Applied Deep Learning
  5. Deep Learning with Python - François Chollet
  6. Programming PyTorch
  7. Generative Deep Learning
  8. NN from Scratch (Reference Book)
  9. Deep Learning Course Notes
  10. DL Notes
  11. Additional reference materials
📊 Cheat Sheets (10 Essential Guides)
  1. Convolutional Neural Networks
  2. Recurrent Neural Networks
  3. Transformers & Large Language Models
  4. Deep Learning Tips & Tricks
  5. Reflex Models
  6. States Models
  7. Variables Models
  8. Logic Models
  9. Super Cheatsheet: Deep Learning
  10. Super Cheatsheet: Artificial Intelligence
🎨 Building Micrograd - Andrej Karpathy's Tutorial
  • 01.Intro.ipynb - Build an autograd engine from scratch
  • Key Topics: Automatic differentiation, computational graphs, PyTorch internals
📄 Research Papers - Foundational AI Papers
  • Collection of seminal papers in deep learning
  • Topics: Neural network architectures, training techniques, optimization

๐Ÿ—๏ธ Neural Network Architecture Overview

๐Ÿ“ What You'll Build

โ”Œโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”
โ”‚                    NEURAL NETWORK PIPELINE                   โ”‚
โ””โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”˜

๐Ÿ“ฅ INPUT LAYER
   โ†“
   [xโ‚, xโ‚‚, ..., xโ‚™]
   โ†“
๐Ÿงฑ DENSE LAYER 1 (Hidden)
   โ†“
   Zโ‚ = XยทWโ‚ + bโ‚
   โ†“
โšก ACTIVATION (ReLU/Sigmoid/Tanh)
   โ†“
   Aโ‚ = activation(Zโ‚)
   โ†“
๐Ÿงฑ DENSE LAYER 2 (Hidden)
   โ†“
   Zโ‚‚ = Aโ‚ยทWโ‚‚ + bโ‚‚
   โ†“
โšก ACTIVATION (ReLU)
   โ†“
   Aโ‚‚ = activation(Zโ‚‚)
   โ†“
๐Ÿงฑ OUTPUT LAYER
   โ†“
   Zโ‚ƒ = Aโ‚‚ยทWโ‚ƒ + bโ‚ƒ
   โ†“
โšก SOFTMAX (Classification) / LINEAR (Regression)
   โ†“
   ลท = softmax(Zโ‚ƒ)
   โ†“
โŒ LOSS FUNCTION
   โ†“
   L = CrossEntropy(ลท, y) or MSE(ลท, y)
   โ†“
๐Ÿ”„ BACKPROPAGATION
   โ†“
   โˆ‚L/โˆ‚Wโ‚ƒ, โˆ‚L/โˆ‚Wโ‚‚, โˆ‚L/โˆ‚Wโ‚
   โ†“
๐Ÿš€ OPTIMIZER (SGD/Adam/RMSProp)
   โ†“
   W = W - ฮทยทโˆ‡W
   โ†“
๐Ÿ” REPEAT UNTIL CONVERGENCE

๐ŸŽฏ Key Components You'll Master

Component What It Does Where You Learn It
๐Ÿงฑ Dense Layer Connects neurons Module 02
โšก Activation Adds non-linearity Module 03
๐Ÿ“‰ Loss Function Measures error Module 05
๐Ÿ”„ Backpropagation Computes gradients Module 05
๐Ÿงฎ Partial Derivatives Calculus foundation Module 04
๐Ÿ“Š Gradient Descent Updates weights Module 07
๐Ÿš€ Optimizers Smart weight updates Module 08
๐Ÿ”ข Matrix Operations Efficient computation Module 06
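One piece of the pipeline above deserves a closer look: the softmax output combined with cross-entropy loss has a famously simple gradient, ŷ - y, which is what makes the output-layer backprop step so clean. A small sketch with made-up logits and labels:

```python
import numpy as np

def softmax(z):
    # Row-wise, numerically stable softmax
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

Z = np.array([[2.0, 1.0, 0.1],     # raw logits for 2 samples, 3 classes
              [0.5, 2.5, 0.0]])
y_true = np.array([[1, 0, 0],      # one-hot labels
                   [0, 1, 0]])

y_hat = softmax(Z)
# Cross-entropy averaged over the batch
loss = -np.mean(np.sum(y_true * np.log(y_hat), axis=1))

# The combined softmax + cross-entropy gradient w.r.t. the logits:
dZ = (y_hat - y_true) / len(Z)
print(loss, dZ.shape)
```

Feeding `dZ` backward through the dense layers (via the transposed weights) is exactly the backpropagation step covered in Modules 05 and 06.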

🎓 Learning Outcomes

After Completing This Course, You Will:

✅ Understand how neural networks work at a fundamental level
✅ Implement neural networks from scratch using only NumPy
✅ Explain backpropagation, gradient descent, and optimization
✅ Debug neural network training issues
✅ Build real-world machine learning applications
✅ Read and understand research papers
✅ Transition easily to frameworks like PyTorch and TensorFlow
✅ Interview confidently for ML/AI positions

🧠 Core Concepts Mastered

Fundamentals:

  • ✅ Neurons & Perceptrons
  • ✅ Forward Propagation
  • ✅ Activation Functions (Sigmoid, ReLU, Softmax, etc.)
  • ✅ Loss Functions (MSE, Cross-Entropy)
  • ✅ Backpropagation Algorithm
  • ✅ Gradient Descent & Variants
  • ✅ Matrix Operations for Neural Networks

Advanced Topics:

  • ✅ Momentum & Adaptive Learning Rates
  • ✅ Optimizer Comparison (SGD, Adam, RMSProp, Adagrad)
  • ✅ Batch vs Stochastic vs Mini-batch Training
  • ✅ Autograd Engines
  • ✅ Deep Network Architectures
  • ✅ Training Dynamics & Convergence

๐Ÿ› ๏ธ Repository Structure

๐Ÿ“ฆ Neural Networks from Scratch
โ”œโ”€โ”€ ๐Ÿ“ 01.Neural Network Introduction/
โ”‚   โ”œโ”€โ”€ ๐Ÿ“„ Intro.md
โ”‚   โ””โ”€โ”€ ๐Ÿ““ NeuralNetworks_Coding_From_Scratch_Part1.ipynb
โ”œโ”€โ”€ ๐Ÿ“ 02.Coding a dense layer/
โ”‚   โ””โ”€โ”€ ๐Ÿ““ Dense_layer.ipynb
โ”œโ”€โ”€ ๐Ÿ“ 03.Activation Layer/
โ”‚   โ”œโ”€โ”€ ๐Ÿ“„ Explanation_of_activation_layers.md
โ”‚   โ””โ”€โ”€ ๐Ÿ““ activation_functions.ipynb
โ”œโ”€โ”€ ๐Ÿ“ 04.Partial_Derivatives/
โ”‚   โ”œโ”€โ”€ ๐Ÿ“„ partial_derivatives_explantion.md
โ”‚   โ””โ”€โ”€ ๐Ÿ“„ gradient_derivative.md
โ”œโ”€โ”€ ๐Ÿ“ 05.BackPropogation/
โ”‚   โ”œโ”€โ”€ ๐Ÿ“„ 01.Backpropogation_explanation.md
โ”‚   โ”œโ”€โ”€ ๐Ÿ“„ 02.backpropogation_manual_calculation.md
โ”‚   โ”œโ”€โ”€ ๐Ÿ““ 03.backpropogation.ipynb
โ”‚   โ”œโ”€โ”€ ๐Ÿ““ 04.Spiral_data_backpropogation.ipynb
โ”‚   โ””โ”€โ”€ ๐Ÿ“ Implemention_backpropogation_crossentropyloss/
โ”‚       โ”œโ”€โ”€ ๐Ÿ“„ 01.Implemention_backpropogation_crossentropyloss.md
โ”‚       โ””โ”€โ”€ ๐Ÿ““ code.ipynb
โ”œโ”€โ”€ ๐Ÿ“ 06.Why_matrices_imp_for_backpropogation/
โ”‚   โ”œโ”€โ”€ ๐Ÿ“„ explanation.md
โ”‚   โ””โ”€โ”€ ๐Ÿ““ manual_cal_coding.ipynb
โ”œโ”€โ”€ ๐Ÿ“ 07.Gradient_Desent/
โ”‚   โ””โ”€โ”€ ๐Ÿ“„ Types_of_GD.md
โ”œโ”€โ”€ ๐Ÿ“ 08.Optimisers/
โ”‚   โ”œโ”€โ”€ ๐Ÿ“„ explantion.md
โ”‚   โ”œโ”€โ”€ ๐Ÿ“ 1.Momentum/
โ”‚   โ”‚   โ”œโ”€โ”€ ๐Ÿ“„ explanation.md
โ”‚   โ”‚   โ””โ”€โ”€ ๐Ÿ““ code.ipynb
โ”‚   โ”œโ”€โ”€ ๐Ÿ“ 2.Adagrad/
โ”‚   โ”‚   โ””โ”€โ”€ ๐Ÿ“„ explanation.md
โ”‚   โ”œโ”€โ”€ ๐Ÿ“ 3.Rmsprop/
โ”‚   โ”‚   โ””โ”€โ”€ ๐Ÿ“„ explanation.md
โ”‚   โ””โ”€โ”€ ๐Ÿ“ 4.Adam_Optimiser/
โ”‚       โ””โ”€โ”€ ๐Ÿ“„ explanation.md
โ”œโ”€โ”€ ๐Ÿ“ Bonus/
โ”‚   โ”œโ”€โ”€ ๐Ÿ“ Book_for_Deep_Learning/
โ”‚   โ”‚   โ””โ”€โ”€ ๐Ÿ“š 11 Premium Books
โ”‚   โ”œโ”€โ”€ ๐Ÿ“ Cheat_Sheet/
โ”‚   โ”‚   โ””โ”€โ”€ ๐Ÿ“Š 10 Essential Cheat Sheets
โ”‚   โ”œโ”€โ”€ ๐Ÿ“ Building_Micrograd_Andrej_Karpathy/
โ”‚   โ”‚   โ””โ”€โ”€ ๐Ÿ““ 01.Intro.ipynb
โ”‚   โ””โ”€โ”€ ๐Ÿ“ Research_paper_Deep_Learning/
โ”‚       โ””โ”€โ”€ ๐Ÿ“„ Foundational Papers
โ”œโ”€โ”€ ๐Ÿ“ Images/
โ”‚   โ””โ”€โ”€ ๐Ÿ–ผ๏ธ Visual Resources
โ””โ”€โ”€ ๐Ÿ“„ README.md (You are here!)

📈 Your Learning Journey

🎯 Week-by-Week Plan

Week 1-2: Foundations 🌱

  • Read the Neural Network Introduction
  • Code your first neuron
  • Build a dense layer
  • Implement all activation functions
  • Milestone: Understand forward propagation

Week 3-4: The Math 🧮

  • Master partial derivatives
  • Understand the chain rule
  • Learn gradient computation
  • Milestone: Comfortable with calculus for ML

Week 5-6: Backpropagation 🔥

  • Study backpropagation theory
  • Work through the manual calculations
  • Code backprop from scratch
  • Understand matrix operations in backprop
  • Train on the spiral dataset
  • Milestone: Build a fully functional neural network

Week 7-8: Optimization ⚡

  • Learn gradient descent variants
  • Implement SGD, Momentum, and Adam
  • Compare optimizer performance
  • Milestone: Understand training dynamics

Week 9+: Advanced 🚀

  • Build Micrograd
  • Work on real projects
  • Read research papers
  • Milestone: Master-level understanding

🎓 Study Tips

💡 How to Use This Resource

  1. 📖 Read First: Start with the markdown explanations
  2. 🧮 Understand the Math: Don't skip the equations - they're explained simply
  3. 💻 Code Along: Type the code yourself, don't just read
  4. 🔄 Experiment: Change parameters, break things, fix them
  5. 📝 Take Notes: Write down insights in your own words
  6. 🎯 Build Projects: Apply concepts to real problems
  7. 🔍 Review: Revisit earlier topics as you progress

⚠️ Common Pitfalls to Avoid

❌ Rushing through theory to get to code
❌ Copy-pasting without understanding
❌ Skipping the math sections
❌ Not experimenting with the code
❌ Moving forward without mastering the basics

✅ Take your time with each concept
✅ Type every line of code yourself
✅ Work through the math step by step
✅ Modify and experiment constantly
✅ Build solid foundations before advancing


๐Ÿค Contributing

Found a bug? Have a suggestion? Want to add content?

  1. ๐Ÿด Fork the repository
  2. ๐ŸŒฟ Create a feature branch
  3. โœ๏ธ Make your changes
  4. ๐Ÿ“ค Submit a pull request

๐Ÿ“ž Support & Community

  • ๐Ÿ’ฌ Questions? Open an issue
  • ๐Ÿ› Found a bug? Report it
  • ๐Ÿ’ก Have an idea? Share it
  • โญ Like this? Star the repo!

๐Ÿ“œ License

This project is licensed under the MIT License - see the LICENSE file for details.


๐Ÿ™ Acknowledgments

๐Ÿ“š Inspired By

  • ๐ŸŽ“ Andrew Ng - Deep Learning Specialization
  • ๐Ÿง  Andrej Karpathy - Neural Networks: Zero to Hero
  • ๐Ÿ“– Michael Nielsen - Neural Networks and Deep Learning
  • ๐Ÿ”ฌ Ian Goodfellow - Deep Learning Book

๐ŸŒŸ Special Thanks

  • The open-source community
  • All the researchers who made their papers accessible
  • Everyone contributing to democratizing AI education

🚀 Ready to Start?

Your Journey Begins Here! 👇

```shell
# Start with the basics: read Intro.md, then launch Jupyter for the notebook
cd "01.Neural Network Introduction"
jupyter notebook
```

🎯 Remember:

"The best way to learn neural networks is to build them from scratch."

💪 You've Got This!

Building neural networks from scratch might seem daunting, but you're in the right place. This resource has helped many beginners become confident ML practitioners. You're next!

⭐ If this helps you, please star the repository! ⭐

Happy Learning! 🚀🧠

Made with ❤️ for aspiring AI engineers

⬆ Back to Top