
Lesson 04

04.01 Multi Layer Perceptron

04.01.01 Autograd

04.01.02 Datasets FashionMNIST and MNIST

04.02 Multi Layer Perceptron from scratch

04.03 Multi Layer Perceptron in Gluon

Background Information

Note 1:

Any linear function can be written in matrix product form. See an example here.
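As a minimal sketch of this idea (the coefficients below are an invented example, not from the lesson): a linear function such as f(x₁, x₂, x₃) = 2x₁ + 3x₂ − x₃ is the same thing as the dot/matrix product of a coefficient vector with the input vector.

```python
import numpy as np

# Hypothetical linear function f(x) = 2*x1 + 3*x2 - x3
w = np.array([2.0, 3.0, -1.0])   # coefficient vector
x = np.array([1.0, 2.0, 4.0])    # input vector

direct = 2 * x[0] + 3 * x[1] - x[2]  # evaluated term by term
matrix = w @ x                       # the same value as a matrix product

print(direct, matrix)  # 4.0 4.0
```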

Note 2:

Linear Regression can be solved using Least Squares approximation. See example video here.
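A quick sketch of least-squares fitting, assuming NumPy (the data points are made up for illustration): stack the inputs into a design matrix and let `np.linalg.lstsq` minimize the squared error.

```python
import numpy as np

# Toy data lying exactly on y = 2x + 1, so the fit is easy to verify
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 3.0, 5.0, 7.0])

# Design matrix [x, 1]: one column per parameter (slope, intercept)
X = np.column_stack([x, np.ones_like(x)])

# Solve min_w ||X w - y||^2 by least squares
w, residuals, rank, sv = np.linalg.lstsq(X, y, rcond=None)
print(w)  # [2. 1.] -> slope 2, intercept 1
```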

Note 3:

Any continuous function (on a bounded domain) can be approximated arbitrarily well by an MLP with a single hidden layer, the so-called Universal Approximation Theorem. See a visual proof here.
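As a small concrete illustration of this (hand-picked weights, not a trained network): a one-hidden-layer MLP with ReLU activations can represent |x| *exactly*, since relu(x) + relu(−x) = |x|.

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

# One hidden layer with 2 units, hand-picked weights:
# hidden = relu([x, -x]), output = sum of hidden units = |x|
W1 = np.array([[1.0], [-1.0]])  # input -> hidden
W2 = np.array([[1.0, 1.0]])     # hidden -> output

def mlp(x):
    return (W2 @ relu(W1 @ np.atleast_1d(x)))[0]

print(mlp(-3.0), mlp(2.5))  # 3.0 2.5
```

Approximating an arbitrary continuous function works the same way, just with many more hidden units stitching together small linear pieces.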