joshuagohos/multi-layer-perceptron


Multi-Layer Perceptron and Backpropagation: Learning from Past Mistakes

Unpublished work by Joshua Oon Soo Goh [1] in fulfillment of the Psych 591 Neural Network Modeling course conducted in Spring 2007 at the Dept. of Psychology, University of Illinois at Urbana-Champaign, IL, USA, instructed by John Hummel. This work extends the use of a multi-layer perceptron that updates its weights using error backpropagation, as implemented in Rumelhart, Hinton, and Williams [2], by evaluating its performance with respect to target outputs. The primary code is written in R (backprop_assignment.R), which has also been ported to Jupyter Notebook format [4].
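The assignment code itself is in R; as a rough illustration of the error-backpropagation weight update described above (the generalized delta rule of Rumelhart, Hinton, and Williams [2]), here is a minimal sketch in Python with NumPy. The layer sizes, learning rate, task (XOR), and epoch count are illustrative assumptions, not taken from backprop_assignment.R:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Illustrative task: XOR inputs and target outputs (assumed, not from the repo)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

# Small random weights for an assumed 2-3-1 network
W1 = rng.normal(scale=0.5, size=(2, 3)); b1 = np.zeros(3)
W2 = rng.normal(scale=0.5, size=(3, 1)); b2 = np.zeros(1)

eta = 0.5  # learning rate (assumed)

def forward(X):
    h = sigmoid(X @ W1 + b1)  # hidden-layer activations
    y = sigmoid(h @ W2 + b2)  # output-layer activations
    return h, y

_, y0 = forward(X)
mse_before = np.mean((y0 - T) ** 2)

for epoch in range(20000):
    h, y = forward(X)
    # Error signals: output layer, then backpropagated to the hidden layer
    delta_out = (y - T) * y * (1 - y)
    delta_hid = (delta_out @ W2.T) * h * (1 - h)
    # Gradient-descent updates on weights and biases
    W2 -= eta * h.T @ delta_out; b2 -= eta * delta_out.sum(axis=0)
    W1 -= eta * X.T @ delta_hid; b1 -= eta * delta_hid.sum(axis=0)

_, y = forward(X)
mse_after = np.mean((y - T) ** 2)
print(mse_before, mse_after)  # training error should fall with respect to targets
```

This mirrors the structure the README describes: a forward pass, an error evaluated against target outputs, and weight updates proportional to the backpropagated error signal.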

Bibliography & Notes

  1. Goh, J. O. S. (2007). Backpropagation in a non-linear layered network: Learning from past mistakes. Unpublished manuscript. [pdf]
  2. Rumelhart, D., Hinton, G., & Williams, R. (1986). Learning internal representations by error propagation. MIT Press, Cambridge, MA, USA.
  3. GIBMS 7015 Neural Networks Course, NTU COOL, National Taiwan University, https://cool.ntu.edu.tw/courses/45064/pages/neural-networks
  4. The course in [3] uses the Multi-Layer Perceptron Colab instance binary_autoencoder.ipynb.
