Unpublished work by Joshua Oon Soo Goh [1] in fulfillment of the Psych 591 Neural Network Modeling course conducted in Spring 2007 at the Dept. of Psychology, University of Illinois at Urbana-Champaign, IL, USA, instructed by John Hummel. This work extends the use of a multi-layer perceptron that updates its weights via error backpropagation, as implemented in Rumelhart, Hinton, and Williams [2], by evaluating its performance with respect to target outputs. The primary code is written in R (backprop_assignment.R), which has also been ported to Jupyter Notebook format [4].
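The original implementation lives in backprop_assignment.R; as an illustrative sketch only (in Python rather than R, and not the author's code), the core technique can be summarized as a small sigmoid network trained by error backpropagation (Rumelhart, Hinton, & Williams, 1986) on the XOR problem:

```python
import numpy as np

# Hypothetical sketch: a 2-2-1 multi-layer perceptron with sigmoid units,
# weights updated by error backpropagation against target outputs.
def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
t = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

W1 = rng.normal(scale=0.5, size=(2, 2))  # input -> hidden weights
b1 = np.zeros(2)
W2 = rng.normal(scale=0.5, size=(2, 1))  # hidden -> output weights
b2 = np.zeros(1)
lr = 0.5                                 # learning rate (illustrative value)

losses = []
for epoch in range(5000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    y = sigmoid(h @ W2 + b2)
    losses.append(float(np.mean((y - t) ** 2)))  # error w.r.t. targets

    # backward pass: output-layer deltas, then propagated to hidden layer
    d_out = (y - t) * y * (1 - y)          # dE/dnet at output units
    d_hid = (d_out @ W2.T) * h * (1 - h)   # dE/dnet at hidden units

    # gradient-descent weight updates
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_hid
    b1 -= lr * d_hid.sum(axis=0)

print(f"MSE: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

The hidden-layer delta (`d_hid`) is the step that distinguishes backpropagation from the single-layer delta rule: the output error is passed back through the hidden-to-output weights before being scaled by the sigmoid derivative.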
Bibliography & Notes
- [1] Goh, J. O. S. (2007). Backpropagation in a non-linear layered network: learning from past mistakes. Unpublished. [pdf]
- [2] Rumelhart, D. E., Hinton, G. E., & Williams, R. J. (1986). Learning internal representations by error propagation. In D. E. Rumelhart & J. L. McClelland (Eds.), Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vol. 1 (pp. 318–362). MIT Press, Cambridge, MA, USA.
- [3] GIBMS 7015 Neural Networks Course, NTU COOL, National Taiwan University, https://cool.ntu.edu.tw/courses/45064/pages/neural-networks
- Note: [3] uses the Multi-Layer Perceptron Colab instance binary_autoencoder.ipynb.