Important: Run the single experiment first in order to download the MNIST dataset.
Then, use the simple runner script (modify parameters for the sweep in `runner.py`):

```
cd experiments/MNIST_AUTOENCODER/Fig5
python runner.py --algorithm PAL --run
python runner.py --algorithm PAL --linclass
python runner.py --algorithm PAL --gather
```

Instead of the PAL algorithm, choose from `BP`, `FA`, `DFA` or `PAL`.

- `--run` will train the model, saving latent activations and the model after every epoch.
- `--linclass` will load the saved model files and run a linear classifier on the test set.
- `--gather` will gather all results into a .npy file. Run linclass.ipynb to produce Fig. 5.

Altogether, these runs take about 2 h on a high-end GPU (tested on a Tesla P100).
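The flag interface described above can be sketched with argparse. This is a minimal illustration of how such a runner might parse its options, not the repository's actual `runner.py`, whose internals may differ:

```python
# Hypothetical sketch of the runner's command-line interface (assumption:
# an argparse-based dispatch; the real runner.py may be structured differently).
import argparse


def parse_args(argv=None):
    parser = argparse.ArgumentParser(description="Experiment sweep runner (sketch)")
    parser.add_argument("--algorithm", choices=["BP", "FA", "DFA", "PAL"],
                        required=True, help="learning algorithm to train with")
    parser.add_argument("--run", action="store_true",
                        help="train the model, saving latent activations and weights per epoch")
    parser.add_argument("--linclass", action="store_true",
                        help="run a linear classifier on the saved latent activations")
    parser.add_argument("--gather", action="store_true",
                        help="collect all results into a single .npy file")
    return parser.parse_args(argv)
```

With this layout, `python runner.py --algorithm PAL --run` would set `args.algorithm == "PAL"` and `args.run == True`, leaving the other stage flags off.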

## Instructions for reproducing the CIFAR-10 experiment with PAL

This will run the experiment required for Tab. 1 in the manuscript. See the paper and supplementary information for network parameters.

Use the simple runner script (modify parameters for the sweep in `runner.py`):

```
cd experiments/CIFAR10/Tab1
python runner.py --algorithm PAL --run
python runner.py --algorithm PAL --gather
```

Instead of the PAL algorithm, choose from `BP`, `FA`, `DFA` or `PAL`.

- `--run` will train the model, saving latent activations and the model after every epoch.
- `--gather` will gather all validation accuracies into a .npy file. Detailed results are saved in subfolders for each model.

Altogether, these runs take about 10 h on a high-end GPU (tested on a Tesla P100).
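As an illustration of what the `--gather` step might do, here is a minimal sketch that collects per-model validation accuracies into one .npy file. The directory layout, the `val_acc.npy` file name, and the output format are all assumptions for illustration, not the repository's actual code:

```python
# Hypothetical sketch of a --gather step (assumed layout: one subfolder per
# model, each containing a val_acc.npy file; the real repository may differ).
from pathlib import Path

import numpy as np


def gather(result_dir="results", out_file="accuracies.npy"):
    """Collect per-model validation accuracies into a single .npy file."""
    accs = {}
    for f in sorted(Path(result_dir).glob("*/val_acc.npy")):
        # key results by the model subfolder name, e.g. "PAL" or "BP"
        accs[f.parent.name] = np.load(f)
    # a dict of arrays requires allow_pickle=True on both save and load
    np.save(out_file, accs, allow_pickle=True)
    return accs
```

Loading such a file back would then use `np.load(out_file, allow_pickle=True).item()`.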