
Commit 36faa13 (parent: 05c2321)

add _keras2 to baseline filenames

File tree: 6 files changed (+6, −6 lines)


Pilot1/P1B1/README.md (1 addition, 1 deletion)

@@ -32,7 +32,7 @@
 
 ```
 cd Pilot1/P1B1
-python p1b1_baseline.py
+python p1b1_baseline_keras2.py
 ```
 The training and test data files will be downloaded the first time this is run and will be cached for future runs.

File renamed without changes.
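The download-once-then-cache behavior described above can be sketched roughly as follows. This is an illustrative assumption about the mechanism, not the benchmark's actual code; the URL and cache path are placeholders.

```python
import os
import urllib.request

def get_file(url, cache_dir="~/.cache/p1b1"):
    """Fetch a data file on first use; reuse the cached copy afterwards.

    Hypothetical helper: the real benchmark's download logic may differ.
    """
    cache_dir = os.path.expanduser(cache_dir)
    os.makedirs(cache_dir, exist_ok=True)
    path = os.path.join(cache_dir, os.path.basename(url))
    if not os.path.exists(path):  # only download on the first run
        urllib.request.urlretrieve(url, path)
    return path
```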

Pilot1/P1B2/README.md (1 addition, 1 deletion)

@@ -31,7 +31,7 @@
 
 ```
 cd Pilot1/P1B2
-python p1b2_baseline.py
+python p1b2_baseline_keras2.py
 ```
 The training and test data files will be downloaded the first time this is run and will be cached for future runs.

File renamed without changes.

Pilot1/P1B3/README.md (4 additions, 4 deletions)

@@ -31,7 +31,7 @@ Output dimensions: 1 (growth percentage)
 
 ```
 $ cd Pilot1/P1B3
-$ python p1b3_baseline.py
+$ python p1b3_baseline_keras2.py
 ```
 
 #### Example output
@@ -115,7 +115,7 @@ This benchmark can be run with additional or alternative molecular and drug feat
 
 #### Use multiple cell line and drug feature sets
 ```
-python p1b3_baseline.py --cell_features all --drug_features all --conv 10 10 1 5 5 1 -epochs 200
+python p1b3_baseline_keras2.py --cell_features all --drug_features all --conv 10 10 1 5 5 1 -epochs 200
 ```
 This will train a convolution network for 200 epochs, using three sets of cell line features (gene expression, microRNA, proteome) and two sets of drug features (Dragon7 descriptors, encoded latent representation from Aspuru-Guzik's SMILES autoencoder), and will bring the total input feature dimension to 40K.
 ```
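The flat `--conv 10 10 1 5 5 1` list above groups into one triple per convolution layer, which is how two layers come out of six numbers. A minimal sketch of that grouping follows; reading each triple as (filters, kernel size, stride) is an assumption here, not taken from the benchmark source.

```python
def parse_conv(values):
    """Group a flat --conv argument list into per-layer triples.

    Hypothetical helper: the real benchmark's parsing may differ.
    """
    if len(values) % 3 != 0:
        raise ValueError("--conv expects groups of three integers")
    return [tuple(values[i:i + 3]) for i in range(0, len(values), 3)]

print(parse_conv([10, 10, 1, 5, 5, 1]))  # → [(10, 10, 1), (5, 5, 1)]
```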
@@ -132,13 +132,13 @@ The `--conv 10 10 1 5 5 1` parameter adds 2 convolution layers to the default 4-
 
 #### Run a toy version of the benchmark
 ```
-python p1b3_baseline.py --feature_subsample 500 -e 5 --train_steps 100 --val_steps 10 --test_steps 10
+python p1b3_baseline_keras2.py --feature_subsample 500 -e 5 --train_steps 100 --val_steps 10 --test_steps 10
 ```
 This will take only minutes to run and can be used to test the environment setup. The `--feature_subsample 500` parameter instructs the benchmark to sample 500 random columns from each feature set. The steps parameters reduce the number of batches to use for each epoch.
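Column subsampling of the kind `--feature_subsample 500` describes can be sketched in a few lines. This is an illustrative stand-in, not the benchmark's implementation; the key point is that the same random column subset is applied to every row.

```python
import random

def subsample_columns(matrix, n_cols, seed=0):
    """Keep a random subset of columns (same subset for every row).

    Hypothetical sketch of the --feature_subsample behavior.
    """
    rng = random.Random(seed)
    total = len(matrix[0])
    keep = sorted(rng.sample(range(total), min(n_cols, total)))
    return [[row[j] for j in keep] for row in matrix]

data = [[i * 10 + j for j in range(10)] for i in range(3)]
small = subsample_columns(data, 4)
print(len(small[0]))  # → 4
```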

 
 #### Use locally-connected layers with batch normalization
 ```
-python p1b3_baseline.py --conv 10 10 1 --pool 100 --locally_connected --optimizer adam --batch_normalization --batch_size 64
+python p1b3_baseline_keras2.py --conv 10 10 1 --pool 100 --locally_connected --optimizer adam --batch_normalization --batch_size 64
 ```
 This example adds a locally-connected layer to the MLP and changes the optimizer and batch size. The locally connected layer is a convolution layer with unshared weights, so it tends to increase the number of parameters dramatically. Here we use a pooling size of 100 to reduce the parameters. This example also adds a batch normalization layer between any core layer and its activation. Batch normalization is known to speed up training in some settings.

File renamed without changes.
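The parameter blow-up from unshared weights, and why pooling by 100 first helps, can be checked with the standard 1D layer counts. These formulas assume a single input channel, and the 40K input length is taken from the feature-dimension figure quoted earlier; everything else is back-of-envelope, not measured from the benchmark.

```python
def conv1d_params(kernel, filters):
    # shared weights: one kernel reused at every position, plus biases
    return kernel * filters + filters

def locally_connected1d_params(in_len, kernel, filters, stride=1):
    # unshared weights: a separate kernel (and bias) at every output position
    out_len = (in_len - kernel) // stride + 1
    return out_len * (kernel * filters + filters)

print(conv1d_params(10, 10))                           # → 110
print(locally_connected1d_params(40000, 10, 10))       # → 4399010
print(locally_connected1d_params(40000 // 100, 10, 10))  # → 43010 after pooling by 100
```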
