Commit 2c953b8

Merge branch 'release_01' of github.com:ECP-CANDLE/Benchmarks into release_01
2 parents e7d27ac + 40ff9d4 commit 2c953b8

3 files changed: +42 -162 lines changed


Pilot1/Uno/README.md

Lines changed: 40 additions & 2 deletions
@@ -4,7 +4,7 @@
 Uno can be trained with a subset of dose response data sources. Here is an command line example of training with all 6 sources: CCLE, CTRP, gCSI, GDSC, NCI60 single drug response, ALMANAC drug pair response.
 
 ```
-uno_baseline_keras2.py --train_sources all --cache cache/all --use_landmark_genes --preprocess_rnaseq source_scale --no_feature_source --no_response_source
+uno_baseline_keras2.py --train_sources all --cache cache/all --use_landmark_genes True --preprocess_rnaseq source_scale --no_feature_source True --no_response_source True
 Using TensorFlow backend.
 Params: {'activation': 'relu', 'batch_size': 32, 'dense': [1000, 1000, 1000], 'dense_feature_layers': [1000, 1000, 1000], 'drop': 0, 'epochs': 10, 'learning_rate': None, 'loss':
 'mse', 'optimizer': 'adam', 'residual': False, 'rng_seed': 2018, 'save': 'save/uno', 'scaling': 'std', 'feature_subsample': 0, 'validation_split': 0.2, 'solr_root': '', 'timeout'
@@ -118,5 +118,43 @@ Between random pairs in y_val:
 Data points per epoch: train = 20158325, val = 5144721
 Steps per epoch: train = 629948, val = 160773
 Epoch 1/10
-8078/629948 [..............................] - ETA: 50:20:54 - loss: 0.1955 - mae: 0.2982 - r2: 0.2964
+629948/629948 [==============================] - 196053s 311ms/step - loss: 0.0993 - mae: 0.2029 - r2: 0.6316 - val_loss: 0.1473 - val_mae: 0.2404 - val_r2: 0.4770
+Current time ....196052.671
+Epoch 2/10
+629948/629948 [==============================] - 194858s 309ms/step - loss: 0.0872 - mae: 0.1890 - r2: 0.6755 - val_loss: 0.1469 - val_mae: 0.2393 - val_r2: 0.4771
+Current time ....390911.212
+Epoch 3/10
+629948/629948 [==============================] - 192603s 306ms/step - loss: 0.0848 - mae: 0.1861 - r2: 0.6840 - val_loss: 0.1486 - val_mae: 0.2409 - val_r2: 0.4720
+Current time ....583514.913
+Epoch 4/10
+629948/629948 [==============================] - 192734s 306ms/step - loss: 0.0836 - mae: 0.1846 - r2: 0.6885 - val_loss: 0.1500 - val_mae: 0.2417 - val_r2: 0.4657
+Current time ....776248.738
+Epoch 5/10
+629948/629948 [==============================] - 190948s 303ms/step - loss: 0.0829 - mae: 0.1836 - r2: 0.6912 - val_loss: 0.1498 - val_mae: 0.2412 - val_r2: 0.4678
+Current time ....967196.253
+Epoch 6/10
+629948/629948 [==============================] - 191344s 304ms/step - loss: 0.0824 - mae: 0.1829 - r2: 0.6931 - val_loss: 0.1506 - val_mae: 0.2417 - val_r2: 0.4631
+Current time ....1158540.613
+Epoch 7/10
+629948/629948 [==============================] - 195056s 310ms/step - loss: 0.0820 - mae: 0.1824 - r2: 0.6945 - val_loss: 0.1518 - val_mae: 0.2431 - val_r2: 0.4596
+Current time ....1353596.930
+Epoch 8/10
+629948/629948 [==============================] - 193873s 308ms/step - loss: 0.0817 - mae: 0.1820 - r2: 0.6956 - val_loss: 0.1525 - val_mae: 0.2428 - val_r2: 0.4570
+Current time ....1547470.041
+Epoch 9/10
+629948/629948 [==============================] - 191701s 304ms/step - loss: 0.0815 - mae: 0.1818 - r2: 0.6963 - val_loss: 0.1525 - val_mae: 0.2434 - val_r2: 0.4593
+Current time ....1739170.656
+Epoch 10/10
+629948/629948 [==============================] - 194420s 309ms/step - loss: 0.0813 - mae: 0.1815 - r2: 0.6971 - val_loss: 0.1528 - val_mae: 0.2432 - val_r2: 0.4600
+Current time ....1933590.940
+Comparing y_true and y_pred:
+mse: 0.1528
+mae: 0.2432
+r2: 0.4966
+corr: 0.7077
+```
+
+Training Uno on all data sources is slow. The `--train_sources` parameter can be used to test the code with a smaller set of training data. An example command line is the following.
+```
+uno_baseline_keras2.py --train_sources CCLE --cache cache/CCLE --use_landmark_genes True --preprocess_rnaseq source_scale --no_feature_source True --no_response_source True
 ```
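
The command line change above replaces bare switches such as `--use_landmark_genes` with flags that take an explicit `True` value. The snippet below is a minimal, hypothetical sketch of how explicit-boolean flags like these can be parsed with Python's argparse; the `str2bool` helper and the argument registrations are illustrative assumptions, not the benchmark's actual parser.
```
# Hypothetical sketch only: parse boolean flags given as explicit values,
# e.g. "--use_landmark_genes True", instead of bare on/off switches.
import argparse

def str2bool(value):
    # Treat "True", "true", "1", and "yes" as True; everything else as False.
    return str(value).lower() in ('true', '1', 'yes')

parser = argparse.ArgumentParser(description='illustrative Uno-style flags')
parser.add_argument('--train_sources', default='all')
parser.add_argument('--cache', default=None)
parser.add_argument('--preprocess_rnaseq', default=None)
parser.add_argument('--use_landmark_genes', type=str2bool, default=False)
parser.add_argument('--no_feature_source', type=str2bool, default=False)
parser.add_argument('--no_response_source', type=str2bool, default=False)

args = parser.parse_args(['--use_landmark_genes', 'True', '--no_feature_source', 'True'])
print(args.use_landmark_genes, args.no_feature_source)  # -> True True
```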

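The training log above reports an `r2` metric alongside loss and mae. For reference, a coefficient-of-determination metric of this kind can be written against the Keras backend as shown below; this is a generic formulation of the standard R² definition, not code taken from the Uno source, and the benchmark's own implementation may differ.
```
# Generic R^2 (coefficient of determination) metric for Keras, shown only to
# illustrate the "r2" column in the log above; not the benchmark's own code.
from keras import backend as K

def r2(y_true, y_pred):
    ss_res = K.sum(K.square(y_true - y_pred))
    ss_tot = K.sum(K.square(y_true - K.mean(y_true)))
    return 1.0 - ss_res / (ss_tot + K.epsilon())

# Usage sketch: model.compile(optimizer='adam', loss='mse', metrics=['mae', r2])
```
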
Pilot1/Uno/save/uno.A=relu.B=32.E=10.O=adam.LR=None.CF=r.DF=df.D1=1000.D2=1000.D3=1000.log

Lines changed: 0 additions & 158 deletions
This file was deleted.

Pilot3/P3B2/p3b2_default_model.txt

Lines changed: 2 additions & 2 deletions
@@ -2,8 +2,8 @@
 data_url = 'http://ftp.mcs.anl.gov/pub/candle/public/benchmarks/P3B2/'
 train_data = 'P3B2_data.tgz'
 model_name = 'p3b2'
-rnn_size = 256
-epochs = 10
+rnn_size = 64
+epochs = 2
 n_layers = 1
 learning_rate = 0.01
 drop = 0.0
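
For context, `p3b2_default_model.txt` stores defaults as plain `key = value` lines. The sketch below shows one way such a file could be read into a Python dict; it assumes values are Python literals and skips section headers and comments, which may not match the CANDLE loader's actual behavior.
```
# Hypothetical reader for a key = value defaults file like p3b2_default_model.txt.
# Assumes each non-blank line is "name = value" with a Python-literal value;
# the real CANDLE parameter loader may behave differently.
import ast

def read_defaults(path):
    params = {}
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith('[') or line.startswith('#'):
                continue  # skip blanks, section headers, and comments
            key, _, value = line.partition('=')
            try:
                params[key.strip()] = ast.literal_eval(value.strip())
            except (ValueError, SyntaxError):
                params[key.strip()] = value.strip()
    return params

defaults = read_defaults('p3b2_default_model.txt')
print(defaults.get('rnn_size'), defaults.get('epochs'))  # expected after this change: 64 2
```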
