Commit db3233d

update results in readme
1 parent e59c954 commit db3233d

File tree: 1 file changed (+34, −37)


P1B3/README.md

Lines changed: 34 additions & 37 deletions
````diff
@@ -6,7 +6,7 @@
 
 **Expected outcome**: Build a DNN that can predict growth percentage of a cell line treated with a new drug.
 
-### Benchmark Specs Requirements
+### Benchmark Specs Requirements
 
 #### Description of the Data
 * Data source: Dose response screening results from NCI; 5-platform normalized expression data from NCI; Dragon7 generated drug descriptors based on 2D chemical structures from NCI
@@ -36,23 +36,25 @@ $ python p1b3_baseline.py
 
 #### Example output
 ```
-Using Theano backend.
-Using gpu device 0: Tesla K80 (CNMeM is enabled with initial size: 95.0% of memory, cuDNN 5004)
-Loaded 2642218 unique (D, CL) response sets.
-count    2.642218e+06
-mean     6.906977e+01
-std      4.860752e+01
-min     -1.000000e+02
-25%      5.400000e+01
-50%      8.900000e+01
-75%      9.900000e+01
-max      2.990000e+02
-Name: GROWTH, dtype: float64
-Input dim = 998
+Using TensorFlow backend.
+Command line args = Namespace(activation='relu', batch_size=100, category_cutoffs=[0.0, 0.5], dense=[1000, 500, 100, 50], drop=0.1, drug_features='descriptors', epochs=20, feature_subsample=500, loss='mse', max_logconc=-4.0, min_logconc=-5.0, optimizer='adam', save='save', scaling='std', scramble=False, subsample='naive_balancing', train_samples=0, val_samples=0, verbose=False, workers=1)
+Loaded 2328562 unique (D, CL) response sets.
+Distribution of dose response:
+             GROWTH
+count  1.004870e+06
+mean  -1.357397e+00
+std    6.217888e+01
+min   -1.000000e+02
+25%   -5.600000e+01
+50%    0.000000e+00
+75%    4.600000e+01
+max    2.580000e+02
+Rows in train = 800068, val = 200017, test = 4785
+Input dim = 1001
 ____________________________________________________________________________________________________
 Layer (type)                     Output Shape          Param #     Connected to
 ====================================================================================================
-dense_1 (Dense)                  (None, 1000)          999000      dense_input_1[0][0]
+dense_1 (Dense)                  (None, 1000)          1002000     dense_input_1[0][0]
 ____________________________________________________________________________________________________
 dropout_1 (Dropout)              (None, 1000)          0           dense_1[0][0]
 ____________________________________________________________________________________________________
@@ -66,31 +68,27 @@ dropout_3 (Dropout)              (None, 100)           0           dense_3[0][0]
 ____________________________________________________________________________________________________
 dense_4 (Dense)                  (None, 50)            5050        dropout_3[0][0]
 ____________________________________________________________________________________________________
-dense_5 (Dense)                  (None, 1)             51          dense_4[0][0]
-====================================================================================================
-Total params: 1554701
+dropout_4 (Dropout)              (None, 50)            0           dense_4[0][0]
 ____________________________________________________________________________________________________
+dense_5 (Dense)                  (None, 1)             51          dropout_4[0][0]
+====================================================================================================
+Total params: 1,557,701
+Trainable params: 1,557,701
+Non-trainable params: 0
+
 Epoch 1/20
-2113731/2113700 [==============================] - 1794s - loss: 0.2039 - val_loss: 0.1932
+800000/800000 [==============================] - 420s - loss: 0.2554 - val_loss: 0.2037 - val_acc: 0.7519 - test_loss: 1.0826 - test_acc: 0.5651
 Epoch 2/20
-2113751/2113700 [==============================] - 1791s - loss: 0.1915 - val_loss: 0.1869
+800000/800000 [==============================] - 426s - loss: 0.1885 - val_loss: 0.1620 - val_acc: 0.7720 - test_loss: 1.1407 - test_acc: 0.5689
 Epoch 3/20
-2113744/2113700 [==============================] - 1786s - loss: 0.1886 - val_loss: 0.1887
-Epoch 4/20
-2113717/2113700 [==============================] - 1773s - loss: 0.1873 - val_loss: 0.1889
-Epoch 5/20
-2113732/2113700 [==============================] - 1776s - loss: 0.1857 - val_loss: 0.2158
-Epoch 6/20
-2113719/2113700 [==============================] - 1791s - loss: 0.1856 - val_loss: 0.1926
-Epoch 7/20
-2113742/2113700 [==============================] - 1793s - loss: 0.1849 - val_loss: 0.1779
-Epoch 8/20
-2113720/2113700 [==============================] - 1784s - loss: 0.1843 - val_loss: 0.1863
-Epoch 9/20
-2113733/2113700 [==============================] - 1783s - loss: 0.1841 - val_loss: 0.1945
-Epoch 10/20
-2113764/2113700 [==============================] - 1792s - loss: 0.1843 - val_loss: 0.1889
-...
+800000/800000 [==============================] - 427s - loss: 0.1600 - val_loss: 0.1403 - val_acc: 0.7853 - test_loss: 1.1443 - test_acc: 0.5689
+...
+Epoch 18/20
+800000/800000 [==============================] - 349s - loss: 0.0912 - val_loss: 0.0881 - val_acc: 0.8339 - test_loss: 1.0033 - test_acc: 0.5653
+Epoch 19/20
+800000/800000 [==============================] - 418s - loss: 0.0898 - val_loss: 0.0844 - val_acc: 0.8354 - test_loss: 1.0039 - test_acc: 0.5652
+Epoch 20/20
+800000/800000 [==============================] - 343s - loss: 0.0894 - val_loss: 0.0849 - val_acc: 0.8354 - test_loss: 1.0039 - test_acc: 0.5652
 
 
 ```
 
@@ -102,4 +100,3 @@ Cristina's results: Using the 5 layer MLP with standard normalization and sizes
 ![Histogram of errors after 141 epochs](https://raw.githubusercontent.com/ECP-CANDLE/Benchmarks/master/P1B3/images/histo_It140.png)
 
 ![Measure vs Predicted percent growth after 141 epochs](https://raw.githubusercontent.com/ECP-CANDLE/Benchmarks/master/P1B3/images/meas_vs_pred_It140.png)
-
````
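The parameter counts in the updated model summary can be checked by hand: a Dense layer with `n_in` inputs and `n_out` outputs has `n_in * n_out` weights plus `n_out` biases, and Dropout layers add none. A quick sketch, with the layer sizes taken from `Input dim = 1001` and `dense=[1000, 500, 100, 50]` in the log above:

```python
# Layer widths from the example output: 1001 input features,
# hidden layers [1000, 500, 100, 50], and a single regression output.
sizes = [1001, 1000, 500, 100, 50, 1]

# Each Dense layer contributes n_in * n_out weights + n_out biases.
params = [n_in * n_out + n_out for n_in, n_out in zip(sizes, sizes[1:])]

print(params)       # [1002000, 500500, 50100, 5050, 51]
print(sum(params))  # 1557701
```

The per-layer counts match the summary (dense_1 = 1001 × 1000 + 1000 = 1,002,000) and the sum reproduces `Total params: 1,557,701`, confirming the new dense_1 count and total are consistent with the larger input dimension.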

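The `val_acc`/`test_acc` columns in the new log reflect the `category_cutoffs=[0.0, 0.5]` argument: the continuous growth values are binned at the cutoffs, and a regression prediction counts as correct when it lands in the same bin as the true value. A minimal sketch of that idea, assuming scaled growth values; the helper names here are illustrative, not the benchmark's actual code:

```python
import bisect

def to_bins(values, cutoffs):
    """Map each continuous value to a category index via the sorted cutoffs."""
    return [bisect.bisect_left(cutoffs, v) for v in values]

def category_accuracy(y_true, y_pred, cutoffs=(0.0, 0.5)):
    """Fraction of predictions falling in the same cutoff bin as the truth."""
    true_bins = to_bins(y_true, cutoffs)
    pred_bins = to_bins(y_pred, cutoffs)
    matches = sum(t == p for t, p in zip(true_bins, pred_bins))
    return matches / len(y_true)

# Toy example with scaled growth values: two of the four pairs share a bin.
y_true = [-0.8, 0.2, 0.7, 0.1]
y_pred = [-0.5, 0.3, 0.4, -0.2]
print(category_accuracy(y_true, y_pred))  # 0.5
```

With two cutoffs this gives three categories (below 0.0, between 0.0 and 0.5, above 0.5), so an accuracy around 0.75-0.84 on validation, as in the log, means most predictions at least fall in the correct growth regime even when the exact value is off.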