
Commit ffae8c6 (merge of 2 parents: 341cd7f + f690067)

5 files changed: +15 -8 lines changed


Pilot2/P2B1/INSTALL.md

Lines changed: 3 additions & 1 deletion
````diff
@@ -1,5 +1,8 @@
 ## Installing Keras framework
 
+Here is an alternate set of directions for using the Spack tool
+(spack.io) to install Keras and associated tools.
+
 ### Using spack
 
 ```
@@ -20,7 +23,6 @@ spack install py-matplotlib +image
 
 ```
 # Activate all of these tools in the spack python environment
-spack activate ipython
 spack activate py-ipython
 spack activate py-keras
 spack activate py-matplotlib
````

Pilot2/P2B2/INSTALL.md

Lines changed: 4 additions & 2 deletions
````diff
@@ -1,5 +1,8 @@
 ## Installing Keras framework
 
+Here is an alternate set of directions for using the Spack tool
+(spack.io) to install Keras and associated tools.
+
 ### Using spack
 
 ```
@@ -20,7 +23,6 @@ spack install py-matplotlib +image
 
 ```
 # Activate all of these tools in the spack python environment
-spack activate ipython
 spack activate py-ipython
 spack activate py-keras
 spack activate py-matplotlib
@@ -31,5 +33,5 @@ module load py-ipython-5.1.0-gcc-4.9.3
 
 # Lauch ipython and then run the example
 ipython
-[1]: run p2b1_baseline_keras1.py
+[1]: run p2b2_baseline_keras1.py
 ```
````

Pilot2/README.md

Lines changed: 6 additions & 3 deletions
````diff
@@ -6,7 +6,9 @@
 
 #### Description of the Data
 * Data source: MD Simulation output as PDB files (coarse-grained bead simulation)
-* Input dimensions: ~1.26e6 per time step (6000 lipids x 30 beads per lipid x (position + velocity + type))
+* Input dimensions:
+  * Long term target: ~1.26e6 per time step (6000 lipids x 30 beads per lipid x (position + velocity + type))
+  * Current: ~288e3 per time step (6000 lipids x 12 beads per lipid x (position + type))
 * Output dimensions: 500
 * Latent representation dimension:
 * Sample size: O(10^6) for simulation requiring O(10^8) time steps
@@ -17,8 +19,6 @@
 * 3-component-system (DPPC-DOPC-CHOL)
 * af-restraints-290k
 
-#### p2_small_baseline.npy
-
 #### 3K lipids, 10 microseconds simulation, ~3000 frames:
 * Disordered - 3k_run10_10us.35fs-DPPC.10-DOPC.70-CHOL.20.dir
 * Ordered - 3k_run32_10us.35fs-DPPC.50-DOPC.10-CHOL.40.dir
````
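The per-time-step dimension counts in the README change above can be checked with quick arithmetic. This assumes 3 components each for position and velocity and 1 for the bead type, which the README does not state explicitly but which makes the totals come out exactly:

```python
# Assumed per-bead feature sizes: 3D position, 3D velocity, scalar bead type.
POSITION = 3
VELOCITY = 3
TYPE = 1

# Long term target: 6000 lipids x 30 beads per lipid x (position + velocity + type)
target = 6000 * 30 * (POSITION + VELOCITY + TYPE)
print(target)   # 1260000, i.e. ~1.26e6

# Current: 6000 lipids x 12 beads per lipid x (position + type)
current = 6000 * 12 * (POSITION + TYPE)
print(current)  # 288000, i.e. ~288e3
```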
````diff
@@ -34,6 +34,9 @@
 * ```--data-set=3k_Disordered```
 * ```--data-set=3k_Ordered```
 * ```--data-set=3k_Ordered_and_gel```
+* ```--data-set=6k_Disordered```
+* ```--data-set=6k_Ordered```
+* ```--data-set=6k_Ordered_and_gel```
 
 ### Data Set Release Notice
 
````
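The `--data-set` options documented above could be consumed by a benchmark script with a standard argument parser. The sketch below is illustrative only; the actual option handling in the repository's baseline scripts may differ:

```python
import argparse

# The six documented --data-set values (three existing, three added by this commit).
DATA_SETS = [
    "3k_Disordered", "3k_Ordered", "3k_Ordered_and_gel",
    "6k_Disordered", "6k_Ordered", "6k_Ordered_and_gel",
]

# Hypothetical parser mirroring the documented flag; not the repo's actual code.
parser = argparse.ArgumentParser(description="P2B1 benchmark (sketch)")
parser.add_argument("--data-set", dest="data_set",
                    choices=DATA_SETS, default="3k_Disordered")

args = parser.parse_args(["--data-set=6k_Ordered"])
print(args.data_set)  # 6k_Ordered
```

Using `choices` makes argparse reject any value not in the documented list with a clear error message.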
Pilot3/P3B2/README.md

Lines changed: 1 addition & 1 deletion
````diff
@@ -1,4 +1,4 @@
-## P3B1: RNN-LSTM: A Generative Model for Clinical Path Reports
+## P3B2: RNN-LSTM: A Generative Model for Clinical Path Reports
 **Overview**:Given a sample corpus of biomedical text such as clinical reports, build a deep learning network that can automatically generate synthetic text documents with valid clinical context.
 
 **Relationship to core problem**:Labeled data is quite challenging to come by, specifically for patient data, since manual annotations are time consuming; hence, a core capability we intend to build is a “gold-standard” annotated data that is generated by deep learning networks to tune our deep text comprehension applications.
````

README.md

Lines changed: 1 addition & 1 deletion
````diff
@@ -15,7 +15,7 @@ Pilot3 (P3) benchmarks are formed out of problems and data at the population lev
 Each of the problems (P1,P2,P3) informed the implementation of specific benchmarks, so P1B3 would be benchmark three of problem 1.
 At this point, we will refer to a benchmark by it's problem area and benchmark number. So it's natural to talk of the P1B1 benchmark. Inside each benchmark directory, there exists a readme file that contains an overview of the benchmark, a description of the data and expected outcomes along with instructions for running the benchmark code.
 
-Over time, we will be adding implementations that make use of different tensor frameworks. The primary (baseline) benchmarks are implemented using tensorflow, and are named with '_baseline' in the name, for example p2b1_baseline.py.
+Over time, we will be adding implementations that make use of different tensor frameworks. The primary (baseline) benchmarks are implemented using keras, and are named with '_baseline' in the name, for example p3b1_baseline_keras2.py.
 
 Implementations that use alternative tensor frameworks, such as mxnet or neon, will have the name of the framework in the name. Examples can be seen in the P1B3 benchmark contribs/ directory, for example:
 p1b3_mxnet.py
````
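The naming convention corrected in this commit (problem number, benchmark number, then a variant such as `baseline_keras2` or a framework name like `mxnet`) can be illustrated with a small parser. The helper below is hypothetical and not part of the repository:

```python
import re

def parse_benchmark_name(filename):
    """Split a benchmark filename like 'p3b1_baseline_keras2.py' into
    (problem, benchmark, variant) per the documented naming convention.
    Illustrative helper only; not part of the repository."""
    m = re.match(r"p(\d+)b(\d+)_(\w+)\.py$", filename)
    if not m:
        raise ValueError(f"not a benchmark filename: {filename}")
    problem, bench, variant = m.groups()
    return int(problem), int(bench), variant

print(parse_benchmark_name("p3b1_baseline_keras2.py"))  # (3, 1, 'baseline_keras2')
print(parse_benchmark_name("p1b3_mxnet.py"))            # (1, 3, 'mxnet')
```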
