Note that `-XC` is optional; it is only necessary if you plan to perform remote visualization, e.g. viewing the output `.png` files from the [section below](#learning-curves-example-shots-and-roc-per-epoch). Trusted X11 forwarding can be used with `-Y` instead of `-X` and may prevent timeouts, but it disables X11 SECURITY extension controls. Compression `-C` reduces the bandwidth usage and may be useful on slow connections.
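For example, a login command using these flags might look like the following (the host name is assumed to be the TigerGPU head node; substitute your own `<netid>`):

```bash
# X11 forwarding (-X) with compression (-C); use -Y instead of -X for trusted forwarding
ssh -XC <netid>@tigergpu.princeton.edu
```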
### Sample installation on TigerGPU
Next, check out the source code from GitHub:
```bash
# clone the repository and install the package
git clone https://github.com/PPPLDeepLearning/plasma-python
cd plasma-python
python setup.py install
```
Run the installation inside an activated Anaconda environment, e.g. `my_env`, which should contain the Python packages listed in the `requirements-travis.txt` file.
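If such an environment does not exist yet, a minimal sketch of creating it could look like the following (the environment name `my_env`, the Python version, and the `anaconda3` module name are assumptions, not prescribed by the repository):

```bash
# Assumed example: create and activate a conda environment, then install
# the Python dependencies listed in the Travis requirements file.
module load anaconda3
conda create --name my_env python=3.6
conda activate my_env
pip install -r requirements-travis.txt
```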
### Common build issue: cluster's MPI library and `mpi4py`
A common issue is a mismatch between the Intel compiler found in your `PATH` and the one provided by the loaded module. With the modules loaded as above, you should see something like this:
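(The path below is only a hypothetical illustration; the exact location depends on the module versions installed on the cluster.)

```bash
$ which mpicc
/usr/local/openmpi/cuda-9.2/3.1.3/intel170/x86_64/bin/mpicc
```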
Especially note the presence of the CUDA directory in this path. This indicates that the loaded OpenMPI build is CUDA-aware.
If you `conda activate` the Anaconda environment **after** loading the OpenMPI module, your application will be built with the MPI library bundled with Anaconda, which performs worse on this cluster and can lead to errors. See [On Computing Well: Installing and Running ‘mpi4py’ on the Cluster](https://oncomputingwell.princeton.edu/2018/11/installing-and-running-mpi4py-on-the-cluster/) for a related discussion.
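A sketch of the safe ordering implied above (the module and environment names are assumptions, not the repository's prescribed values):

```bash
# Activate the Anaconda environment first...
module load anaconda3
conda activate my_env
# ...then load the cluster MPI stack, so its compilers take precedence
module load cudatoolkit openmpi intel
which mpicc   # should resolve to the cluster's OpenMPI, not Anaconda's
```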
## Understanding and preparing the input data
### Location of the data on Tigress
The JET and D3D datasets contain multi-modal time series of sensory measurements leading up to deleterious events called plasma disruptions. The datasets are located in the `/tigress/FRNN` project directory of the [GPFS](https://www.ibm.com/support/knowledgecenter/en/SSPT3X_3.0.0/com.ibm.swg.im.infosphere.biginsights.product.doc/doc/bi_gpfs_overview.html) filesystem on Princeton University clusters.
All the configuration parameters are summarised in `examples/conf.yaml`. In this section, we highlight the important ones used to control the input data.
Currently, FRNN is capable of working with JET and D3D data, as well as in a cross-machine regime. The switch is done in the configuration file:

```yaml
paths:
    ...
    data: 'jet_data_0D'
```

Use `d3d_data` for D3D signals, and use `jet_to_d3d_data` or `d3d_to_jet_data` for the cross-machine regime.
By default, FRNN will select, preprocess, and normalize all valid signals available in the above dataset. To choose only specific signals, use:

```yaml
paths:
    ...
    specific_signals: [q95,ip]
```

If left empty (`[]`), all valid signals defined on a machine will be used. Only set this variable if you need a custom set of signals.

Other parameters configured in `conf.yaml` include the batch size, learning rate, neural network topology, and special conditions for hyperparameter sweeps.
### Preprocessing the input data
```bash
cd examples/
python guarantee_preprocessed.py
```
This will preprocess the data and save rescaled copies of the signals in `/tigress/<netid>/processed_shots`, `/tigress/<netid>/processed_shotlists`, and `/tigress/<netid>/normalization`.
Preprocessing must be performed only once per dataset. For example, consider the following dataset specified in the config file `examples/conf.yaml`:

```yaml
paths:
    data: jet_data_0D
```
Preprocessing this dataset takes about 20 minutes in parallel and can normally be done on the cluster head node.
### Current signals and notations
Signal name | Description
--- | ---
q95 | q95 safety factor
ip | plasma current
li | internal inductance
lm | locked mode amplitude
dens | plasma density
energy | stored energy
pin | input power (beam for D3D)
pradtot | radiated power
pradcore | radiated power (core)
pradedge | radiated power (edge)
pechin | ECH input power (not always on)
betan | normalized beta
energydt | stored energy time derivative
torquein | input beam torque
tmamp1 | tearing mode amplitude (rotating 2/1)
tmamp2 | tearing mode amplitude (rotating 3/2)
tmfreq1 | tearing mode frequency (rotating 2/1)
tmfreq2 | tearing mode frequency (rotating 3/2)
ipdirect | plasma current direction
## Training and inference
Use the Slurm job scheduler to perform batch or interactive analysis on the TigerGPU cluster.
### Batch job
For non-interactive batch analysis, make sure to allocate exactly 1 MPI process per GPU. Save the following to a `slurm.cmd` file (or make changes to the existing `examples/slurm.cmd`):
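(The block below is only a rough sketch of such a script; the wall time, node count, module names, and environment name are placeholders, and `examples/slurm.cmd` in the repository is the authoritative version.)

```bash
#!/bin/bash
#SBATCH -t 01:00:00            # wall time (placeholder)
#SBATCH -N 2                   # number of nodes (placeholder)
#SBATCH --ntasks-per-node=4    # 1 MPI process per GPU
#SBATCH --gres=gpu:4           # 4 GPUs per TigerGPU node

# Load the same environment and modules used for the installation (names are assumptions)
module load anaconda3
conda activate my_env
module load cudatoolkit openmpi intel

cd examples/
mpirun python mpi_learn.py
```

The job is then submitted with `sbatch slurm.cmd` and can be monitored with `squeue -u <netid>`.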
Optionally, add an email notification option in the Slurm configuration about the job status:

```bash
#SBATCH --mail-type=ALL
```
### Interactive job
The interactive option is preferred for **debugging** or running in a **notebook**; for all other cases, batch jobs are preferred.
The workflow is to request an interactive session:
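(The resource values below are placeholders rather than prescribed settings; as with batch jobs, request 1 MPI task per GPU.)

```bash
salloc -N 1 --ntasks-per-node=4 --gres=gpu:4 -t 01:00:00
```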
[//]: # (This option appears to be redundant given the salloc options; "mpirun python mpi_learn.py" appears to work just the same.)
[//]: # (HOWEVER, "srun python mpi_learn.py", "srun --ntasks-per-node python mpi_learn.py", etc. NEVER works--- it just hangs without any output. Why? ANSWER: salloc starts a session on the node using srun under the covers, which may consume a GPU in the allocation. Next srun call will hang due to a lack of required resources. Wrapper fix to this may have been extended from mpirun to srun on 2019-10-22)
## Visualizing learning
A regular FRNN run will produce several outputs and callbacks.
### TensorBoard visualization
The TensorBoard integration currently supports graph visualization; histograms of weights, activations, and biases; and scalar variable summaries of losses and accuracies.
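One common workflow, sketched below with placeholder paths (the remote summary directory `/tigress/<netid>/Graph` is an assumption; use whatever directory your `conf.yaml` writes summaries to), is to mount the summary directory on your laptop over `sshfs` and run a locally installed TensorBoard against it:

```bash
# Mount the remote TensorBoard summary directory on the laptop (paths are placeholders)
sshfs <netid>@tigergpu.princeton.edu:/tigress/<netid>/Graph <destination folder name on your laptop>
# Point a locally installed TensorBoard at the mounted folder
tensorboard --logdir <destination folder name on your laptop> --port 6006
```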
If unmounting the destination folder on your laptop fails, you may see an error such as `mount_osxfuse: mount point <destination folder name on your laptop> is itself on a osxfuse volume`.
More aggressive options such as `umount -f <destination folder name on your laptop>` and alternative approaches may be necessary; see [discussion here](https://github.com/osxfuse/osxfuse/issues/45#issuecomment-21943107).
## Custom visualization
Besides TensorBoard summaries, you can visualize the accuracy of the trained FRNN model using the custom Python scripts and notebooks included in the repository.
### Learning curves, example shots, and ROC per epoch
You can produce the ROC curves for validation and test data as well as visualizations of shots by using:
```bash
cd examples/
python performance_analysis.py
```

The `performance_analysis.py` script takes the file produced by training the neural network as input and produces several `.png` files with plots as output.
In addition, you can check the scalar variable summaries for training loss, validation loss, and validation ROC logged at `/tigress/<netid>/csv_logs` (each run will produce a new log file with a timestamp in its name).
Sample notebooks for analyzing the files in this directory can be found in `examples/notebooks/`. For instance, the [LearningCurves.ipynb](https://github.com/PPPLDeepLearning/plasma-python/blob/master/examples/notebooks/LearningCurves.ipynb) notebook contains a variation on the following code snippet:
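(The snippet below is only a sketch; the CSV file name and the `train_loss` column name are assumptions based on the quantities described above.)

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical file name: each run writes a new timestamped CSV under csv_logs/
log_file = "/tigress/<netid>/csv_logs/csv_log_2019-10-22.csv"
df = pd.read_csv(log_file)

# Plot the per-epoch training loss; the column name is an assumption
plt.plot(df["train_loss"])
plt.xlabel("epoch")
plt.ylabel("training loss")
plt.savefig("train_loss.png")
```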
The resulting plot should match the `train_loss` plot in the Scalars tab of the TensorBoard summary.
#### Learning curve summaries per mini-batch
To extract per-mini-batch summaries, we require a finer granularity of checkpoint data than what is logged to the per-epoch lines of the `csv_logs/` files. We must directly use the output produced by FRNN and logged to the standard output stream. In the case of non-interactive Slurm batch jobs, it will all be contained in the Slurm output file, e.g. `slurm-3842170.out`. Refer to the following notebook to perform the analysis of the learning curve at the mini-batch level: [FRNN_scaling.ipynb](https://github.com/PPPLDeepLearning/plasma-python/blob/master/examples/notebooks/FRNN_scaling.ipynb)
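As a quick illustration (the exact format of FRNN's per-batch log lines is not reproduced here, so the search pattern is an assumption), the relevant lines could be pulled out of the Slurm output with something like:

```bash
# Show the first per-batch lines mentioning the loss in the Slurm output file
grep -i "loss" slurm-3842170.out | head -n 20
```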