
Commit d013980

Reorganize sections
1 parent 80f3a72 commit d013980


docs/PrincetonUTutorial.md

Lines changed: 77 additions & 74 deletions
@@ -1,6 +1,7 @@
-## Tutorials
-*Last updated 2019-10-16.*
+# TigerGPU Tutorial
+*Last updated 2019-10-24.*

+## Building the package
### Login to TigerGPU

First, log in to the TigerGPU cluster headnode via ssh:
@@ -9,7 +10,7 @@ ssh -XC <yourusername>@tigergpu.princeton.edu
```
Note that `-XC` is optional; it is only needed if you plan to perform remote visualization, e.g. viewing the output `.png` files from the [section below](#Learning-curves-and-ROC-per-epoch). Trusted X11 forwarding can be used with `-Y` instead of `-X` and may prevent timeouts, but it disables X11 SECURITY extension controls. Compression (`-C`) reduces bandwidth usage and may be useful on slow connections.
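For example, a login using trusted forwarding with compression (an illustration of the flags discussed above, not a command taken from the tutorial itself) would be:
```
ssh -YC <yourusername>@tigergpu.princeton.edu
```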

-### Sample usage on TigerGPU
+### Sample installation on TigerGPU

Next, check out the source code from GitHub:
```
@@ -48,7 +49,7 @@ python setup.py install

Here, `my_env` should contain the Python packages listed in the `requirements-travis.txt` file.
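As a hedged sketch of how such an environment might be created (the Python version and the use of `pip` against `requirements-travis.txt` are assumptions, not the tutorial's exact steps):
```bash
# Illustrative only: create and populate the environment referenced above,
# assuming you are in the root of the cloned repository.
conda create -n my_env python=3.6
conda activate my_env
pip install -r requirements-travis.txt
```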

-#### Common issue
+### Common build issue: cluster's MPI library and `mpi4py`

A common issue is a mismatch between the Intel compiler found in your `PATH` and the one used by the loaded module. With the modules loaded as above,
you should see something like this:
@@ -60,7 +61,9 @@ Especially note the presence of the CUDA directory in this path. This indicates

If you `conda activate` the Anaconda environment **after** loading the OpenMPI library, your application would be built with the MPI library from Anaconda, which has worse performance on this cluster and could lead to errors. See [On Computing Well: Installing and Running ‘mpi4py’ on the Cluster](https://oncomputingwell.princeton.edu/2018/11/installing-and-running-mpi4py-on-the-cluster/) for a related discussion.
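As an illustrative check (the expected locations are assumptions based on the description above, not output captured from the cluster):
```bash
# Confirm the MPI wrappers come from the cluster's CUDA-aware OpenMPI module,
# not from the Anaconda environment.
which mpicc
which mpirun
# Both paths should point at the loaded OpenMPI module (note the CUDA directory),
# not at a location inside your conda environment.
```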

-#### Location of the data on Tigress
+
+## Understanding and preparing the input data
+### Location of the data on Tigress

The JET and D3D datasets contain multi-modal time series of sensory measurements leading up to deleterious events called plasma disruptions. The datasets are located in the `/tigress/FRNN` project directory of the [GPFS](https://www.ibm.com/support/knowledgecenter/en/SSPT3X_3.0.0/com.ibm.swg.im.infosphere.biginsights.product.doc/doc/bi_gpfs_overview.html) filesystem on Princeton University clusters.

@@ -71,28 +74,75 @@ ln -s /tigress/FRNN/shot_lists shot_lists
ln -s /tigress/FRNN/signal_data signal_data
```
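A quick sanity check that the links resolve (illustrative, not part of the original instructions):
```bash
# Both symlinks should point into the shared /tigress/FRNN project directory.
ls -l shot_lists signal_data
```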

-#### Preprocessing
+### Configuring the dataset
+All the configuration parameters are summarised in `examples/conf.yaml`. In this section, we highlight the important ones used to control the input data.
+
+Currently, FRNN is capable of working with JET and D3D data as well as the cross-machine regime. The switch is done in the configuration file:
+
+```yaml
+paths:
+    ...
+    data: 'jet_data_0D'
+```
+Use `d3d_data` for D3D signals, or `jet_to_d3d_data` / `d3d_to_jet_data` for the cross-machine regime.
+
+By default, FRNN will select, preprocess, and normalize all valid signals available in the above dataset. To choose only specific signals, use:
+```yaml
+paths:
+    ...
+    specific_signals: [q95,ip]
+```
+If left empty (`[]`), FRNN will use all valid signals defined for the machine. Only set this variable if you need a custom set of signals.
+
+Other parameters configured in `conf.yaml` include the batch size, learning rate, neural network topology, and special conditions for hyperparameter sweeps.
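For orientation, a hedged sketch of how the options above fit together in `conf.yaml` (only `data` and `specific_signals` are taken from the text; the layout and the `...` placeholder are assumptions about the file's exact schema):
```yaml
paths:
    ...
    data: 'd3d_data'              # or 'jet_data_0D', 'jet_to_d3d_data', 'd3d_to_jet_data'
    specific_signals: [q95, ip]   # leave as [] to use all valid signals for the machine
```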
+
+### Preprocessing the input data

```bash
cd examples/
python guarantee_preprocessed.py
```
This will preprocess the data and save rescaled copies of the signals in `/tigress/<netid>/processed_shots`, `/tigress/<netid>/processed_shotlists`, and `/tigress/<netid>/normalization`.

-You would only have to run preprocessing once for each dataset. The dataset is specified in the config file `examples/conf.yaml`:
+Preprocessing must be performed only once per dataset. For example, consider the following dataset specified in the config file `examples/conf.yaml`:
```yaml
paths:
    data: jet_data_0D
```
Preprocessing this dataset takes about 20 minutes in parallel and can normally be done on the cluster headnode.
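A hedged way to confirm that the preprocessing outputs landed where expected (illustrative; not a command from the original tutorial):
```bash
# The three output directories are created under your /tigress/<netid> space, as described above.
ls /tigress/<netid>/processed_shots /tigress/<netid>/processed_shotlists /tigress/<netid>/normalization
```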

-#### Training and inference
+### Current signals and notations
+
+Signal name | Description
+--- | ---
+q95 | q95 safety factor
+ip | plasma current
+li | internal inductance
+lm | Locked mode amplitude
+dens | Plasma density
+energy | stored energy
+pin | Input Power (beam for d3d)
+pradtot | Radiated Power
+pradcore | Radiated Power Core
+pradedge | Radiated Power Edge
+pechin | ECH input power, not always on
+betan | Normalized Beta
+energydt | stored energy time derivative
+torquein | Input Beam Torque
+tmamp1 | Tearing Mode amplitude (rotating 2/1)
+tmamp2 | Tearing Mode amplitude (rotating 3/2)
+tmfreq1 | Tearing Mode frequency (rotating 2/1)
+tmfreq2 | Tearing Mode frequency (rotating 3/2)
+ipdirect | plasma current direction
+
+## Training and inference

-Use Slurm scheduler to perform batch or interactive analysis on TigerGPU cluster.
+Use the Slurm job scheduler to perform batch or interactive analysis on the TigerGPU cluster.

-##### Batch analysis
+### Batch job

-For batch analysis, make sure to allocate 1 MPI process per GPU. Save the following to `slurm.cmd` file (or make changes to the existing `examples/slurm.cmd`):
+For non-interactive batch analysis, make sure to allocate exactly 1 MPI process per GPU. Save the following to a `slurm.cmd` file (or make changes to the existing `examples/slurm.cmd`):

```bash
#!/bin/bash
@@ -135,7 +185,7 @@ Optionally, add an email notification option in the Slurm configuration about th
#SBATCH --mail-type=ALL
```
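The full `examples/slurm.cmd` is elided in this diff; a minimal sketch consistent with the surrounding text might look like the following (the walltime, node/GPU counts, module names, and launch line are assumptions, not the repository's actual file):
```bash
#!/bin/bash
#SBATCH -t 01:00:00            # walltime (assumed)
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=4    # exactly 1 MPI process per GPU, as recommended above
#SBATCH --gres=gpu:4           # GPUs per node (assumed)
#SBATCH --mail-type=ALL        # optional email notification, as noted above

module load anaconda3          # module names are placeholders/assumptions
conda activate my_env

cd examples/
mpirun python mpi_learn.py     # launch via mpirun, matching the notes further below
```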

-##### Interactive analysis
+### Interactive job

The interactive option is preferred for **debugging** or running in a **notebook**; for all other cases, batch mode is preferred.
The workflow is to request an interactive session:
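The exact request is elided in this hunk; a hedged illustration of such a session (partition defaults, time limit, and GPU counts are assumptions) could be:
```bash
# Request an interactive allocation with 1 MPI task per GPU (values are assumptions).
salloc --nodes=1 --ntasks-per-node=4 --gres=gpu:4 -t 02:00:00

# Inside the allocation, launch with mpirun rather than srun
# (the comments below record that srun can hang in this setup).
cd examples/
mpirun python mpi_learn.py
```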
@@ -165,65 +215,13 @@ $ which mpirun

[//]: # (This option appears to be redundant given the salloc options; "mpirun python mpi_learn.py" appears to work just the same.)

-[//]: # (HOWEVER, "srun python mpi_learn.py", "srun --ntasks-per-node python mpi_learn.py", etc. NEVER works--- it just hangs without any output. Why?)
-
-[//]: # (Consistent with https://www.open-mpi.org/faq/?category=slurm ?)
-
-[//]: # (certain output seems to be repeated by ntasks-per-node, e.g. echoing the conf.yaml. Expected? Or, replace the print calls with print_unique)
+[//]: # (HOWEVER, "srun python mpi_learn.py", "srun --ntasks-per-node python mpi_learn.py", etc. NEVER works--- it just hangs without any output. Why? ANSWER: salloc starts a session on the node using srun under the covers, which may consume a GPU in the allocation. The next srun call will then hang due to a lack of required resources. The wrapper fix for this may have been extended from mpirun to srun on 2019-10-22)

-
-### Understanding the data
-
-All the configuration parameters are summarised in `examples/conf.yaml`. Highlighting the important ones to control the data.
-Currently, FRNN is capable of working with JET and D3D data as well as cross-machine regime. The switch is done in the configuration file:
-
-```yaml
-paths:
-    ...
-    data: 'jet_data_0D'
-```
-use `d3d_data` for D3D signals, use `jet_to_d3d_data` ir `d3d_to_jet_data` for cross-machine regime.
-
-By default, FRNN will select, preprocess and normalize all valid signals available. To chose only specific signals use:
-```yaml
-paths:
-    ...
-    specific_signals: [q95,ip]
-```
-if left empty `[]` will use all valid signals defined on a machine. Only use if need a custom set.
-
-Other parameters configured in the conf.yaml include batch size, learning rate, neural network topology and special conditions foir hyperparameter sweeps.
-
-### Current signals and notations
-
-Signal name | Description
---- | ---
-q95 | q95 safety factor
-ip | plasma current
-li | internal inductance
-lm | Locked mode amplitude
-dens | Plasma density
-energy | stored energy
-pin | Input Power (beam for d3d)
-pradtot | Radiated Power
-pradcore | Radiated Power Core
-pradedge | Radiated Power Edge
-pechin | ECH input power, not always on
-pechin | ECH input power, not always on
-betan | Normalized Beta
-energydt | stored energy time derivative
-torquein | Input Beam Torque
-tmamp1 | Tearing Mode amplitude (rotating 2/1)
-tmamp2 | Tearing Mode amplitude (rotating 3/2)
-tmfreq1 | Tearing Mode frequency (rotating 2/1)
-tmfreq2 | Tearing Mode frequency (rotating 3/2)
-ipdirect | plasma current direction
-
-### Visualizing learning
+## Visualizing learning

A regular FRNN run will produce several outputs and callbacks.

-#### TensorBoard visualization
+### TensorBoard visualization

It currently supports graph visualization, histograms of weights, activations, and biases, and scalar variable summaries of losses and accuracies.
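As a hedged illustration of inspecting these summaries (the summary directory is a placeholder; the tutorial's exact path is not visible in this diff):
```bash
# Point TensorBoard at the directory where the run writes its event files,
# then open http://localhost:6006 in a browser.
tensorboard --logdir <path to the TensorBoard summary folder> --port 6006
```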

@@ -262,19 +260,23 @@ mount_osxfuse: mount point <destination folder name on your laptop> is itself on

More aggressive options such as `umount -f <destination folder name on your laptop>` and alternative approaches may be necessary; see [discussion here](https://github.com/osxfuse/osxfuse/issues/45#issuecomment-21943107).

-#### Learning curves and ROC per epoch
+## Custom visualization
+Besides TensorBoard summaries, you can visualize the accuracy of the trained FRNN model using the custom Python scripts and notebooks included in the repository.
+
+### Learning curves, example shots, and ROC per epoch

-Besides TensorBoard summaries you can produce the ROC curves for validation and test data as well as visualizations of shots:
+You can produce ROC curves for the validation and test data, as well as visualizations of shots, by running:
```
cd examples/
python performance_analysis.py
```
-this uses the resulting file produced as a result of training the neural network as an input, and produces several `.png` files with plots as an output.
+The `performance_analysis.py` script takes the file produced by training the neural network as input and produces several `.png` files with plots as output.

-In addition, you can check the scalar variable summaries for training loss, validation loss and validation ROC logged at `/tigress/<netid>/csv_logs` (each run will produce a new log file with a timestamp in name).
+[//]: # (Add details about sig_161308test.npz, disruptive_alarms_test.npz, 4x metric* png, accum_disruptions.png, test_roc.npz)

-A sample code to analyze can be found in `examples/notebooks`. For instance:
+In addition, you can check the scalar variable summaries for training loss, validation loss, and validation ROC logged at `/tigress/<netid>/csv_logs` (each run produces a new log file with a timestamp in its name).

+Sample notebooks for analyzing the files in this directory can be found in `examples/notebooks/`. For instance, the [LearningCurves.ipynb](https://github.com/PPPLDeepLearning/plasma-python/blob/master/examples/notebooks/LearningCurves.ipynb) notebook contains a variation on the following code snippet:
```python
import pandas as pd
import numpy as np
@@ -297,7 +299,8 @@ p.line(data['epoch'].values, data['train_loss'].values, legend="Test description
p.legend.location = "top_right"
show(p, notebook_handle=True)
```
+The resulting plot should match the `train_loss` plot in the Scalars tab of the TensorBoard summary.
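Since the middle of the snippet is elided in this diff, here is a hedged, self-contained variant (the CSV path and column names are assumptions based on the surrounding text, and `legend_label` is used in place of the older `legend` keyword):
```python
import pandas as pd
from bokeh.io import output_notebook, show
from bokeh.plotting import figure

output_notebook()

# Path and column names are assumptions; point this at one of the timestamped
# log files under /tigress/<netid>/csv_logs described above.
data = pd.read_csv('/tigress/<netid>/csv_logs/<timestamped_run>.csv')

p = figure(title='FRNN learning curve', x_axis_label='epoch', y_axis_label='loss')
p.line(data['epoch'].values, data['train_loss'].values, legend_label='train_loss')
p.legend.location = "top_right"
show(p, notebook_handle=True)
```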

-### Learning curve summaries per mini-batch
+#### Learning curve summaries per mini-batch

-To extract per mini-batch summaries, use the output produced by FRNN logged to the standard out (in case of the batch jobs, it will all be contained in the Slurm output file). Refer to the following notebook to perform the analysis of learning curve on a mini-batch level: [FRNN_scaling.ipynb](https://github.com/PPPLDeepLearning/plasma-python/blob/master/examples/notebooks/FRNN_scaling.ipynb)
+To extract per mini-batch summaries, we need finer-grained data than what is logged to the per-epoch lines of the `csv_logs/` files, so we must use the output that FRNN writes to the standard output stream. In the case of non-interactive Slurm batch jobs, this is all contained in the Slurm output file, e.g. `slurm-3842170.out`. Refer to the following notebook to perform the analysis of the learning curve at the mini-batch level: [FRNN_scaling.ipynb](https://github.com/PPPLDeepLearning/plasma-python/blob/master/examples/notebooks/FRNN_scaling.ipynb)
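Purely as an illustrative sketch (the per-batch log format is whatever FRNN prints, so the regex below is an assumption; the FRNN_scaling.ipynb notebook remains the reference for this analysis):
```python
# Hypothetical parser: pull per-batch loss values out of the Slurm stdout file.
import re

batch_losses = []
with open('slurm-3842170.out') as f:              # Slurm output file from the batch job
    for line in f:
        match = re.search(r'loss[:=]\s*([0-9.eE+-]+)', line)   # assumed log format
        if match:
            batch_losses.append(float(match.group(1)))
print(f'parsed {len(batch_losses)} per-batch loss values')
```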
