# Traverse Tutorial
*Last updated July 2021.*

## Building the package
### Login to Traverse

First, log in to the Traverse cluster headnode via ssh:
```
ssh -XC <yourusername>@traverse.princeton.edu
```
Note, `-XC` is optional; it is only necessary if you plan to perform remote visualization, e.g. viewing the output `.png` files produced in the [learning curves section](#learning-curves-example-shots-and-roc-per-epoch) below. Trusted X11 forwarding can be used with `-Y` instead of `-X` and may prevent timeouts, but it disables X11 SECURITY extension controls. Compression (`-C`) reduces bandwidth usage and may be useful on slow connections.

### Sample installation on Traverse

Add the following environment variable:

```bash
export LD_PRELOAD=/usr/lib64/libpmix.so.2
```

The recommended way is to add this line to your `~/.bashrc` and then reload the environment:

```bash
source ~/.bashrc
```

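For example, you could append the line and reload in one step (a sketch, assuming the default bash shell and that the variable is not already set in `~/.bashrc`):

```bash
# Append the LD_PRELOAD setting to ~/.bashrc and reload it
echo 'export LD_PRELOAD=/usr/lib64/libpmix.so.2' >> ~/.bashrc
source ~/.bashrc
```
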
Next, check out the source code from GitHub:
```
git clone https://github.com/PPPLDeepLearning/plasma-python
cd plasma-python
git checkout tf2
```

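You can confirm that the checkout succeeded and that you are on the `tf2` branch:

```bash
# Should print "tf2"
git rev-parse --abbrev-ref HEAD
```
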
After that, create an isolated Anaconda environment and load CUDA drivers, an MPI compiler, and the HDF5 library. The suggested modules are included with the repository, so simply sourcing `plasma-python/envs/traverse.cmd` will load them into your path. Traverse uses a PowerPC architecture, which is not as widely supported by many common libraries. The `python=3.6` option below pins the Python version of this Anaconda environment to the older Python 3.6, which helps later when installing libraries.

```
#cd plasma-python
module load anaconda3
conda env create --name my_env --file envs/requirements-traverse.yaml python=3.6
conda activate my_env
```

Edit `envs/traverse.cmd` and change the `conda activate` command to `conda activate my_env`. Then run:

```
source envs/traverse.cmd
```
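
If you prefer to script that edit, something like the following should work (a sketch, assuming `traverse.cmd` contains a single `conda activate <env>` line):

```bash
# Hypothetical one-liner: point the activate line at my_env, then source the file
sed -i 's/^conda activate .*/conda activate my_env/' envs/traverse.cmd
source envs/traverse.cmd
```
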
As of the latest update of this document (Summer 2021), the above modules correspond to the following versions on the Traverse system, as reported by `module list`:
```
Currently Loaded Modulefiles:
 1) anaconda3/2020.7    3) cudnn/cuda-9.2/7.6.3            5) hdf5/gcc/openmpi-1.10.2/1.10.0
 2) cudatoolkit/10.2    4) openmpi/cuda-11.0/gcc/4.0.4/64
```

Next, install the `plasma-python` package:

```bash
#conda activate my_env
python setup.py install --user
```

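To verify the installation, you could try importing the package from Python (assuming the top-level package installed by `setup.py` is named `plasma`, as in the repository layout):

```bash
# Quick sanity check of the installed package (package name assumed to be `plasma`)
python -c "import plasma; print(plasma.__file__)"
```
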
### Common runtime issue: when to load the environment and when to call `sbatch`
When queueing jobs on Traverse or running Slurm-managed scripts, *DO NOT* load your Anaconda environment first; doing so causes a module loading issue. It is *highly* recommended to build `plasma-python` in one terminal with the Anaconda environment loaded and to run it in another terminal without the environment loaded. Alternatively, calling `module purge` before using Slurm avoids the issue.

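For example, a safe submission sequence from a fresh shell (without the Anaconda environment active) might look like this, assuming the `slurm.cmd` batch script described below:

```bash
# Clear any loaded modules, then submit the batch job
module purge
sbatch slurm.cmd
```
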
### Common build issue: creating the Anaconda environment fails
On Traverse, `pytorch` has been observed to not install correctly. By default it is commented out, but if that is not the case, the quick fix is to skip it by commenting out line 17 in `envs/requirements-traverse.yaml`:

```
 7 dependencies:
 8   - python>=3.6.8
 9   - cython
10   - pip
11   - scipy
12   - pandas
13   - flake8
14   - h5py<3.0.0
15   - pyparsing
16   - pyyaml
17   #- pytorch>1.3
```

### Common build issue: `xgboost` not installing
On Traverse, `xgboost` does not build correctly. Ignore it for now by commenting out line 35 in `setup.py`, then rebuild as above.

```
30     install_requires=[
31         'pathos',
32         'matplotlib',
33         'hyperopt',
34         'mpi4py',
35         #'xgboost',
36         'scikit-learn',
37         'joblib',
38     ],
```

### Common build issue: cluster's MPI library and `mpi4py`

A common issue is a mismatch between the MPI compiler found in your `PATH` and the one provided by the loaded module. With the modules loaded as above, you should see something like this (as of summer 2021):

```
$ which mpicc
/usr/local/openmpi/cuda-11.0/4.0.4/gcc/x86_64/bin/mpicc
```

Note in particular the presence of the CUDA directory in this path. It indicates that the loaded OpenMPI library is [CUDA-aware](https://www.open-mpi.org/faq/?category=runcuda).

If you `conda activate` the Anaconda environment **after** loading the OpenMPI module, your application will be built with the MPI library from Anaconda, which has worse performance on this cluster and can lead to errors. See [mpi4py on HPC Clusters](https://researchcomputing.princeton.edu/support/knowledge-base/mpi4py) for a related discussion.

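One way to confirm which MPI library `mpi4py` was built against is to print its build configuration (a sketch; `mpi4py.get_config()` reports the MPI compiler used at build time):

```bash
# The reported mpicc should live under /usr/local/openmpi, not under the Anaconda prefix
python -c "import mpi4py; print(mpi4py.get_config())"
which mpirun
```
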
## Understanding and preparing the input data
### Location of the data on Traverse

The Tigress filesystem is also available on Traverse, so this step is identical to the TigerGPU setup. The JET and D3D datasets contain multi-modal time series of sensory measurements leading up to deleterious events called plasma disruptions. The datasets are located in the `/tigress/FRNN` project directory of the [GPFS](https://www.ibm.com/support/knowledgecenter/en/SSPT3X_3.0.0/com.ibm.swg.im.infosphere.biginsights.product.doc/doc/bi_gpfs_overview.html) filesystem on Princeton University clusters.

For convenience, create the following symbolic links:
```bash
cd /tigress/<netid>
ln -s /tigress/FRNN/shot_lists shot_lists
ln -s /tigress/FRNN/signal_data signal_data
```

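You can confirm that the links resolve correctly:

```bash
# Both links should point into the /tigress/FRNN project directory
ls -l /tigress/<netid>/shot_lists /tigress/<netid>/signal_data
```
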
### Configuring the dataset
All of the configuration parameters are summarised in `examples/conf.yaml`. In this section, we highlight the important ones used to control the input data.

Currently, FRNN is capable of working with JET and D3D data as well as the cross-machine regime. The switch is done in the configuration file:
```yaml
paths:
    ...
    data: 'jet_0D'
```

Older yaml files kept for archival purposes denote this dataset as follows:
```yaml
paths:
    ...
    data: 'jet_data_0D'
```
Use `d3d_data` for D3D signals, and `jet_to_d3d_data` or `d3d_to_jet_data` for the cross-machine regime.

By default, FRNN will select, preprocess, and normalize all valid signals available in the above dataset. To choose only specific signals, use:
```yaml
paths:
    ...
    specific_signals: [q95,ip]
```
If left empty (`[]`), all valid signals defined for the machine will be used. Only set this variable if you need a custom set of signals.

Other parameters configured in `conf.yaml` include the batch size, learning rate, neural network topology, and special conditions for hyperparameter sweeps.

On Traverse, the data is stored on the `tigress` filesystem. You will probably need to modify `conf.yaml` to point there by setting:
```yaml
fs_path: '/tigress/'
...
fs_path_output: '/tigress/'
```

### Preprocessing the input data

```bash
cd examples/
python guarantee_preprocessed.py
```
This will preprocess the data and save rescaled copies of the signals in `/tigress/<netid>/processed_shots`, `/tigress/<netid>/processed_shotlists`, and `/tigress/<netid>/normalization`.

Preprocessing needs to be performed only once per dataset. For example, consider the following dataset specified in the config file `examples/conf.yaml`:
```yaml
paths:
    data: jet_0D
```
This dataset takes about 20 minutes to preprocess in parallel, and the preprocessing can normally be done on the cluster headnode.

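Once it finishes, you can confirm that the preprocessed outputs were written to the directories listed above:

```bash
# Each directory should now contain the rescaled signals, shot lists, and normalization data
ls /tigress/<netid>/processed_shots /tigress/<netid>/processed_shotlists /tigress/<netid>/normalization
```
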
### Current signals and notations

Signal name | Description
--- | ---
q95 | q95 safety factor
ip | plasma current
li | internal inductance
lm | locked mode amplitude
dens | plasma density
energy | stored energy
pin | input power (beam for D3D)
pradtot | radiated power
pradcore | radiated power, core
pradedge | radiated power, edge
pechin | ECH input power, not always on
betan | normalized beta
energydt | stored energy time derivative
torquein | input beam torque
tmamp1 | tearing mode amplitude (rotating 2/1)
tmamp2 | tearing mode amplitude (rotating 3/2)
tmfreq1 | tearing mode frequency (rotating 2/1)
tmfreq2 | tearing mode frequency (rotating 3/2)
ipdirect | plasma current direction

## Training and inference

Use the Slurm job scheduler to perform batch or interactive analysis on the Traverse cluster.

### Batch job

For non-interactive batch analysis, make sure to allocate exactly 1 MPI process per GPU. Save the following to a `slurm.cmd` file (or make changes to the existing `examples/slurm.cmd`):

```bash
#!/bin/bash
#SBATCH -t 01:30:00
#SBATCH -N X
#SBATCH --ntasks-per-node=4
#SBATCH --ntasks-per-socket=2
#SBATCH --gres=gpu:4
#SBATCH -c 4
#SBATCH --mem-per-cpu=0

module load anaconda3
conda activate my_env
export OMPI_MCA_btl="tcp,self,vader"
module load cudatoolkit cudnn
module load openmpi/cuda-8.0/intel-17.0/3.0.0/64
module load intel
module load hdf5/intel-17.0/intel-mpi/1.10.0

srun python mpi_learn.py
```
where `X` is the number of nodes for distributed training and the total number of GPUs is `X * 4`. This configuration guarantees 1 MPI process per GPU, regardless of the value of `X`.

Update the `num_gpus` value in `conf.yaml` to correspond to the total number of GPUs specified for your Slurm allocation.

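Before submitting, you can sanity-check that the value matches your allocation (assuming you are in the `examples/` directory where `conf.yaml` lives):

```bash
# The reported value should equal (number of nodes) * 4 on Traverse
grep -n "num_gpus" conf.yaml
```
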
Submit the job with (assuming you are still in the `examples/` subdirectory):
```bash
#cd examples
sbatch slurm.cmd
```

Monitor its completion via:
```bash
squeue -u <netid>
```
Optionally, add email notification options to the Slurm script to be notified about job completion:
```
#SBATCH --mail-user=<netid>@princeton.edu
#SBATCH --mail-type=ALL
```

### Interactive job

The interactive option is preferred for **debugging** or running in a **notebook**; for all other cases, batch mode is preferred.
The workflow is to request an interactive session:

```bash
salloc -N [X] --ntasks-per-node=4 --ntasks-per-socket=2 --gres=gpu:4 -c 4 --mem-per-cpu=0 -t 0-6:00
```

[//]: # (Note, the modules might not/are not inherited from the shell that spawns the interactive Slurm session. Need to reload anaconda module, activate environment, and reload other compiler/library modules)

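The modules and conda environment are not inherited from the shell that spawned the interactive session, so re-load them inside the session. A minimal sketch, assuming the environment created in the build steps above (`envs/traverse.cmd` re-loads the CUDA, OpenMPI, and HDF5 modules and activates the conda environment):

```bash
# Inside the interactive session: re-load modules and the conda environment
module load anaconda3
source envs/traverse.cmd
```
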
After re-loading the modules and reactivating your conda environment, confirm that the correct CUDA-aware OpenMPI library is in your interactive Slurm session's shell search path:
```bash
$ which mpirun
/usr/local/openmpi/cuda-11.0/4.0.4/gcc/x86_64/bin/mpirun
```
Then, launch the application from the command line:

```bash
mpirun -N 4 python mpi_learn.py
```
where `-N` is a synonym for `-npernode` in OpenMPI. Do **not** use `srun` to launch the job inside an interactive session. If you encounter an error such as "unrecognized argument N", it is likely that your modules are incorrect and point to an Intel MPI distribution instead of CUDA-aware OpenMPI. Intel MPI is based on MPICH, which does not offer the `-npernode` option. You can confirm this by checking:
```bash
$ which mpirun
/opt/intel/compilers_and_libraries_2019.3.199/linux/mpi/intel64/bin/mpirun
```

[//]: # (This option appears to be redundant given the salloc options; "mpirun python mpi_learn.py" appears to work just the same.)

[//]: # (HOWEVER, "srun python mpi_learn.py", "srun --ntasks-per-node python mpi_learn.py", etc. NEVER works--- it just hangs without any output. Why? ANSWER: salloc starts a session on the node using srun under the covers, which may consume a GPU in the allocation. Next srun call will hang due to a lack of required resources. Wrapper fix to this may have been extended from mpirun to srun on 2019-10-22)

## Visualizing learning

A regular FRNN run will produce several outputs and callbacks.

### TensorBoard visualization

FRNN currently supports graph visualization, histograms of weights, activations, and biases, and scalar variable summaries of losses and accuracies.

The summaries are written in real time to `/tigress/<netid>/Graph`. On macOS, you can set up an `sshfs` mount of the [`/tigress`](https://researchcomputing.princeton.edu/storage/tigress) filesystem and view those summaries in your browser.

To install SSHFS on a macOS system, you can follow the instructions at
https://github.com/osxfuse/osxfuse/wiki/SSHFS
or use [Homebrew](https://brew.sh/): `brew cask install osxfuse; brew install sshfs`. Note, to install and/or use `osxfuse` you may need to enable its kernel extension in: System Preferences → Security & Privacy → General.

After installation, execute:
```
sshfs -o allow_other,defer_permissions <netid>@traverse.princeton.edu:/tigress/<netid>/ <destination folder name on your laptop>/
```
The local destination folder may be an existing (possibly nonempty) folder. If it does not exist, SSHFS will create the folder. You can confirm that the operation succeeded via the `mount` command, which prints the list of currently mounted filesystems when given no arguments.

Launch TensorBoard locally (assuming that it is installed on your local computer):
```
python -m tensorboard.main --logdir <destination folder name on your laptop>/Graph
```
A URL should be emitted to the console output. Navigate to this link in your browser. If the TensorBoard interface does not open, try directing your browser to `localhost:6006`.

You should see the TensorBoard dashboard with the FRNN graph and the scalar summaries of the run.

When you are finished analyzing the summaries in TensorBoard, you may wish to unmount the remote filesystem:
```
umount <destination folder name on your laptop>
```
The local destination folder will remain present, but it will no longer contain the remote files; it returns to its previous state, either empty or containing the original local files. Note, the `umount` command is appropriate for macOS systems; some Linux systems instead offer the `fusermount` command.

Unmounting may also be needed when the SSH connection is lost and an existing mount point cannot be re-mounted, e.g. with errors such as:
```
mount_osxfuse: mount point <destination folder name on your laptop> is itself on a OSXFUSE volume
```

More aggressive options such as `umount -f <destination folder name on your laptop>` and alternative approaches may be necessary; see the [discussion here](https://github.com/osxfuse/osxfuse/issues/45#issuecomment-21943107).

## Custom visualization
Besides TensorBoard summaries, you can visualize the accuracy of the trained FRNN model using the custom Python scripts and notebooks included in the repository.

### Learning curves, example shots, and ROC per epoch

You can produce ROC curves for validation and test data as well as visualizations of shots by using:
```
cd examples/
python performance_analysis.py
```
The `performance_analysis.py` script takes the file produced by training the neural network as input and produces several `.png` files with plots as output.

[//]: # (Add details about sig_161308test.npz, disruptive_alarms_test.npz, 4x metric* png, accum_disruptions.png, test_roc.npz)

In addition, you can check the scalar variable summaries for training loss, validation loss, and validation ROC logged in `/tigress/<netid>/csv_logs` (each run produces a new log file with a timestamp in its name).

Sample notebooks for analyzing the files in this directory can be found in `examples/notebooks/`. For instance, the [LearningCurves.ipynb](https://github.com/PPPLDeepLearning/plasma-python/blob/master/examples/notebooks/LearningCurves.ipynb) notebook contains a variation on the following code snippet:
```python
import pandas as pd
import numpy as np
from bokeh.plotting import figure, show, output_file, save

# Read the per-epoch training log produced by the FRNN run
data = pd.read_csv("<destination folder name on your laptop>/csv_logs/<name of the log file>.csv")

from bokeh.io import output_notebook
output_notebook()

from bokeh.models import Range1d
# optionally set the plotting range
#left, right, bottom, top = -0.1, 31, 0.005, 1.51

p = figure(title="Learning curve", y_axis_label="Training loss", x_axis_label='Epoch number')  # ,y_axis_type="log")
#p.x_range = Range1d(left, right)
#p.y_range = Range1d(bottom, top)

# Note: recent bokeh versions use `legend_label` instead of the deprecated `legend` keyword
p.line(data['epoch'].values, data['train_loss'].values, legend_label="Test description",
       line_color="tomato", line_dash="dotdash", line_width=2)
p.legend.location = "top_right"
show(p, notebook_handle=True)
```
The resulting plot should match the `train_loss` plot in the Scalars tab of the TensorBoard summary.

#### Learning curve summaries per mini-batch

To extract per mini-batch summaries, we need a finer granularity of checkpoint data than what is logged to the per-epoch lines of the `csv_logs/` files. We must directly use the output produced by FRNN and logged to the standard output stream. In the case of non-interactive Slurm batch jobs, this output is contained in the Slurm output file, e.g. `slurm-3842170.out`. Refer to the following notebook to perform the analysis of the learning curve at the mini-batch level: [FRNN_scaling.ipynb](https://github.com/PPPLDeepLearning/plasma-python/blob/master/examples/notebooks/FRNN_scaling.ipynb)