Commit c654d88

Merge branch 'main' into srde_rework
2 parents a54fa7b + 2f56c50

File tree: 14 files changed, +920 −7 lines

docs/hpc/01_getting_started/02_getting_and_renewing_an_account.md

Lines changed: 1 addition & 1 deletion

```diff
 This section deals with the eligibility for getting HPC accounts and the process to create new accounts and renew existing ones, and touches on access policies after graduation from NYU and access for non-NYU researchers.

-:::note
+:::info

 - All **sponsored accounts** will be created for a period of 12 months, at which point a renewal process is required to continue to use the clusters
 - Faculty, students and staff from the **NYU School of Medicine** require the sponsorship of an eligible NYU faculty member to access the NYU HPC clusters
```
Lines changed: 1 addition & 1 deletion

```diff
 {
   "label": "Navigating the Cluster",
-  "position": 2,
+  "position": 3,
 }
```
Lines changed: 1 addition & 0 deletions

```diff
 {
   "label": "Training and Support",
+  "position": 4,
 }
```

docs/hpc/05_slurm/_category_.json

Lines changed: 1 addition & 1 deletion

```diff
 {
   "label": "Slurm",
-  "position": 4,
+  "position": 5,
 }
```

docs/hpc/08_ood/_category_.json

Lines changed: 3 additions & 0 deletions (new file)

```diff
+{
+  "label": "Open OnDemand",
+}
```
Lines changed: 1 addition & 0 deletions (new file)

```diff
+# Conda Environments (Python, R)
```

docs/hpc/08_ood/datasets.md

Lines changed: 1 addition & 0 deletions (new file)

```diff
+# Datasets
```

docs/hpc/08_ood/open_on_demand.md

Lines changed: 192 additions & 0 deletions (new file)

# Open OnDemand (OOD) with Conda/Singularity

[Open OnDemand](https://ood.hpc.nyu.edu/) is a tool that lets users launch graphical user interface (GUI) based applications without modifying their HPC environment. You can log into the Open OnDemand interface at [https://ood.hpc.nyu.edu](https://ood.hpc.nyu.edu). Once logged in, open the **Interactive Apps** menu, select the desired application, and submit the job with the required resources and options.
## OOD + Singularity + conda

This page describes how to use a Singularity container with a conda environment in the Open OnDemand (OOD) GUI on Greene.

### Log Into Greene via the Terminal

The following commands must be run from the terminal. Information on terminal access can be found on the [Connecting to the HPC page](../02_connecting_to_hpc/01_connecting_to_hpc.md).
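For example, a typical connection looks like the sketch below. The login host name here is an assumption based on the cluster name; see the linked page for the authoritative host and options.

```sh
# Hypothetical example - replace NETID with your NetID
ssh NETID@greene.hpc.nyu.edu
```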

### Preinstallation Warning

:::warning
If you have initialized Conda in your base environment, your prompt on Greene may show something like:

```sh
(base) [NETID@log-1 ~]$
```

If so, you must first comment out or remove this portion of your `~/.bashrc` file:

```bash
# >>> conda initialize >>>
# !! Contents within this block are managed by 'conda init' !!
__conda_setup="$('/share/apps/anaconda3/2020.07/bin/conda' 'shell.bash' 'hook' 2> /dev/null)"
if [ $? -eq 0 ]; then
    eval "$__conda_setup"
else
    if [ -f "/share/apps/anaconda3/2020.07/etc/profile.d/conda.sh" ]; then
        . "/share/apps/anaconda3/2020.07/etc/profile.d/conda.sh"
    else
        export PATH="/share/apps/anaconda3/2020.07/bin:$PATH"
    fi
fi
unset __conda_setup
# <<< conda initialize <<<
```

This block makes your shell use the default shared installation of Conda on the cluster and will sabotage any attempt to install packages into a Singularity environment. Once it is removed or commented out, log out and back into the cluster for a fresh environment.
:::
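If you would rather not edit `~/.bashrc` by hand, the following is a minimal sketch that comments out the block in place, assuming the standard `conda initialize` marker lines shown above; it backs up the file first:

```sh
# Back up, then prefix every line of the conda init block with '#'
cp ~/.bashrc ~/.bashrc.bak
sed -i '/^# >>> conda initialize >>>/,/^# <<< conda initialize <<</ s/^/#/' ~/.bashrc
```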

### Prepare Overlay File

```sh
mkdir /scratch/$USER/my_env
cd /scratch/$USER/my_env
cp -rp /scratch/work/public/overlay-fs-ext3/overlay-15GB-500K.ext3.gz .
gunzip overlay-15GB-500K.ext3.gz
```

Above we used the overlay file `overlay-15GB-500K.ext3.gz`, which will contain all of the installed packages. Other overlay file sizes are available; you can find instructions on the following pages: [Singularity with Conda](./singularity_with_conda.md), [Squash File System and Singularity](./squash_file_system_and_singularity.md).
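To see which overlay files are currently provided, you can list the public overlay directory used in the commands above:

```sh
ls /scratch/work/public/overlay-fs-ext3/
```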

### Launch Singularity Environment for Installation

```sh
singularity exec --overlay /scratch/$USER/my_env/overlay-15GB-500K.ext3:rw /scratch/work/public/singularity/cuda12.3.2-cudnn9.0.0-ubuntu-22.04.4.sif /bin/bash
```

Above we used the Singularity OS image `cuda12.3.2-cudnn9.0.0-ubuntu-22.04.4.sif`, which provides the base operating system environment for the conda environment. Other Singularity OS images are available at `/scratch/work/public/singularity`.

Launching Singularity with the `--overlay` flag mounts the overlay file at a new directory, `/ext3`; when not running inside Singularity, `/ext3` is not available. Be sure that you have the Singularity prompt (`Singularity>`) and that `/ext3` is available before the next step:

```sh
Singularity> ls -lah /ext3
total 8.5K
drwxrwxr-x. 2 root root 4.0K Oct 19 10:01 .
drwx------. 29 root root 8.0K Oct 19 10:01 ..
```

### Install Miniforge to Overlay File

```sh
wget --no-check-certificate https://github.com/conda-forge/miniforge/releases/latest/download/Miniforge3-Linux-x86_64.sh
sh Miniforge3-Linux-x86_64.sh -b -p /ext3/miniforge3
```
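To confirm that the installer wrote into the overlay rather than into your home directory, a quick check:

```sh
# conda should now exist inside the mounted overlay
ls -l /ext3/miniforge3/bin/conda
```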
Next, create a wrapper script at `/ext3/env.sh`:

```sh
touch /ext3/env.sh
echo '#!/bin/bash' >> /ext3/env.sh
echo 'unset -f which' >> /ext3/env.sh
echo 'source /ext3/miniforge3/etc/profile.d/conda.sh' >> /ext3/env.sh
echo 'export PATH=/ext3/miniforge3/bin:$PATH' >> /ext3/env.sh
echo 'export PYTHONPATH=/ext3/miniforge3/bin:$PATH' >> /ext3/env.sh
```

Your `/ext3/env.sh` file should now contain the following:

```bash
#!/bin/bash
unset -f which
source /ext3/miniforge3/etc/profile.d/conda.sh
export PATH=/ext3/miniforge3/bin:$PATH
export PYTHONPATH=/ext3/miniforge3/bin:$PATH
```

The wrapper script activates your conda environment, into which you will install your packages and dependencies.

Next, activate your conda environment:

```sh
source /ext3/env.sh
```

### Install Packages to Miniforge Environment

Now that your environment is activated, you can update and install packages:

```sh
conda config --remove channels defaults
conda update -n base conda -y
conda clean --all --yes
conda install pip --yes
conda install ipykernel --yes # Note: ipykernel is required to run as a kernel in Open OnDemand Jupyter Notebooks
```
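As a quick sanity check that `ipykernel` installed correctly (a minimal sketch; the version printed depends on what was just installed):

```sh
python -c "import ipykernel; print(ipykernel.__version__)"
```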
To confirm that your environment is appropriately referencing your Miniforge installation, try the following:

```sh
unset which
which conda
# output: /ext3/miniforge3/bin/conda

which python
# output: /ext3/miniforge3/bin/python

python --version
# output: Python 3.8.5

which pip
# output: /ext3/miniforge3/bin/pip
```

Now use either `conda install` or `pip install` to install your required Python packages into the Miniforge environment.
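For example (the package names below are placeholders, not requirements of this guide):

```sh
# Hypothetical examples - install whatever your project actually needs
pip install numpy pandas
conda install scipy --yes
```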

To install larger packages, like TensorFlow, you must first start an interactive job with adequate compute and memory resources. The login nodes restrict memory to 2GB per user, which may cause large package installations to crash.

```sh
srun --cpus-per-task=2 --mem=10GB --time=04:00:00 --pty /bin/bash

# wait to be assigned a node

singularity exec --overlay /scratch/$USER/my_env/overlay-15GB-500K.ext3:rw /scratch/work/public/singularity/cuda12.3.2-cudnn9.0.0-ubuntu-22.04.4.sif /bin/bash

source /ext3/env.sh
# activate the environment
```

Once the job starts, you will be placed on a compute node. From there, launch Singularity and set up the conda environment exactly as you did on the login node.

### Configure iPython Kernels

To create a kernel named `my_env`, copy the template files to your home directory:

```sh
mkdir -p ~/.local/share/jupyter/kernels
cd ~/.local/share/jupyter/kernels
cp -R /share/apps/mypy/src/kernel_template ./my_env # this should be the name of your Singularity env
cd ./my_env

ls
# kernel.json logo-32x32.png logo-64x64.png python # files in the ~/.local/share/jupyter/kernels/my_env directory
```

To set the conda environment, edit the file named `python` in `~/.local/share/jupyter/kernels/my_env/`.

The `python` file is a wrapper script that the Jupyter notebook uses to launch your Singularity container and attach it to the notebook.

At the bottom of the file is the template singularity command:

```sh
singularity exec $nv \
  --overlay /scratch/$USER/my_env/overlay-15GB-500K.ext3:ro \
  /scratch/work/public/singularity/cuda12.3.2-cudnn9.0.0-ubuntu-22.04.4.sif \
  /bin/bash -c "source /ext3/env.sh; $cmd $args"
```

:::warning
If you used a different overlay file (`/scratch/$USER/my_env/overlay-15GB-500K.ext3` shown above) or `.sif` file (`/scratch/work/public/singularity/cuda12.3.2-cudnn9.0.0-ubuntu-22.04.4.sif` shown above), you MUST change those lines in the command above to the files you used.
:::

Edit the default `kernel.json` file, setting `PYTHON_LOCATION` and `KERNEL_DISPLAY_NAME` with a text editor such as nano or vim, changing it from:

```json
{
  "argv": [
    "PYTHON_LOCATION",
    "-m",
    "ipykernel_launcher",
    "-f",
    "{connection_file}"
  ],
  "display_name": "KERNEL_DISPLAY_NAME",
  "language": "python"
}
```

to:

```json
{
  "argv": [
    "/home/<Your NetID>/.local/share/jupyter/kernels/my_env/python",
    "-m",
    "ipykernel_launcher",
    "-f",
    "{connection_file}"
  ],
  "display_name": "my_env",
  "language": "python"
}
```

Replace `<Your NetID>` with your own NetID, without the `<>` symbols.
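A one-line sketch for that substitution (this assumes your shell's `$USER` is your NetID, as the prompts earlier on this page suggest; verify the file afterwards):

```sh
sed -i "s|<Your NetID>|$USER|" ~/.local/share/jupyter/kernels/my_env/kernel.json
```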

### Launch an Open OnDemand Jupyter Notebook

[https://ood.hpc.nyu.edu](https://ood.hpc.nyu.edu)

![OOD Launch](./static/OOD_launch.png)
