docs/hpc/08_ood/open_on_demand.md (5 additions, 1 deletion)
@@ -9,6 +9,7 @@ This page describes how to use your Singularity with conda environment in Open O
The following commands must be run from the terminal. Information on accessing the cluster via the terminal can be found on the [Connecting to the HPC page](../02_connecting_to_hpc/01_connecting_to_hpc.md).
### Preinstallation Warning
:::warning
If you have initialized Conda in your base environment, your prompt on Greene may show something like:
```sh
(base) [NETID@log-1 ~]$
@@ -33,6 +34,7 @@ unset __conda_setup
```
The above code automatically points your environment at the default shared installation of Conda on the cluster and will sabotage any attempt to install packages into a Singularity environment. Once the block is removed or commented out, log out of and back into the cluster for a fresh environment.
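One way to comment the block out in bulk is sketched below. This assumes `conda init` wrote the standard `# >>> conda initialize >>>` / `# <<< conda initialize <<<` markers into your `~/.bashrc`; review the file first before editing it in place.

```sh
# Comment out every line of the conda-initialize block in ~/.bashrc.
# (sed -i edits the file in place -- make a backup copy if unsure.)
sed -i '/^# >>> conda initialize >>>/,/^# <<< conda initialize <<</ s/^/#/' ~/.bashrc
```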
***WARNING:*** If you used a different overlay (/scratch/$USER/my_env/overlay-15GB-500K.ext3 shown above) or .sif file (/scratch/work/public/singularity/cuda12.3.2-cudnn9.0.0-ubuntu-22.04.4.sif shown above), you MUST change those lines in the command above to the files you used.
:::warning
If you used a different overlay (/scratch/$USER/my_env/overlay-15GB-500K.ext3 shown above) or .sif file (/scratch/work/public/singularity/cuda12.3.2-cudnn9.0.0-ubuntu-22.04.4.sif shown above), you MUST change those lines in the command above to the files you used.
:::
Edit the default kernel.json file to set PYTHON_LOCATION and KERNEL_DISPLAY_NAME, using a text editor such as nano or vim.
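For example (a sketch only; the kernel directory name and the Python path inside the overlay are assumptions, so substitute the ones from your own setup):

```sh
# Hypothetical kernel directory -- use the one created for your environment.
nano ~/.local/share/jupyter/kernels/my_env/kernel.json

# Inside kernel.json, point PYTHON_LOCATION at the Python inside your overlay
# and give the kernel a name you will recognize in the launcher, e.g.:
#   "PYTHON_LOCATION": "/ext3/miniforge3/bin/python"
#   "KERNEL_DISPLAY_NAME": "My Singularity (Conda) Env"
```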
docs/hpc/08_ood/singularity_with_conda.md (13 additions, 4 deletions)
@@ -14,6 +14,7 @@ Singularity is a free, cross-platform and open-source program that creates and e
## Using Singularity Overlays for Miniforge (Python & Julia)
### Preinstallation Warning
:::warning
If you have initialized Conda in your base environment, your prompt on Greene may show something like:
```sh
(base) [NETID@log-1 ~]$
@@ -38,6 +39,7 @@ unset __conda_setup
```
The above code automatically points your environment at the default shared installation of Conda on the cluster and will sabotage any attempt to install packages into a Singularity environment. Once the block is removed or commented out, log out of and back into the cluster for a fresh environment.
:::
### Miniforge Environment PyTorch Example
[Conda environments](https://conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html) allow users to create customizable, portable work environments and dependencies to support specific packages or versions of software for research. Common conda distributions include Anaconda, Miniconda and Miniforge. Packages are available via "channels". Popular channels include "conda-forge" and "bioconda". In this tutorial we shall use [Miniforge](https://github.com/conda-forge/miniforge) which sets "conda-forge" as the package channel. Traditional conda environments, however, also create a large number of files that can cut into quotas. To help reduce this issue, we suggest using [Singularity](https://docs.sylabs.io/guides/4.1/user-guide/), a container technology that is popular on HPC systems. Below is an example of how to create a pytorch environment using Singularity and Miniforge.
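A condensed sketch of that workflow follows (the overlay source directory and the Miniforge installer URL are assumptions here; the full example in the docs gives the exact files to use on Greene):

```sh
mkdir -p /scratch/$USER/pytorch-example && cd /scratch/$USER/pytorch-example

# Copy an empty overlay image and unpack it (source path is illustrative).
cp -rp /scratch/work/public/overlay-fs-ext3/overlay-15GB-500K.ext3.gz .
gunzip overlay-15GB-500K.ext3.gz

# Launch the container with the overlay mounted read-write.
singularity exec --overlay overlay-15GB-500K.ext3:rw \
    /scratch/work/public/singularity/cuda12.3.2-cudnn9.0.0-ubuntu-22.04.4.sif /bin/bash

# Inside the container: install Miniforge into /ext3, then install PyTorch.
#   wget https://github.com/conda-forge/miniforge/releases/latest/download/Miniforge3-Linux-x86_64.sh
#   bash Miniforge3-Linux-x86_64.sh -b -p /ext3/miniforge3
#   pip install torch
```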
***Note:*** the end ':ro' addition at the end of the pytorch ext3 image starts the image in read-only mode. To add packages you will need to use ':rw' to launch it in read-write mode.
:::note
The ':ro' at the end of the pytorch ext3 image path starts the image in read-only mode. To add packages, you will need to launch it with ':rw' for read-write mode.
:::
### Using your Singularity Container in a SLURM Batch Job
Below is an example of how to call a Python script, in this case torch-test.py, from a SLURM batch job using your new Singularity image.
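The full script is in the documentation; a minimal sketch of the pattern (resource requests are illustrative, and the overlay and .sif paths are the ones used earlier, so adjust them to your own files) might look like:

```sh
#!/bin/bash
#SBATCH --job-name=torch-test
#SBATCH --nodes=1
#SBATCH --cpus-per-task=1
#SBATCH --mem=8GB
#SBATCH --time=01:00:00
#SBATCH --gres=gpu:1

module purge

# Open the overlay read-only so many jobs can share it safely.
singularity exec --nv \
    --overlay /scratch/$USER/my_env/overlay-15GB-500K.ext3:ro \
    /scratch/work/public/singularity/cuda12.3.2-cudnn9.0.0-ubuntu-22.04.4.sif \
    /bin/bash -c "source /ext3/env.sh; python torch-test.py"
```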
@@ -291,7 +295,9 @@ source /ext3/env.sh
pip install tensorboard
```
***Note:*** Click here for information on how to configure your conda environment.
:::note
[Click here](./conda_environments.md) for information on how to configure your conda environment.
:::
Please also keep in mind that once the overlay image is opened in the default read-write mode, the file is locked and cannot be opened by a new process. Once the overlay is opened in either read-write or read-only mode, it cannot be opened in read-write mode by any other process. For production jobs, the overlay image should be opened in read-only mode; you can run many jobs at the same time as long as they all open it read-only. This protects the computational software environment, since packages cannot change while jobs are running.
@@ -379,7 +385,10 @@ m = Model(with_optimizer(KNITRO.Optimizer))
optimize!(m)
```
You can add additional packages with commands like the one below (***NOTE***: Please do not install new packages when you have Julia jobs running, this may create issues with your Julia installation)
You can add additional packages with commands like the one below.
:::note
Please do not install new packages while you have Julia jobs running; this may create issues with your Julia installation.
:::
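For instance, a package could be added from the command line along these lines. This is only a sketch: the overlay and .sif paths are illustrative, JuMP is just an example package, and it assumes `julia` is on the container's PATH (adjust the launch line to however you start Julia in your setup).

```sh
# Open the overlay in ':rw' mode so the package can be written into it.
singularity exec \
    --overlay /scratch/$USER/julia_env/overlay-15GB-500K.ext3:rw \
    /scratch/work/public/singularity/cuda12.3.2-cudnn9.0.0-ubuntu-22.04.4.sif \
    julia -e 'using Pkg; Pkg.add("JuMP")'
```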