
Commit 7c42301: added admonitions
1 parent: f941ff1

File tree: 3 files changed (+19, -5 lines)
Lines changed: 1 addition & 0 deletions

@@ -0,0 +1 @@
+# Conda Environments (Python, R)

docs/hpc/08_ood/open_on_demand.md

Lines changed: 5 additions & 1 deletion
@@ -9,6 +9,7 @@ This page describes how to use your Singularity with conda environment in Open O
 The following commands must be run from the terminal. Information on accessing via the terminal can be found at the [Connecting to the HPC page](../02_connecting_to_hpc/01_connecting_to_hpc.md).
 
 ### Preinstallation Warning
+:::warning
 If you have initialized Conda in your base environment, your prompt on Greene may show something like:
 ```sh
 (base) [NETID@log-1 ~]$
@@ -33,6 +34,7 @@ unset __conda_setup
 ```
 
 The above code automatically makes your environment look for the default shared installation of Conda on the cluster and will sabotage any attempts to install packages to a Singularity environment. Once removed or commented out, log out and back into the cluster for a fresh environment.
+:::
 
 ### Prepare Overlay File
 ```sh
@@ -150,7 +152,9 @@ singularity exec $nv \
 /scratch/work/public/singularity/cuda12.3.2-cudnn9.0.0-ubuntu-22.04.4.sif \
 /bin/bash -c "source /ext3/env.sh; $cmd $args"
 ```
-***WARNING:*** If you used a different overlay (/scratch/$USER/my_env/overlay-15GB-500K.ext3 shown above) or .sif file (/scratch/work/public/singularity/cuda12.3.2-cudnn9.0.0-ubuntu-22.04.4.sif shown above), you MUST change those lines in the command above to the files you used.
+:::warning
+If you used a different overlay (/scratch/$USER/my_env/overlay-15GB-500K.ext3 shown above) or .sif file (/scratch/work/public/singularity/cuda12.3.2-cudnn9.0.0-ubuntu-22.04.4.sif shown above), you MUST change those lines in the command above to the files you used.
+:::
 
 Edit the default kernel.json file by setting PYTHON_LOCATION and KERNEL_DISPLAY_NAME using a text editor like nano/vim.
 ```json

docs/hpc/08_ood/singularity_with_conda.md

Lines changed: 13 additions & 4 deletions
@@ -14,6 +14,7 @@ Singularity is a free, cross-platform and open-source program that creates and e
 
 ## Using Singularity Overlays for Miniforge (Python & Julia)
 ### Preinstallation Warning
+:::warning
 If you have initialized Conda in your base environment, your prompt on Greene may show something like:
 ```sh
 (base) [NETID@log-1 ~]$
@@ -38,6 +39,7 @@ unset __conda_setup
 ```
 
 The above code automatically makes your environment look for the default shared installation of Conda on the cluster and will sabotage any attempts to install packages to a Singularity environment. Once removed or commented out, log out and back into the cluster for a fresh environment.
+:::
 
 ### Miniforge Environment PyTorch Example
 [Conda environments](https://conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html) allow users to create customizable, portable work environments and dependencies to support specific packages or versions of software for research. Common conda distributions include Anaconda, Miniconda and Miniforge. Packages are available via "channels". Popular channels include "conda-forge" and "bioconda". In this tutorial we shall use [Miniforge](https://github.com/conda-forge/miniforge) which sets "conda-forge" as the package channel. Traditional conda environments, however, also create a large number of files that can cut into quotas. To help reduce this issue, we suggest using [Singularity](https://docs.sylabs.io/guides/4.1/user-guide/), a container technology that is popular on HPC systems. Below is an example of how to create a pytorch environment using Singularity and Miniforge.
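For context on the Preinstallation Warning added in both files: the block that `conda init` writes into `~/.bashrc` (the one ending in the `unset __conda_setup` visible in the hunk headers above) typically looks like the following. This is a sketch only; `/path/to/miniconda3` is a placeholder for whatever install path appears in your own `~/.bashrc`.

```sh
# >>> conda initialize >>>
# !! Contents within this block are managed by 'conda init' !!
__conda_setup="$('/path/to/miniconda3/bin/conda' 'shell.bash' 'hook' 2> /dev/null)"
if [ $? -eq 0 ]; then
    eval "$__conda_setup"
else
    if [ -f "/path/to/miniconda3/etc/profile.d/conda.sh" ]; then
        . "/path/to/miniconda3/etc/profile.d/conda.sh"
    else
        export PATH="/path/to/miniconda3/bin:$PATH"
    fi
fi
unset __conda_setup
# <<< conda initialize <<<
```

Commenting out or deleting this entire block, then logging out and back in, restores the clean environment the warning asks for.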
@@ -186,7 +188,9 @@ singularity exec --overlay /scratch/<NetID>/pytorch-example/my_pytorch.ext3:ro /
 #output: /ext3/miniforge3/lib/python3.8/site-packages/torch/__init__.py
 #output: 1.8.0+cu111
 ```
-***Note:*** the end ':ro' addition at the end of the pytorch ext3 image starts the image in read-only mode. To add packages you will need to use ':rw' to launch it in read-write mode.
+:::note
+The ':ro' suffix at the end of the pytorch ext3 image path starts the image in read-only mode. To add packages you will need to use ':rw' to launch it in read-write mode.
+:::
 
 ### Using your Singularity Container in a SLURM Batch Job
 Below is an example script of how to call a python script, in this case torch-test.py, from a SLURM batch job using your new Singularity image
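The batch script itself falls outside this diff's context lines. As a minimal sketch of what such a script might contain (the resource values are placeholders, and the `.sif` path is reused from the open_on_demand.md example above, so adjust both to your setup):

```sh
#!/bin/bash
#SBATCH --job-name=torch-test        # placeholder job name
#SBATCH --nodes=1
#SBATCH --cpus-per-task=1
#SBATCH --time=01:00:00
#SBATCH --mem=4GB

# start from a clean module environment
module purge

# run torch-test.py inside the container; the overlay is mounted
# read-only (:ro) so many jobs can share it at once
singularity exec --overlay /scratch/<NetID>/pytorch-example/my_pytorch.ext3:ro \
    /scratch/work/public/singularity/cuda12.3.2-cudnn9.0.0-ubuntu-22.04.4.sif \
    /bin/bash -c "source /ext3/env.sh; python torch-test.py"
```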
@@ -291,7 +295,9 @@ source /ext3/env.sh
 pip install tensorboard
 ```
 
-***Note:*** Click here for information on how to configure your conda environment.
+:::note
+[Click here](./conda_environments.md) for information on how to configure your conda environment.
+:::
 
 Please also keep in mind that once the overlay image is opened in the default read-write mode, the file will be locked: you will not be able to open it from a new process. Once the overlay is opened in either read-write or read-only mode, it cannot be opened in read-write mode from other processes. For production jobs, the overlay image should be opened in read-only mode; you can run many jobs at the same time as long as they all open it read-only. This protects the computational software environment, since packages cannot change while jobs are running.

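The locking behavior described above comes down to the mode suffix on the overlay path. A sketch of the two launch styles, reusing the paths from this commit (the `.sif` name is taken from the open_on_demand.md example and may differ from the one in this file):

```sh
# read-write (:rw): needed to install packages; locks the overlay so
# no other process can open it, even read-only
singularity exec --overlay /scratch/<NetID>/pytorch-example/my_pytorch.ext3:rw \
    /scratch/work/public/singularity/cuda12.3.2-cudnn9.0.0-ubuntu-22.04.4.sif \
    /bin/bash -c "source /ext3/env.sh; pip install tensorboard"

# read-only (:ro): required for production jobs; many jobs can share
# the same overlay concurrently
singularity exec --overlay /scratch/<NetID>/pytorch-example/my_pytorch.ext3:ro \
    /scratch/work/public/singularity/cuda12.3.2-cudnn9.0.0-ubuntu-22.04.4.sif \
    /bin/bash -c "source /ext3/env.sh; python torch-test.py"
```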
@@ -379,7 +385,10 @@ m = Model(with_optimizer(KNITRO.Optimizer))
 optimize!(m)
 ```
 
-You can add additional packages with commands like the one below (***NOTE***: Please do not install new packages when you have Julia jobs running, this may create issues with your Julia installation)
+You can add additional packages with commands like the one below.
+:::note
+Please do not install new packages while you have Julia jobs running; this may create issues with your Julia installation.
+:::
 ```
 ~/julia/my-julia-writable -e 'using Pkg; Pkg.add(["Calculus", "LinearAlgebra"])'
 ```
@@ -454,7 +463,7 @@ Knitro using the Interior-Point/Barrier Direct algorithm.
 
 WARNING: The initial point is a stationary point and only the first order
 optimality conditions have been verified.
-
+
 EXIT: Locally optimal solution found.
 
 Final Statistics
