Software Setup ‐ Using a Singularity Definition File
This guide provides step-by-step instructions for setting up software using a Singularity definition file.
1. Log into the HPC Head Node
From your workstation, log into the HPC head node:
ssh $(whoami)@hpc.nbi.ac.uk
2. Connect to the Software Node
From the HPC head node, connect to the software node by typing either software or ssh software2. If prompted, enter your password.
software
3. Navigate to the /tmp Directory
To build the Singularity image, navigate to the /tmp directory on the software node:
mkdir -p /tmp/$(whoami) && cd /tmp/$(whoami)
4. Build the Singularity Image
Modify the software name (tool) and version (1.2.3) as needed, along with the appropriate paths (e.g., change /path/to/testing to /ei/software/testing if installing there).
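If you do not already have a definition file, the sketch below shows the general shape of one, assuming a conda-based install. It is only an illustrative example: the base image, channels, tool name, and version are placeholders, and your install commands will differ. Installing into /opt/software/conda_env keeps it consistent with the symlink step later in this guide.

Bootstrap: docker
# Placeholder base image; swap in whatever your tool needs
From: condaforge/miniforge3:latest

%post
    # Illustrative install into the path referenced later in this guide;
    # the channels, tool name, and version are placeholders
    conda create -y -p /opt/software/conda_env -c conda-forge -c bioconda tool=1.2.3
    conda clean -a -y

%environment
    export PATH=/opt/software/conda_env/bin:$PATH

%runscript
    exec "$@"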
sudo singularity build tool-1.2.3.sif /point/to/your/singularity/tool-1.2.3.def
5. Copy the Singularity Image to the Testing Directory
Copy the built image to the testing (user-installs) location:
mkdir -p /path/to/testing/tool/1.2.3/{src,x86_64/bin}
cp -a tool-1.2.3.sif /path/to/testing/tool/1.2.3/x86_64/
6. Create a singularity.exec Wrapper
This wrapper allows execution of the software from the Singularity image.
cd /path/to/testing/tool/1.2.3/x86_64/bin
cat > singularity.exec
#!/bin/bash
DIR=$(dirname "$(readlink -f "$0")")
img=tool-1.2.3.sif # ===> UPDATE here <=== #
script_name=$(basename "$0")
singularity exec "$DIR/../$img" "$script_name" "$@"For GPU-enabled tools, add --nv to the above last line as shown below:
singularity exec --nv "$DIR/../$img" "$script_name" "$@"
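A file created with cat is usually not executable by default, so the symlinked commands may later fail with a permission error; if so, mark the wrapper executable:
chmod +x singularity.exec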
7. Create Symbolic Links for Executables
If the software is installed under /opt/software/conda_env/bin, run:
cd /path/to/testing/tool/1.2.3/x86_64/bin
for i in $(singularity exec ../tool-1.2.3.sif ls /opt/software/conda_env/bin | xargs); do ln -s singularity.exec ${i}; done
If the software is installed under /opt/software/another_location/bin, use:
cd /path/to/testing/tool/1.2.3/x86_64/bin
for i in $(singularity exec ../tool-1.2.3.sif ls /opt/software/another_location/bin | xargs); do ln -s singularity.exec ${i}; done
You may see the following warning, which can be safely ignored. It appears because the singularity exec command is being run from the software node.
WARNING: Not mounting current directory: user bind control is disabled by system administrator
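As a quick sanity check, you can confirm that the links point at the wrapper and that a wrapped command runs (the command name below is a placeholder; use any executable that the loop linked):
ls -l            # each tool name should be a symlink to singularity.exec
./tool --help    # placeholder; any linked executable will do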
8. Create a Source Wrapper
To enable the software environment, create a source script and include a brief usage guide. Optionally, provide additional notes, such as details about a database if it is installed as part of the software setup.
cd /path/to/testing/bin
$ cat > tool-1.2.3
#!/bin/bash
software="/path/to/testing/tool/1.2.3" # ===> UPDATE here <=== #
echo "Sourcing ${software}"
echo "Usage:"
echo " tool -help"
export PATH="${software}/x86_64/bin:$PATH"
echo
echo "Note: Additional notes, for example, database path /path/to/testing/tool/1.2.3/x86_64/database"
echo
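Note that which only reports executable files, so the test below assumes the wrapper has been made executable and that /path/to/testing/bin is on your PATH:
chmod +x tool-1.2.3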
Test sourcing the wrapper from a new terminal:
$ which tool-1.2.3
/path/to/testing/bin/tool-1.2.3
$ source tool-1.2.3
Sourcing /path/to/testing/tool/1.2.3
Usage:
tool -help
  Note: Additional notes, for example, database path /path/to/testing/tool/1.2.3/x86_64/database
9. Copy the Singularity Definition File
Store the original Singularity definition file in the src folder:
cd /path/to/testing/tool/1.2.3/src && cp -a /point/to/your/singularity/tool-1.2.3.def .
That’s it! Your software should now be successfully set up using the Singularity definition file. 🚀