Home
This is the Earlham Institute (EI) knowledge base, a wiki resource written and maintained by EI representatives. Its aim is to provide clear, specific and brief information for EI HPC users in a single place. It should be seen as a place to get started with a new tool, rather than a place to become an expert in that tool.
There are two sidebars to the right:
- Pages lists all pages in the wiki in alphabetical order (with a search function).
- Index groups pages by category instead. If you add a new page to the wiki, please add a link to it as an entry in the index sidebar.
Some index entries are not pages on the wiki, but links to external sources (e.g. Research Computing's documentation, or the Carpentries) where those resources are applicable for EI users.
If you need instruction on a topic that is not covered in this knowledge base, please contact either your group Data Champion or Martin Ayling (by email or via the EI ticketing portal).
- Induction
- HPC Best practice
- Job Arrays - RC documentation
- Methods to Improve I/O Performance - RC documentation
- Customising your bash profile for ease and efficiency
- Customise bash profile: Logging Your Command History Automatically
- Using the ei-gpu partition on the Earlham Institute computing cluster
- Using the GPUs at EI
- HPC Job Summary Tool
- EI Cloud (CyVerse)
- Git and GitHub
- Worked examples
- Job Arrays
- Using Parabricks on the GPUs
- Dependencies
- Software installations
- Workflow management system
- Transfers
- Local (mounting HPC storage)
- Remote - <1GB (OOD)
- Remote - <50GB (NBI drop-off)
- Remote - No limit (Globus)
- mv command