
Conversation

@yandthj (Collaborator) commented Dec 22, 2025

No description provided.

## Project Storage: /projects

Each active project is granted a subdirectory under `/projects/<projectname>`. This is where the bulk of a project's data is expected to live, and where jobs should generally be run from. Storage quotas are based on the allocation award.
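Not part of the original excerpt: one plausible way to check project usage, assuming a CephFS mount where directory quotas are visible to `df` (the project handle is a placeholder):

```bash
# On a CephFS mount with directory quotas, df on the project path
# typically reports the quota as the filesystem size.
df -h /projects/<projectname>

# Raw usage of the project tree (can be slow on large trees).
du -sh /projects/<projectname>
```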


Do you set up these quotas, or is this via Lex? Also, any thoughts on aurorahpc project space?

The scratch filesystem on Gila is a 79 TB spinning-disk Ceph filesystem, accessible from both login and compute nodes. The default writable path for scratch use is `/scratch/<username>`.
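A minimal sketch of preparing a personal scratch area, assuming `$USER` matches your HPC username:

```bash
# Create your scratch directory if it does not exist yet, then work from it.
mkdir -p /scratch/$USER
cd /scratch/$USER
```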

## Temporary space: $TMPDIR
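The body of this section is not captured in this view. A generic Slurm sketch, assuming `$TMPDIR` is set inside jobs and points at node-local temporary space; the account, paths, and program name are placeholders:

```bash
#!/bin/bash
#SBATCH --account=<project>
#SBATCH --time=01:00:00
#SBATCH --nodes=1

# Stage input into temporary space, run there, and copy results back,
# since $TMPDIR contents are typically removed when the job ends.
cp /projects/<projectname>/input.dat "$TMPDIR"/
cd "$TMPDIR"
./my_app input.dat > output.dat
cp output.dat /projects/<projectname>/results/
```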


# About Gila

@ChrisLayton-NREL commented Dec 31, 2025

Needs a bit of a rewrite, as Gila has changed. We need to make sure we cover:
1. compute
2. GPU
3. Grace Hoppers

*TODO: Update information about the allocations (include aurorahpc allocation info)*

## Accessing Gila
Access to Gila requires an NLR HPC account and permission to join an existing allocation. Please see the [System Access](https://www.nrel.gov/hpc/system-access.html) page for more information on accounts and allocations.


Need to add info about aurorahpc and that it has access to limited resources BUT does not need a formal allocation. Perhaps also note that abuse of this account will lead to removal of access?


#### For NLR Employees:
To access Gila, log into the NLR network and connect via ssh:
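The ssh command itself is not captured in this view; a sketch with a hypothetical hostname (the actual Gila login address may differ):

```bash
# Hypothetical login address -- replace with the documented Gila hostname.
ssh <username>@gila.hpc.nrel.gov
```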


Need to clarify access to the Grace and non-Grace partitions.

Please see the [Help and Support Page](../../help.md) for further information on how to seek assistance with Gila or your NLR HPC account.

## Building code


Add info on salloc here? The Grace Hopper login node allows builds, for now.


Do not build or run code on login nodes; they have limited CPU and memory. Use a compute or GPU node instead: start an interactive job on an appropriately provisioned node and partition for your work, and do your builds there.

Similarly, build your projects under `/projects/your_project_name/` as home directories are **limited to 5GB** per user.
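A minimal salloc sketch along the lines the comment above suggests; the account and partition names are placeholders:

```bash
# Request an interactive allocation and wait for a shell inside it.
salloc --account=<project> --time=01:00:00 --nodes=1 --partition=<partition>

# Then build inside the allocation, under your project directory.
cd /projects/<projectname>/src
make -j "$(nproc)"
```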


It's now 25 GB, but I think just saying "limited" will cover it, and then the dashboard can help with more specific numbers.

On Gila, modules are deployed and organized slightly differently than on other NLR HPC systems.
While the basic concepts of using modules remain the same, there are important differences in how modules are structured, discovered, and loaded. These differences are intentional and are designed to improve compatibility, reproducibility, and long-term maintainability. The upcoming sections of this document will walk through these differences step by step.

The module system used on this cluster is [Lmod](../../Environment/lmod.md).
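Not in the excerpt, but the standard Lmod commands apply here as well:

```bash
module avail          # list modules visible in the current hierarchy
module spider <name>  # search the full hierarchy, including branches not yet visible
module load <name>    # load a module into the environment
module list           # show currently loaded modules
```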


This has two versions now with the Grace Hoppers. Is Walid on point for these docs?

@yandthj (Collaborator) replied:

Hey Chris, I've created this file. Grace Hopper updates will be added soon; they might not be in the same PR.

Gila's home directories are shared across all nodes. Each user has a quota of 5 GB. There are also `/scratch/$USER` and `/projects` spaces visible on all nodes.

### Partitions


Update for Grace Hoppers.
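The partition table itself is not captured in this view; a generic way to inspect partitions on any Slurm cluster:

```bash
# Show each partition's availability, time limit, and node count.
sinfo -o "%P %a %l %D"
```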


## Home Directories: /home

`/home` directories are mounted as `/home/<username>`. To check usage in your home directory, visit the [Gila Filesystem Dashboard](https://influx.hpc.nrel.gov/d/ch4vndd/ceph-filesystem-quotas?folderUid=fexgrdi5pt91ca&orgId=1&from=now-1h&to=now&timezone=browser&tab=queries). You can also check your home directory usage and quota by running the following commands:
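The commands themselves were not captured in this excerpt; a plausible sketch for a CephFS-backed home directory:

```bash
# On a CephFS home mount, df usually reports the directory quota as the size.
df -h "$HOME"

# Usage of the home tree itself.
du -sh "$HOME"
```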

@yandthj: test from the HPC VPN externally.
