spack-stack-1.8.0 release documentation
Ready-to-use spack-stack 1.8.0 installations are available on the following, fully supported platforms. This version supports JEDI-Skylab, various UFS and related applications (UFS Weather Model, EMC Global Workflow, GSI, UFS Short-Range Weather Application), and several NEPTUNE applications. Amazon Web Services AMIs are available in the US East 1 and 2 regions.
| Organization | System | Compilers | Location | Maintainers |
|---|---|---|---|---|
| **HPC platforms** | | | | |
| MSU | Hercules | GCC, Intel | /work/noaa/epic/role-epic/spack-stack/hercules/spack-stack-1.8.0 | EPIC / JCSDA |
| MSU | Orion | GCC, Intel | /work/noaa/epic/role-epic/spack-stack/orion/spack-stack-1.8.0 | EPIC / JCSDA |
| NASA | Discover SCU16 | GCC, Intel | /gpfsm/dswdev/jcsda/spack-stack/scu16/spack-stack-1.8.0 | JCSDA |
| NASA | Discover SCU17 | GCC, Intel | /gpfsm/dswdev/jcsda/spack-stack/scu17/spack-stack-1.8.0 | JCSDA |
| NCAR-Wyoming | Derecho | GCC, Intel | /glade/work/epicufsrt/contrib/spack-stack/derecho/spack-stack-1.8.0 | EPIC / JCSDA |
| NOAA (NCEP) | Acorn | Intel | /lfs/h1/emc/nceplibs/noscrub/spack-stack/spack-stack-1.8.0 | NOAA-EMC |
| NOAA (RDHPCS) | Gaea | Intel | /ncrc/proj/epic/spack-stack/spack-stack-1.8.0 | EPIC / NOAA-EMC |
| NOAA (RDHPCS) | Hera | GCC, Intel | /scratch1/NCEPDEV/nems/role.epic/spack-stack/spack-stack-1.8.0 | EPIC / NOAA-EMC |
| NOAA (RDHPCS) | Jet | GCC, Intel | /mnt/lfs4/HFIP/hfv3gfs/role.epic/spack-stack/spack-stack-1.8.0 | EPIC / NOAA-EMC |
| U.S. Navy (HPCMP) | Narwhal | GCC, Intel | /p/app/projects/NEPTUNE/spack-stack/spack-stack-1.8.0 | NRL |
| U.S. Navy (HPCMP) | Nautilus | GCC, Intel, oneAPI | /p/app/projects/NEPTUNE/spack-stack/spack-stack-1.8.0 | NRL |
| Univ. of Wisconsin | S4 | Intel | /data/prod/jedi/spack-stack/spack-stack-1.8.0 | JCSDA |
| **Cloud platforms** | | | | |
| Amazon Web Services | AMI Red Hat 8 | GCC | /home/ec2-user/spack-stack/spack-stack-1.8.0 | JCSDA |
| Amazon Web Services | Parallelcluster JCSDA | GCC, Intel | currently unavailable | JCSDA |
| NOAA (RDHPCS) | RDHPCS Parallel Works | Intel | /contrib/spack-stack/spack-stack-1.8.0 | EPIC / JCSDA |
To use one of the above installations via the system's default environment module system, follow the instructions in the individual sections below. For questions or problems, please first consult the known issues (MISSING LINK TO KNOWN ISSUES BELOW), the currently open GitHub issues, and the GitHub discussions.
The following is a list of supplemental or "add-on" environments that are maintained through spack-stack. These environments are provided on a subset of the tier 1 platforms.
| Environment name | Description | Platforms |
|---|---|---|
| gsi-addon-* | Supports GSI and related applications | |
| ufswm-* | Supports UFS Weather Model with WCOSS2 package versions | |
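These add-on environments are loaded the same way as the unified environment, by pointing the module system at the add-on's own modulefile tree. A minimal sketch, assuming the Hera installation path from the table above; the exact environment directory name under `envs/` varies by platform and is hypothetical here, hence the `ls` first:

.. code-block:: console

   # List the environments installed on this platform (Hera path from the table above)
   ls /scratch1/NCEPDEV/nems/role.epic/spack-stack/spack-stack-1.8.0/envs/
   # Hypothetical env directory name matching the gsi-addon-* pattern
   module use /scratch1/NCEPDEV/nems/role.epic/spack-stack/spack-stack-1.8.0/envs/gsi-addon-intel/install/modulefiles/Core
   module available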
The following is required for using spack to build and run software on Orion with any of the compilers below.

.. code-block:: console

   module purge
   module use /work/noaa/epic/role-epic/spack-stack/orion/modulefiles

For Intel:

.. code-block:: console

   module use /work/noaa/epic/role-epic/spack-stack/orion/spack-stack-1.8.0/envs/ue-intel-2021.9.0/install/modulefiles/Core
   module load stack-intel/2021.9.0
   module load stack-intel-oneapi-mpi/2021.9.0
   module load stack-python/3.11.7
For GNU:

.. code-block:: console

   module use /work/noaa/epic/role-epic/spack-stack/orion/spack-stack-1.8.0/envs/ue-gcc-12.2.0/install/modulefiles/Core
   module load stack-gcc/12.2.0
   module load stack-openmpi/4.1.6
   module load stack-python/3.11.7
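To check that the stack is usable after the steps above, a quick smoke test such as the following can help. This is a sketch, not part of the official instructions; :code:`netcdf-c` is an example module that is typically part of the unified environment, and :code:`nc-config` is the utility it ships:

.. code-block:: console

   # List the modules provided by the environment that was just added
   module available
   # Load an example package module and verify it resolves
   module load netcdf-c
   nc-config --version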
FILL IN DETAILS AS FOR ORION ABOVE
FILL IN DETAILS AS FOR ORION ABOVE
FILL IN DETAILS AS FOR ORION ABOVE
The following is required for using spack to build and run software on Narwhal with Intel. Don't use :code:`module purge` on Narwhal!

.. code-block:: console

   umask 0022
   module unload PrgEnv-cray
   module load PrgEnv-intel/8.3.3
   module unload intel
   module load intel-classic/2023.2.0
   module unload cray-mpich
   module unload craype-network-ofi
   module load craype-network-ucx
   module load cray-mpich-ucx/8.1.21
   module load libfabric/1.12.1.2.2.1
   module unload cray-libsci
   module load cray-libsci/23.05.1.4

   module use /p/app/projects/NEPTUNE/spack-stack/spack-stack-1.8.0/envs/ue-intel-2021.10.0/install/modulefiles/Core
   module load stack-intel/2021.10.0
   module load stack-cray-mpich/8.1.21
   module load stack-python/3.11.7
The following is required for using spack to build and run software on Narwhal with GNU. Don't use :code:`module purge` on Narwhal!

.. code-block:: console

   umask 0022
   module unload PrgEnv-cray
   module load PrgEnv-gnu/8.3.3
   module unload gcc
   module load gcc/10.3.0
   module unload cray-mpich
   module unload craype-network-ofi
   module load craype-network-ucx
   module load cray-mpich-ucx/8.1.21
   module load libfabric/1.12.1.2.2.1
   module unload cray-libsci
   module load cray-libsci/23.05.1.4

   module use /p/app/projects/NEPTUNE/spack-stack/spack-stack-1.8.0/envs/ue-gcc-10.3.0/install/modulefiles/Core
   module load stack-gcc/10.3.0
   module load stack-cray-mpich/8.1.21
   module load stack-python/3.11.7
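Because Narwhal relies on swapping programming-environment modules instead of :code:`module purge`, it is worth confirming that the intended compiler and MPI are active before building anything. A minimal sketch; :code:`cc` and :code:`ftn` are the standard Cray compiler wrappers:

.. code-block:: console

   # Confirm the swapped modules took effect (module list writes to stderr)
   module list 2>&1 | grep -E 'PrgEnv|cray-mpich|stack-'
   # The Cray wrappers should now invoke the compilers loaded above
   cc --version
   ftn --version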
The following is required for using spack to build and run software on Nautilus with any of the compilers below.

.. code-block:: console

   module purge

For Intel classic:

.. code-block:: console

   module use /p/app/projects/NEPTUNE/spack-stack/spack-stack-1.8.0/envs/ue-intel-2021.5.0/install/modulefiles/Core
   module load stack-intel/2021.5.0
   module load stack-openmpi/4.1.6
   module load stack-python/3.11.7
For GNU:
**MISSING**
For Intel oneAPI (LLVM compilers, with :code:`ifx` instead of :code:`ifort`):

**MISSING**
Use a c6i.4xlarge instance or larger if running out of memory with AMI "skylab-8.0.0-redhat8" (see the JEDI documentation at https://jointcenterforsatellitedataassimilation-jedi-docs.readthedocs-hosted.com/en/latest for more information).

For spack-stack-1.7.0, run:
.. code-block:: console
   ulimit -s unlimited
   scl_source enable gcc-toolset-11
   module use /home/ec2-user/spack-stack/spack-stack-1.7.0/envs/unified-env-gcc-11.2.1/install/modulefiles/Core
   module load stack-gcc/11.2.1
   module load stack-openmpi/5.0.1
   module load stack-python/3.10.13
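Once the stack modules are loaded on the AMI, a short MPI "hello world" can serve as a smoke test for the compiler and MPI installation. This is a sketch, not part of the official instructions; it assumes the :code:`mpicc` wrapper and :code:`mpirun` launcher provided by the loaded :code:`stack-openmpi` module:

.. code-block:: console

   cat > hello_mpi.c << 'EOF'
   #include <mpi.h>
   #include <stdio.h>

   int main(int argc, char **argv) {
       int rank, size;
       MPI_Init(&argc, &argv);
       MPI_Comm_rank(MPI_COMM_WORLD, &rank);
       MPI_Comm_size(MPI_COMM_WORLD, &size);
       printf("Hello from rank %d of %d\n", rank, size);
       MPI_Finalize();
       return 0;
   }
   EOF
   mpicc -o hello_mpi hello_mpi.c
   mpirun -np 2 ./hello_mpi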
.. _Preconfigured_Sites_Tier2:
Tier 2 preconfigured sites are not officially supported by spack-stack. As such, instructions for these systems may be provided here or in the form of a README.md in the site directory, or may not be available at all. Also, these site configs are not updated on the same regular basis as those of the Tier 1 systems and may therefore be out of date and/or not working.
The following sites have site configurations in the directory :code:`configs/sites/`:

- TACC Frontera (:code:`configs/sites/frontera/`)
- AWS Single Node with Nvidia (NVHPC) compilers (:code:`configs/sites/aws-nvidia/`)
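A Tier 2 site config is consumed the same way as a Tier 1 config, by passing the site directory name to :code:`spack stack create env` (described in detail further below). A sketch for Frontera; the template, environment name, and compiler choice are illustrative:

.. code-block:: console

   spack stack create env --site frontera --template unified-dev --name unified-dev.frontera.intel --compiler intel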
.. _Preconfigured_Sites_Casper:
The following is required on Casper for building new spack environments and for using spack to build and run software.
.. code-block:: console
   module purge
   export LMOD_TMOD_FIND_FIRST=yes
   module load ncarenv/23.10
   module use /glade/work/epicufsrt/contrib/spack-stack/casper/modulefiles
   module load ecflow/5.8.4
.. _Configurable_Sites_CreateEnv:
The following instructions install a new spack environment on a preconfigured site. Instructions for creating a new site config on a configurable system (i.e. a generic Linux or macOS system) can be found in :numref:`Section %s <NewSiteConfigs>`. The options for the :code:`spack stack` extension are explained in :numref:`Section %s <SpackStackExtension>`.
.. code-block:: console
   git clone --recurse-submodules https://github.com/jcsda/spack-stack.git
   cd spack-stack
   source setup.sh
   # See the available sites and templates
   spack stack create env -h
   # Create the environment (this copies site-specific, application-specific,
   # and common config files into the environment directory)
   spack stack create env --site hera --template unified-dev --name unified-dev.hera.intel --compiler intel
   # Activate the environment
   cd envs/unified-dev.hera.intel/
   spack env activate [-p] .
   # Optionally edit the config files
   emacs spack.yaml
   emacs common/*.yaml
   emacs site/*.yaml
   # Concretize and check for duplicate packages
   spack concretize 2>&1 | tee log.concretize
   ${SPACK_STACK_DIR}/util/show_duplicate_packages.py -d [-c] log.concretize
   # Install
   spack install --source [--verbose] [--fail-fast]
   # Generate lua module files
   spack module lmod refresh
   # Generate meta-modules for compiler, MPI, Python
   spack stack setup-meta-modules
   # Check permissions on the installation (where applicable)
   ${SPACK_STACK_DIR}/util/check_permissions.sh
.. note::
   You may want to capture the output from the :code:`spack concretize` and
   :code:`spack install` commands in log files. For example:

   .. code-block:: bash

      spack concretize 2>&1 | tee log.concretize
      spack install [--verbose] [--fail-fast] 2>&1 | tee log.install
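After the installation finishes, the environment can be inspected with core spack commands, and the generated modules can be checked with the module system. A sketch; the environment name matches the example above, and the modulefile location follows the :code:`install/modulefiles/Core` pattern used throughout this page:

.. code-block:: console

   # Summarize the packages installed in the active environment
   spack find
   # Make the freshly generated modules visible and list them
   module use ${SPACK_STACK_DIR}/envs/unified-dev.hera.intel/install/modulefiles/Core
   module available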
.. _Preconfigured_Sites_ExtendingEnvironments:
Additional packages (and their dependencies) or new versions of packages can be added to existing environments. It is recommended to take a backup of the existing environment directory (e.g. using :code:`rsync`, as sketched below) or to test the change first as described in :numref:`Section %s <MaintainersSection_Testing_New_Packages>`, especially if the new package versions act themselves as dependencies for other packages. In some cases, adding new versions of packages requires rebuilding large portions of the stack, for example if a new version of :code:`hdf5` is needed; in that case, it is recommended to start over with an entirely new environment.
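For the backup mentioned above, a plain recursive copy of the environment directory is sufficient. A sketch using the example environment name from the previous section; note the trailing slashes, which make :code:`rsync` copy directory contents rather than the directory itself:

.. code-block:: console

   rsync -av envs/unified-dev.hera.intel/ envs/unified-dev.hera.intel.bak/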
In the simplest case, a new package (and its basic dependencies) or a new version of an existing package that is not a dependency for other packages can be added as described in the following example for a new version of :code:`ecmwf-atlas`.
- Check if the package has any variants defined in the common (:code:`env_dir/common/packages.yaml`) or site (:code:`env_dir/site/packages.yaml`) package config and make sure that these are reflected correctly in the :code:`spec` command:
.. code-block:: console
   spack spec ecmwf-atlas@<version>
- Add the package to the environment specs:

.. code-block:: console

   spack add ecmwf-atlas@<version>
- Run the concretize step:

.. code-block:: console

   spack concretize
- Install:

.. code-block:: console

   spack install [--verbose] [--fail-fast]
Further information on how to define variants for new packages, how to use these non-standard versions correctly as dependencies, etc. can be found in the `Spack documentation <https://spack.readthedocs.io/en/latest>`_. Details on the :code:`spack stack` extension of :code:`spack` are provided in :numref:`Section %s <SpackStackExtension>`.
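As a brief illustration of the variant mechanism (a sketch using :code:`hdf5` with its :code:`+hl` variant as a generic spack example, not a spack-stack default), variants can be inspected and pinned from within the activated environment:

.. code-block:: console

   # Show the variants a package supports
   spack info hdf5
   # Pin a variant in the active environment's configuration
   spack config add "packages:hdf5:variants:+hl"
   # Verify the variant appears in the concretized spec
   spack spec hdf5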
.. note::
   Instead of :code:`spack add ecmwf-atlas@<version>`, :code:`spack concretize` and :code:`spack install`, one can also just use :code:`spack install ecmwf-atlas@<version>` after checking in the first step (:code:`spack spec`) that the package will be installed as desired.