|
1 | | -What if there was a way to avoid having to install a broad range of scientific software from scratch on every |
| 1 | +What if you could avoid installing a broad range of scientific software from scratch on every |
2 | 2 | supercomputer, cloud instance, or laptop you use or maintain, without compromising on performance? |
3 | 3 |
|
4 | | -Installing scientific software for supercomputers is known to be a tedious and time-consuming task. The application |
5 | | -software stack continues to deepen as the |
6 | | -High-Performance Computing (HPC) user community becomes more diverse, computational science expands rapidly, and the diversity of system architectures |
7 | | -increases. Simultaneously, we see a surge in interest in public cloud |
8 | | -infrastructures for scientific computing. Delivering optimised software installations and providing access to these |
9 | | -installations in a reliable, user-friendly, and reproducible way is a highly non-trivial task that affects application |
10 | | -developers, HPC user support teams, and the users themselves. |
| 4 | +Installing scientific software is a notoriously tedious and time-consuming task. The software stack
| 5 | +continues to deepen as computational science expands rapidly, the diversity of system architectures
| 6 | +increases, and interest in public cloud infrastructures surges.
| 7 | +Providing access to optimised software installations in a reliable, user-friendly, and reproducible way |
| 8 | +is a highly non-trivial task that affects application developers, HPC user support teams, and the users themselves. |
11 | 9 |
|
12 | 10 | Although scientific research on supercomputers is fundamentally software-driven, |
13 | | -setting up and managing a software stack remains challenging and time-consuming. |
14 | | -In addition, parallel filesystems like GPFS and Lustre are known to be ill-suited for hosting software installations |
15 | | -that typically consist of a large number of small files. This can lead to surprisingly slow startup performance of |
16 | | -software, and may even negatively impact the overall performance of the system. |
17 | | -While workarounds for these issues such as using container images are prevalent, they come with caveats, |
18 | | -such as the significant size of these images, the required compatibility with the system MPI for distributing computing, |
19 | | -and complications with accessing specialized hardware resources like GPUs. |
| 11 | +setting up and managing a software stack remains challenging. |
| 12 | +Parallel filesystems like GPFS and Lustre are usually ill-suited for hosting software installations |
| 13 | +that involve a large number of small files, which can lead to slow software startup and may even negatively impact
| 14 | +overall system performance. |
| 15 | +While workarounds such as using container images are prevalent, they come with caveats, |
| 16 | +such as large image sizes, required compatibility with the system MPI, |
| 17 | +and issues with accessing GPUs. |
20 | 18 |
|
21 | | -This tutorial aims to address these challenges by introducing the attendees to a way to \emph{stream} |
22 | | -software installations via \emph{CernVM-FS}, a distributed read-only filesystem specifically designed |
23 | | -to efficiently distribute software across large-scale computing infrastructures. |
24 | | -The tutorial introduces the \emph{European Environment for Scientific Software Installations (EESSI)}, |
25 | | -a collaboration between various European HPC sites \& industry partners, with the common goal of |
26 | | -creating a shared repository of optimised scientific software installations (\emph{not} recipes) that can be used on a variety of |
| 19 | +This tutorial aims to address these challenges by introducing (i) \emph{CernVM-FS}, |
| 20 | +a distributed read-only filesystem designed to efficiently \emph{stream} software installations on demand,
| 21 | +and (ii) the \emph{European Environment for Scientific Software Installations (EESSI)}, |
| 22 | +a shared repository of optimised scientific software installations (\emph{not} recipes) that can be used on a variety of |
27 | 23 | systems, regardless of which flavor/version of Linux distribution or processor architecture is used, or whether it's a full-size HPC
28 | | -cluster, a cloud environment or a personal workstation. |
| 24 | +cluster, a cloud environment, or a personal workstation. |
29 | 25 |
|
30 | | -We cover the installation and configuration of CernVM-FS to access EESSI, the usage of EESSI, how to add software |
31 | | -installations to EESSI, how to install software on top of EESSI, and advanced topics like GPU support and performance |
32 | | -tuning. |
| 26 | +It covers installing and configuring CernVM-FS, using EESSI,
| 27 | +installing software into and on top of EESSI, and advanced topics like GPU support and performance tuning.
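| 28 | +
| 29 | +For instance, on a client where the CernVM-FS and EESSI configuration packages are available,
| 30 | +getting started typically boils down to the following (a minimal sketch based on the EESSI
| 31 | +documentation; the repository version and module name shown are illustrative):
| 32 | +\begin{verbatim}
| 33 | +# one-time CernVM-FS client setup (assumes the cvmfs and
| 34 | +# cvmfs-config-eessi packages are available to the package manager)
| 35 | +sudo apt-get install cvmfs cvmfs-config-eessi
| 36 | +sudo cvmfs_config setup
| 37 | +
| 38 | +# initialise the EESSI environment in the current shell
| 39 | +source /cvmfs/software.eessi.io/versions/2023.06/init/bash
| 40 | +
| 41 | +# load an optimised software installation via environment modules
| 42 | +module load GROMACS
| 43 | +\end{verbatim}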