
Commit fd4d57b

Merge pull request #36 from boegel/abstract
tweaks to abstract to make it fit on a single page
2 parents ada15a1 + 97b3c22 commit fd4d57b

2 files changed: +22 -26 lines changed


isc25/EESSI/abstract.tex

Lines changed: 20 additions & 25 deletions
@@ -1,32 +1,27 @@
-What if there was a way to avoid having to install a broad range of scientific software from scratch on every
+What if you could avoid installing a broad range of scientific software from scratch on every
 supercomputer, cloud instance, or laptop you use or maintain, without compromising on performance?
 
-Installing scientific software for supercomputers is known to be a tedious and time-consuming task. The application
-software stack continues to deepen as the
-High-Performance Computing (HPC) user community becomes more diverse, computational science expands rapidly, and the diversity of system architectures
-increases. Simultaneously, we see a surge in interest in public cloud
-infrastructures for scientific computing. Delivering optimised software installations and providing access to these
-installations in a reliable, user-friendly, and reproducible way is a highly non-trivial task that affects application
-developers, HPC user support teams, and the users themselves.
+Installing scientific software is known to be a tedious and time-consuming task. The software stack
+continues to deepen as computational science expands rapidly, the diversity of system architectures
+increases, and interest in public cloud infrastructures is surging.
+Providing access to optimised software installations in a reliable, user-friendly, and reproducible way
+is a highly non-trivial task that affects application developers, HPC user support teams, and the users themselves.
 
 Although scientific research on supercomputers is fundamentally software-driven,
-setting up and managing a software stack remains challenging and time-consuming.
-In addition, parallel filesystems like GPFS and Lustre are known to be ill-suited for hosting software installations
-that typically consist of a large number of small files. This can lead to surprisingly slow startup performance of
-software, and may even negatively impact the overall performance of the system.
-While workarounds for these issues such as using container images are prevalent, they come with caveats,
-such as the significant size of these images, the required compatibility with the system MPI for distributing computing,
-and complications with accessing specialized hardware resources like GPUs.
+setting up and managing a software stack remains challenging.
+Parallel filesystems like GPFS and Lustre are usually ill-suited for hosting software installations
+that involve a large number of small files, which can lead to slow software startup, and may even negatively impact
+overall system performance.
+While workarounds such as using container images are prevalent, they come with caveats,
+such as large image sizes, required compatibility with the system MPI,
+and issues with accessing GPUs.
 
-This tutorial aims to address these challenges by introducing the attendees to a way to \emph{stream}
-software installations via \emph{CernVM-FS}, a distributed read-only filesystem specifically designed
-to efficiently distribute software across large-scale computing infrastructures.
-The tutorial introduces the \emph{European Environment for Scientific Software Installations (EESSI)},
-a collaboration between various European HPC sites \& industry partners, with the common goal of
-creating a shared repository of optimised scientific software installations (\emph{not} recipes) that can be used on a variety of
+This tutorial aims to address these challenges by introducing (i) \emph{CernVM-FS},
+a distributed read-only filesystem designed to efficiently \emph{stream} software installations on-demand,
+and (ii) the \emph{European Environment for Scientific Software Installations (EESSI)},
+a shared repository of optimised scientific software installations (\emph{not} recipes) that can be used on a variety of
 systems, regardless of which flavor/version of Linux distribution or processor architecture is used, or whether it's a full size HPC
-cluster, a cloud environment or a personal workstation.
+cluster, a cloud environment, or a personal workstation.
 
-We cover the installation and configuration of CernVM-FS to access EESSI, the usage of EESSI, how to add software
-installations to EESSI, how to install software on top of EESSI, and advanced topics like GPU support and performance
-tuning.
+It covers installing and configuring CernVM-FS, the usage of EESSI,
+installing software into and on top of EESSI, and advanced topics like GPU support and performance tuning.
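As background for the workflow this abstract summarises, installing and configuring a CernVM-FS client and then using EESSI typically comes down to a few shell commands. The sketch below is only illustrative and is not part of this commit: it assumes a RHEL-like system with sudo access, and the package URLs, EESSI version (2023.06), and GROMACS module name follow the pattern described in the public EESSI documentation, which should be consulted for current instructions.

    # Install the CernVM-FS client (RHEL-like system assumed; other distributions differ)
    sudo yum install -y https://ecsft.cern.ch/dist/cvmfs/cvmfs-release/cvmfs-release-latest.noarch.rpm
    sudo yum install -y cvmfs

    # Add the EESSI CernVM-FS configuration (repository settings and public keys)
    sudo yum install -y https://github.com/EESSI/filesystem-layer/releases/download/latest/cvmfs-config-eessi-latest.noarch.rpm

    # Minimal client configuration for a standalone machine, then set up CernVM-FS
    echo 'CVMFS_CLIENT_PROFILE="single"' | sudo tee /etc/cvmfs/default.local
    sudo cvmfs_config setup

    # Use EESSI: initialise the environment and load an application module (example module)
    source /cvmfs/software.eessi.io/versions/2023.06/init/bash
    module load GROMACS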

isc25/EESSI/main.tex

Lines changed: 2 additions & 1 deletion
@@ -55,7 +55,8 @@
 
 \title{
 \textbf{\LARGE Streaming Optimised Scientific Software: an Introduction to CernVM-FS and EESSI}\\
-\vspace{2mm}{\Large \emph{ISC'25 tutorial proposal}}
+%\vspace{2mm}{\Large \emph{ISC'25 tutorial proposal}}
+\Large \emph{ISC'25 tutorial proposal}
 }
 
 \date{}
