@@ -16,39 +16,37 @@ <h2>The motivation behind <span class="psij-font">PSI/J</span></h2>
1616 Project</a> which brings together a number of high level HPC tools developed
1717 by the members of ExaWorks. We noticed that most of these projects, as well as
1818 many of the community projects, implemented a software layer/library to interact
19- with HPC schedulers in order to insulate the core functionality from the
20- details of how things are specified for each scheduler. We also noticed that
21- the respective libraries mostly covered schedulers running on resources that
22- each team had access to. We figured that we could use our combined knowledge to
23- design a single API/library for this goal, a library that would be tested on all
24- resources that all ExaWorks teams have access to. We could then share this API
25- and library so that all high level HPC tools could benefit from it.
19+ with HPC schedulers in order to insulate the core functionality from the detailed
20+ scheduler specifications. We also noticed that
21+ the respective libraries were limited to schedulers running on resources that
22+ each team had access to. We used our combined knowledge to
23+ design a single API/library for this goal, one that would be tested on all
24+ resources that all ExaWorks teams have access to. We then shared this API
25+ and library so that all high level HPC tools can benefit from them.
2626 </p>
2727 </v-card>
2828
2929 <v-card align="left" class="mb-6 vbox">
3030 <h2>The complexity of testing HPC libraries</h2>
3131
3232 <p>
33- A major factor contributing to the difficulties in maintaining HPC software
33+ A major difficulty in maintaining HPC software
3434 tools is that access to HPC resources is generally limited to a small number
35- of clusters local to each team. Additionally, HPC resources tend to vary
36- widely depending on the institution that maintains them. Consequently, the
37- chances that software that is tested on resources that a HPC tool development
38- team has access to will encounter problems on other HPC resources is fairly
39- high. As mentioned above, a first step in addressing this problem is by
40- pooling the teams' respective resources for testing purposes. However,
35+ of clusters local to each team. Additionally, HPC resources vary
36+ widely depending on the institution that maintains them. Consequently,
37+ software that is tested on resources that an HPC tool development
38+ team has access to is likely to encounter problems on other HPC resources. A first step in addressing this problem is to
39+ pool teams' respective resources for testing purposes.
4140 <span class="psij-font">PSI/J</span> takes it a step further by exposing an
4241 infrastructure that allows any <span class="psij-font">PSI/J</span> user
43- to easily contribute test results to the <span class="psij-font">PSI/J</span>,
44- and do so automatically. This is a mutually beneficial relationship: the
45- <span class="psij-font">PSI/J</span> community at large gains a certain level
46- of assurance that <span class="psij-font">PSI/J</span> functions correctly on
42+ to easily contribute test results to <span class="psij-font">PSI/J</span>,
43+ and do so automatically. This is mutually beneficial: the
44+ <span class="psij-font">PSI/J</span> community gains assurance that <span class="psij-font">PSI/J</span> functions correctly on
4745 a wide range of resources, while users contributing tests have a mechanism to
4846 ensure that the <span class="psij-font">PSI/J</span> team is aware of
4947 potential problems specific to their resources and can address them, thus
5048 ensuring that <span class="psij-font">PSI/J</span> continues to function
51- correctly on specific resources.
49+ correctly on those resources.
5250 </p>
5351 </v-card>
5452