
Commit 19dbff6

Apply suggestions from code review
1 parent 4a7322e commit 19dbff6

7 files changed: +25 -25 lines changed

docs/api.rst

Lines changed: 3 additions & 3 deletions
@@ -5,7 +5,7 @@ The most important classes in this library are ``Job`` and ``JobExecutor``,
 followed by ``Launcher``.
 
 The Job Class and Its Modifiers
-----------------
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 The Job-related classes listed in this section (``Job``, ``JobSpec``,
 ``ResourceSpec``, and ``JobAttributes``) are independent of
@@ -26,7 +26,7 @@ configuration options.
    :noindex:
 
 Job Modifiers
-----------------
+^^^^^^^^^^^^^
 
 There can be a lot of configuration information that goes into each
 resource manager job, including its walltime, partition/queue, the number of nodes
@@ -53,7 +53,7 @@ scheduling policies.
 .. _executors:
 
 Executors
-----------------
+~~~~~~~~~
 
 Executors are concrete implementations of mechanisms that execute jobs.
 To get an instance of a specific executor, call
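
The api.rst hunk above ends mid-sentence at "To get an instance of a specific executor, call"; for readers of this diff, the call it refers to is typically of the following shape. This is a minimal sketch only: the ``JobExecutor.get_instance`` name and the backend strings are assumptions about the psij API, not text from this commit::

    # Sketch, assuming psij exposes Job, JobSpec and JobExecutor at the top
    # level and that JobExecutor.get_instance() selects a backend by name.
    from psij import Job, JobSpec, JobExecutor

    executor = JobExecutor.get_instance("local")   # or "slurm", "lsf", "pbs", ...
    job = Job(JobSpec(executable="/bin/date"))
    executor.submit(job)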

docs/development/tutorial_add_executor.rst

Lines changed: 2 additions & 2 deletions
@@ -27,7 +27,7 @@ For example, with PBS `qsub` and `qstat` commands are used to submit a request a
 To use BatchSchedulerExecutor for a new local resource manager that uses this command line interface, subclass BatchSchedulerExecutor and add in code that understands how to form the command lines necessary to submit a request for an allocation and to get allocation status. This tutorial will do that for PBSPro.
 
 Adding an Executor
-----------------
+------------------
 
 First set up a directory structure::
 
@@ -53,7 +53,7 @@ Prerequisites:
 First, we'll build a skeleton that won't work, and see that it doesn't work in the test suite. Then we'll build up to the full functionality.
 
 A Not-implemented Stub
-^^^^^^^^^^^^^^
+----------------------
 
 Add the project directory to the Python path directory::
 

docs/getting_started.rst

Lines changed: 1 addition & 1 deletion
@@ -83,7 +83,7 @@ Local // Slurm // LSF // PBS // Cobalt
     job = Job(JobSpec(executable="/bin/date"))
     ex.submit(job)
 
-The ``executable="/bin/date")`` tells PSI/J that we want the job to run
+The ``executable="/bin/date"`` parameter tells PSI/J that we want the job to run
 the ``/bin/date`` command. Once that command has finished executing
 (which should be almost as soon as the job starts, since ``date`` does very little work)
 the resource manager will mark the job as complete, triggering PSI/J to do the same.
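
The corrected sentence above explains that PSI/J marks the job complete once the scheduler does. A minimal way to observe that from the same script might be the following sketch; ``job.wait()`` and ``job.status`` are assumptions about the psij API, not part of this commit::

    # Sketch: block until the resource manager reports the job finished.
    # Assumes Job has a wait() method and a status attribute.
    from psij import Job, JobSpec, JobExecutor

    ex = JobExecutor.get_instance("local")
    job = Job(JobSpec(executable="/bin/date"))
    ex.submit(job)
    job.wait()           # returns once the job reaches a final state
    print(job.status)    # expected to report a completed state for /bin/date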

docs/index.rst

Lines changed: 4 additions & 4 deletions
@@ -16,16 +16,16 @@ scheduler. PSI/J has a number of advantages:
 
 #. **Offers an asynchronous modern API for job management.** PSI/J is a clean Python API for requesting and managing jobs that works across a variety of HPC centers.
 
-#. **Supports common batch schedulers.** It's easy to test PSI/J on your systems—including multiple DOE supercomputer centers—and share the results with the community.
+#. **Supports the common batch schedulers:** We test PSI/J across multiple DOE supercomputer centers. It's easy to test PSI/J on your systems and share the results with the community.
 
 #. **Is built by the HPC community, for the HPC community:** PSI/J is based on a number of libraries used by state-of-the-art HPC workflow applications.
 
-#. **Leverages contributor expertise as an open source project:** We are establishing a community to develop, test, and deploy PSI/J across many HPC facilities.
+#. **PSI/J is an open source project:** We are establishing a community to develop, test, and deploy PSI/J across many HPC facilities.
 
 Most HPC centers feature multiple schedulers, rolling policy changes and
 deployments of software stacks, and subtle differences even across systems with
-similar architectures. **PSI/J tames this complexity** for
-computational scientists and workflow developers.
+similar architectures. **PSI/J is designed to tame this complexity** and provide
+computational scientists and workflow developers a common API for interacting
 
 
 
docs/user_guide.rst

Lines changed: 5 additions & 5 deletions
@@ -14,7 +14,7 @@ PSI/J simplifies your work.
 
 
 When Not to Use PSI/J
-^^^^^^^^^^^^^^
+-------------------------------
 
 If you are certain that you will only *ever* be launching jobs on ORNL's Summit
 system and you don't care about any other cluster or machine, it makes sense to
@@ -53,7 +53,7 @@ What is a JobExecutor?
 
 A :class:`JobExecutor <psij.job_executor.JobExecutor>` represents a specific RM,
 e.g. Slurm, on which the job is being executed. Generally, when jobs are
-submitted they will be queued depending on how
+submitted they will be queued for a variable period of time, depending on how
 busy the target machine is. Once the job is started, its executable is
 launched and runs to completion, and the job will be marked as completed.
 
@@ -81,7 +81,7 @@ reference the `developer documentation
 
 
 Submitting a Job
-------------
+----------------
 
 The most basic way to use PSI/J looks something like the following:
 
@@ -112,7 +112,7 @@ manager’s queue after running the example above.
 
 
 Submitting Multiple Jobs
-^^^^^^^^^^^^^
+^^^^^^^^^^^^^^^^^^^^^^^^
 
 In the last section we submitted a single job. Submitting multiple jobs is as
 simple as adding a loop:
@@ -133,7 +133,7 @@ numbers of jobs (tested with up to 64k jobs).
 Configuring Your Job
 --------------------
 
-In the example above, the ``executable='/bin/date'`` tells PSI/J that we want
+In the example above, ``executable='/bin/date'`` tells PSI/J that we want
 the job to run the ``/bin/date`` command. But there are other parts of the job
 which can be configured:
 
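
The user_guide hunks above mention submitting multiple jobs with a loop and configuring parts of the job beyond ``executable``. A combined sketch of both ideas follows; ``JobAttributes``, ``ResourceSpecV1``, and their keyword names are assumptions about the psij API rather than text from this commit::

    # Sketch only; attribute and resource-spec names are assumed and should be
    # verified against the psij API reference.
    from datetime import timedelta
    from psij import Job, JobSpec, JobExecutor, JobAttributes, ResourceSpecV1

    ex = JobExecutor.get_instance("local")

    # "Submitting multiple jobs is as simple as adding a loop":
    jobs = [Job(JobSpec(executable="/bin/date")) for _ in range(10)]
    for job in jobs:
        ex.submit(job)

    # "Configuring Your Job": queue, walltime, and node count on one spec.
    spec = JobSpec(
        executable="/bin/date",
        attributes=JobAttributes(queue_name="debug", duration=timedelta(minutes=10)),
        resources=ResourceSpecV1(node_count=1),
    )
    ex.submit(Job(spec))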

web/about.html

Lines changed: 3 additions & 3 deletions
@@ -16,9 +16,9 @@ <h2>The motivation behind <span class="psij-font">PSI/J</span></h2>
 Project</a> which brings together a number of high level HPC tools developed
 by the members of ExaWorks. We noticed that most of these projects, as well as
 many of the community projects, implemented a software layer/library to interact
-with HPC schedulers in order to insulate the core functionality from the detailed
-scheduler specifications. We also noticed that
-the respective libraries were limited to schedulers running on resources that
+with HPC schedulers in order to insulate the core functionality from the
+details of how things are specified for each scheduler. We also noticed that
+the respective libraries were mostly limited to schedulers running on resources that
 each team had access to. We used our combined knowledge to
 design a single API/library for this goal, one that would be tested on all
 resources that all ExaWorks teams have access to. We then shared this API

web/index.html

Lines changed: 7 additions & 7 deletions
@@ -33,11 +33,11 @@
 <i class="fas fa-server" style="color: #9A4A68"></i>
 </div>
 <v-card-title class="text-h5 justify-center">
-Run your HPC application anywhere
+Write Scheduler Agnostic HPC Applications
 </v-card-title>
 
 <v-card-text>
-<span class="psij-font>PSI/J</span>’s unified API automatically translates abstract job specs into concrete scripts and commands to send to the scheduler.
+Use a unified API to enable your HPC application to run virtually anywhere.
 </v-card-text>
 </v-card>
 </v-col>
@@ -52,29 +52,29 @@
 <v-col justify="center">
 <v-card align="center">
 <div class="icon-logo"><i class="fas fa-terminal" style="color: #DD3D5A"></i></div>
-<h3>Stay on top of cluster changes </h3>
+<h3><span class="psij-font">PSI/J</span> runs entirely in user space</h3>
 
 <v-card-text>
-Quickly respond to experimental changes in the cluster environment rather than waiting for infrequent deployment cycles.
+here is no need to wait for infrequent deployment cycles. The HPC world
 </v-card-text>
 </v-card>
 </v-col>
 <v-col justify="center">
 <v-card align="center">
 <div class="icon-logo"><i class="fas fa-puzzle-piece" style="color: #F15A3D"></i>
 </div>
-<h3>Leverage a community of coders</h3>
+<h3>Use built-in or community contributed plugins</h3>
 
 <v-card-text>
-Community contributors expand its utility across a variety of clusters with constant extensions and improvements.
+It is virtually impossible for a single entity to
 </v-card-text>
 </v-card>
 </v-col>
 
 <v-col justify="center">
 <v-card align="center">
 <div class="icon-logo"><i class="fas fa-save" style="color: #FDA214"></i></div>
-<h3>Rely on a state-of-the-art HPC workflow </h3>
+<h3><span class="psij-font">PSI/J</span> has a rich HPC legacy</h3>
 
 <v-card-text>
 <span class="psij-font">PSI/J</span> was built by a team with decades of experience building workflow systems for large scale computing.
