
Commit ef0badb

fixed some typos
1 parent f47a25b commit ef0badb

8 files changed: +36 -32 lines changed

pySDC/tutorial/step_1/README.rst

Lines changed: 1 addition & 1 deletion
@@ -21,7 +21,7 @@ Important things to note:
   structure
 - A quick peak into ``HeatEquation_1D_FD`` reveals that the ``init``
   and the ``params.nvars`` attribute contain the same values (namely
-  ``nvars``). Yet, sometime the one or the other is used here (and
+  ``nvars``). Yet, sometimes one or the other is used here (and
   throughout the code). The reason is this: the data structure
   (``mesh`` in this case) needs some form of standard initialization.
   For this, pySDC uses the ``init`` attribute each problem class has.
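
For context, the relation this hunk describes can be checked directly on a problem instance. A sketch only, assuming ``prob`` is a ``HeatEquation_1D_FD`` instance created as in Part A of this step and that ``init`` holds the plain ``nvars`` value in this version::

    # assuming ``prob`` was built as in A_spatial_problem_setup.py
    assert prob.init == prob.params.nvars   # both carry ``nvars``
    # ``init`` feeds the standard initialization of the ``mesh`` data structure,
    # while ``params.nvars`` is the user-facing problem parameter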

pySDC/tutorial/step_2/README.rst

Lines changed: 11 additions & 8 deletions
@@ -12,7 +12,7 @@ the ``step``. It represents a single time step with whatever hierarchy
 we prescribe (more on that later). The relevant data (e.g. the solution
 and right-hand side vectors as well as the problem instances and so on)
 are all part of a ``level`` and a set of levels make a ``step``
-(together with transfer operators, see teh following tutorials). In this
+(together with transfer operators, see the following tutorials). In this
 first example, we simply create a step and test the problem instance of
 its only level. This is the same test we ran
 `here <../step_1/A_spatial_problem_setup.py>`__.
@@ -32,17 +32,17 @@ Part B: My first sweeper
 ------------------------

 Since we know how to create a step, we now create our first SDC
-iteration (by hand, this time). The make use of the IMEX SDC sweeper
+iteration (by hand, this time). We make use of the IMEX SDC sweeper
 ``imex_1st_order`` and of the problem class
 ``HeatEquation_1D_FD_forced``. Also, the data structure for the
 right-hand side is now ``rhs_imex_mesh``, since we need implicit and
 explicit parts of the right-hand side. The rest is rather
 straightforward: we set initial values and times, start by spreading the
-data and tehn do the iteration until the maximum number of iterastions
+data and then do the iteration until the maximum number of iterations
 is reached or until the residual is small enough. Yet, this example
 sheds light on the key functionalities of the sweeper:
 ``compute_residual``, ``update_nodes`` and ``compute_end_point``. Also,
-the ``step`` and ``level`` structures are explores a bit more deeply,
+the ``step`` and ``level`` structures are explored a bit more deeply,
 since we make use of parameters and status objects here.

 Important things to note:
@@ -51,9 +51,9 @@ Important things to note:
   do not have to deal with the internal data structures, see Part C
   below.
 - Note the difference between status and parameter objects: parameters
-  are user-defined flags created using the dicitionaries (e.g.
+  are user-defined flags created using the dictionaries (e.g.
   ``maxiter`` as part of the ``step_params`` in this example), while
-  status objects are internal control objects which reflects the
+  status objects are internal control objects which reflect the
   current status of a level or a step (e.g. ``iter`` or ``time``).
 - The logic required to implement an SDC iteration is simple but also
   rather tedious. This will get worse if you want to deal with more
@@ -77,8 +77,11 @@ Important things to note:
 - By using one of the controllers, the whole code relevant for the user
   is reduced to setting up the ``description`` dictionary, some pre-
   and some post-processing.
-- During initializtion, the parameters used for the run are printed out. Also, user-defined/-changed parameters are indicated. This can be surpressed by setting the controller parameter ``dump_setup`` to False.
-- We make use of ``controller_parameters`` in order to provide logging to file capabilities.
+- During initialization, the parameters used for the run are printed out.
+  Also, user-defined/-changed parameters are indicated. This can be
+  suppressed by setting the controller parameter ``dump_setup`` to False.
+- We make use of ``controller_parameters`` in order to provide logging to
+  file capabilities.
 - In contrast to Part B, we do not have direct access to residuals or
   iteration counts for now. We will deal with these later.
 - This example is the prototype for a user to work with pySDC. Most of
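
For context, the controller-based workflow described in the hunks above reduces to a ``description`` dictionary plus a controller call. A sketch only: apart from ``dump_setup``, ``maxiter``, ``controller_parameters`` and the class names quoted in the text, the keys, values and signatures below are assumptions and may differ between pySDC versions::

    # assumed controller parameters; ``dump_setup`` is the flag mentioned in the text
    controller_params = {
        'logger_level': 20,     # assumed key controlling logging (to file) verbosity
        'dump_setup': False,    # suppress the parameter printout during initialization
    }

    # the ``description`` dictionary collects everything the controller needs
    description = {
        'problem_class': HeatEquation_1D_FD_forced,   # problem class named in the text
        'problem_params': {'nvars': 1023},            # illustrative values
        'sweeper_class': imex_1st_order,              # IMEX sweeper named in the text
        'sweeper_params': {'num_nodes': 3},
        'level_params': {'dt': 0.1, 'restol': 1e-10},
        'step_params': {'maxiter': 20},               # ``maxiter`` as in the text
    }

    # assumed controller class/signature; ``uinit`` would come from the problem's
    # exact solution at t=0, as in the earlier parts
    controller = controller_nonMPI(num_procs=1, controller_params=controller_params,
                                   description=description)
    uend, stats = controller.run(u0=uinit, t0=0.0, Tend=0.1)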

pySDC/tutorial/step_3/README.rst

Lines changed: 1 addition & 1 deletion
@@ -21,7 +21,7 @@ all residuals logged during time 0.1 (i.e. for all iterations in the
 first time step). Analogously, we could ask for all residuals at the
 final iteration of each step by calling
 ``filter_stats(stats, iter=-1, type='residual')``. The second helper
-routine converts the filtered or non-filtered dictionary to a listof
+routine converts the filtered or non-filtered dictionary to a list of
 tuples, where the first part is the item defined by the parameter
 ``sortby`` and the second part is the value. Here, we would like to have
 a list of iterations and residuals to see how SDC converged over the
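
To illustrate the two helper routines described above, a minimal post-processing sketch, assuming ``stats`` was returned by a controller run and that both helpers live in a stats-helper module (the import path may differ in this version)::

    from pySDC.helpers.stats_helper import filter_stats, sort_stats   # assumed path

    # all residuals logged during time 0.1, i.e. for all iterations of the first step
    filtered_stats = filter_stats(stats, time=0.1, type='residual')

    # convert the dictionary to a list of (iter, residual) tuples via ``sortby``
    iter_counts = sort_stats(filtered_stats, sortby='iter')

    for iteration, residual in iter_counts:
        print(f'iteration {iteration}: residual = {residual:.4e}')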

pySDC/tutorial/step_4/README.rst

Lines changed: 9 additions & 8 deletions
@@ -7,9 +7,9 @@ how MLSDC can be run and tested.
 Part A: Spatial transfer operators
 ----------------------------------

-For a mjltilevel hierarchy, we need transfer operators. The user, having
+For a multilevel hierarchy, we need transfer operators. The user, having
 knowledge of the data types, will have to provide a
-``space_transfer_class`` which deals with restriciton and interpolation
+``space_transfer_class`` which deals with restriction and interpolation
 in the spatial dimension. In this part, we simply set up two problems
 with two different resolutions and check the order of interpolation (4
 in this case).
@@ -36,15 +36,15 @@ collocation-based coarsening, we simply replace the ``num_nodes``
 parameter by a list, where the first entry corresponds to the finest
 level. For spatial coarsening, the problem parameter ``nvars`` is
 replaced by a list, too. During the step setup, these dictionaries with
-lists entries are transformed into lists of dictionaries corresponding
+list entries are transformed into lists of dictionaries corresponding
 to the levels (3 in this case). A third generic way of creating multiple
-levels is to replace an entry in the dscription by a list, e.g. a list
+levels is to replace an entry in the description by a list, e.g. a list
 of problem classes. The first entry of each list will always belong to
 the finest level.

 Important things to note:

-- Not all lists must habe the same length: The longest list defines the
+- Not all lists must have the same length: The longest list defines the
   number of levels and if other lists are shorter, the levels get the
   last entry in these lists (3 nodes on level 1 and 2 in this example).
 - As for most other parameters, ``space_transfer_class`` and
@@ -69,7 +69,8 @@ Important things to note:
 - In this case, the number of iterations is halved when using MLSDC.
   This is the best case and in many situations, this cannot be
   achieved. In particular, the interpolation order is crucial.
-- Using the controller parameter ``predict``, we can turn the coarse level predictor on (default) or off in the case of MLSDC or PFASST.
+- Using the controller parameter ``predict``, we can turn the coarse
+  level predictor on (default) or off in the case of MLSDC or PFASST.
 - While MLSDC looks less expensive, the number of evaluations of the
   right-hand side of the ODE is basically the same: This is due to the
   fact that after each coarse grid correction (i.e. after the
@@ -94,10 +95,10 @@ replace the problem class by a simpler version: the coarse evaluation of
 the forces omits the particle-particle interaction and only takes
 external forces into account. This is done simply by replacing the
 problem class by a list of two problem classes in the description. In
-the rsults, we can see that all versions produce more or less the same
+the results, we can see that all versions produce more or less the same
 energies, where MLSDC without f-interpolation takes about half as many
 iterations and with f-interpolation slightly more. We also check the
-timings of the three runs: although MLSDC requires much less iterations,
+timings of the three runs: although MLSDC requires much fewer iterations,
 it takes longer to run. This is due to the fact that the right-hand side
 of the ODE (i.e. the costly force evaluation) is required after each
 interpolation! To this end, we also use f-interpolation, which increases
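
The list-valued parameters described in the hunks above might look like this inside the ``description`` dictionary. A sketch with illustrative values only; the commented entries stand in for the user-provided classes named in the text::

    sweeper_params = {'num_nodes': [5, 3]}         # collocation-based coarsening, finest level first
    problem_params = {'nvars': [511, 255, 127]}    # spatial coarsening on three levels

    description = {
        'sweeper_params': sweeper_params,
        'problem_params': problem_params,
        # 'problem_class': [HeatEquation_1D_FD_forced, HeatEquation_1D_FD],  # a list of classes also works
        # 'space_transfer_class': my_space_transfer,   # user-provided restriction/interpolation
    }

    # lists may have different lengths: the longest one defines the number of levels,
    # shorter ones are padded with their last entry (here: 3 nodes on levels 1 and 2)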

pySDC/tutorial/step_5/README.rst

Lines changed: 7 additions & 7 deletions
@@ -7,14 +7,14 @@ Part A: Multistep multilevel hierarchy
 --------------------------------------

 In this first part, we create a controller and demonstrate how pySDC's data structures represent multiple time-steps.
-While for SDC the ``step`` data structure is the key part, we now habe simply a list of steps, bundled in the ``MS`` attribute of the controller.
+While for SDC the ``step`` data structure is the key part, we now have simply a list of steps, bundled in the ``MS`` attribute of the controller.
 The nice thing about going form MLSDC to PFASST is that only the number of processes in the ``num_procs`` variable has to be changed.
 This way the controller knows that multiple steps have to be computed in parallel.

 Important things to note:

 - To avoid the tedious installation of mpi4py and to have full access to all data at all times, the controllers with the ``_nonMPI`` suffix only emulate parallelism.
-  The algorithm is the same, but the steps are performed serially. Using ``MPI`` controllers allow for real parallelism and should yield the same results (see next tutorial step).
+  The algorithm is the same, but the steps are performed serially. Using ``MPI`` controllers allows for real parallelism and should yield the same results (see next tutorial step).
 - While in principle all steps can have a different number of levels, the controllers implemented so far assume that the number of levels is constant.
   Also, the instantiation of (the list of) steps via the controllers is implemented only for this case. Yet, pySDC's data structures in principle allow for different approaches.

@@ -25,9 +25,9 @@ Part B: My first PFASST run

 After we have created our multistep-multilevel hierarchy, we are now ready to run PFASST.
 We choose the simple heat equation for our first test.
-One of the most important characteristic of a parallel-in-time algorithm is its behavior for increasing number of parallel time-steps for a given problem (i.e. with ``dt`` and ``Tend`` fixed).
-Therefore, we loop over the number of parallel time-steps in this eample to see how PFASST performs for 1, 2, ..., 16 parallel steps.
-We compute and check the error as well as multiple statistical quantaties, e.g. the mean number of iterations, the range of iterations counts and so on.
+One of the most important characteristics of a parallel-in-time algorithm is its behavior for increasing number of parallel time-steps for a given problem (i.e. with ``dt`` and ``Tend`` fixed).
+Therefore, we loop over the number of parallel time-steps in this example to see how PFASST performs for 1, 2, ..., 16 parallel steps.
+We compute and check the error as well as multiple statistical quantities, e.g. the mean number of iterations, the range of iteration counts and so on.
 We see that PFASST performs very well in this case, the iteration counts do not increase significantly.

 Important things to note:
@@ -42,15 +42,15 @@ Part C: Advection and PFASST
 ----------------------------

 We saw in the last part that PFASST does perform very well for certain parabolic problems. Now, we test PFASST for an advection test case to see how things go then.
-The basic set up is the same, but now using only an implicit sweeper and periodic boundary conditions.
+The basic setup is the same, but now using only an implicit sweeper and periodic boundary conditions.
 To make things more interesting, we choose two different sweepers: the LU-trick as well as the implicit Euler and check how these are performing for this kind of problem.
 We see that in contrast to the parabolic problem, the iteration counts actually increase significantly, if more parallel time-steps are computed.
 Again, this heavily depends on the actual problem under consideration, but it is a typical behavior of parallel-in-time algorithms of this type.

 Important things to note:

 - The setup is actually periodic in time as well! At ``Tend = 1`` the exact solution looks exactly like the initial condition.
-- Like the standard IME sweeper, the ``generic_implicit`` sweeper allows the user to change the preconditioner, named ``QI``.
+- Like the standard IMEX sweeper, the ``generic_implicit`` sweeper allows the user to change the preconditioner, named ``QI``.
   To get the standard implicit Euler scheme, choose ``IE``, while for the LU-trick, choose ``LU``.
   More choices have been implemented in ``pySDC.plugins.sweeper_helper.get_Qd``.

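The scans described in Parts B and C above could be written roughly as follows. Only ``num_procs``, ``QI`` and the values ``IE``/``LU`` are taken from the text; the controller class, run signature and remaining values are assumptions::

    for qi in ['IE', 'LU']:                       # implicit Euler vs. LU-trick preconditioner
        description['sweeper_params']['QI'] = qi
        for num_procs in [1, 2, 4, 8, 16]:        # number of parallel time-steps
            controller = controller_nonMPI(num_procs=num_procs,
                                           controller_params=controller_params,
                                           description=description)
            uend, stats = controller.run(u0=uinit, t0=0.0, Tend=Tend)
            # extract iteration counts and errors from ``stats`` (cf. Step 3) and compare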

pySDC/tutorial/step_6/README.rst

Lines changed: 3 additions & 3 deletions
@@ -1,7 +1,7 @@
 Step-6: Advanced PFASST controllers
 ===================================

-We discuss controller implementations, features and parallelization of PFASST controller in this step.
+We discuss controller implementations, features and parallelization of PFASST controllers in this step.

 Part A: The nonMPI controller
 ------------------------------------------
@@ -39,14 +39,14 @@ To do this, pySDC comes with the MPI-parallelized controller, namely ``controlle
 It is supposed to yield the same results as the non-MPI counterpart and this is what we are demonstrating here (at least for one particular example).
 The actual code of this part is rather short, since the only task is to call another snippet (``playground_parallelization.py``) with different number of parallel processes.
 This is realized using Python's ``subprocess`` library and we check at the end if each call returned normally.
-Now, the snippet called by the example is the basically the same code as use by Parts A and B.
+Now, the snippet called by the example is basically the same code as used by Parts A and B.
 We can use the results of Parts A and B to compare with and we expect the same number of iterations, the same accuracy and the same difference between the two flavors as in Part A (up to machine precision).

 Important things to note:

 - The additional Python script ``playground_parallelization.py`` contains the code to run the MPI-parallel controller. To this end, we import the routine ``set_parameters`` from Part A to ensure that we use the same set of parameters for all runs.
 - This example also shows how the statistics of multiple MPI processes can be gathered and processed by rank 0, see ``playground_parallelization.py``.
-- The controller need a working installation of ``mpi4py``. Since this is not always easy to achieve and since debugging a parallel program can cause a lot of headaches, the non-MPI controller performs the same operations in serial.
+- The controller needs a working installation of ``mpi4py``. Since this is not always easy to achieve and since debugging a parallel program can cause a lot of headaches, the non-MPI controller performs the same operations in serial.
 - The somewhat weird notation with the current working directory ``cwd`` is due to the corresponding test, which, run by nosetests, has a different working directory than the tutorial.

 .. include:: doc_step_6_C.rst
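
The ``subprocess``-based driver described above might look roughly like this. The snippet name and the ``cwd`` remark are taken from the text; the exact command line used by the tutorial is an assumption::

    import os
    import subprocess

    cwd = 'pySDC/tutorial/step_6'   # the test runner uses a different working directory
    for num_procs in [1, 2, 4, 8]:
        cmd = f'mpirun -np {num_procs} python playground_parallelization.py'
        p = subprocess.Popen(cmd.split(), cwd=cwd, env=os.environ.copy())
        p.wait()
        # check that each call returned normally
        assert p.returncode == 0, f'run with {num_procs} processes failed'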

pySDC/tutorial/step_7/README.rst

Lines changed: 2 additions & 2 deletions
@@ -25,7 +25,7 @@ Part B: mpi4py-fft for parallel Fourier transforms

 The most prominent parallel solver is, probably, the FFT.
 While many implementations or wrappers for Python exist, we decided to use `mpi4py-fft <https://mpi4py-fft.readthedocs.io/en/latest/>`_, which provided the easiest installation, a simple API and good parallel scaling.
-As an example we here test the nonlinear Schrödinger equation, using the IMEX sweeper to treat the nonlinear parts explicitly.
+As an example, we here test the nonlinear Schrödinger equation, using the IMEX sweeper to treat the nonlinear parts explicitly.
 The code allows to work both in real and spectral space, while the latter is usually faster.
 This example tests SDC, MLSDC and PFASST.


@@ -48,6 +48,6 @@ See `implementations/datatype_classes/petsc_dmda_grid.py` and `implementations/p
 Important things to note:

 - We need processors in space and time, which can be achieved by `comm.Split` and coloring. The space-communicator is then passed to the problem class.
-- Below we run the code 3 times: with 1 and 2 processors in space as well as 4 processors (2 in time and 2 in space). Do not expect scaling due to the CI environment.
+- Below, we run the code 3 times: with 1 and 2 processors in space as well as 4 processors (2 in time and 2 in space). Do not expect scaling due to the CI environment.

 .. include:: doc_step_7_C.rst
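
A minimal sketch of the ``comm.Split`` coloring mentioned above, splitting ``MPI.COMM_WORLD`` into space and time communicators; the layout with 2 ranks per spatial group is an assumption for illustration::

    from mpi4py import MPI

    world = MPI.COMM_WORLD
    procs_in_space = 2   # e.g. 4 ranks total -> 2 in time and 2 in space, as in the last run

    # ranks sharing a color end up in the same sub-communicator
    space_comm = world.Split(color=world.rank // procs_in_space, key=world.rank)
    time_comm = world.Split(color=world.rank % procs_in_space, key=world.rank)

    # the space communicator is then passed to the problem class, cf. the note above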

pySDC/tutorial/step_8/README.rst

Lines changed: 2 additions & 2 deletions
@@ -29,7 +29,7 @@ To prevent this, information can be sent forward immediately, but then this is n

 Important things to note:

-- Use the controller parameter ``mssdc_jac`` to controll whether the method should be "parallel" (Jacobi-like) or "serial" (Gauss-like).
+- Use the controller parameter ``mssdc_jac`` to control whether the method should be "parallel" (Jacobi-like) or "serial" (Gauss-like).
 - We increased the logging value here again, (safely) ignoring the warnings for multi-step SDC.

 .. include:: doc_step_8_B.rst
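
For context, switching between the two multi-step SDC flavors is a single controller flag. Only ``mssdc_jac`` and its meaning are taken from the text; the other key is an assumption::

    controller_params = {
        'logger_level': 30,   # assumed key for the increased logging value mentioned above
        'mssdc_jac': True,    # True: "parallel" (Jacobi-like), False: "serial" (Gauss-like)
    }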
@@ -53,4 +53,4 @@ Important things to note:
 Part X: To be continued...
 --------------------------

-We shall see what comes next...
+We shall see what comes next...
