pySDC/tutorial/step_2/README.rst (11 additions, 8 deletions)

@@ -12,7 +12,7 @@ the ``step``. It represents a single time step with whatever hierarchy
 we prescribe (more on that later). The relevant data (e.g. the solution
 and right-hand side vectors as well as the problem instances and so on)
 are all part of a ``level`` and a set of levels make a ``step``
-(together with transfer operators, see teh following tutorials). In this
+(together with transfer operators, see the following tutorials). In this
 first example, we simply create a step and test the problem instance of
 its only level. This is the same test we ran
 `here <../step_1/A_spatial_problem_setup.py>`__.
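
For orientation, a minimal sketch of what Part A boils down to; the import path, the ``description`` keys and the method names are assumptions based on the prose, not verbatim tutorial code::

    # Hypothetical sketch of Part A (names and paths assumed, not verbatim pySDC code)
    from pySDC.core.Step import step       # assumed location of the step class

    S = step(description=description)      # description: dict bundling problem, sweeper,
                                           # level and step settings (see Part C below)
    P = S.levels[0].prob                   # problem instance attached to the only level
    uinit = P.u_exact(0.0)                 # rerun the spatial test from step 1 on it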
@@ -32,17 +32,17 @@ Part B: My first sweeper
 ------------------------

 Since we know how to create a step, we now create our first SDC
-iteration (by hand, this time). The make use of the IMEX SDC sweeper
+iteration (by hand, this time). We make use of the IMEX SDC sweeper
 ``imex_1st_order`` and of the problem class
 ``HeatEquation_1D_FD_forced``. Also, the data structure for the
 right-hand side is now ``rhs_imex_mesh``, since we need implicit and
 explicit parts of the right-hand side. The rest is rather
 straightforward: we set initial values and times, start by spreading the
-data and tehn do the iteration until the maximum number of iterastions
+data and then do the iteration until the maximum number of iterations
 is reached or until the residual is small enough. Yet, this example
 sheds light on the key functionalities of the sweeper:
 ``compute_residual``, ``update_nodes`` and ``compute_end_point``. Also,
-the ``step`` and ``level`` structures are explores a bit more deeply,
+the ``step`` and ``level`` structures are explored a bit more deeply,
 since we make use of parameters and status objects here.

 Important things to note:
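
The hand-written iteration described here has a simple structure. A hedged sketch (``S`` is the step, ``L`` its only level; the attribute layout and the ``predict``/``restol`` names are assumptions, only the three sweeper routines are named in the text)::

    # Structural sketch only -- not verbatim tutorial code
    L = S.levels[0]
    P = L.prob

    L.u[0] = P.u_exact(t0)                 # set the initial value
    L.sweep.predict()                      # "spread the data" to all nodes (assumed name)

    k = 0
    while k < step_params['maxiter']:      # maximum number of iterations (see below)
        L.sweep.compute_residual()         # named in the text
        if L.status.residual < level_params['restol']:  # assumed residual/tolerance names
            break
        L.sweep.update_nodes()             # one SDC sweep, named in the text
        k += 1

    L.sweep.compute_end_point()            # named in the text: value at the step's end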
@@ -51,9 +51,9 @@ Important things to note:
   do not have to deal with the internal data structures, see Part C
   below.
 - Note the difference between status and parameter objects: parameters
-  are user-defined flags created using the dicitionaries (e.g.
+  are user-defined flags created using the dictionaries (e.g.
   ``maxiter`` as part of the ``step_params`` in this example), while
-  status objects are internal control objects which reflects the
+  status objects are internal control objects which reflect the
   current status of a level or a step (e.g. ``iter`` or ``time``).
 - The logic required to implement an SDC iteration is simple but also
   rather tedious. This will get worse if you want to deal with more
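
To make the parameter/status distinction concrete, a small hedged sketch (only ``maxiter``, ``step_params``, ``iter`` and ``time`` come from the text; the other entries are assumptions)::

    # Parameters: plain user-defined dictionaries, set up before the run
    step_params = {'maxiter': 10}        # from the text
    level_params = {'restol': 1e-10}     # assumed: residual tolerance used by the iteration

    # Status objects: filled by pySDC itself while the run progresses, e.g.
    #   S.status.iter   -- current iteration of step S
    #   L.status.time   -- current time of level L
    # (attribute names 'iter' and 'time' are taken from the text)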
@@ -77,8 +77,11 @@ Important things to note:
 - By using one of the controllers, the whole code relevant for the user
   is reduced to setting up the ``description`` dictionary, some pre-
   and some post-processing.
-- During initializtion, the parameters used for the run are printed out. Also, user-defined/-changed parameters are indicated. This can be surpressed by setting the controller parameter ``dump_setup`` to False.
-- We make use of ``controller_parameters`` in order to provide logging to file capabilities.
+- During initialization, the parameters used for the run are printed out.
+  Also, user-defined/-changed parameters are indicated. This can be
+  suppressed by setting the controller parameter ``dump_setup`` to False.
+- We make use of ``controller_parameters`` in order to provide logging to
+  file capabilities.
 - In contrast to Part B, we do not have direct access to residuals or
   iteration counts for now. We will deal with these later.
 - This example is the prototype for a user to work with pySDC. Most of
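
A hedged sketch of the controller-driven workflow these bullets describe; only ``description``, ``controller_parameters`` and ``dump_setup`` appear in the text, the remaining names and the commented calls are assumptions::

    # Controller parameters: only 'dump_setup' is taken from the text
    controller_params = {
        'dump_setup': False,    # suppress the parameter printout at initialization
        'logger_level': 20,     # assumed: would steer the logging-to-file capabilities
    }

    # Pre-processing: assemble the description dictionary, then hand everything over.
    # The controller call and its signature are assumptions, shown as comments only:
    #   controller = controller_nonMPI(num_procs=1, controller_params=controller_params,
    #                                  description=description)
    #   uend, stats = controller.run(u0=uinit, t0=0.0, Tend=Tend)
    # Post-processing: inspect uend and the collected stats.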
pySDC/tutorial/step_5/README.rst (7 additions, 7 deletions)

@@ -7,14 +7,14 @@ Part A: Multistep multilevel hierarchy
 --------------------------------------

 In this first part, we create a controller and demonstrate how pySDC's data structures represent multiple time-steps.
-While for SDC the ``step`` data structure is the key part, we now habe simply a list of steps, bundled in the ``MS`` attribute of the controller.
+While for SDC the ``step`` data structure is the key part, we now have simply a list of steps, bundled in the ``MS`` attribute of the controller.
 The nice thing about going form MLSDC to PFASST is that only the number of processes in the ``num_procs`` variable has to be changed.
 This way the controller knows that multiple steps have to be computed in parallel.

 Important things to note:

 - To avoid the tedious installation of mpi4py and to have full access to all data at all times, the controllers with the ``_nonMPI`` suffix only emulate parallelism.
-  The algorithm is the same, but the steps are performed serially. Using ``MPI`` controllers allow for real parallelism and should yield the same results (see next tutorial step).
+  The algorithm is the same, but the steps are performed serially. Using ``MPI`` controllers allows for real parallelism and should yield the same results (see next tutorial step).
 - While in principle all steps can have a different number of levels, the controllers implemented so far assume that the number of levels is constant.
   Also, the instantiation of (the list of) steps via the controllers is implemented only for this case. Yet, pySDC's data structures in principle allow for different approaches.
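
A hedged sketch of the multistep hierarchy described above; only ``num_procs`` and the ``MS`` attribute are taken from the text, the controller name and call are assumptions shown as comments::

    num_procs = 8   # number of parallel time-steps; the only change needed from MLSDC to PFASST

    # Assumed instantiation (a *_nonMPI controller, per the bullet above):
    #   controller = controller_nonMPI(num_procs=num_procs,
    #                                  controller_params=controller_params,
    #                                  description=description)
    # controller.MS is then a list with one step object per parallel time-step:
    #   assert len(controller.MS) == num_procs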
@@ -25,9 +25,9 @@ Part B: My first PFASST run

 After we have created our multistep-multilevel hierarchy, we are now ready to run PFASST.
 We choose the simple heat equation for our first test.
-One of the most important characteristic of a parallel-in-time algorithm is its behavior for increasing number of parallel time-steps for a given problem (i.e. with ``dt`` and ``Tend`` fixed).
-Therefore, we loop over the number of parallel time-steps in this eample to see how PFASST performs for 1, 2, ..., 16 parallel steps.
-We compute and check the error as well as multiple statistical quantaties, e.g. the mean number of iterations, the range of iterations counts and so on.
+One of the most important characteristics of a parallel-in-time algorithm is its behavior for increasing number of parallel time-steps for a given problem (i.e. with ``dt`` and ``Tend`` fixed).
+Therefore, we loop over the number of parallel time-steps in this example to see how PFASST performs for 1, 2, ..., 16 parallel steps.
+We compute and check the error as well as multiple statistical quantities, e.g. the mean number of iterations, the range of iteration counts and so on.
 We see that PFASST performs very well in this case, the iteration counts do not increase significantly.

 Important things to note:
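
A hedged sketch of the scaling loop this part performs; ``run_pfasst`` is a hypothetical helper standing in for the controller setup and run, and the list of step counts is read loosely from the prose::

    import numpy as np

    for num_procs in [1, 2, 4, 8, 16]:                # up to 16 parallel steps
        niters = run_pfasst(num_procs=num_procs)      # hypothetical: iteration count per step
        print(f'{num_procs:2d} parallel steps: '
              f'mean iterations {np.mean(niters):.2f}, '
              f'range {min(niters)}-{max(niters)}')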
@@ -42,15 +42,15 @@ Part C: Advection and PFASST
 ----------------------------

 We saw in the last part that PFASST does perform very well for certain parabolic problems. Now, we test PFASST for an advection test case to see how things go then.
-The basic set up is the same, but now using only an implicit sweeper and periodic boundary conditions.
+The basic setup is the same, but now using only an implicit sweeper and periodic boundary conditions.
 To make things more interesting, we choose two different sweepers: the LU-trick as well as the implicit Euler and check how these are performing for this kind of problem.
 We see that in contrast to the parabolic problem, the iteration counts actually increase significantly, if more parallel time-steps are computed.
 Again, this heavily depends on the actual problem under consideration, but it is a typical behavior of parallel-in-time algorithms of this type.

 Important things to note:

 - The setup is actually periodic in time as well! At ``Tend = 1`` the exact solution looks exactly like the initial condition.
-- Like the standard IME sweeper, the ``generic_implicit`` sweeper allows the user to change the preconditioner, named ``QI``.
+- Like the standard IMEX sweeper, the ``generic_implicit`` sweeper allows the user to change the preconditioner, named ``QI``.
   To get the standard implicit Euler scheme, choose ``IE``, while for the LU-trick, choose ``LU``.
   More choices have been implemented in ``pySDC.plugins.sweeper_helper.get_Qd``.
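
A hedged sketch of how the preconditioner choice mentioned in the last bullet might look; only ``QI`` and its values ``IE``/``LU`` come from the text, the other keys are assumptions::

    # Sweeper parameters for the generic_implicit sweeper (other keys assumed)
    sweeper_params = {
        'num_nodes': 3,     # assumed collocation setup
        'QI': 'LU',         # LU-trick; use 'IE' for the implicit Euler preconditioner
    }
    # passed on via description['sweeper_params'], with
    # description['sweeper_class'] set to the generic_implicit class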
pySDC/tutorial/step_6/README.rst (3 additions, 3 deletions)

@@ -1,7 +1,7 @@
 Step-6: Advanced PFASST controllers
 ===================================

-We discuss controller implementations, features and parallelization of PFASST controller in this step.
+We discuss controller implementations, features and parallelization of PFASST controllers in this step.

 Part A: The nonMPI controller
 ------------------------------------------
@@ -39,14 +39,14 @@ To do this, pySDC comes with the MPI-parallelized controller, namely ``controlle
 It is supposed to yield the same results as the non-MPI counterpart and this is what we are demonstrating here (at least for one particular example).
 The actual code of this part is rather short, since the only task is to call another snippet (``playground_parallelization.py``) with different number of parallel processes.
 This is realized using Python's ``subprocess`` library and we check at the end if each call returned normally.
-Now, the snippet called by the example is the basically the same code as use by Parts A and B.
+Now, the snippet called by the example is basically the same code as used by Parts A and B.
 We can use the results of Parts A and B to compare with and we expect the same number of iterations, the same accuracy and the same difference between the two flavors as in Part A (up to machine precision).

 Important things to note:

 - The additional Python script ``playground_parallelization.py`` contains the code to run the MPI-parallel controller. To this end, we import the routine ``set_parameters`` from Part A to ensure that we use the same set of parameters for all runs.
 - This example also shows how the statistics of multiple MPI processes can be gathered and processed by rank 0, see ``playground_parallelization.py``.
-- The controller need a working installation of ``mpi4py``. Since this is not always easy to achieve and since debugging a parallel program can cause a lot of headaches, the non-MPI controller performs the same operations in serial.
+- The controller needs a working installation of ``mpi4py``. Since this is not always easy to achieve and since debugging a parallel program can cause a lot of headaches, the non-MPI controller performs the same operations in serial.
 - The somewhat weird notation with the current working directory ``cwd`` is due to the corresponding test, which, run by nosetests, has a different working directory than the tutorial.
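
A hedged sketch of the ``subprocess``-based driver described above; the file name comes from the text, while the ``mpirun`` invocation and the process counts are assumptions::

    import subprocess

    for num_procs in [1, 2, 4]:          # assumed process counts
        cmd = ['mpirun', '-np', str(num_procs), 'python', 'playground_parallelization.py']
        p = subprocess.run(cmd, capture_output=True, text=True)
        # check at the end that each call returned normally
        assert p.returncode == 0, f'run with {num_procs} processes failed:\n{p.stderr}'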
pySDC/tutorial/step_7/README.rst (2 additions, 2 deletions)

@@ -25,7 +25,7 @@ Part B: mpi4py-fft for parallel Fourier transforms

 The most prominent parallel solver is, probably, the FFT.
 While many implementations or wrappers for Python exist, we decided to use `mpi4py-fft <https://mpi4py-fft.readthedocs.io/en/latest/>`_, which provided the easiest installation, a simple API and good parallel scaling.
-As an example we here test the nonlinear Schrödinger equation, using the IMEX sweeper to treat the nonlinear parts explicitly.
+As an example, we here test the nonlinear Schrödinger equation, using the IMEX sweeper to treat the nonlinear parts explicitly.
 The code allows to work both in real and spectral space, while the latter is usually faster.
 This example tests SDC, MLSDC and PFASST.
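
Independent of pySDC, a minimal hedged sketch of a parallel transform with mpi4py-fft (grid size and dtype are arbitrary illustration choices)::

    import numpy as np
    from mpi4py import MPI
    from mpi4py_fft import PFFT, newDistArray

    fft = PFFT(MPI.COMM_WORLD, [64, 64], dtype=np.complex128)   # distributed 2D FFT plan
    u = newDistArray(fft, False)       # field in real space, distributed over the ranks
    u[:] = np.random.rand(*u.shape)
    u_hat = fft.forward(u)             # to spectral space
    u2 = fft.backward(u_hat)           # and back again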
@@ -48,6 +48,6 @@ See `implementations/datatype_classes/petsc_dmda_grid.py` and `implementations/p
 Important things to note:

 - We need processors in space and time, which can be achieved by `comm.Split` and coloring. The space-communicator is then passed to the problem class.
-- Below we run the code 3 times: with 1 and 2 processors in space as well as 4 processors (2 in time and 2 in space). Do not expect scaling due to the CI environment.
+- Below, we run the code 3 times: with 1 and 2 processors in space as well as 4 processors (2 in time and 2 in space). Do not expect scaling due to the CI environment.
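
A hedged sketch of the ``comm.Split`` coloring mentioned in the first bullet; the 2x2 layout for 4 ranks is an assumption, the Split call itself is standard mpi4py::

    from mpi4py import MPI

    world = MPI.COMM_WORLD
    n_space = 2                     # assumed: processes per space communicator
    rank = world.Get_rank()

    comm_space = world.Split(color=rank // n_space, key=rank)   # e.g. ranks {0,1} and {2,3}
    comm_time = world.Split(color=rank % n_space, key=rank)     # e.g. ranks {0,2} and {1,3}
    # comm_space would be passed to the problem class (per the text),
    # comm_time to the time-parallel controller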