### A Pluto.jl notebook ###
# v0.20.6
using Markdown
using InteractiveUtils
# This Pluto notebook uses @bind for interactivity. When running this notebook outside of Pluto, the following 'mock version' of @bind gives bound variables a default value (instead of an error).
macro bind(def, element)
    #! format: off
    return quote
        local iv = try Base.loaded_modules[Base.PkgId(Base.UUID("6e696c72-6542-2067-7265-42206c756150"), "AbstractPlutoDingetjes")].Bonds.initial_value catch; b -> missing; end
        local el = $(esc(element))
        global $(esc(def)) = Core.applicable(Base.get, el) ? Base.get(el) : iv(el)
        el
    end
    #! format: on
end
# ╔═╡ 8197b6f0-00b7-11ef-2142-4b7cbbaefd90
begin
    using Plots
    using PlutoUI
    using PlutoTeachingTools
    using LaTeXStrings
    using DifferentialEquations
    using ForwardDiff
    using HypertextLiteral
    using LinearAlgebra
end
# ╔═╡ ba9b6172-0234-442c-baaa-876b12f689bd
md"""
!!! info ""
[Click here to view the PDF version.](https://teaching.matmat.org/numerical-analysis/12_Initial_value_problems.pdf)
"""
# ╔═╡ d8406b01-e36f-4953-a5af-cd563005c2a1
TableOfContents()
# ╔═╡ 692aa6e2-21a9-4dad-a719-9d8ad88bd467
md"""
# Initial value problems
"""
# ╔═╡ ebabe949-5492-4b8b-959d-05c5728da043
md"""
In computational science we are often faced with quantities that change continuously in space or time, for example the temperature profile of a hot body or the population of an animal species over time. Such problems are often modelled by differential equations. If there is only a single independent variable $t$ we call the model an **ordinary differential equation**. The usual setup is that at the initial time $t = 0$ we have some knowledge about our problem, e.g. from a measurement, and the key question is how the situation evolves for $t > 0$.
"""
# ╔═╡ 05ef8174-e9a6-4280-8640-08d74635fba2
md"""
!!! warning "Example: Ducks on the Leman"
Suppose we want to model the population of ducks on the Leman, i.e. let $u(t)$ denote the number of ducks on the lake at time $t$. To simplify the modelling we allow for "fractional ducks", i.e. we allow $u$ to be any real number.
First we assume a constant growth rate $C$ for the number of ducks at any time $t$, i.e. that the number of ducks born minus the number of ducks deceased in one unit of time is proportional to $u(t)$ --- the current population of ducks. This leads to the ordinary differential equation (ODE)
```math
\left\{\begin{aligned}
\frac{d u(t)}{d t} &= C\, u(t) &&t > 0\\
u(0) &= u_0
\end{aligned}\right.
```
where $u_0$ is the initial number of ducks we observed at $t = 0$.
In the notation of the general definition of an initial value problem given below,
we thus have $f(t, u) = C\, u$.
The solution of this linear equation is
```math
u(t) = u_0 \, e^{Ct},
```
i.e. exponential growth.
Clearly, as the size of the Leman is finite, this is not a reasonable model:
both food and space on the lake are limited, which should cap the growth.
To improve the model we assume that the death rate itself is proportional
to the size of the population, i.e. $d\,u$ for a constant $d > 0$.
Keeping the idea of a constant birth rate $b > 0$ one thus obtains
an improved model
```math
\left\{\begin{aligned}
\frac{d u(t)}{d t} &= b\, u(t) - d \, \left(u(t) \right)^2 &&t > 0\\
u(0) &= u_0
\end{aligned}\right.
```
i.e. we have the case of $f(t, u) = b\, u - d \, u^2$
when considering the above definition.
This is the **logistic equation**, which in fact has multiple solutions.
The solution relevant for population models has the form
```math
u(t) = \frac{b/d}{1 + \left( \frac{b}{d\, u_0} -1 \right) e^{-b t}}
```
As a plot shows, this solution varies smoothly from some initial population
$u_0$ to a final population $b/d$:
"""
# ╔═╡ 31332dd6-1ef2-4181-a638-35980f470552
let
    u₀ = 1
    b = 4
    d = 0.5
    u(t) = b/d / (1 + (b/(d*u₀) - 1) * exp(-b*t))
    plot(u; xlims=(0, 3), label="b=4, d=0.5, u₀=1", xlabel=L"t", ylabel=L"u", title="Duck population", lw=2)
end
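# ╔═╡ a1f3c2d0-5b7e-4c9a-9f21-6db1c3a7e801
md"""
As a quick sanity check (an illustrative sketch, not part of the lecture material) we can verify numerically that the stated closed-form solution indeed satisfies the logistic equation $\frac{du}{dt} = b\,u - d\,u^2$, here using a central finite difference in place of the exact derivative:

```julia
b, d, u₀ = 4.0, 0.5, 1.0
u(t) = (b / d) / (1 + (b / (d * u₀) - 1) * exp(-b * t))

# Central finite difference as a stand-in for du/dt
deriv(u, t; h=1e-6) = (u(t + h) - u(t - h)) / (2h)

# The residual of the logistic ODE should vanish at every sampled time ...
for t in 0.0:0.5:3.0
    @assert abs(deriv(u, t) - (b * u(t) - d * u(t)^2)) < 1e-6
end

# ... and the initial condition is reproduced
@assert u(0.0) ≈ u₀
```
"""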
# ╔═╡ 1e65223e-9a12-4a4c-8b5e-31bfe4f813ec
md"""This problem is an example of a class of problems called **initial value problems**: based on some initial knowledge at $t=0$ one wants to know how a quantity (here the population) evolves.
"""
# ╔═╡ 64fe575e-0d47-4949-a9e8-2056ddee45df
md"""
!!! info "Definition: Initial-value problem"
A scalar, first-order initial value problem (IVP) can be formulated as
```math
\tag{1}
\left\{
\begin{aligned}
\frac{d u(t)}{d t} &= f(t, u(t)) && a ≤ t ≤ b \\
u(a) &= u_0,
\end{aligned}\right.
```
We call $t$ the **independent variable**, $u$ the **dependent variable**
and $u_0$ the initial condition.
A **solution** of an initial-value problem is a differentiable
function $u : \mathbb{R} \to \mathbb{R}$
which satisfies both $u'(t) = f(t, u(t))$ for all $a ≤ t ≤ b$
and $u(a) = u_0$.
For the specific case where $f(t, u) = g(t) + u h(t)$ for two functions $g$ and $h$ the differential equation (1) is called **linear**.
Often (but not always) $t$ plays the role of time and (1) thus models the time-dependence of a quantity $u$.
"""
# ╔═╡ 1dbe5d72-17d9-4f20-b365-ad913cd607c3
md"""
## Solving initial value problems numerically
For simple examples like the Duck problem above
analytic solutions can still be found with a little practice.
However, initial value problems quickly become more involved.
For example consider the problem
```math
\tag{2}
\left\{
\begin{aligned}
\frac{d u(t)}{d t} &= \sin\left[(u+t)^2\right] && 0 ≤ t ≤ 4 \\
u(0) &= -1,
\end{aligned}\right.
```
for which an analytical solution is not so easy to obtain.
"""
# ╔═╡ 6c2f1cd1-e79a-4790-9871-0a8825b5c02e
md"""
Before we introduce a few key ideas how to solve such problems,
we first need a reference technique to compare against.
Here the [DifferentialEquations.jl](https://github.com/SciML/DifferentialEquations.jl) Julia package provides a range of production-grade numerical solvers for ODE problems.
We first translate our problem (2) into Julia code.
"""
# ╔═╡ 8f159939-4bf4-4151-bd3e-a84c5e1493db
begin
    f(u, t) = sin((t + u)^2)  # defines du/dt
    u₀ = -1.0                 # initial value
    tstart = 0.0              # Start time
    tend = 4.0                # Final time
end;
# ╔═╡ 9b71f2b6-34bb-416a-947d-6e009f565bf4
md"""This we can now solve using `DifferentialEquations`, the details of which are beyond the scope of this course and hidden in the function `solve_reference`.
"""
# ╔═╡ 3f59bd74-1bb0-41df-bea2-bf5990a2b89a
function solve_reference(f, u₀, tstart, tend; abstol=1e-6, reltol=1e-6, kwargs...)
    # Construct ODEProblem object
    ivp = ODEProblem((u, p, t) -> f(u, t), u₀, (tstart, tend))
    # Solve problem using the Tsit5() algorithm, a pretty good default.
    solve(ivp, Tsit5(); abstol, reltol, kwargs...)
end
# ╔═╡ 9cb5fd8e-d923-434e-b543-a0c987a5b5a7
sol = solve_reference(f, u₀, tstart, tend);
# ╔═╡ 5d2886f2-87a5-43ae-9064-448478f957ca
md"The obtained solution can easily be visualised:"
# ╔═╡ 82321d1e-50b9-4362-afc5-d256a506bed7
plot(sol.t, sol.u; label="solution", lw=2, xlabel="t", ylabel=L"u(t)",
     title=L"\frac{du}{dt} = \sin((t+u)^2)")
# ╔═╡ 6b4ba69d-0a70-4c66-b4bc-63b717348471
md"""
## Existence and uniqueness of solutions
Let us remark that the **existence** of a solution to problem (1) is not guaranteed for all $t$ with $a ≤ t ≤ b$. For example consider the problem
```math
\frac{d u(t)}{dt} = \left(u(t)\right)^2,\, 0≤t≤2, \qquad u(0) = 1,
```
which has solution $u(t) = \frac{1}{1-t}$. This solution only exists for $t<1$.
When attempting a numerical solution of such a problem beyond $t=1$ we run into difficulties:
"""
# ╔═╡ 7a0fa3cd-a270-4947-b7ba-d30110139795
let
    f(u, t) = u^2
    u₀ = 1.0
    tstart = 0
    tend = 10
    sol = solve_reference(f, u₀, tstart, tend)
    plot(sol.t, sol.u; lw=2, label="", xlabel=L"t", yaxis=:log, ylabel=L"u(t)", xlims=(0, 2))
end
# ╔═╡ c94d264b-ad1b-47fd-8b58-fd434cfe0f11
md"""
A second issue is that the solution may not necessarily be **unique**. Consider for example the equation
```math
\frac{d u(t)}{dt} = \sqrt{u(t)}, \, t>0, \qquad u(0) = 0,
```
which has the two solutions $u_1(t) = 0$ and $u_2(t) = \frac14 t^2$.
"""
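# ╔═╡ b2e4d3c1-6c8f-4d0b-8e32-7ec2d4b8f912
md"""
Both claimed solutions are easy to check numerically. The following sketch (illustrative only) compares a central finite-difference approximation of $u'$ against $\sqrt{u}$ for both candidates:

```julia
u₁(t) = 0.0       # first solution
u₂(t) = t^2 / 4   # second solution

# Central finite difference as a stand-in for du/dt
deriv(u, t; h=1e-6) = (u(t + h) - u(t - h)) / (2h)

# Both functions satisfy du/dt = √u away from t = 0 ...
for t in 0.5:0.5:2.0
    @assert abs(deriv(u₁, t) - sqrt(u₁(t))) < 1e-6
    @assert abs(deriv(u₂, t) - sqrt(u₂(t))) < 1e-6
end

# ... and both satisfy the same initial condition u(0) = 0
@assert u₁(0.0) == 0.0 && u₂(0.0) == 0.0
```
"""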
# ╔═╡ 3dc8b41f-3ba0-4c0a-b017-731fd0958920
md"""
Both cases cause additional difficulties for numerical techniques, which we do not want to discuss here. For the scope of this course **we will assume that all IVPs we consider have a unique solution.**
If you are curious, the precise conditions for existence and uniqueness are given below:
"""
# ╔═╡ bc840e8d-77d9-4a4a-a908-3e9b4a8d253b
Foldable("Optional: Theorem governing existence and uniqueness of first-order ODEs",
md"""
!!! info "Theorem 1: Existence and uniqueness of first-order ODEs"
Given $f : \mathbb{R} \times \mathbb{R} \to \mathbb{R}$ a continuous function
with respect to both arguments and **Lipschitz-continuous** with respect to its second argument, i.e. there exists a constant $L>0$ such that
```math
| f(t, x) - f(t, y)| \leq L |x-y|, \qquad \forall x,y \in \mathbb{R}, \, \forall t \text{ with } a ≤ t ≤ b
```
Then the ODE problem (1) has a unique solution $u(t)$ defined for all
$a ≤ t ≤ b$ and $u$ is moreover continuously differentiable.
""")
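# ╔═╡ c3f5e4d2-7d9a-4e1c-9f43-8fd3e5c9a023
md"""
For the right-hand side $f(t, u) = \sin\big((t+u)^2\big)$ of our test problem (2) the Lipschitz condition can be probed numerically on a bounded region: since $\left|\frac{\partial f}{\partial u}\right| = |2(t+u)\cos((t+u)^2)|$, any $L ≥ 2\max|t+u|$ works there. An illustrative check, assuming the region $t\in[0,4]$, $u\in[-2,0]$ which contains the solution plotted above:

```julia
f(t, u) = sin((t + u)^2)

# Largest difference quotient |f(t,x) - f(t,y)| / |x - y| over a sample grid
L_est = maximum(abs(f(t, x) - f(t, y)) / abs(x - y)
                for t in range(0, 4; length=21)
                for x in range(-2, 0; length=21)
                for y in range(-2, 0; length=21) if x != y)

# On this region |t + u| ≤ 4, so |∂f/∂u| ≤ 2·4 = 8 bounds the quotient
@assert 0 < L_est ≤ 8
```
"""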
# ╔═╡ 66affc7b-f0b0-48f4-b8bd-a1ff16d0357e
md"""
## Forward Euler
Let us return to the question how to solve the IVP (1)
```math
\tag{3}
\left\{
\begin{aligned}
\frac{d u(t)}{d t} &= f(t, u(t)) && a ≤ t ≤ b \\
u(a) &= u_0,
\end{aligned}\right.
```
where the time derivative $f$ is a known function, e.g. $f(t, u) = C\,u$ in the case of the duck population. Our goal is thus to trace how the initial condition $u_0$ (at $t = a$) evolves in time until the final time $t = b$ is reached.
"""
# ╔═╡ 4454973b-81f7-4b05-9ad8-d85ce97d8d9e
md"""
Since the entire time interval $[a, b]$ could be large,
we will not attempt to solve this problem in one step.
Instead we will perform a **time discretisation**.
That is we split this time interval even further into smaller subintervals $[t_n, t_{n+1}]$ with
```math
a = t_0 < t_1 < \cdots < t_n < t_{n+1} < \cdots < t_N = b
```
and consider algorithms, which **propagate the solution** across one such interval $[t_n, t_{n+1}]$, i.e. which take the solution **at time $t_n$ to the solution at $t_{n+1}$**.
"""
# ╔═╡ 95b843d1-c445-413e-a474-6241111923da
md"""
Here, for simplicity we only consider a discretisation using
**intervals of equal length** $h$, i.e.
```math
\tag{4}
t_n = a + n\, h \quad n = 0, \ldots, N \qquad h = \frac{b-a}{N},
```
such that $t_0 = a$ and $t_N = b$.
The parameter $h$ is often called the **stepsize**.
At time $t_n$ we need to satisfy
```math
\frac{d\, u(t_n)}{dt} = f(t_n, u(t_n)),
```
where we assume that we know $u(t_n)$, the solution at $t_n$,
but the derivative $\frac{d\, u(t_n)}{dt}$ is unknown.
Our task is to find $u(t_{n+1})$.
"""
# ╔═╡ 8fdd4e24-cfa9-4c41-868d-1069e99074bf
md"""
We make progress by approximating the derivative of $u$
using one of the finite differences formulas
discussed in the chapter on [Numerical differentiation](https://teaching.matmat.org/numerical-analysis/09_Numerical_differentiation.html).
The simplest approach is to employ forward finite differences, i.e.
```math
\tag{5}
\frac{d\, u(t_n)}{d t} ≈ D^+_h u(t_n) =\frac{1}{h} \Big( u(t_n+h) - u(t_n) \Big)
= \frac{1}{h} \Big( u(t_{n+1}) - u(t_n) \Big)
```
Introducing a short-hand notation $u^{(n)}$ for $u(t_n)$
and $u^{(n+1)}$ for $u(t_{n+1})$ we obtain from (3):
```math
\tag{6}
\left\{
\begin{aligned}
\frac{u^{(n+1)} - u^{(n)}}{h} &= f(t_n, u^{(n)}), \qquad\forall n = 0, \ldots, N-1\\
u^{(0)} = u(a) &= u_0
\end{aligned}\right.
```
Notably the first line can be rearranged to
$u^{(n+1)} = u^{(n)} + h\, f(t_n, u^{(n)})$,
thus providing us with a numerical scheme
how to obtain $u^{(n+1)}$ --- an approximation to the solution at $t_{n+1}$ ---
from $u^{(n)}$ --- an approximation to the solution at $t_n$.
This scheme is known as the **forward Euler method**:
"""
# ╔═╡ 0f2b87f4-8bc0-459c-a60d-29d523e60844
md"""
!!! info "Algorithm 1: Forward Euler method"
Given an initial value problem $\frac{d u}{d t} = f(t, u)$, $u(a) = u_0$
with $t \in [a, b]$ and nodes $t_n = a+n\, h$, $h = \frac{b-a}{N}$
iteratively compute the sequence
```math
\tag{7}
u^{(n+1)} = u^{(n)} + h\, f(t_n, u^{(n)})\qquad \forall n = 0, \ldots, N-1.
```
Then $u^{(n)}$ is *approximately* the solution $u$ at $t = t_n$.
"""
# ╔═╡ be01cdbc-1e85-4ae5-bd41-c95a9b606920
md"""
!!! danger "u⁽ⁿ⁾ ≠ u(tₙ)"
Notice, that the replacement of the derivative $\frac{d\, u(t_n)}{d t}$ in (6) by the forward finite difference approximation implies that the computed points $u^{(0)}, u^{(1)}, \ldots, u^{(N)}$ are **not equal to** the function values of the exact solution $u(t_0), u(t_1), \ldots, u(t_N)$.
"""
# ╔═╡ c641cdf2-aabf-45e7-bb12-83d73a93b5b7
md"""
A basic implementation of Euler's method is shown below.
It expects the definition of the derivative (`f`), the initial value (`u₀`),
the time interval $[a,b]$ and the number of time intervals $N$.
The output is the vector of nodes $t_n$ and the vector of approximate solution
values $u^{(n)} ≈ u(t_n)$.
"""
# ╔═╡ 9b0581c3-6fa8-44d8-8ebb-d2fda2841230
function forward_euler(f, u₀, a, b, N)
    # f:  Derivative function
    # u₀: Initial value
    # a:  Start of the time interval
    # b:  End of the time interval
    # N:  Number of equal-length subintervals

    # Discrete time nodes to consider:
    h = (b - a) / N
    t = [a + i * h for i in 0:N]

    # Setup output vector with all elements initialised to u₀
    u = [float(copy(u₀)) for i in 0:N]

    # Time integration
    u[1] = u₀
    for i in 1:N
        u[i+1] = u[i] + h * f(u[i], t[i])
    end

    (; t, u)
end
# ╔═╡ f0164280-14fd-48a5-9ae8-8b31f4580da8
md"""
Let us apply this approach to the test problem (2)
```math
\left\{
\begin{aligned}
\frac{d u(t)}{d t} &= \sin\left[(u+t)^2\right] && 0 ≤ t ≤ 4 \\
u(0) &= -1,
\end{aligned}\right.
```
which we solved previously using `DifferentialEquations`. The variables `f`, `u₀`, `tstart` and `tend` defined above already contain the setup of this initial value problem:
"""
# ╔═╡ 6052f3cf-8a1e-4755-8724-a3f885702d62
f
# ╔═╡ 90a2176f-7986-4140-8e34-555993070324
u₀
# ╔═╡ 15e135e3-e95f-4494-8a20-2c571c4019f2
(tstart, tend)
# ╔═╡ 14edb601-ab10-4228-81a1-d27bb16ea202
md"""
and we can directly proceed to solving it:
"""
# ╔═╡ fb38ada2-26ce-42e9-bfdc-e4c677f8c501
md"and plot the result:"
# ╔═╡ 98f64da0-0d01-4c7e-8782-3d97dcf4c950
md"- Number of subintervals: `Neuler = ` $(@bind Neuler Slider(5:1:50; show_value=true, default=20)) "
# ╔═╡ 50007ba8-8160-48c1-b5c1-27a9332c1d58
res_euler = forward_euler(f, u₀, tstart, tend, Neuler);
# ╔═╡ 65807ddc-0c8b-4b88-bbc2-4ff3c7a1d2f8
let
    res_euler = forward_euler(f, u₀, tstart, tend, Neuler)
    plot(sol.t, sol.u; label="reference", lw=2, xlabel="t", ylabel=L"u(t)",
         title=L"\frac{du}{dt} = \sin((t+u)^2)", ylims=(-2, 0))
    plot!(res_euler.t, res_euler.u; label="Euler N=$Neuler", mark=:o, lw=2, ls=:dash)
end
# ╔═╡ ecb868c3-44a2-4f95-bafe-b3324770ee6b
md"""
We observe that the approximate solution becomes **more and more accurate** as we **increase the number of subintervals $N$**, i.e. as the step size $h$ decreases.
We obtain **linear convergence** (see plot below).
But we also observe that for **too small values** of $N$ (e.g. $N=9$) the Euler **solution notably deviates** from the reference.
"""
# ╔═╡ 9962303e-1f7a-4569-bb09-e4998f481c6e
p_cvg_fw_euler = let
    # Obtain an extremely tight reference solution using DifferentialEquations.jl
    u_exact = solve_reference(f, u₀, tstart, tend; reltol=1e-14, abstol=1e-14)

    Ns = [round(Int, 5 * 10^k) for k in 0:0.5:3]
    hs = [(tend - tstart) / N for N in Ns]

    errors = Float64[]
    for N in Ns
        res_euler = forward_euler(f, u₀, tstart, tend, N)
        u = res_euler.u
        t = res_euler.t
        # Compute global errors in each node tₙ
        global_error = [abs(u_exact(t[n]) - u[n]) for n in 1:length(u)]
        # Push the largest error to the array for plotting
        push!(errors, maximum(global_error))
    end

    plot(hs, errors; mark=:o, label="Errors", lw=2, xaxis=:log, yaxis=:log,
         xlabel=L"h", ylabel="Largest global error", xflip=true,
         title="Convergence of Forward Euler")

    # Add line for perfect 1st order convergence:
    plot!(hs, 0.1 .* hs, ls=:dash, lw=2, label=L"O(h)")
end
# ╔═╡ 667fbe5a-dbee-43a3-bb93-0160b61cf31e
md"""
Forward Euler is just one example of a broader class of so-called **explicit numerical methods** for initial value problems, which broadly speaking differ in the operations performed to obtain $u^{(n+1)}$ from $u^{(n)}$. We define:
!!! info "Explicit methods for solving initial value problems"
An **explicit method** to solve (1) is a method of the form
```math
\tag{8}
u^{(n+1)} = P_f(u^{(n)}, t_n, h)
\qquad \text{for $n=0, 1, \ldots, N-1$},
```
where $P_f(u, t, h)$ is a real-valued function
and $u^{(0)} = u_0$.
For forward Euler we simply have $P_f(u^{(n)}, t_n, h) = u^{(n)} + h f(t_n, u^{(n)})$.
"""
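# ╔═╡ d4a6f5e3-8e0b-4f2d-af54-90e4f6dab134
md"""
To make this notation concrete, here is a small sketch of a generic driver that iterates an arbitrary explicit method $P_f$ (the name `integrate_explicit` is ours, not from the lecture), with Forward Euler recovered as the special case $P_f(u, t, h) = u + h\, f(t, u)$:

```julia
# Generic driver for an explicit one-step method u⁽ⁿ⁺¹⁾ = P_f(u⁽ⁿ⁾, tₙ, h)
function integrate_explicit(Pf, u₀, a, b, N)
    h = (b - a) / N
    t = [a + n * h for n in 0:N]
    u = zeros(N + 1)
    u[1] = u₀
    for n in 1:N
        u[n+1] = Pf(u[n], t[n], h)
    end
    (; t, u)
end

# Forward Euler for the model problem du/dt = -u, u(0) = 1, exact solution e⁻ᵗ
f(t, u) = -u
Pf_euler(u, t, h) = u + h * f(t, u)
res = integrate_explicit(Pf_euler, 1.0, 0.0, 1.0, 1000)
@assert abs(res.u[end] - exp(-1.0)) < 1e-3
```
"""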
# ╔═╡ 46f8284f-7751-4c1e-a617-386d86e7f4d0
md"""
## Error analysis
As discussed above, applying Forward Euler to an initial value problem (1) with $N$ subintervals leads to a sequence of computed points $u^{(0)}, u^{(1)}, \ldots, u^{(N)}$, which approximate $u(t_0), u(t_1), \ldots, u(t_N)$ --- the solution $u$ evaluated at the nodes $\{t_i\}_{i=0}^N$. Our goal is to **quantify the error**
```math
\tag{9}
|e^{(n)}| = |u(t_n) - u^{(n)}| \qquad \text{for $n = 0, 1, \ldots, N$}
```
between the computed points and the exact values of the solution.
In employing the Forward Euler scheme (Algorithm 1) we actually make **two approximations**, namely:
1. Instead of evaluating the exact derivative $\frac{d\, u(t_n)}{d t}$ we employ a **finite differences formula** in (5). This error contribution is usually called the **local truncation error**.
2. When computing $u^{(n+1)}$ in (6) we cannot employ $u(t_n)$ --- the exact value of the solution at time $t_n$ --- since this value is unknown to us. Instead we **employ $u^{(n)}$ as an approximation to $u(t_n)$**.
Both approximations contribute to the total error (9).
"""
# ╔═╡ 701250c5-3856-4720-b125-31eaede052a0
md"""
### Local truncation error
We want to isolate the error in the Forward Euler scheme
introduced by employing the finite difference formula
from the other error contribution.
For this let us assume for a second that we actually had access to the exact solution values $u(t_n)$, which we could use as part of a Forward Euler step (7).
The error of the Euler step is then simply $u(t_{n+1}) - P_f(u(t_n), t_n, h)$.
To make the scaling of errors comparable as $h\to0$
it is usually more convenient to investigate the size of this value
relative to the chosen stepsize $h$, leading to the following definition:
!!! info "Definition: Local truncation error"
The **local truncation error** of an explicit method (8) is the quantity
```math
\tag{10}
\tau^{(n)}_h
= \frac{u(t_{n+1}) - P_f(\textcolor{red}{u(t_n)}, t_n, h)}{h}
```
"""
# ╔═╡ d898d0ca-f49e-4216-8197-950a2bc07762
md"""
Making reference to (1) we first note for Forward Euler:
```math
P_f(u(t_n), t_n, h) = u(t_n) + h\, f(t_n, u(t_n))= u(t_n) + h\, u'(t_n)
```
while a Taylor expansion of $u(t_{n+1}) = u(t_n + h)$ around $t_n$ yields
```math
u(t_{n+1}) = u(t_n + h) = u(t_n) + h u'(t_n) + \frac12 h^2 u''(t_n) + O(h^3)
```
such that the **local truncation error of Forward Euler** can be obtained as
```math
\tag{11}
\tau^{(n)}_h = \frac12 h \, u''(t_n) + O(h^2)
```
"""
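# ╔═╡ e5b7a6f4-9f1c-4a3e-bf65-a1f5a7ebc245
md"""
Expression (11) is easy to confirm numerically on a model problem with known solution. Taking $\frac{du}{dt} = u$ with $u(t) = e^t$ (so that $u''(t) = e^t$), the ratio $\tau^{(n)}_h / h$ should approach $\frac12 u''(t_n)$ as $h \to 0$ (an illustrative sketch):

```julia
# Model problem du/dt = u with exact solution u(t) = exp(t), hence u''(t) = exp(t)
f(t, u) = u
u(t) = exp(t)

# Local truncation error of one Forward Euler step started from the exact value
tₙ = 0.5
τ(h) = (u(tₙ + h) - (u(tₙ) + h * f(tₙ, u(tₙ)))) / h

# τ/h approaches u''(tₙ)/2 = exp(tₙ)/2 as h → 0, consistent with (11)
for h in (1e-2, 1e-3, 1e-4)
    @assert abs(τ(h) / h - exp(tₙ) / 2) < 2 * exp(tₙ) * h
end
```
"""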
# ╔═╡ d7150f04-e3e8-41f7-a173-9d458f72cdde
md"""
### Global error
We return to the question of estimating the error
```math
|e^{(n)}| = |u(t_n) - u^{(n)}| \qquad \text{for $n = 0, 1, \ldots, N$}.
```
In contrast to the *local* truncation error $\tau^{(n)}_h$
--- which really only estimates the error committed in a single time step ---
the error $e^{(n)}$ is a **global error**,
since it *accumulates* all error contributions up to the $n$-th timestep.
Naively one might think that simply adding all local error contributions $\tau^{(k)}_h$ with $k \leq n$ provides an estimate for this global error $e^{(n)}$. However, this neglects something important. Namely, that in our explicit algorithms (8), the *already erroneous estimate* $u^{(n)}$ is used to estimate $u^{(n+1)}$. **The error of step $n$ *propagates* to step $n+1$**.
For small values of $N$ this can cause the error $|u(t_n) - u^{(n)}|$ in later time steps $n$ to grow *exponentially*. For example consider:
"""
# ╔═╡ 6d73f60b-0ccf-492f-9700-09d0bbf15124
let
    res_euler = forward_euler(f, u₀, tstart, tend, 7)
    plot(sol.t, sol.u, label="reference", lw=2, xlabel="t", ylabel=L"u(t)",
         title=L"\frac{du}{dt} = \sin((t+u)^2)", ylims=(-2, 1))
    plot!(res_euler.t, res_euler.u; label=L"Euler $N=7$", mark=:o, lw=2, ls=:dash)
end
# ╔═╡ bcfd9254-c471-4e52-8259-aad01646fb4b
md"""
More precisely one can formulate:
!!! info "Theorem 2 (Convergence of explicit methods, simple version)"
If one has an explicit method with local truncation error satisfying
```math
|τ^{(n)}_h| ≤ C \, h^p,
```
as $h\to0$, then the global error satisfies
```math
|e^{(n)}| = |u(t_n) - u^{(n)}| ≤ \frac{C h^p}{L} \left(e^{L (t_n-a)} -1 \right)
```
where $C > 0$ and $L > 0$ are constants.
We note:
- If the local truncation error $τ^{(n)}_h$ converges with order $p$, then the explicit method also converges globally with order $p$.
- However, the global error has an **additional prefactor** $(e^{L (t_n-a)} -1)$, which **grows exponentially in time**. This is an effect of the accumulation of error from one time step to the next. In particular if $b \gg a$ our results can get rather inaccurate **even for higher-order methods** beyond Forward Euler where $p > 1$. We will pick up this point in the section on *Stability and implicit methods* below.
- For Theorem 2 to hold there are a few more details to consider (e.g. problem (1) should have a unique solution). More information can be unfolded below.
"""
# ╔═╡ 5ac4eca2-0372-4b2d-add6-2347ce9461a3
details("Optional: More details on Theorem 2",
md"""
!!! info "Theorem 2"
Given an explicit method (8) for solving IVPs (1) with local truncation error satisfying
```math
\tag{12}
|τ^{(n)}_h| ≤ C \, h^p
```
with a constant $C > 0$ and $p \in \mathbb{N}$ as well as[^1]
```math
\tag{13}
\left|\frac{\partial \phi_f(u, t, h)}{\partial u}\right| ≤ L \qquad \text{for all $t \in [a, b]$ and all $h>0$}.
```
where $ϕ_f$ is defined such that $P_f(u, t, h) = u + h\, ϕ_f(u, t, h)$.
Then the global error satisfies
for all $n = 0, 1, \ldots, N$
```math
\tag{14}
|e^{(n)}| = |u(t_n) - u^{(n)}| ≤ \frac{C h^p}{L} \left(e^{L (t_n-a)} -1 \right)
```
as $h\to 0$,
that is the numerical method (8) is **convergent with order $p$**.
[^1]: This result can also be proven with minor modifications
under the conditions of Theorem 1, i.e. that
$|\phi_f(u, t, h) - \phi_f(v, t, h)| \leq L |u-v|$ for all $u,v \in \mathbb{R}$,
all $h>0$ and $a ≤ t ≤ b$.
> **Proof:**
> First note that $u^{(n+1)} - u^{(n)} = P_f(u^{(n)}, t_n, h) - u^{(n)} = h\, ϕ_f(u^{(n)}, t_n, h)$.
> Based on $e^{(n)} = u(t_n) - u^{(n)}$ we then obtain
> ```math
> \begin{aligned}
> e^{(n+1)} - e^{(n)} &= [u(t_{n+1}) - u^{(n+1)}] - [u(t_n) - u^{(n)}] \\
> &= [u(t_{n+1}) - u(t_n)] - [u^{(n+1)} - u^{(n)}] \\
> &= [u(t_{n+1}) - u(t_n)] - h\, \phi_f (u^{(n)}, t_n, h).
> \end{aligned}
> ```
> By adding and subtracting $h\,\phi_f \big(u(t_n), t_n, h\big)$ we develop this to
> ```math
> \begin{aligned}
> e^{(n+1)}
> = e^{(n)} &+ \left[u(t_{n+1}) - u(t_n) - h\,\phi_f \big(u(t_n), t_n, h\big)\right]\\
> &+ h \left[ \phi_f \big(u(t_n), t_n, h\big) - \phi_f \big(u^{(n)}, t_n, h\big) \right]\\
> = e^{(n)} &+ \overbrace{\left[u(t_{n+1}) - P_f\big(u(t_n), t_n, h\big)\right]}^{=h\, τ^{(n)}_h}\\
> &+ h \left[ \phi_f \big(u(t_n), t_n, h\big) - \phi_f \big(u^{(n)}, t_n, h\big) \right]
> \end{aligned}
> ```
> Employing the triangle inequality yields
> ```math
> |e^{(n+1)}| ≤ |e^{(n)}| + h |τ^{(n)}_h| + h \left|\phi_f \big(u(t_n), t_n, h\big) - \phi_f \big(u^{(n)}, t_n, h\big)\right|
> ```
> Concerning the last term, the Fundamental Theorem of Calculus implies
> ```math
> \begin{aligned}
> \left|\phi_f \big(u(t_n), t_n, h\big) - \phi_f \big(u^{(n)}, t_n, h\big)\right|
> &= \left| \int_{u^{(n)}}^{u(t_n)} \frac{\partial \phi_f (u, t_n, h)}{\partial u} du \right|\\
> &≤ \int_{u^{(n)}}^{u(t_n)} \left|\frac{\partial \phi_f (u, t_n, h)}{\partial u}\right| du \\
> &\stackrel{(13)}{≤} L \left| u(t_n) - u^{(n)} \right|\\ &= L |e^{(n)}|.
> \end{aligned}
> ```
> Therefore using (12)
> ```math
> \begin{aligned}
> |e^{(n+1)}| ≤ (1 + h\,L)\, \left|e^{(n)}\right| + h |τ^{(n)}_h| \stackrel{(12)}{≤} C\, h^{p+1} + (1 + h\,L)\, \left|e^{(n)}\right|
> \end{aligned}
> ```
> with which we develop
> ```math
> \begin{aligned}
> |e^{(n+1)}| &≤ C\,h^{p+1} + (1 + h\,L)\, \left|e^{(n)}\right| \\
> &≤ C\,h^{p+1} + (1 + h\,L) \left( C\,h^{p+1} + (1 + h\,L)\,\left|e^{(n-1)}\right| \right) \\
> &= C\,h^{p+1} + (1 + h\,L) \, C\,h^{p+1} + (1+h\,L)^2\, \left|e^{(n-1)}\right| \\
> &\ \vdots\\
> &≤ \sum_{i=0}^n (1+h\,L)^i\, C\,h^{p+1} + (1+h\,L)^{n+1} |e_0| \\
> &= \frac{1-(1+h\,L)^{n+1}}{1 - (1+h\,L)} C\,h^{p+1} + (1+h\,L)^{n+1} |e_0|.
> \end{aligned}
> ```
> where in the last step we used the identity
> ```math
> \sum_{i=0}^n r^i = \frac{1-r^{n+1}}{1-r} \qquad \forall r \neq 1
> ```
> Now we notice that $e_0 = u(a) - u_0 = 0$, such that the expression simplifies to
> ```math
> |e^{(n+1)}| ≤ \frac{1-\left(1+h\,L\right)^{n+1}}{-h\,L} C\,h^{p+1}
> = \frac{C\,h^{p}}{L} \left(\left(1+h\,L\right)^{n+1} -1 \right)
> ```
> Finally notice that $1 + x ≤ e^x$ for all $x ≥ 0$ and therefore that
> $(1+h\,L)^{n+1} \leq e^{(n+1)\,hL} = e^{(t_{n+1}-a) L}$. Replacing $n+1$ by $n$
> we obtain
> ```math
> |e^{(n)}| ≤ \frac{C\,h^{p}}{L} \left( e^{(t_{n}-a) L} - 1\right)
> ```
> which concludes the proof.
""")
# ╔═╡ 285cbf5b-7097-47c0-950c-0c84d69a65bd
md"""
!!! info "Observation: Convergence of Forward Euler"
As discussed in (11) Forward Euler has local truncation error
$\tau^{(n)}_h = \frac12 h\, u''(t_n) + O(h^2)$. Therefore
$|\tau^{(n)}_h| ≤ \frac12 \max_{t\in[t_n,t_{n+1}]} |u''(t)| \, h$
and by using Theorem 2 we conclude that **Forward Euler converges linearly**.
This is demonstrated graphically below:
"""
# ╔═╡ 20096950-17ef-4d60-8059-dc7f00265349
p_cvg_fw_euler
# ╔═╡ 38a5d61d-aff7-44dd-90e2-e690cd2643f7
md"""
## Runge-Kutta methods
Similar to the previous chapters we now ask the question: How can we improve the convergence of a numerical method for an IVP? The answer to this question leads to the most widely employed class of methods for solving initial-value problems, the **Runge-Kutta** (RK) **methods**.
We start with a second-order RK method, namely the midpoint method.
"""
# ╔═╡ 2a464c17-2711-477a-90cd-bdc37a04ce5a
md"""
### Optional: Midpoint method
The goal of the midpoint method is to construct a
second-order method for solving the IVP (1)
```math
\left\{
\begin{aligned}
\frac{d u(t)}{d t} &= f\Big(t, u(t)\Big) && a ≤ t ≤ b \\
u(a) &= u_0.
\end{aligned}\right.
```
We stay within our general framework of (8), i.e. we want to iterate
for $n=0, 1, \ldots, N-1$
```math
u^{(n+1)} = P_f (u^{(n)}, t_n, h).
```
"""
# ╔═╡ 34ddbbe2-69bb-41a4-93e3-b721d88b480e
md"""
The ideal method would obtain the exact $u(t_{n+1})$ from the exact $u(t_{n})$,
implying a local truncation error $τ^{(n)}_h = 0$. To build such a method
we expand
```math
\tag{15}
\begin{aligned}
u(t_{n+1}) = u(t_{n} + h)
&= u(t_n) + h\,u'(t_n) \hspace{27pt}+ \frac12 h^2 u''(t_n) + O(h^3)\\
&= u(t_n) + h\,f\Big(t_n, u(t_n)\Big) + \frac12 h^2 u''(t_n) + O(h^3) .
\end{aligned}
```
where we used the definition of the IVP to replace $u'$.
If in this equation we keep only the first two terms we recover
the Forward Euler method (7). To obtain more accuracy
we therefore need to compute or approximate $u''(t_n)$ as well.
Using the chain rule we note
```math
\frac{d^2 u}{d t^2} = \frac{d f}{d t}
\stackrel{(*)}{=} \frac{\partial f}{\partial t}
+ \frac{\partial f}{\partial u}
\frac{\partial u}{\partial t}
= \frac{\partial f}{\partial t}
+ \frac{\partial f}{\partial u} f,
```
where notably step $(*)$ is necessary because both $f$ and its argument $u$ depend on $t$. Inserting this expression into (15) we obtain
```math
\tag{16}
\begin{aligned}
u(t_{n+1}) = u(t_{n} + h)
&= u(t_n) + h\,\left[f\Big(t_n, u(t_n)\Big) + \frac h2
\frac{\partial f}{\partial t}\Big(t_n, u(t_n)\Big)
\right.\\&\hspace{70pt}\left.
+ \textcolor{blue}{\frac h2} \frac{\partial f}{\partial u}\Big(t_n, u(t_n)\Big)
\textcolor{blue}{f\Big(t_n, u(t_n)\Big)}
\right] + O(h^3).
\end{aligned}
```
"""
# ╔═╡ 8f4bca61-8fa8-45a3-a728-92604bc319a2
md"""
Since computing and implementing the partial derivatives $\frac{\partial f}{\partial t}$ and $\frac{\partial f}{\partial u}$ can in general become tricky,
we will also approximate these further.
Expanding $f$ in a multi-dimensional Taylor series we observe
```math
\begin{aligned}
f\Big(t_n + ζ, u(t_n) + \textcolor{blue}{ξ} \Big)
= f\Big(t_n, u(t_n) \Big) &+ ζ\, \frac{\partial f}{\partial t}\Big(t_n, u(t_n)\Big)
+ \textcolor{blue}{ξ}\, \frac{\partial f}{\partial u}\Big(t_n, u(t_n)\Big)\\
&+ O(ζ^2 + |ζ\textcolor{blue}{ξ}| + \textcolor{blue}{ξ}^2).
\end{aligned}
```
With the choice $ζ = \frac h2$ and $\textcolor{blue}{ξ = \frac h2 f\big(t_n, u(t_n)\big)}$
this expression matches, to lowest order, the term
in the square brackets of (16). This leads to
```math
\begin{aligned}
u(t_{n+1}) = u(t_{n} + h)
&= u(t_n) + h\,f\Big(t_n + \frac h2, u(t_n) + \textcolor{blue}{\frac h2 f\big(t_n, u(t_n)\big)} \Big)\\
&\hspace{35pt}+ O(h^3 + h ζ^2 + h |ζξ| + h ξ^2).
\end{aligned}
```
"""
# ╔═╡ 4d20e43b-ba3d-4eab-adfc-f3a7c6feeb0e
md"""
This is the **midpoint method** we mentioned earlier:
!!! info "Algorithm 2: Midpoint method"
    Given an initial value problem $\frac{d u}{d t} = f(u, t)$, $u(a) = u_0$
with $t \in [a, b]$ and nodes $t_n = a+n\, h$, $h = \frac{b-a}{N}$
iteratively compute the sequence
```math
u^{(n+1)} = P_f(u^{(n)}, t_n, h)\qquad \forall n = 0, \ldots, N-1.
```
with
```math
P_f(u^{(n)}, t_n, h) = u^{(n)} + h \left[
f\left(t_n + \frac{h}{2}, \textcolor{green}{u^{(n)} + \frac{h}{2} f(t_n, u^{(n)}})\right)
\right]
```
From our derivation the local truncation error for the midpoint method can be identified as
```math
\tau^{(n)}_h = \frac{u(t_{n+1}) - P_f(\textcolor{red}{u(t_n)}, t_n, h)}{h}
= \frac{1}{h} O(h^3 + h ζ^2 + h |ζξ| + h ξ^2) = O(h^2).
```
From Theorem 2 we observe that the **midpoint method is** indeed **a second-order method**.
"""
# ╔═╡ 3430cee3-d5ac-4f7d-99ce-713008cd3553
md"""
Runge-Kutta methods, such as the midpoint method, are what one calls **multi-stage methods**. To make this term clear, let us rewrite the midpoint update of Algorithm 2 in two stages
by isolating the green part:
```math
\begin{aligned}
\textcolor{green}{u^{(n+\frac12)}} &\,\textcolor{green}{:= u^{(n)} + \frac h2 f(t_n, u^{(n)})} \\
u^{(n+1)} &= u^{(n)} + h\, f\big(t_n + \frac h2, \textcolor{green}{u^{(n+\frac12)}}\big)
\end{aligned}
```
Written like this we notice that the **first stage** (in green)
performs a Forward Euler step with **half the stepsize**,
namely from time $t_n$ to $t_n + \frac h2$.
The **second stage** then performs an Euler-style update over the whole
timestep, but evaluates the slope $f$ at the intermediate value $u^{(n+\frac12)}$
obtained in the first stage.
This interpretation also explains why the method is called **midpoint method**.
We obtain an implementation of the midpoint method as:
"""
# ╔═╡ 76978387-3b7b-4644-a27e-246f84186eb9
function midpoint(f, u₀, a, b, N)
# f: Derivative function
# u₀: Initial value
# a: Start of the time interval
# b: End of the time interval
# Discrete time nodes to consider:
h = (b - a) / N
t = [a + i * h for i in 0:N]
# Setup output vector with all elements initialised to u₀
u = [float(copy(u₀)) for i in 0:N]
# Time integration ... this is what changes over forward_euler
u[1] = u₀
for i in 1:N
uhalf = u[i] + h/2 * f(u[i], t[i] )
u[i+1] = u[i] + h * f(uhalf, t[i] + h/2)
end
(; t, u)
end
# ╔═╡ 87345743-d3a5-4ec4-bf66-15e6209a729a
md"""In comparison with Forward Euler we notice this method to be clearly more accurate for the case of rather small number of timesteps:"""
# ╔═╡ 2c3ad400-acdd-41bb-ac40-16cdc4b6fb07
let
N = 9
res_euler = forward_euler(f, u₀, tstart, tend, N)
res_midpoint = midpoint(f, u₀, tstart, tend, N)
plot(sol.t, sol.u; label="reference", lw=2, xlabel="t", ylabel=L"u(t)",
title=L"\frac{du}{dt} = \sin((t+u)^2)", ylims=(-2, 0.5))
plot!(res_euler.t, res_euler.u; label=L"Euler $N=9$", mark=:o, lw=2, ls=:dash)
plot!(res_midpoint.t, res_midpoint.u; label=L"Midpoint $N=9$", mark=:o, lw=2, ls=:dash)
end
# ╔═╡ 262a2a67-3fd3-497c-b56c-71fe26abffc1
md"""
### Optional: Higher-order Runge-Kutta methods
The ideas we used for constructing the midpoint method point to a clear path for constructing even higher-order methods: all we need to do is to introduce further intermediate stages, i.e. evaluations at half, third or other intermediate timesteps. In this way we can generate additional equations and effectively match the higher-order derivatives in the Taylor expansion (15), which in turn reduces the local truncation error $τ^{(n)}_h$. The methods generated in this form are called **Runge-Kutta methods**.
The algebra required to work out the details grows quickly in complexity, so we will not attempt to do this here and only present a general overview. Constructing an $s$-stage Runge-Kutta method leads to the set of equations
```math
\begin{aligned}
v_1 &= h\,f(t_n, \hphantom{+ c_1\,h,\ }u_n) \\
v_2 &= h\,f(t_n + c_1\,h,\ u_n + a_{11}\, v_1) \\
v_3 &= h\,f(t_n + c_2\,h,\ u_n + a_{21}\, v_1 + a_{22}\, v_2) \\
&\vdots\\
v_s &= h\,f\left(t_n + c_{s-1}\,h,\ u_n + \sum_{i=1}^{s-1} a_{s-1,i}\,v_i\right)\\
u^{(n+1)} &= u^{(n)} + \sum_{i=1}^{s} b_i\, v_i.
\end{aligned}
```
where specifying both the number of stages $s$ as well as the constants $a_{ij}$, $b_i$ and $c_i$ uniquely determines an RK method.
Both one-step methods we discussed so far actually match this framework:
- **Forward Euler:** $s=1$, $b_1 = 1$ and no $a_{ij}$ and no $c_i$.
- **Midpoint method:** $s=2$, $b_1 = 0$, $b_2=1$, $c_1 = \frac12$ and $a_{11} = \frac12$.
"""
# ╔═╡ 850928f3-ab31-42c6-a51a-0dca28175bdf
md"""
Let us additionally present one **fourth-order RK** method,
often called **RK4**, which is the most commonly used RK method in practice:
!!! info "Algorithm 3: Fourth-order Runge-Kutta method (RK4)"
    Given an initial value problem $\frac{d u}{d t} = f(u, t)$, $u(a) = u_0$
with $t \in [a, b]$ and nodes $t_n = a+n\, h$, $h = \frac{b-a}{N}$
iteratively compute the sequence
```math
\begin{aligned}
v_1 &= h\,f(t_n\hphantom{\,\, + \frac h2}, u^{(n)}) \\
v_2 &= h\,f(t_n + \frac h2, u^{(n)} + \frac{v_1}{2}) \\
v_3 &= h\,f(t_n + \frac h2, u^{(n)} + \frac{v_2}{2}) \\
v_4 &= h\,f(t_n + h, \ \, u^{(n)} + v_3) \\[0.5em]
u^{(n+1)} &= u^{(n)} + \frac{v_1}{6} + \frac{v_2}{3} + \frac{v_3}{3} + \frac{v_4}{6}
\end{aligned}
```
Let us mention in passing that our reference solution routine `solve_reference` uses a method called `Tsit5()`, which is also based on a Runge-Kutta scheme.
An implementation of RK4 is given by:
"""
# ╔═╡ 5f7562f6-4329-4c1e-b5d4-74c9b1980346
function rk4(f, u₀, a, b, N)
# f: Derivative function
# u₀: Initial value
# a: Start of the time interval
# b: End of the time interval
# Discrete time nodes to consider:
h = (b - a) / N
t = [a + i * h for i in 0:N]
# Setup output vector with all elements initialised to u₀
u = [float(copy(u₀)) for i in 0:N]
# Time integration ... this is what changes over forward_euler
u[1] = u₀
for i in 1:N
v₁ = h * f(u[i], t[i] )
v₂ = h * f(u[i] + v₁/2, t[i] + h/2)
v₃ = h * f(u[i] + v₂/2, t[i] + h/2)
v₄ = h * f(u[i] + v₃, t[i] + h )
u[i+1] = u[i] + (v₁/6 + v₂/3 + v₃/3 + v₄/6)
end
(; t, u)
end
# ╔═╡ f0f756fc-f7fe-459c-8419-4e60c741fce5
md"""
Let us compare all methods we saw in this chapter:
- `N = ` $(@bind N Slider(5:1:25; show_value=true, default=7))
"""
# ╔═╡ f9e8c1ec-3bc6-40d3-b7e3-f465bd53ecab
let
res_euler = forward_euler(f, u₀, tstart, tend, N)
res_midpoint = midpoint(f, u₀, tstart, tend, N)
res_rk4 = rk4(f, u₀, tstart, tend, N)
plot(sol.t, sol.u; label="reference", lw=2, ylabel=L"u(t)", xlabel="",
title=L"\frac{du}{dt} = \sin((t+u)^2)", ylims=(-2, 0.5), titlefontsize=12)
plot!(res_euler.t, res_euler.u; label="Euler", mark=:o, lw=2, ls=:dash, markersize=3)
plot!(res_midpoint.t, res_midpoint.u; label="Midpoint", mark=:o, lw=2, ls=:dash, markersize=3)
p = plot!(res_rk4.t, res_rk4.u; label="RK4", mark=:o, lw=2, ls=:dash, markersize=3)
q = plot(; yaxis=:log, ylims=(1e-6, 1), legend=false, ylabel=L"$|u - u_{\textrm{ref}}|$", xlabel=L"t", title="Error", titlefontsize=10)
plot!(q, res_euler.t, abs.(res_euler.u - sol.(res_euler.t)) .+ 1e-16;
label="Euler", mark=:o, lw=2, markersize=3, ls=:dash, c=2)
plot!(q, res_midpoint.t, abs.(res_midpoint.u - sol.(res_midpoint.t)) .+ 1e-16;
label="Midpoint", mark=:o, lw=2, markersize=3, ls=:dash, c=3)
plot!(q, res_rk4.t, abs.(res_rk4.u - sol.(res_rk4.t)) .+ 1e-16;
label="RK4", mark=:o, lw=2, markersize=3, ls=:dash, c=4)
plot(p, q; layout=grid(2, 1; heights=[0.75, 0.25]))
end
# ╔═╡ bdb783bc-1648-43f0-b706-462d2a5d6dcf
md"""
RK4 converges faster than the other two methods. As the name suggests, it is indeed a fourth-order method:
"""
# ╔═╡ 99922f12-34b6-4a9d-b3af-2b61708725fa
begin
# Obtain an extremely tight reference solution using DifferentialEquations.jl
u_exact = solve_reference(f, u₀, tstart, tend; reltol=1e-14, abstol=1e-14)
# Function to compute the global error
function global_error(t, u, u_exact)
errors = [abs(u_exact(t[n]) - u[n]) for n in 1:length(u)]
maximum(errors)
end
Ns = [ round(Int, 5 * 10^k) for k in 0:0.5:3 ]
errors_euler = Float64[]
errors_midpoint = Float64[]
errors_rk4 = Float64[]
for N in Ns
result_euler = forward_euler(f, u₀, tstart, tend, N)
push!(errors_euler, global_error(result_euler.t, result_euler.u, u_exact))
result_midpoint = midpoint(f, u₀, tstart, tend, N)
push!(errors_midpoint, global_error(result_midpoint.t, result_midpoint.u, u_exact))
res_rk4 = rk4(f, u₀, tstart, tend, N)
push!(errors_rk4, global_error(res_rk4.t, res_rk4.u, u_exact))
end