@@ -60,7 +60,7 @@ md_lines_preamble = Dict{Int, String}(
 
     7 => """
     Gaussian processes are cool, of course, but the models they will use will require
-    initial values. Some of these have already been identified before (mean and
+    initial values. Some of these have already been determined before (mean and
     variance), but we will also need an estimate of the period. Let's go the classic
     way and make this estimate by determining the peak position on the Lomb-Scargle
     periodogram.
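
Since the period estimate comes from the periodogram peak, a minimal sketch of that step with LombScargle.jl might look as follows; the times `t`, signal `y`, and the period used here are invented for illustration, and the notebook's actual data and calls may differ:

```julia
using LombScargle

# Hypothetical data: irregularly sampled noisy sinusoid with period ≈ 17
t = sort(100 .* rand(200))                      # observation times
y = sin.(2π .* t ./ 17) .+ 0.1 .* randn(200)    # signal values

pgram = lombscargle(t, y)        # Lomb-Scargle periodogram
P0 = findmaxperiod(pgram)[1]     # period of the highest peak -> initial value
```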
@@ -73,7 +73,8 @@ md_lines_preamble = Dict{Int, String}(
     9 => """
     Yeah, close the plot. It's just good practice. This ends the preamble to this
     object, and we can now proceed with tests of different models for Gaussian
-    processes. Click on any of the model IDs in the sidebar to continue.
+    processes. Click on any of the model IDs in the sidebar to continue. If you're not
+    sure what they mean, check out the [`IDs`](@ref IDs) page.
     """,
 ]
 )
@@ -87,8 +88,8 @@ md_lines_kernel = Dict{Int, String}(
 
     2 => """
     Let's define a function for calculating the negative log marginal likelihood and an
-    auxiliary function for unpacking the tuple of parameters. See the `IDs` page on the
-    sidebar for decrypting model IDs.
+    auxiliary function for unpacking the tuple of parameters. See the [`IDs`](@ref IDs)
+    page on the sidebar for decoding model IDs.
     """,
 
     3 => """
@@ -99,9 +100,25 @@ md_lines_kernel = Dict{Int, String}(
     """,
 
     4 => """
-    Finally, let's optimize the negative log marginal likelihood function. This process
-    can take quite a lot of time (it was likely aborted manually at some point), so the
-    output of the optimizer is very large and therefore placed under the spoiler.
+    Let's optimize the negative log marginal likelihood function. This process
+    can take quite a lot of time, so the output of the optimizer is very long and
+    is therefore placed under a spoiler. Don't mind the failure, by the way: it
+    happens because the BFGS solver is forbidden from climbing out of the pits.
+    """,
+
+    5 => """
+    Now let's create a plot similar to the previous one, using the set of
+    parameters obtained during the optimization process.
+    """,
+
+    6 => """
+    We can now compare this realization of the Gaussian process with the original
+    time series to see how similar their characteristics are. This checks whether our
+    subjective impression of similarity matches the objective result of the solver.
+    """,
+
+    7 => """
+    This ends the notebook for this model. There are others on the sidebar.
     """,
 ]
 )
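
The optimization step itself, given the text's mention of a bounded BFGS run, might be sketched with Optim.jl roughly like this; the bounds and starting point are invented for illustration, with `nlml`, `t`, `y`, and `P0` as in the sketches above:

```julia
using Optim

θ0    = [1.0, 1.0, P0]        # start from the periodogram-based period estimate
lower = [1e-6, 1e-6, 0.5P0]   # hypothetical box constraints
upper = [1e2,  1e2,  2.0P0]

# Fminbox keeps BFGS inside the box, which can end in a "failure" status
result = optimize(θ -> nlml(θ, t, y), lower, upper, θ0, Fminbox(BFGS()))
θ_opt  = Optim.minimizer(result)
```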
@@ -191,7 +208,7 @@ for (index, notebook) in enumerate(notebooks)
     if inside_output_block
 
         # Condition for placing the end of a spoiler (before a block of code)
-        if about_kernel && code_block == 4
+        if about_kernel && code_block == 5
             lines[index - 1] = """
             ```
             ```@raw html