
Commit 81a3824

jhoofwijk authored and basnijholt committed
added curvature docs
1 parent f68bd81 commit 81a3824

File tree

2 files changed: 64 additions & 0 deletions

docs/source/reference/adaptive.learner.learner1D.rst

Lines changed: 6 additions & 0 deletions
@@ -12,3 +12,9 @@ Custom loss functions

.. autofunction:: adaptive.learner.learner1D.default_loss

.. autofunction:: adaptive.learner.learner1D.uniform_loss

.. autofunction:: adaptive.learner.learner1D.uses_nth_neighbors

.. autofunction:: adaptive.learner.learner1D.triangle_loss

.. autofunction:: adaptive.learner.learner1D.get_curvature_loss
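
A minimal sketch of how these pieces fit together: a custom loss declares how
many neighboring intervals it receives via the ``uses_nth_neighbors``
decorator. The ``(xs, ys)`` signature below is an assumption based on the
built-in losses listed above, not a definitive interface::

    from adaptive.learner.learner1D import uses_nth_neighbors

    @uses_nth_neighbors(0)  # the loss only sees the interval itself
    def distance_loss(xs, ys):
        # xs and ys hold the two endpoints of the interval
        dx = xs[1] - xs[0]
        dy = ys[1] - ys[0]
        return (dx**2 + dy**2) ** 0.5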

docs/source/tutorial/tutorial.Learner1D.rst

Lines changed: 58 additions & 0 deletions
@@ -137,3 +137,61 @@ functions:

.. jupyter-execute::

    runner.live_plot(update_interval=0.1)

Looking at curvature
....................

By default ``adaptive`` will sample more points where the (normalized)
Euclidean distance between the neighboring points is large.
You may achieve better results by sampling more points in regions with high
curvature. To do this, you need to tell the learner to look at the curvature
by specifying ``loss_per_interval``.
150+
151+
.. jupyter-execute::
152+
153+
from adaptive.learner.learner1D import (get_curvature_loss,
154+
uniform_loss,
155+
default_loss)
156+
curvature_loss = get_curvature_loss()
157+
learner = adaptive.Learner1D(f, bounds=(-1, 1), loss_per_interval=curvature_loss)
158+
runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 0.01)

.. jupyter-execute::
    :hide-code:

    await runner.task  # This is not needed in a notebook environment!

.. jupyter-execute::

    runner.live_info()

.. jupyter-execute::

    runner.live_plot(update_interval=0.1)
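
Roughly speaking, a curvature-style loss combines the area of the triangle
spanned by an interval and its nearest neighbors (``triangle_loss``) with the
plain Euclidean loss of the interval itself. The sketch below is illustrative
only; the weights are made-up numbers, not the actual definition or defaults
of ``get_curvature_loss``::

    from adaptive.learner.learner1D import (uses_nth_neighbors,
                                            triangle_loss, default_loss)

    @uses_nth_neighbors(1)  # receive one extra point on each side
    def my_curvature_loss(xs, ys):
        # xs/ys hold 4 points: the interval plus one neighbor on each side
        triangle = triangle_loss(xs, ys)         # area spanned with neighbors
        euclid = default_loss(xs[1:3], ys[1:3])  # loss of the interval itself
        dx = xs[2] - xs[1]
        # illustrative weights only
        return triangle ** 0.5 + 0.02 * euclid + 0.02 * dx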

The plot below shows the difference between homogeneous sampling, the default
loss (which looks at only one interval), and the curvature loss (which also
includes the nearest neighboring intervals). Each learner gets 100 points.
176+
.. jupyter-execute::
177+
178+
def sin_exp(x):
179+
from math import exp, sin
180+
return sin(15 * x) * exp(-x**2*2)
181+
182+
learner_h = adaptive.Learner1D(sin_exp, (-1, 1), loss_per_interval=uniform_loss)
183+
learner_1 = adaptive.Learner1D(sin_exp, (-1, 1), loss_per_interval=default_loss)
184+
learner_2 = adaptive.Learner1D(sin_exp, (-1, 1), loss_per_interval=curvature_loss)
185+
186+
npoints_goal = lambda l: l.npoints >= 100
187+
# adaptive.runner.simple is a non parallel blocking runner.
188+
adaptive.runner.simple(learner_h, goal=npoints_goal)
189+
adaptive.runner.simple(learner_1, goal=npoints_goal)
190+
adaptive.runner.simple(learner_2, goal=npoints_goal)
191+
192+
(learner_h.plot().relabel('homogeneous')
193+
+ learner_1.plot().relabel('euclidean loss')
194+
+ learner_2.plot().relabel('curvature loss')).cols(2)

More info about using custom loss functions can be found
in :ref:`Custom adaptive logic for 1D and 2D`.
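
As a taste of what that page covers: a hand-rolled loss is passed the same way
as the built-in ones. The sketch below assumes the ``(xs, ys)`` interface of
the losses above and simply scores each interval by its width, which is
essentially what ``uniform_loss`` does::

    from adaptive.learner.learner1D import uses_nth_neighbors

    @uses_nth_neighbors(0)  # no neighboring intervals needed
    def interval_width_loss(xs, ys):
        # the widest interval is refined first; function values are ignored
        return xs[1] - xs[0]

    learner = adaptive.Learner1D(sin_exp, (-1, 1),
                                 loss_per_interval=interval_width_loss)
    adaptive.runner.simple(learner, goal=lambda l: l.npoints >= 100)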
