
Commit 0d27717

Merge pull request #330 from python-adaptive/docs-page-split-up
Splits up documentation page into "algo+examples" and rest
2 parents 0a14a9e + 0515f10 commit 0d27717

File tree

4 files changed: +173 −175 lines

README.rst

Lines changed: 9 additions & 5 deletions

@@ -25,10 +25,6 @@ to see examples of how to use ``adaptive`` or visit the
 
 .. summary-end
 
-**WARNING: adaptive is still in a beta development stage**
-
-.. not-in-documentation-start
-
 Implemented algorithms
 ----------------------
 
@@ -44,6 +40,8 @@ but the details of the adaptive sampling are completely customizable.
 
 The following learners are implemented:
 
+.. not-in-documentation-start
+
 - ``Learner1D``, for 1D functions ``f: ℝ → ℝ^N``,
 - ``Learner2D``, for 2D functions ``f: ℝ^2 → ℝ^N``,
 - ``LearnerND``, for ND functions ``f: ℝ^N → ℝ^M``,
@@ -52,10 +50,16 @@ The following learners are implemented:
 - ``AverageLearner1D``, for stochastic 1D functions where you want to
   estimate the mean value of the function at each point,
 - ``IntegratorLearner``, for
-  when you want to integrate a 1D function ``f: ℝ → ℝ``,
+  when you want to integrate a 1D function ``f: ℝ → ℝ``.
 - ``BalancingLearner``, for when you want to run several learners at once,
   selecting the “best” one each time you get more points.
 
+Meta-learners (to be used with other learners):
+
+- ``BalancingLearner``, for when you want to run several learners at once,
+  selecting the “best” one each time you get more points,
+- ``DataSaver``, for when your function doesn't just return a scalar or a vector.
+
 In addition to the learners, ``adaptive`` also provides primitives for
 running the sampling across several cores and even several machines,
 with built-in support for
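
The README paragraph above only names the executor libraries. For reference, this is roughly how a learner and a runner are combined in practice (an illustrative sketch, not part of this commit; ``peak`` is a made-up test function, and the call assumes the standard ``adaptive.BlockingRunner`` API)::

    import adaptive

    def peak(x, offset=0.123):
        # A sharp Lorentzian peak on a linear background: cheap to evaluate,
        # but hard to resolve with homogeneous sampling.
        a = 0.01
        return x + a**2 / (a**2 + (x - offset)**2)

    learner = adaptive.Learner1D(peak, bounds=(-1, 1))

    # Evaluate `peak` in parallel on all local cores until the loss estimate
    # is small; an executor from any of the libraries named above can be
    # passed explicitly via the `executor` keyword.
    adaptive.BlockingRunner(learner, goal=lambda l: l.loss() < 0.01)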
docs/source/algorithms_and_examples.rst

Lines changed: 163 additions & 0 deletions

@@ -0,0 +1,163 @@
+.. include:: ../../README.rst
+    :start-after: summary-end
+    :end-before: not-in-documentation-start
+
+- `~adaptive.Learner1D`, for 1D functions ``f: ℝ → ℝ^N``,
+- `~adaptive.Learner2D`, for 2D functions ``f: ℝ^2 → ℝ^N``,
+- `~adaptive.LearnerND`, for ND functions ``f: ℝ^N → ℝ^M``,
+- `~adaptive.AverageLearner`, for random variables where you want to
+  average the result over many evaluations,
+- `~adaptive.AverageLearner1D`, for stochastic 1D functions where you want to
+  estimate the mean value of the function at each point,
+- `~adaptive.IntegratorLearner`, for
+  when you want to integrate a 1D function ``f: ℝ → ℝ``.
+- `~adaptive.BalancingLearner`, for when you want to run several learners at once,
+  selecting the “best” one each time you get more points.
+
+Meta-learners (to be used with other learners):
+
+- `~adaptive.BalancingLearner`, for when you want to run several learners at once,
+  selecting the “best” one each time you get more points,
+- `~adaptive.DataSaver`, for when your function doesn't just return a scalar or a vector.
+
+In addition to the learners, ``adaptive`` also provides primitives for
+running the sampling across several cores and even several machines,
+with built-in support for
+`concurrent.futures <https://docs.python.org/3/library/concurrent.futures.html>`_,
+`mpi4py <https://mpi4py.readthedocs.io/en/stable/mpi4py.futures.html>`_,
+`loky <https://loky.readthedocs.io/en/stable/>`_,
+`ipyparallel <https://ipyparallel.readthedocs.io/en/latest/>`_ and
+`distributed <https://distributed.readthedocs.io/en/latest/>`_.
+
+Examples
+--------
+
+Here are some examples of how Adaptive samples vs. homogeneous sampling. Click
+on the *Play* :fa:`play` button or move the sliders.
+
+.. jupyter-execute::
+    :hide-code:
+
+    import itertools
+    import adaptive
+    from adaptive.learner.learner1D import uniform_loss, default_loss
+    import holoviews as hv
+    import numpy as np
+
+    adaptive.notebook_extension()
+    hv.output(holomap="scrubber")
+
+`adaptive.Learner1D`
+~~~~~~~~~~~~~~~~~~~~
+
+.. jupyter-execute::
+    :hide-code:
+
+    def f(x, offset=0.07357338543088588):
+        a = 0.01
+        return x + a**2 / (a**2 + (x - offset)**2)
+
+    def plot_loss_interval(learner):
+        if learner.npoints >= 2:
+            x_0, x_1 = max(learner.losses, key=learner.losses.get)
+            y_0, y_1 = learner.data[x_0], learner.data[x_1]
+            x, y = [x_0, x_1], [y_0, y_1]
+        else:
+            x, y = [], []
+        return hv.Scatter((x, y)).opts(style=dict(size=6, color="r"))
+
+    def plot(learner, npoints):
+        adaptive.runner.simple(learner, lambda l: l.npoints == npoints)
+        return (learner.plot() * plot_loss_interval(learner))[:, -1.1:1.1]
+
+    def get_hm(loss_per_interval, N=101):
+        learner = adaptive.Learner1D(f, bounds=(-1, 1), loss_per_interval=loss_per_interval)
+        plots = {n: plot(learner, n) for n in range(N)}
+        return hv.HoloMap(plots, kdims=["npoints"])
+
+    layout = (
+        get_hm(uniform_loss).relabel("homogeneous sampling")
+        + get_hm(default_loss).relabel("with adaptive")
+    )
+
+    layout.opts(plot=dict(toolbar=None))
+
+`adaptive.Learner2D`
+~~~~~~~~~~~~~~~~~~~~
+
+.. jupyter-execute::
+    :hide-code:
+
+    def ring(xy):
+        import numpy as np
+        x, y = xy
+        a = 0.2
+        return x + np.exp(-(x**2 + y**2 - 0.75**2)**2/a**4)
+
+    def plot(learner, npoints):
+        adaptive.runner.simple(learner, lambda l: l.npoints == npoints)
+        learner2 = adaptive.Learner2D(ring, bounds=learner.bounds)
+        xs = ys = np.linspace(*learner.bounds[0], int(learner.npoints**0.5))
+        xys = list(itertools.product(xs, ys))
+        learner2.tell_many(xys, map(ring, xys))
+        return (learner2.plot().relabel('homogeneous grid')
+                + learner.plot().relabel('with adaptive')
+                + learner2.plot(tri_alpha=0.5).relabel('homogeneous sampling')
+                + learner.plot(tri_alpha=0.5).relabel('with adaptive')).cols(2)
+
+    learner = adaptive.Learner2D(ring, bounds=[(-1, 1), (-1, 1)])
+    plots = {n: plot(learner, n) for n in range(4, 1010, 20)}
+    hv.HoloMap(plots, kdims=['npoints']).collate()
+
+`adaptive.AverageLearner`
+~~~~~~~~~~~~~~~~~~~~~~~~~
+
+.. jupyter-execute::
+    :hide-code:
+
+    def g(n):
+        import random
+        random.seed(n)
+        val = random.gauss(0.5, 0.5)
+        return val
+
+    learner = adaptive.AverageLearner(g, atol=None, rtol=0.01)
+
+    def plot(learner, npoints):
+        adaptive.runner.simple(learner, lambda l: l.npoints == npoints)
+        return learner.plot().relabel(f'loss={learner.loss():.2f}')
+
+    plots = {n: plot(learner, n) for n in range(10, 10000, 200)}
+    hv.HoloMap(plots, kdims=['npoints'])
+
+`adaptive.LearnerND`
+~~~~~~~~~~~~~~~~~~~~
+
+.. jupyter-execute::
+    :hide-code:
+
+    def sphere(xyz):
+        import numpy as np
+        x, y, z = xyz
+        a = 0.4
+        return np.exp(-(x**2 + y**2 + z**2 - 0.75**2)**2/a**4)
+
+    learner = adaptive.LearnerND(sphere, bounds=[(-1, 1), (-1, 1), (-1, 1)])
+    adaptive.runner.simple(learner, lambda l: l.npoints == 5000)
+
+    fig = learner.plot_3D(return_fig=True)
+
+    # Remove a slice from the plot to show the inside of the sphere
+    scatter = fig.data[0]
+    coords_col = [
+        (x, y, z, color)
+        for x, y, z, color in zip(
+            scatter["x"], scatter["y"], scatter["z"], scatter.marker["color"]
+        )
+        if not (x > 0 and y > 0)
+    ]
+    scatter["x"], scatter["y"], scatter["z"], scatter.marker["color"] = zip(*coords_col)
+
+    fig
+
+see more in the :ref:`Tutorial Adaptive`.
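
The new page gives no example for `~adaptive.DataSaver`, the meta-learner for functions that return more than a scalar or a vector. A minimal sketch of the wrapping pattern (illustrative only, not part of this commit; ``noisy_fit`` is a made-up function, while ``arg_picker`` and ``extra_data`` are the documented interface)::

    import operator
    import adaptive

    def noisy_fit(x):
        # Returns the value to learn ("y") plus extra data to keep around.
        y = x ** 2
        return {"y": y, "resolution": abs(x) + 0.01}

    _learner = adaptive.Learner1D(noisy_fit, bounds=(-1, 1))
    # DataSaver feeds only result["y"] to the wrapped learner and stores
    # the full dictionaries in `learner.extra_data`.
    learner = adaptive.DataSaver(_learner, arg_picker=operator.itemgetter("y"))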

docs/source/docs.rst

Lines changed: 0 additions & 170 deletions

@@ -1,173 +1,3 @@
-Implemented algorithms
-----------------------
-
-The core concept in ``adaptive`` is that of a *learner*. A *learner*
-samples a function at the best places in its parameter space to get
-maximum “information” about the function. As it evaluates the function
-at more and more points in the parameter space, it gets a better idea of
-where the best places are to sample next.
-
-Of course, what qualifies as the “best places” will depend on your
-application domain! ``adaptive`` makes some reasonable default choices,
-but the details of the adaptive sampling are completely customizable.
-
-The following learners are implemented:
-
-- `~adaptive.Learner1D`, for 1D functions ``f: ℝ → ℝ^N``,
-- `~adaptive.Learner2D`, for 2D functions ``f: ℝ^2 → ℝ^N``,
-- `~adaptive.LearnerND`, for ND functions ``f: ℝ^N → ℝ^M``,
-- `~adaptive.AverageLearner`, for random variables where you want to
-  average the result over many evaluations,
-- `~adaptive.AverageLearner1D`, for stochastic 1D functions where you want to
-  estimate the mean value of the function at each point,
-- `~adaptive.IntegratorLearner`, for
-  when you want to integrate a 1D function ``f: ℝ → ℝ``.
-
-Meta-learners (to be used with other learners):
-
-- `~adaptive.BalancingLearner`, for when you want to run several learners at once,
-  selecting the “best” one each time you get more points,
-- `~adaptive.DataSaver`, for when your function doesn't just return a scalar or a vector.
-
-In addition to the learners, ``adaptive`` also provides primitives for
-running the sampling across several cores and even several machines,
-with built-in support for
-`concurrent.futures <https://docs.python.org/3/library/concurrent.futures.html>`_,
-`ipyparallel <https://ipyparallel.readthedocs.io/en/latest/>`_ and
-`distributed <https://distributed.readthedocs.io/en/latest/>`_.
-
-Examples
---------
-
-Here are some examples of how Adaptive samples vs. homogeneous sampling. Click
-on the *Play* :fa:`play` button or move the sliders.
-
-.. jupyter-execute::
-    :hide-code:
-
-    import itertools
-    import adaptive
-    from adaptive.learner.learner1D import uniform_loss, default_loss
-    import holoviews as hv
-    import numpy as np
-
-    adaptive.notebook_extension()
-    hv.output(holomap="scrubber")
-
-`adaptive.Learner1D`
-~~~~~~~~~~~~~~~~~~~~
-
-.. jupyter-execute::
-    :hide-code:
-
-    def f(x, offset=0.07357338543088588):
-        a = 0.01
-        return x + a**2 / (a**2 + (x - offset)**2)
-
-    def plot_loss_interval(learner):
-        if learner.npoints >= 2:
-            x_0, x_1 = max(learner.losses, key=learner.losses.get)
-            y_0, y_1 = learner.data[x_0], learner.data[x_1]
-            x, y = [x_0, x_1], [y_0, y_1]
-        else:
-            x, y = [], []
-        return hv.Scatter((x, y)).opts(style=dict(size=6, color="r"))
-
-    def plot(learner, npoints):
-        adaptive.runner.simple(learner, lambda l: l.npoints == npoints)
-        return (learner.plot() * plot_loss_interval(learner))[:, -1.1:1.1]
-
-    def get_hm(loss_per_interval, N=101):
-        learner = adaptive.Learner1D(f, bounds=(-1, 1), loss_per_interval=loss_per_interval)
-        plots = {n: plot(learner, n) for n in range(N)}
-        return hv.HoloMap(plots, kdims=["npoints"])
-
-    layout = (
-        get_hm(uniform_loss).relabel("homogeneous sampling")
-        + get_hm(default_loss).relabel("with adaptive")
-    )
-
-    layout.opts(plot=dict(toolbar=None))
-
-`adaptive.Learner2D`
-~~~~~~~~~~~~~~~~~~~~
-
-.. jupyter-execute::
-    :hide-code:
-
-    def ring(xy):
-        import numpy as np
-        x, y = xy
-        a = 0.2
-        return x + np.exp(-(x**2 + y**2 - 0.75**2)**2/a**4)
-
-    def plot(learner, npoints):
-        adaptive.runner.simple(learner, lambda l: l.npoints == npoints)
-        learner2 = adaptive.Learner2D(ring, bounds=learner.bounds)
-        xs = ys = np.linspace(*learner.bounds[0], int(learner.npoints**0.5))
-        xys = list(itertools.product(xs, ys))
-        learner2.tell_many(xys, map(ring, xys))
-        return (learner2.plot().relabel('homogeneous grid')
-                + learner.plot().relabel('with adaptive')
-                + learner2.plot(tri_alpha=0.5).relabel('homogeneous sampling')
-                + learner.plot(tri_alpha=0.5).relabel('with adaptive')).cols(2)
-
-    learner = adaptive.Learner2D(ring, bounds=[(-1, 1), (-1, 1)])
-    plots = {n: plot(learner, n) for n in range(4, 1010, 20)}
-    hv.HoloMap(plots, kdims=['npoints']).collate()
-
-`adaptive.AverageLearner`
-~~~~~~~~~~~~~~~~~~~~~~~~~
-
-.. jupyter-execute::
-    :hide-code:
-
-    def g(n):
-        import random
-        random.seed(n)
-        val = random.gauss(0.5, 0.5)
-        return val
-
-    learner = adaptive.AverageLearner(g, atol=None, rtol=0.01)
-
-    def plot(learner, npoints):
-        adaptive.runner.simple(learner, lambda l: l.npoints == npoints)
-        return learner.plot().relabel(f'loss={learner.loss():.2f}')
-
-    plots = {n: plot(learner, n) for n in range(10, 10000, 200)}
-    hv.HoloMap(plots, kdims=['npoints'])
-
-`adaptive.LearnerND`
-~~~~~~~~~~~~~~~~~~~~
-
-.. jupyter-execute::
-    :hide-code:
-
-    def sphere(xyz):
-        import numpy as np
-        x, y, z = xyz
-        a = 0.4
-        return np.exp(-(x**2 + y**2 + z**2 - 0.75**2)**2/a**4)
-
-    learner = adaptive.LearnerND(sphere, bounds=[(-1, 1), (-1, 1), (-1, 1)])
-    adaptive.runner.simple(learner, lambda l: l.npoints == 5000)
-
-    fig = learner.plot_3D(return_fig=True)
-
-    # Remove a slice from the plot to show the inside of the sphere
-    scatter = fig.data[0]
-    coords_col = [
-        (x, y, z, color)
-        for x, y, z, color in zip(
-            scatter["x"], scatter["y"], scatter["z"], scatter.marker["color"]
-        )
-        if not (x > 0 and y > 0)
-    ]
-    scatter["x"], scatter["y"], scatter["z"], scatter.marker["color"] = zip(*coords_col)
-
-    fig
-
-see more in the :ref:`Tutorial Adaptive`.
 
 .. include:: ../../README.rst
     :start-after: not-in-documentation-end
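
Both the removed page and its replacement pass ``loss_per_interval=uniform_loss`` to `~adaptive.Learner1D`; the same hook accepts user-defined loss functions, which is what "the details of the adaptive sampling are completely customizable" refers to. A minimal sketch (``jump_loss`` is a made-up example, assuming the plain interval-loss signature ``loss(xs, ys)`` that ``uniform_loss`` and ``default_loss`` also use)::

    import numpy as np
    import adaptive

    def jump_loss(xs, ys):
        # Prioritize intervals where the function value changes the most,
        # ignoring the interval width entirely.
        y_left, y_right = ys
        return abs(y_right - y_left)

    learner = adaptive.Learner1D(
        np.sin, bounds=(0, 2 * np.pi), loss_per_interval=jump_loss
    )
    adaptive.runner.simple(learner, lambda l: l.npoints >= 50)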

docs/source/index.rst

Lines changed: 1 addition & 0 deletions

@@ -16,6 +16,7 @@
    :maxdepth: 2
    :hidden:
 
+   algorithms_and_examples
    docs
    tutorial/tutorial
    usage_examples
