
Commit d9aaa49 (parent 0586cb5)

change "execute" into "jupyter-execute"

11 files changed, +118 -128 lines
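This commit is a mechanical rename of the Sphinx directive ``.. execute::`` to ``.. jupyter-execute::`` across the documentation sources. A bulk rename like the one shown in the hunks below could be sketched as follows (the helper names are hypothetical, not part of the commit):

```python
import re
from pathlib import Path

# Match the old directive name at the start of a directive line,
# preserving any indentation; every hunk in this commit follows
# exactly this pattern.
DIRECTIVE = re.compile(r"^(\s*)\.\. execute::", flags=re.MULTILINE)

def rename_directive(text: str) -> str:
    """Turn every '.. execute::' directive into '.. jupyter-execute::'."""
    return DIRECTIVE.sub(r"\1.. jupyter-execute::", text)

def rename_in_tree(root: str) -> None:
    # Apply the rename in place to every .rst file under `root`.
    for path in Path(root).rglob("*.rst"):
        path.write_text(rename_directive(path.read_text()))
```

Directive options such as ``:hide-code:`` are indented continuation lines, so they are untouched by the line-anchored pattern.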

docs/source/rest_of_readme.rst (4 additions, 4 deletions)

@@ -36,7 +36,7 @@ Examples
 Here are some examples of how Adaptive samples vs. homogeneous sampling. Click
 on the *Play* :fa:`play` button or move the sliders.

-.. execute::
+.. jupyter-execute::
     :hide-code:

     import itertools
@@ -52,7 +52,7 @@ on the *Play* :fa:`play` button or move the sliders.
 `adaptive.Learner1D`
 ~~~~~~~~~~~~~~~~~~~~

-.. execute::
+.. jupyter-execute::
     :hide-code:

     %%opts Layout [toolbar=None]
@@ -87,7 +87,7 @@ on the *Play* :fa:`play` button or move the sliders.
 `adaptive.Learner2D`
 ~~~~~~~~~~~~~~~~~~~~

-.. execute::
+.. jupyter-execute::
     :hide-code:

     def ring(xy):
@@ -116,7 +116,7 @@ on the *Play* :fa:`play` button or move the sliders.
 `adaptive.AverageLearner`
 ~~~~~~~~~~~~~~~~~~~~~~~~~

-.. execute::
+.. jupyter-execute::
     :hide-code:

     def g(n):

docs/source/tutorial/tutorial.AverageLearner.rst (7 additions, 8 deletions)

@@ -8,11 +8,10 @@ Tutorial `~adaptive.AverageLearner`

 .. seealso::
     The complete source code of this tutorial can be found in
-    :jupyter-download:notebook:`AverageLearner`
+    :jupyter-download:notebook:`tutorial.AverageLearner`

-.. execute::
+.. jupyter-execute::
     :hide-code:
-    :new-notebook: AverageLearner

     import adaptive
     adaptive.notebook_extension()
@@ -25,7 +24,7 @@ the learner must formally take a single parameter, which should be used
 like a “seed” for the (pseudo-) random variable (although in the current
 implementation the seed parameter can be ignored by the function).

-.. execute::
+.. jupyter-execute::

     def g(n):
         import random
@@ -38,20 +37,20 @@ implementation the seed parameter can be ignored by the function).
         random.setstate(state)
         return val

-.. execute::
+.. jupyter-execute::

     learner = adaptive.AverageLearner(g, atol=None, rtol=0.01)
     runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 2)

-.. execute::
+.. jupyter-execute::
     :hide-code:

     await runner.task # This is not needed in a notebook environment!

-.. execute::
+.. jupyter-execute::

     runner.live_info()

-.. execute::
+.. jupyter-execute::

     runner.live_plot(update_interval=0.1)
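The tutorial quoted in this diff relies on a seeded-function pattern: the learner calls its function with an integer "seed" so that repeated evaluations are reproducible while the global RNG state is left untouched. A minimal stdlib sketch of that pattern (the name `noisy_value` is illustrative, not from the commit):

```python
import random

def noisy_value(seed: int) -> float:
    """Reproducible pseudo-random sample: the same seed always yields
    the same value, which is what the AverageLearner expects."""
    state = random.getstate()   # save the caller's global RNG state
    random.seed(seed)           # derive the value from the seed only
    val = random.gauss(0.5, 1.0)
    random.setstate(state)      # restore the caller's RNG state
    return val
```

Because the value depends only on the seed, the learner can average many calls without the result drifting between runs.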

docs/source/tutorial/tutorial.BalancingLearner.rst (7 additions, 8 deletions)

@@ -8,11 +8,10 @@ Tutorial `~adaptive.BalancingLearner`

 .. seealso::
     The complete source code of this tutorial can be found in
-    :jupyter-download:notebook:`BalancingLearner`
+    :jupyter-download:notebook:`tutorial.BalancingLearner`

-.. execute::
+.. jupyter-execute::
     :hide-code:
-    :new-notebook: BalancingLearner

     import adaptive
     adaptive.notebook_extension()
@@ -30,7 +29,7 @@ improvement.
 The balancing learner can for example be used to implement a poor-man’s
 2D learner by using the `~adaptive.Learner1D`.

-.. execute::
+.. jupyter-execute::

     def h(x, offset=0):
         a = 0.01
@@ -42,16 +41,16 @@ The balancing learner can for example be used to implement a poor-man’s
     bal_learner = adaptive.BalancingLearner(learners)
     runner = adaptive.Runner(bal_learner, goal=lambda l: l.loss() < 0.01)

-.. execute::
+.. jupyter-execute::
     :hide-code:

     await runner.task # This is not needed in a notebook environment!

-.. execute::
+.. jupyter-execute::

     runner.live_info()

-.. execute::
+.. jupyter-execute::

     plotter = lambda learner: hv.Overlay([L.plot() for L in learner.learners])
     runner.live_plot(plotter=plotter, update_interval=0.1)
@@ -61,7 +60,7 @@ product of parameters. For that particular case we’ve added a
 ``classmethod`` called ``~adaptive.BalancingLearner.from_product``.
 See how it works below

-.. execute::
+.. jupyter-execute::

     from scipy.special import eval_jacobi
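The balancing strategy described in the tutorial above always feeds the next point to whichever child learner promises the largest improvement. A toy stdlib sketch of that idea, with learners reduced to (samples, function) pairs and a crude interval-based loss (everything here is illustrative; `adaptive.BalancingLearner` implements this properly):

```python
def interval_loss(xs: list, f) -> float:
    """Largest |Δf| over adjacent sampled points: a crude 1D loss."""
    xs = sorted(xs)
    return max(abs(f(b) - f(a)) for a, b in zip(xs, xs[1:]))

def pick_learner(learners: list) -> int:
    """Index of the (samples, function) pair with the largest loss,
    i.e. the learner that should receive the next point."""
    losses = [interval_loss(xs, f) for xs, f in learners]
    return max(range(len(learners)), key=losses.__getitem__)
```

A flat function is starved of points while a steep one keeps getting refined, which is exactly the balancing behaviour the tutorial demonstrates.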

docs/source/tutorial/tutorial.DataSaver.rst (8 additions, 9 deletions)

@@ -8,11 +8,10 @@ Tutorial `~adaptive.DataSaver`

 .. seealso::
     The complete source code of this tutorial can be found in
-    :jupyter-download:notebook:`DataSaver`
+    :jupyter-download:notebook:`tutorial.DataSaver`

-.. execute::
+.. jupyter-execute::
     :hide-code:
-    :new-notebook: DataSaver

     import adaptive
     adaptive.notebook_extension()
@@ -23,7 +22,7 @@ metadata, you can wrap your learner in an `adaptive.DataSaver`.
 In the following example the function to be learned returns its result
 and the execution time in a dictionary:

-.. execute::
+.. jupyter-execute::

     from operator import itemgetter

@@ -48,27 +47,27 @@ and the execution time in a dictionary:
 ``learner.learner`` is the original learner, so
 ``learner.learner.loss()`` will call the correct loss method.

-.. execute::
+.. jupyter-execute::

     runner = adaptive.Runner(learner, goal=lambda l: l.learner.loss() < 0.1)

-.. execute::
+.. jupyter-execute::
     :hide-code:

     await runner.task # This is not needed in a notebook environment!

-.. execute::
+.. jupyter-execute::

     runner.live_info()

-.. execute::
+.. jupyter-execute::

     runner.live_plot(plotter=lambda l: l.learner.plot(), update_interval=0.1)

 Now the ``DataSavingLearner`` will have an dictionary attribute
 ``extra_data`` that has ``x`` as key and the data that was returned by
 ``learner.function`` as values.

-.. execute::
+.. jupyter-execute::

     learner.extra_data
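The DataSaver pattern quoted above hinges on the learned function returning its result together with metadata in a dictionary, and on an ``arg_picker`` such as ``itemgetter`` pulling out the value the wrapped learner should actually see. A stdlib sketch of that shape (the names `timed` and `pick` are illustrative, not from the commit):

```python
import time
from operator import itemgetter

def timed(x: float) -> dict:
    """Return the result plus metadata, as the DataSaver tutorial's
    learned function does."""
    t0 = time.perf_counter()
    y = x ** 2                       # stand-in for the real computation
    elapsed = time.perf_counter() - t0
    return {"y": y, "elapsed_time": elapsed}

# The arg_picker extracts the value the wrapped learner trains on;
# everything else in the dict ends up in `extra_data`.
pick = itemgetter("y")
```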

docs/source/tutorial/tutorial.IntegratorLearner.rst (9 additions, 10 deletions)

@@ -8,11 +8,10 @@ Tutorial `~adaptive.IntegratorLearner`

 .. seealso::
     The complete source code of this tutorial can be found in
-    :jupyter-download:notebook:`IntegratorLearner`
+    :jupyter-download:notebook:`tutorial.IntegratorLearner`

-.. execute::
+.. jupyter-execute::
     :hide-code:
-    :new-notebook: IntegratorLearner

     import adaptive
     adaptive.notebook_extension()
@@ -27,7 +26,7 @@ of the integral with it. It is based on Pedro Gonnet’s
 Let’s try the following function with cusps (that is difficult to
 integrate):

-.. execute::
+.. jupyter-execute::

     def f24(x):
         return np.floor(np.exp(x))
@@ -40,7 +39,7 @@ let’s try a familiar function integrator `scipy.integrate.quad`, which
 will give us warnings that it encounters difficulties (if we run it
 in a notebook.)

-.. execute::
+.. jupyter-execute::

     import scipy.integrate
     scipy.integrate.quad(f24, 0, 3)
@@ -50,7 +49,7 @@ we want to reach. Then in the `~adaptive.Runner` we pass
 ``goal=lambda l: l.done()`` where ``learner.done()`` is ``True`` when
 the relative tolerance has been reached.

-.. execute::
+.. jupyter-execute::

     from adaptive.runner import SequentialExecutor

@@ -61,24 +60,24 @@ the relative tolerance has been reached.
     # the overhead of evaluating the function in another process.
     runner = adaptive.Runner(learner, executor=SequentialExecutor(), goal=lambda l: l.done())

-.. execute::
+.. jupyter-execute::
     :hide-code:

     await runner.task # This is not needed in a notebook environment!

-.. execute::
+.. jupyter-execute::

     runner.live_info()

 Now we could do the live plotting again, but lets just wait untill the
 runner is done.

-.. execute::
+.. jupyter-execute::

     if not runner.task.done():
         raise RuntimeError('Wait for the runner to finish before executing the cells below!')

-.. execute::
+.. jupyter-execute::

     print('The integral value is {} with the corresponding error of {}'.format(learner.igral, learner.err))
     learner.plot()
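The ``f24`` function in this diff, ``np.floor(np.exp(x))``, is hard for fixed-order quadrature because it is piecewise constant with jumps wherever ``exp(x)`` crosses an integer. A stdlib-only equivalent makes the cusp locations explicit (the tutorial itself uses NumPy; this sketch is just for illustration):

```python
import math

def f24(x: float) -> int:
    """Piecewise-constant function with jumps at x = ln(k) for
    integer k: floor(exp(x)) increases by 1 each time exp(x)
    crosses an integer."""
    return math.floor(math.exp(x))

# Between consecutive jump points ln(k) and ln(k + 1) the function
# is flat, so all the "difficulty" is concentrated at the cusps.
```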

docs/source/tutorial/tutorial.Learner1D.rst (15 additions, 16 deletions)

@@ -8,11 +8,10 @@ Tutorial `~adaptive.Learner1D`

 .. seealso::
     The complete source code of this tutorial can be found in
-    :jupyter-download:notebook:`Learner1D`
+    :jupyter-download:notebook:`tutorial.Learner1D`

-.. execute::
+.. jupyter-execute::
     :hide-code:
-    :new-notebook: Learner1D

     import adaptive
     adaptive.notebook_extension()
@@ -30,7 +29,7 @@ We start with the most common use-case: sampling a 1D function
 We will use the following function, which is a smooth (linear)
 background with a sharp peak at a random location:

-.. execute::
+.. jupyter-execute::

     offset = random.uniform(-0.5, 0.5)

@@ -47,7 +46,7 @@ We start by initializing a 1D “learner”, which will suggest points to
 evaluate, and adapt its suggestions as more and more points are
 evaluated.

-.. execute::
+.. jupyter-execute::

     learner = adaptive.Learner1D(f, bounds=(-1, 1))

@@ -61,13 +60,13 @@ On Windows systems the runner will try to use a `distributed.Client`
 if `distributed` is installed. A `~concurrent.futures.ProcessPoolExecutor`
 cannot be used on Windows for reasons.

-.. execute::
+.. jupyter-execute::

     # The end condition is when the "loss" is less than 0.1. In the context of the
     # 1D learner this means that we will resolve features in 'func' with width 0.1 or wider.
     runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 0.01)

-.. execute::
+.. jupyter-execute::
     :hide-code:

     await runner.task # This is not needed in a notebook environment!
@@ -76,23 +75,23 @@ When instantiated in a Jupyter notebook the runner does its job in the
 background and does not block the IPython kernel. We can use this to
 create a plot that updates as new data arrives:

-.. execute::
+.. jupyter-execute::

     runner.live_info()

-.. execute::
+.. jupyter-execute::

     runner.live_plot(update_interval=0.1)

 We can now compare the adaptive sampling to a homogeneous sampling with
 the same number of points:

-.. execute::
+.. jupyter-execute::

     if not runner.task.done():
         raise RuntimeError('Wait for the runner to finish before executing the cells below!')

-.. execute::
+.. jupyter-execute::

     learner2 = adaptive.Learner1D(f, bounds=learner.bounds)

@@ -107,7 +106,7 @@ vector output: ``f:ℝ → ℝ^N``

 Sometimes you may want to learn a function with vector output:

-.. execute::
+.. jupyter-execute::

     random.seed(0)
     offsets = [random.uniform(-0.8, 0.8) for _ in range(3)]
@@ -121,20 +120,20 @@ Sometimes you may want to learn a function with vector output:
 ``adaptive`` has you covered! The ``Learner1D`` can be used for such
 functions:

-.. execute::
+.. jupyter-execute::

     learner = adaptive.Learner1D(f_levels, bounds=(-1, 1))
     runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 0.01)

-.. execute::
+.. jupyter-execute::
     :hide-code:

     await runner.task # This is not needed in a notebook environment!

-.. execute::
+.. jupyter-execute::

     runner.live_info()

-.. execute::
+.. jupyter-execute::

     runner.live_plot(update_interval=0.1)
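The Learner1D tutorial quoted above samples "a smooth (linear) background with a sharp peak at a random location", but the peak's exact body is not visible in this diff. A stdlib sketch of a function with that shape, reconstructed from the fragments (`a = 0.01` appears in the BalancingLearner hunk; the name `peak` and the Lorentzian form are assumptions):

```python
def peak(x: float, offset: float = 0.1) -> float:
    """Linear background plus a narrow Lorentzian bump at `offset`:
    the kind of feature uniform sampling misses but adaptive
    sampling resolves."""
    a = 0.01  # peak half-width: tiny relative to the bounds (-1, 1)
    return x + a**2 / (a**2 + (x - offset) ** 2)
```

Away from ``offset`` the bump term is nearly zero, so a homogeneous grid with the same point budget spends almost all of its samples on the boring linear part.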
