Commit 4a81961

use common text from README

1 parent de21e4d commit 4a81961

File tree: 2 files changed, +13 −20 lines


README.rst (9 additions, 5 deletions)
@@ -25,10 +25,6 @@ to see examples of how to use ``adaptive`` or visit the
 
 .. summary-end
 
-**WARNING: adaptive is still in a beta development stage**
-
-.. not-in-documentation-start
-
 Implemented algorithms
 ----------------------
 
@@ -44,6 +40,8 @@ but the details of the adaptive sampling are completely customizable.
 
 The following learners are implemented:
 
+.. not-in-documentation-start
+
 - ``Learner1D``, for 1D functions ``f: ℝ → ℝ^N``,
 - ``Learner2D``, for 2D functions ``f: ℝ^2 → ℝ^N``,
 - ``LearnerND``, for ND functions ``f: ℝ^N → ℝ^M``,
@@ -52,10 +50,16 @@ The following learners are implemented:
 - ``AverageLearner1D``, for stochastic 1D functions where you want to
   estimate the mean value of the function at each point,
 - ``IntegratorLearner``, for
-  when you want to intergrate a 1D function ``f: ℝ → ℝ``,
+  when you want to intergrate a 1D function ``f: ℝ → ℝ``.
 - ``BalancingLearner``, for when you want to run several learners at once,
   selecting the “best” one each time you get more points.
 
+Meta-learners (to be used with other learners):
+
+- ``BalancingLearner``, for when you want to run several learners at once,
+  selecting the “best” one each time you get more points,
+- ``DataSaver``, for when your function doesn't just return a scalar or a vector.
+
 In addition to the learners, ``adaptive`` also provides primitives for
 running the sampling across several cores and even several machines,
 with built-in support for
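The ``.. summary-end`` and ``.. not-in-documentation-start`` lines added above are ordinary reStructuredText comments: Docutils renders nothing for them, so they work purely as text anchors for an ``include`` directive elsewhere. A minimal sketch of the marker pattern (the surrounding prose here is illustrative, not from the README):

```rst
.. summary-end

Everything between the two markers is the shared text that the
documentation pulls in from this file.

.. not-in-documentation-start

Text from here on appears only in the README, not in the docs.
```

Because the markers are comments, the README renders unchanged on GitHub while still exposing cut points for Sphinx.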

docs/source/algorithms_and_examples.rst (4 additions, 15 deletions)
@@ -1,17 +1,6 @@
-Implemented algorithms
-----------------------
-
-The core concept in ``adaptive`` is that of a *learner*. A *learner*
-samples a function at the best places in its parameter space to get
-maximum “information” about the function. As it evaluates the function
-at more and more points in the parameter space, it gets a better idea of
-where the best places are to sample next.
-
-Of course, what qualifies as the “best places” will depend on your
-application domain! ``adaptive`` makes some reasonable default choices,
-but the details of the adaptive sampling are completely customizable.
-
-The following learners are implemented:
+.. include:: ../../README.rst
+   :start-after: summary-end
+   :end-before: not-in-documentation-start
 
 - `~adaptive.Learner1D`, for 1D functions ``f: ℝ → ℝ^N``,
 - `~adaptive.Learner2D`, for 2D functions ``f: ℝ^2 → ℝ^N``,
@@ -22,7 +11,7 @@ The following learners are implemented:
   estimate the mean value of the function at each point,
 - `~adaptive.IntegratorLearner`, for
   when you want to intergrate a 1D function ``f: ℝ → ℝ``.
-- ``~adaptive.BalancingLearner``, for when you want to run several learners at once,
+- `~adaptive.BalancingLearner`, for when you want to run several learners at once,
   selecting the “best” one each time you get more points.
 
 Meta-learners (to be used with other learners):
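The replacement above relies on two options of the Docutils ``include`` directive: ``:start-after:`` discards everything up to and including the first occurrence of the given text, and ``:end-before:`` stops just before the next occurrence of its text. Applied to the marker comments in README.rst, the directive pulls in exactly the shared "learner" description, so the two files can no longer drift apart:

```rst
.. include:: ../../README.rst
   :start-after: summary-end
   :end-before: not-in-documentation-start
```

The path is relative to the including file (``docs/source/``), hence the ``../../`` prefix to reach the repository root.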
