@@ -27,10 +27,6 @@ to see examples of how to use ``adaptive`` or visit the
.. summary-end
- **WARNING: adaptive is still in a beta development stage**
-
- .. not-in-documentation-start
-
Implemented algorithms
----------------------
@@ -46,6 +42,8 @@ but the details of the adaptive sampling are completely customizable.
The following learners are implemented:
+ .. not-in-documentation-start
+
- ``Learner1D``, for 1D functions ``f: ℝ → ℝ^N``,
- ``Learner2D``, for 2D functions ``f: ℝ^2 → ℝ^N``,
- ``LearnerND``, for ND functions ``f: ℝ^N → ℝ^M``,
@@ -54,10 +52,16 @@ The following learners are implemented:
- ``AverageLearner1D``, for stochastic 1D functions where you want to
estimate the mean value of the function at each point,
- ``IntegratorLearner``, for
- when you want to integrate a 1D function ``f: ℝ → ℝ``,
+ when you want to integrate a 1D function ``f: ℝ → ℝ``.
- ``BalancingLearner``, for when you want to run several learners at once,
selecting the “best” one each time you get more points.
+ Meta-learners (to be used with other learners):
+
+ - ``BalancingLearner``, for when you want to run several learners at once,
+ selecting the “best” one each time you get more points,
+ - ``DataSaver``, for when your function doesn't just return a scalar or a vector.
+
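The idea behind these learners can be sketched in plain Python: keep the sampled points, score each interval with a loss, and always subdivide the worst one. This is a toy illustration using only the standard library; it is not adaptive's actual implementation, and the segment-length loss here is a simplistic stand-in for the library's customizable loss functions.

```python
import math

def toy_learner_1d(f, bounds, n_points=20):
    """Toy 1D adaptive sampler in the spirit of Learner1D (hypothetical)."""
    xs = [bounds[0], bounds[1]]
    ys = [f(x) for x in xs]
    while len(xs) < n_points:
        # Loss of each interval: length of the line segment between its
        # endpoints in the (x, y) plane, so steep or long intervals win.
        losses = [
            math.hypot(xs[i + 1] - xs[i], ys[i + 1] - ys[i])
            for i in range(len(xs) - 1)
        ]
        i = losses.index(max(losses))     # pick the worst interval
        x_new = (xs[i] + xs[i + 1]) / 2   # and bisect it
        xs.insert(i + 1, x_new)
        ys.insert(i + 1, f(x_new))
    return xs, ys

# Sample a Gaussian bump; points end up concentrated where the curve bends.
xs, ys = toy_learner_1d(lambda x: math.exp(-x * x), (-3.0, 3.0), n_points=15)
```

The request/evaluate split in this loop is what the real learners expose through their ``ask``/``tell`` interface, which is also what makes parallel evaluation possible.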
In addition to the learners, ``adaptive`` also provides primitives for
running the sampling across several cores and even several machines,
with built-in support for
@@ -67,7 +71,6 @@ with built-in support for
`ipyparallel <https://ipyparallel.readthedocs.io/en/latest/>`_ and
`distributed <https://distributed.readthedocs.io/en/latest/>`_.
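The parallelism works because the sampling loop separates suggesting points from evaluating them, so evaluations can be handed to any executor. A minimal standard-library sketch of that pattern (a hypothetical illustration, not adaptive's Runner API):

```python
from concurrent.futures import ThreadPoolExecutor

def f(x):
    # Stand-in for an expensive function a learner would sample.
    return x * x

# Points a learner might have requested in one batch.
points = [i / 10 for i in range(50)]

# Evaluate them concurrently; map() preserves the order of the inputs.
with ThreadPoolExecutor(max_workers=4) as ex:
    results = dict(zip(points, ex.map(f, points)))
```

The same shape works with process pools, ``ipyparallel`` views, or ``distributed`` clients, which is what the built-in support listed above provides out of the box.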
-
Examples
--------