@@ -25,10 +25,6 @@ to see examples of how to use ``adaptive`` or visit the

 .. summary-end

-**WARNING: adaptive is still in a beta development stage**
-
-.. not-in-documentation-start
-
 Implemented algorithms
 ----------------------

@@ -44,6 +40,8 @@ but the details of the adaptive sampling are completely customizable.

 The following learners are implemented:

+.. not-in-documentation-start
+
 - ``Learner1D``, for 1D functions ``f: ℝ → ℝ^N``,
 - ``Learner2D``, for 2D functions ``f: ℝ^2 → ℝ^N``,
 - ``LearnerND``, for ND functions ``f: ℝ^N → ℝ^M``,
@@ -52,10 +50,16 @@ The following learners are implemented:
 - ``AverageLearner1D``, for stochastic 1D functions where you want to
   estimate the mean value of the function at each point,
 - ``IntegratorLearner``, for
-  when you want to integrate a 1D function ``f: ℝ → ℝ``,
+  when you want to integrate a 1D function ``f: ℝ → ℝ``.
 - ``BalancingLearner``, for when you want to run several learners at once,
   selecting the “best” one each time you get more points.

+Meta-learners (to be used with other learners):
+
+- ``BalancingLearner``, for when you want to run several learners at once,
+  selecting the “best” one each time you get more points,
+- ``DataSaver``, for when your function doesn't just return a scalar or a vector.
+
 In addition to the learners, ``adaptive`` also provides primitives for
 running the sampling across several cores and even several machines,
 with built-in support for
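
For context, here is a minimal sketch of how the learners listed in this diff are typically driven. It assumes the public ``adaptive`` API (``adaptive.Learner1D`` and the blocking ``adaptive.runner.simple`` helper); the test function and the loss threshold are illustrative choices, not part of the README.

.. code:: python

    import adaptive

    def peak(x):
        # Illustrative 1D test function with a sharp feature near x = 0.1;
        # the learner concentrates its samples around it.
        return x + 0.01 / (0.01 + (x - 0.1) ** 2)

    learner = adaptive.Learner1D(peak, bounds=(-1, 1))

    # Sample synchronously until the learner's own loss estimate is small.
    adaptive.runner.simple(learner, goal=lambda l: l.loss() < 0.01)

    print(f"sampled {len(learner.data)} points")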
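
The meta-learners added in the last hunk wrap other learners instead of sampling a function directly. A sketch under the same assumptions, using the documented ``BalancingLearner`` and ``DataSaver`` interfaces (``DataSaver`` takes an ``arg_picker`` that extracts the value to learn from each result; the test functions here are again hypothetical):

.. code:: python

    from functools import partial
    from operator import itemgetter

    import adaptive

    def shifted_peak(x, offset):
        # Hypothetical test function; only its shape matters here.
        return x + 0.01 / (0.01 + (x - offset) ** 2)

    # BalancingLearner: run several learners at once, feeding new points
    # to whichever one currently reduces the overall loss the most.
    learners = [
        adaptive.Learner1D(partial(shifted_peak, offset=o), bounds=(-1, 1))
        for o in (-0.5, 0.0, 0.5)
    ]
    balancer = adaptive.BalancingLearner(learners)
    adaptive.runner.simple(balancer, goal=lambda l: l.loss() < 0.01)

    # DataSaver: the function returns a dict; only the 'y' entry drives
    # the sampling, everything else ends up in ``saver.extra_data``.
    def with_metadata(x):
        return {"y": shifted_peak(x, 0.0), "note": "kept as metadata"}

    saver = adaptive.DataSaver(
        adaptive.Learner1D(with_metadata, bounds=(-1, 1)),
        arg_picker=itemgetter("y"),
    )
    adaptive.runner.simple(saver, goal=lambda l: l.learner.loss() < 0.05)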