*Adaptive*: parallel active learning of mathematical functions.

- .. animated-logo::
+ .. include:: logo.rst

``adaptive`` is an open-source Python library designed to
make adaptive parallel function evaluation simple. With ``adaptive`` you
@@ -27,6 +27,10 @@ to see examples of how to use ``adaptive`` or visit the

.. summary-end

+ **WARNING: adaptive is still in a beta development stage**
+
+ .. not-in-documentation-start
+
Implemented algorithms
----------------------

@@ -42,8 +46,6 @@ but the details of the adaptive sampling are completely customizable.

The following learners are implemented:

- .. not-in-documentation-start
-
- ``Learner1D``, for 1D functions ``f: ℝ → ℝ^N``,
- ``Learner2D``, for 2D functions ``f: ℝ^2 → ℝ^N``,
- ``LearnerND``, for ND functions ``f: ℝ^N → ℝ^M``,
@@ -52,16 +54,10 @@ The following learners are implemented:
- ``AverageLearner1D``, for stochastic 1D functions where you want to
  estimate the mean value of the function at each point,
- ``IntegratorLearner``, for
- when you want to integrate a 1D function ``f: ℝ → ℝ``.
+ when you want to integrate a 1D function ``f: ℝ → ℝ``,
- ``BalancingLearner``, for when you want to run several learners at once,
  selecting the “best” one each time you get more points.

- Meta-learners (to be used with other learners):
-
- - ``BalancingLearner``, for when you want to run several learners at once,
- selecting the “best” one each time you get more points,
- - ``DataSaver``, for when your function doesn't just return a scalar or a vector.
-
In addition to the learners, ``adaptive`` also provides primitives for
running the sampling across several cores and even several machines,
with built-in support for
@@ -71,6 +67,7 @@ with built-in support for
`ipyparallel <https://ipyparallel.readthedocs.io/en/latest/>`_ and
`distributed <https://distributed.readthedocs.io/en/latest/>`_.

+
Examples
--------

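For context on the learners listed in this diff: each one wraps a function and its bounds and suggests the next points to evaluate where the expected improvement is largest. A minimal sketch of that interface, assuming a recent ``adaptive`` release (the ``ask``/``tell`` names and exact signatures may differ between versions):

.. code:: python

    # Minimal sketch: drive a Learner1D by hand with ask/tell.
    # Assumes a recent adaptive release; signatures may differ between versions.
    import adaptive

    def f(x):
        return x ** 3 - x  # a 1D function f: ℝ → ℝ, cheap here but typically expensive

    learner = adaptive.Learner1D(f, bounds=(-2, 2))

    # Keep asking for the point with the largest expected loss improvement,
    # evaluate it, and feed the result back until the loss is small enough.
    while learner.loss() > 0.01:
        points, _ = learner.ask(1)
        for x in points:
            learner.tell(x, f(x))

    print(f"sampled {learner.npoints} points")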
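And for the sentence about running the sampling across several cores and machines: the same learner can be handed to a runner together with an executor, so the function evaluations happen in parallel. A rough sketch, assuming the ``BlockingRunner``/``executor`` API of recent ``adaptive`` versions, with a standard ``concurrent.futures`` pool standing in for the ``ipyparallel``/``distributed`` clients mentioned in the diff:

.. code:: python

    # Rough sketch: parallel sampling with a runner and a process pool.
    # Assumes the BlockingRunner/executor API of a recent adaptive release.
    from concurrent.futures import ProcessPoolExecutor

    import adaptive

    def f(x):
        return x ** 3 - x  # must be picklable (module level) for a process pool

    if __name__ == "__main__":
        learner = adaptive.Learner1D(f, bounds=(-2, 2))

        # The runner repeatedly submits points suggested by the learner to the
        # executor and feeds the results back, until the goal is reached.
        adaptive.BlockingRunner(
            learner,
            goal=lambda l: l.loss() < 0.01,
            executor=ProcessPoolExecutor(max_workers=4),
        )

        print(f"sampled {learner.npoints} points")

Passing an ``ipyparallel`` or ``distributed`` client in the same ``executor`` slot is, as far as I understand the API, how the identical loop is spread over several machines.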