> *Adaptive*: parallel active learning of mathematical functions.
<!-- badges-end -->

<!-- summary-start -->
Adaptive is an open-source Python library that streamlines adaptive parallel function evaluations.
Rather than calculating all points on a dense grid, it intelligently selects the "best" points in the parameter space based on your provided function and bounds.
With minimal code, you can perform evaluations on a computing cluster, display live plots, and optimize the adaptive sampling algorithm.
Adaptive is most efficient for computations where each function evaluation takes at least ≈50ms due to the overhead of selecting potentially interesting points.
To see Adaptive in action, try the [example notebook on Binder](https://mybinder.org/v2/gh/python-adaptive/adaptive/main?filepath=example-notebook.ipynb) or explore the [tutorial on Read the Docs](https://adaptive.readthedocs.io/en/latest/tutorial/tutorial.html).
<!-- summary-end -->
<details><summary><b><u>[ToC]</u></b> 📚</summary>

<!-- START doctoc generated TOC please keep comment here to allow auto update -->
<!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE -->
<!-- END doctoc generated TOC please keep comment here to allow auto update -->
</details>
<!-- key-features-start -->

## :star: Key features
- 🎯 **Intelligent Adaptive Sampling**: Adaptive focuses on the areas of interest within a function, yielding better results with fewer evaluations and saving time and computational resources.
- ⚡ **Parallel Execution**: The library leverages parallel processing for faster function evaluations, making optimal use of available computational resources.
- 📊 **Live Plotting and Info Widgets**: When working in Jupyter notebooks, Adaptive offers real-time visualization of the learning process, making it easier to monitor progress and identify areas of improvement.
- 🔧 **Customizable Loss Functions**: Adaptive supports various loss functions and allows customization, enabling users to tailor the learning process to their specific needs.
- 📈 **Support for Multidimensional Functions**: The library can handle functions with scalar or vector outputs in one or multiple dimensions, providing flexibility for a wide range of problems.
- 🧩 **Seamless Integration**: Adaptive offers a simple and intuitive interface, making it easy to integrate with existing Python projects and workflows.
- 💾 **Flexible Data Export**: The library provides options to export learned data as NumPy arrays or Pandas DataFrames, ensuring compatibility with various data processing tools.
- 🌐 **Open-Source and Community-Driven**: Adaptive is an open-source project, encouraging contributions from the community to continuously improve and expand the library's features and capabilities.
Clone the repository and run `pip install -e ".[notebook,testing,other]"` to add a link to the cloned repo into your Python path:

```bash
cd adaptive
pip install -e ".[notebook,testing,other]"
```
We recommend using a Conda environment or a virtualenv for package management during Adaptive development.
To avoid polluting the history with notebook output, set up the git filter by running:
```bash
python ipynb_filter.py
```

in the repository.
To maintain a consistent code style, we use [pre-commit](https://pre-commit.com). Install it by running:
```bash
pre-commit install
```

in the repository.
## :books: Citing
If you used Adaptive in a scientific work, please cite it as follows.
}
```
## :page_facing_up: Draft Paper

If you're interested in the scientific background and principles behind Adaptive, we recommend taking a look at the [draft paper](https://github.com/python-adaptive/paper) that is currently being written.
This paper provides a comprehensive overview of the concepts, algorithms, and applications of the Adaptive library.

## :sparkles: Credits

We would like to give credit to the following people:
- Pedro Gonnet for his implementation of [CQUAD](https://www.gnu.org/software/gsl/manual/html_node/CQUAD-doubly_002dadaptive-integration.html), “Algorithm 4” as described in “Increasing the Reliability of Adaptive Quadrature Using Explicit Interpolants”, P. Gonnet, ACM Transactions on Mathematical Software, 37 (3), art. no. 26, 2010.
- Pauli Virtanen for his `AdaptiveTriSampling` script (no longer available online since SciPy Central went down) which served as inspiration for the `adaptive.Learner2D`.
<!-- credits-end -->

<!-- rest-end -->

For general discussion, we have a [Gitter chat channel](https://gitter.im/python-adaptive/adaptive).
If you find any bugs or have any feature suggestions please file a GitHub [issue](https://github.com/python-adaptive/adaptive/issues/new) or submit a [pull request](https://github.com/python-adaptive/adaptive/pulls).
Here are some examples of how Adaptive samples vs. homogeneous sampling.
Click on the *Play* {fa}`play` button or move the sliders.
## {class}`adaptive.Learner1D`
The `Learner1D` class is designed for adaptively learning 1D functions of the form `f: ℝ → ℝ^N`. It focuses on sampling points where the function is less well understood to improve the overall approximation.
This learner is well-suited for functions with localized features or varying degrees of complexity across the domain.
Adaptively learning a 1D function (the plot below) and live-plotting the process in a Jupyter notebook is as easy as
## {class}`adaptive.Learner2D`

The `Learner2D` class is tailored for adaptively learning 2D functions of the form `f: ℝ^2 → ℝ^N`. Similar to `Learner1D`, it concentrates on sampling points with higher uncertainty to provide a better approximation.
This learner is ideal for functions with complex features or varying behavior across a 2D domain.
## {class}`adaptive.LearnerND`

The `LearnerND` class is intended for adaptively learning ND functions of the form `f: ℝ^N → ℝ^M`.
It extends the adaptive learning capabilities of the 1D and 2D learners to functions with more dimensions, allowing for efficient exploration of complex, high-dimensional spaces.