Commit a301c8a

Merge remote-tracking branch 'origin/master' into mypy

2 parents 2e7286d + d1b0b2a

33 files changed: +644 −354 lines

AUTHORS.md

Lines changed: 12 additions & 9 deletions

@@ -2,15 +2,18 @@
 
 The current maintainers of Adaptive are:
 
-+ [Bas Nijholt](<http://nijho.lt>)
-+ [Joseph Weston](<https://joseph.weston.cloud>)
-+ [Anton Akhmerov](<https://antonakhmerov.org>)
+- [Bas Nijholt](<http://nijho.lt>)
+- [Joseph Weston](<https://joseph.weston.cloud>)
+- [Anton Akhmerov](<https://antonakhmerov.org>)
 
 Other contributors to Adaptive include:
 
-+ Andrey E. Antipov
-+ [Christoph Groth](<http://inac.cea.fr/Pisp/christoph.groth/>)
-+ Jorn Hoofwijk
-+ Philippe Solodov (@philippeitis)
-+ Victor Negîrneac (@caenrigen)
-+ Thomas A Caswell (@tacaswell)
+- Andrey E. Antipov
+- [Christoph Groth](<http://inac.cea.fr/Pisp/christoph.groth/>)
+- Jorn Hoofwijk
+- Philippe Solodov (@philippeitis)
+- Victor Negîrneac (@caenrigen)
+- Thomas A Caswell (@tacaswell)
+- Álvaro Gómez Iñesta (@AlvaroGI)
+- Sultan Orazbayev (@SultanOrazbayev)
+- Thomas Aarholt (@thomasaarholt)

CHANGELOG.md

Lines changed: 25 additions & 2 deletions

@@ -1,12 +1,35 @@
 # Changelog
 
-## [Unreleased](https://github.com/python-adaptive/adaptive/tree/HEAD)
+## [v0.13.0](https://github.com/python-adaptive/adaptive/tree/v0.13.0) (2021-09-10)
 
-[Full Changelog](https://github.com/python-adaptive/adaptive/compare/v0.12.2...HEAD)
+[Full Changelog](https://github.com/python-adaptive/adaptive/compare/v0.12.2...v0.13.0)
+
+**Fixed bugs:**
+
+- AverageLearner doesn't work with 0 mean [\#275](https://github.com/python-adaptive/adaptive/issues/275)
+- call self.\_process\_futures on canceled futures when BlockingRunner is done [\#320](https://github.com/python-adaptive/adaptive/pull/320) ([basnijholt](https://github.com/basnijholt))
+- AverageLearner: fix zero mean [\#276](https://github.com/python-adaptive/adaptive/pull/276) ([basnijholt](https://github.com/basnijholt))
 
 **Closed issues:**
 
+- Runners should tell learner about remaining points at end of run [\#319](https://github.com/python-adaptive/adaptive/issues/319)
 - Cryptic error when importing lmfit [\#314](https://github.com/python-adaptive/adaptive/issues/314)
+- change CHANGELOG to KeepAChangelog format [\#306](https://github.com/python-adaptive/adaptive/issues/306)
+- jupyter notebook kernels dead after running "import adaptive" [\#298](https://github.com/python-adaptive/adaptive/issues/298)
+- Emphasis on when to use adaptive in docs [\#297](https://github.com/python-adaptive/adaptive/issues/297)
+- GPU acceleration [\#296](https://github.com/python-adaptive/adaptive/issues/296)
+
+**Merged pull requests:**
+
+- Learner1D type hints and add typeguard to pytest tests [\#325](https://github.com/python-adaptive/adaptive/pull/325) ([basnijholt](https://github.com/basnijholt))
+- AverageLearner type hints [\#324](https://github.com/python-adaptive/adaptive/pull/324) ([basnijholt](https://github.com/basnijholt))
+- Update doc string for resolution\_loss\_function [\#323](https://github.com/python-adaptive/adaptive/pull/323) ([SultanOrazbayev](https://github.com/SultanOrazbayev))
+- Update Readme to emphasise when adaptive should be used [\#318](https://github.com/python-adaptive/adaptive/pull/318) ([thomasaarholt](https://github.com/thomasaarholt))
+- add to\_numpy methods [\#317](https://github.com/python-adaptive/adaptive/pull/317) ([basnijholt](https://github.com/basnijholt))
+- lazily evaluate the integrator coefficients [\#311](https://github.com/python-adaptive/adaptive/pull/311) ([basnijholt](https://github.com/basnijholt))
+- AverageLearner1D added [\#283](https://github.com/python-adaptive/adaptive/pull/283) ([AlvaroGI](https://github.com/AlvaroGI))
+- Make LearnerND pickleable [\#272](https://github.com/python-adaptive/adaptive/pull/272) ([basnijholt](https://github.com/basnijholt))
+- add a FAQ [\#242](https://github.com/python-adaptive/adaptive/pull/242) ([basnijholt](https://github.com/basnijholt))
 
 ## [v0.12.2](https://github.com/python-adaptive/adaptive/tree/v0.12.2) (2021-03-23)
 
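The two AverageLearner entries above (issues \#275 and \#276) both concern samples that average to zero. As a hedged illustration of the failure mode only (this is not adaptive's actual loss code, and both function names below are hypothetical): any stopping criterion that divides the standard error of the mean by ``|mean|`` is ill-defined when the mean is zero, so some guard or absolute fallback is required.

```python
import statistics

def relative_sem(ys):
    """Hypothetical relative criterion: standard error of the mean
    divided by |mean|. Diverges as the mean approaches zero."""
    n = len(ys)
    sem = statistics.stdev(ys) / n ** 0.5
    mean = statistics.fmean(ys)
    return sem / abs(mean)  # ZeroDivisionError when mean == 0

def guarded_sem(ys):
    """Guarded variant: fall back to the absolute standard error when
    |mean| is too small for a relative measure to be meaningful."""
    n = len(ys)
    sem = statistics.stdev(ys) / n ** 0.5
    mean = statistics.fmean(ys)
    return sem / abs(mean) if abs(mean) > sem else sem

ys = [-1.0, 1.0, -1.0, 1.0]  # mean is exactly 0
# relative_sem(ys) raises ZeroDivisionError; guarded_sem(ys) instead
# returns the absolute standard error of the mean.
```

Whatever the exact fix in \#276 looks like, the point is that the zero-mean case must be handled explicitly rather than letting the relative error blow up.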

LICENSE

Lines changed: 1 addition & 1 deletion

@@ -1,6 +1,6 @@
 BSD 3-Clause License
 
-Copyright (c) 2017-2020, Adaptive authors
+Copyright (c) 2017-2021, Adaptive authors
 All rights reserved.
 
 Redistribution and use in source and binary forms, with or without

README.rst

Lines changed: 12 additions & 6 deletions

@@ -8,10 +8,12 @@
 
 *Adaptive*: parallel active learning of mathematical functions.
 
+.. include:: logo.rst
+
 ``adaptive`` is an open-source Python library designed to
 make adaptive parallel function evaluation simple. With ``adaptive`` you
 just supply a function with its bounds, and it will be evaluated at the
-“best” points in parameter space, rather than unecessarily computing *all* points on a dense grid.
+“best” points in parameter space, rather than unnecessarily computing *all* points on a dense grid.
 With just a few lines of code you can evaluate functions on a computing cluster,
 live-plot the data as it returns, and fine-tune the adaptive sampling algorithm.
 
@@ -25,10 +27,6 @@ to see examples of how to use ``adaptive`` or visit the
 
 .. summary-end
 
-**WARNING: adaptive is still in a beta development stage**
-
-.. not-in-documentation-start
-
 Implemented algorithms
 ----------------------
 
@@ -44,6 +42,8 @@ but the details of the adaptive sampling are completely customizable.
 
 The following learners are implemented:
 
+.. not-in-documentation-start
+
 - ``Learner1D``, for 1D functions ``f: ℝ → ℝ^N``,
 - ``Learner2D``, for 2D functions ``f: ℝ^2 → ℝ^N``,
 - ``LearnerND``, for ND functions ``f: ℝ^N → ℝ^M``,
@@ -52,10 +52,16 @@ The following learners are implemented:
 - ``AverageLearner1D``, for stochastic 1D functions where you want to
   estimate the mean value of the function at each point,
 - ``IntegratorLearner``, for
-  when you want to intergrate a 1D function ``f: ℝ → ℝ``,
+  when you want to intergrate a 1D function ``f: ℝ → ℝ``.
 - ``BalancingLearner``, for when you want to run several learners at once,
   selecting the “best” one each time you get more points.
 
+Meta-learners (to be used with other learners):
+
+- ``BalancingLearner``, for when you want to run several learners at once,
+  selecting the “best” one each time you get more points,
+- ``DataSaver``, for when your function doesn't just return a scalar or a vector.
+
 In addition to the learners, ``adaptive`` also provides primitives for
 running the sampling across several cores and even several machines,
 with built-in support for
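The README's core claim — evaluate at the "best" points instead of a dense grid — can be illustrated with a toy version of loss-based interval subdivision. This is a sketch of the idea only, not adaptive's implementation; ``adaptive_sample`` is a hypothetical helper:

```python
import math

def adaptive_sample(f, a, b, n):
    """Toy adaptive sampler: repeatedly split the interval whose segment
    (in the x-y plane) is longest, so steep regions of f get more points
    than flat ones."""
    xs = [a, b]
    ys = [f(a), f(b)]
    while len(xs) < n:
        # "Loss" of each interval: Euclidean length of the segment.
        losses = [
            math.hypot(xs[i + 1] - xs[i], ys[i + 1] - ys[i])
            for i in range(len(xs) - 1)
        ]
        i = losses.index(max(losses))  # split the worst interval
        x_mid = (xs[i] + xs[i + 1]) / 2
        xs.insert(i + 1, x_mid)
        ys.insert(i + 1, f(x_mid))
    return xs, ys

xs, ys = adaptive_sample(lambda x: math.tanh(20 * x), -1.0, 1.0, 50)
# Points cluster near the steep region around x = 0 rather than being
# spread uniformly over [-1, 1].
```

The real library generalizes this loop: learners define the loss, and runners evaluate the chosen points in parallel.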

adaptive/learner/average_learner1D.py

Lines changed: 31 additions & 22 deletions

@@ -1,11 +1,19 @@
-from __future__ import annotations
-
 import math
 import sys
 from collections import defaultdict
 from copy import deepcopy
 from math import hypot
-from typing import Callable, DefaultDict, List, Sequence, Tuple
+from typing import (
+    Callable,
+    DefaultDict,
+    Dict,
+    Iterable,
+    List,
+    Optional,
+    Sequence,
+    Set,
+    Tuple,
+)
 
 import numpy as np
 import scipy.stats
@@ -19,17 +27,17 @@
 Point = Tuple[int, Real]
 Points = List[Point]
 
-__all__: list[str] = ["AverageLearner1D"]
+__all__: List[str] = ["AverageLearner1D"]
 
 
 class AverageLearner1D(Learner1D):
-    """Learns and predicts a noisy function 'f:ℝ → ℝ^N'.
+    """Learns and predicts a noisy function 'f:ℝ → ℝ'.
 
     Parameters
     ----------
     function : callable
         The function to learn. Must take a tuple of ``(seed, x)`` and
-        return a real number or vector.
+        return a real number.
     bounds : pair of reals
        The bounds of the interval on which to learn 'function'.
     loss_per_interval: callable, optional
@@ -67,10 +75,11 @@ class AverageLearner1D(Learner1D):
 
     def __init__(
         self,
-        function: Callable[[tuple[int, Real]], Real],
-        bounds: tuple[Real, Real],
-        loss_per_interval: None
-        | (Callable[[Sequence[Real], Sequence[Real]], float]) = None,
+        function: Callable[[Tuple[int, Real]], Real],
+        bounds: Tuple[Real, Real],
+        loss_per_interval: Optional[
+            Callable[[Sequence[Real], Sequence[Real]], float]
+        ] = None,
         delta: float = 0.2,
         alpha: float = 0.005,
         neighbor_sampling: float = 0.3,
@@ -106,15 +115,15 @@ def __init__(
         self._number_samples = SortedDict()
         # This set contains the points x that have less than min_samples
         # samples or less than a (neighbor_sampling*100)% of their neighbors
-        self._undersampled_points: set[Real] = set()
+        self._undersampled_points: Set[Real] = set()
         # Contains the error in the estimate of the
         # mean at each point x in the form {x0: error(x0), ...}
-        self.error: ItemSortedDict[Real, float] = decreasing_dict()
+        self.error: Dict[Real, float] = decreasing_dict()
         #  Distance between two neighboring points in the
         # form {xi: ((xii-xi)^2 + (yii-yi)^2)^0.5, ...}
-        self._distances: ItemSortedDict[Real, float] = decreasing_dict()
+        self._distances: Dict[Real, float] = decreasing_dict()
         # {xii: error[xii]/min(_distances[xi], _distances[xii], ...}
-        self.rescaled_error: ItemSortedDict[Real, float] = decreasing_dict()
+        self.rescaled_error: Dict[Real, float] = decreasing_dict()
 
     @property
     def nsamples(self) -> int:
@@ -127,7 +136,7 @@ def min_samples_per_point(self) -> int:
             return 0
         return min(self._number_samples.values())
 
-    def ask(self, n: int, tell_pending: bool = True) -> tuple[Points, list[float]]:
+    def ask(self, n: int, tell_pending: bool = True) -> Tuple[Points, List[float]]:
         """Return 'n' points that are expected to maximally reduce the loss."""
         # If some point is undersampled, resample it
         if len(self._undersampled_points):
@@ -156,7 +165,7 @@ def ask(self, n: int, tell_pending: bool = True) -> tuple[Points, list[float]]:
 
         return points, loss_improvements
 
-    def _ask_for_more_samples(self, x: Real, n: int) -> tuple[Points, list[float]]:
+    def _ask_for_more_samples(self, x: Real, n: int) -> Tuple[Points, List[float]]:
         """When asking for n points, the learner returns n times an existing point
         to be resampled, since in general n << min_samples and this point will
         need to be resampled many more times"""
@@ -175,7 +184,7 @@ def _ask_for_more_samples(self, x: Real, n: int) -> tuple[Points, list[float]]:
         loss_improvements = [loss_improvement / n] * n
         return points, loss_improvements
 
-    def _ask_for_new_point(self, n: int) -> tuple[Points, list[float]]:
+    def _ask_for_new_point(self, n: int) -> Tuple[Points, List[float]]:
         """When asking for n new points, the learner returns n times a single
         new point, since in general n << min_samples and this point will need
         to be resampled many more times"""
@@ -357,7 +366,7 @@ def _update_losses_resampling(self, x: Real, real=True) -> None:
         if (b is not None) and right_loss_is_unknown:
             self.losses_combined[x, b] = float("inf")
 
-    def _calc_error_in_mean(self, ys: Sequence[Real], y_avg: Real, n: int) -> float:
+    def _calc_error_in_mean(self, ys: Iterable[Real], y_avg: Real, n: int) -> float:
         variance_in_mean = sum((y - y_avg) ** 2 for y in ys) / (n - 1)
         t_student = scipy.stats.t.ppf(1 - self.alpha, df=n - 1)
         return t_student * (variance_in_mean / n) ** 0.5
@@ -389,7 +398,7 @@ def tell_many(self, xs: Points, ys: Sequence[Real]) -> None:
         # simultaneously, before we move on to a new x
         self.tell_many_at_point(x, seed_y_mapping)
 
-    def tell_many_at_point(self, x: Real, seed_y_mapping: dict[int, Real]) -> None:
+    def tell_many_at_point(self, x: Real, seed_y_mapping: Dict[int, Real]) -> None:
         """Tell the learner about many samples at a certain location x.
 
         Parameters
@@ -445,10 +454,10 @@ def tell_many_at_point(self, x: Real, seed_y_mapping: dict[int, Real]) -> None:
             self._update_interpolated_loss_in_interval(*interval)
         self._oldscale = deepcopy(self._scale)
 
-    def _get_data(self) -> SortedDict[Real, Real]:
+    def _get_data(self) -> Dict[Real, Real]:
         return self._data_samples
 
-    def _set_data(self, data: SortedDict[Real, Real]) -> None:
+    def _set_data(self, data: Dict[Real, Real]) -> None:
         if data:
             for x, samples in data.items():
                 self.tell_many_at_point(x, samples)
@@ -482,7 +491,7 @@ def plot(self):
         return p.redim(x=dict(range=plot_bounds))
 
 
-def decreasing_dict() -> ItemSortedDict:
+def decreasing_dict() -> Dict:
    """This initialization orders the dictionary from large to small values"""
 
    def sorting_rule(key, value):
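The ``_calc_error_in_mean`` hunk above computes the learner's confidence radius for the running mean at a point: a Student-t critical value times the standard error of the mean. A self-contained sketch of the same computation using only the standard library (in the real code, ``t_student`` comes from ``scipy.stats.t.ppf(1 - self.alpha, df=n - 1)``; here it is passed in by the caller):

```python
def error_in_mean(ys, y_avg, n, t_student):
    """Confidence radius of the sample mean: t * sqrt(s^2 / n), where
    s^2 is the Bessel-corrected sample variance of the n samples."""
    variance_in_mean = sum((y - y_avg) ** 2 for y in ys) / (n - 1)
    return t_student * (variance_in_mean / n) ** 0.5

ys = [1.0, 2.0, 3.0, 4.0, 5.0]
y_avg = sum(ys) / len(ys)  # 3.0
# Passing t_student=1.0 makes the result the plain standard error of
# the mean: variance = 10/4 = 2.5, radius = sqrt(2.5/5) ≈ 0.7071.
radius = error_in_mean(ys, y_avg, len(ys), t_student=1.0)
```

The type-hint change from ``Sequence`` to ``Iterable`` is consistent with this: the samples are only iterated over once inside the generator expression.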

adaptive/learner/base_learner.py

Lines changed: 5 additions & 4 deletions

@@ -2,9 +2,10 @@
 
 import abc
 from contextlib import suppress
-from copy import deepcopy
 from typing import Any, Callable
 
+import cloudpickle
+
 from adaptive.utils import _RequireAttrsABCMeta, load, save
 
 
@@ -15,7 +16,7 @@ def uses_nth_neighbors(n: int) -> Callable:
     with ``n`` nearest neighbors
 
     The loss function will then receive the data of the N nearest neighbors
-    (``nth_neighbors``) aling with the data of the interval itself in a dict.
+    (``nth_neighbors``) along with the data of the interval itself in a dict.
     The `~adaptive.Learner1D` will also make sure that the loss is updated
     whenever one of the ``nth_neighbors`` changes.
 
@@ -196,7 +197,7 @@ def load(self, fname: str, compress: bool = True) -> None:
         self._set_data(data)
 
     def __getstate__(self) -> dict[str, Any]:
-        return deepcopy(self.__dict__)
+        return cloudpickle.dumps(self.__dict__)
 
     def __setstate__(self, state: dict[str, Any]) -> None:
-        self.__dict__ = state
+        self.__dict__ = cloudpickle.loads(state)
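The ``__getstate__``/``__setstate__`` change serializes the learner's attribute dict with cloudpickle instead of deep-copying it, so learners holding hard-to-pickle attributes (lambdas, locally defined functions) can still round-trip through pickling. A minimal sketch of the pattern, using stdlib ``pickle`` for a self-contained demo (the real code uses ``cloudpickle``, which additionally handles lambdas and closures); the ``Learner`` class here is illustrative, not adaptive's:

```python
import pickle

class Learner:
    def __init__(self, function):
        self.function = function
        self.data = {}

    def __getstate__(self):
        # Serialize the attribute dict to bytes up front; the outer
        # pickle then only has to handle a bytes object.
        return pickle.dumps(self.__dict__)

    def __setstate__(self, state):
        self.__dict__ = pickle.loads(state)

def f(x):
    return x ** 2

learner = Learner(f)
learner.data[0.5] = f(0.5)
clone = pickle.loads(pickle.dumps(learner))  # state survives the round trip
```

Swapping the inner ``pickle`` for ``cloudpickle`` gives the diff's behavior: ``Learner(lambda x: x ** 2)`` would then also survive the round trip.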
