
Commit 3ff8cc5

bugfix
1 parent ea230cb commit 3ff8cc5

40 files changed: +1643 additions, −433 deletions

API_REFERENCE_FOR_CLASSIFICATION.md

Lines changed: 2 additions & 2 deletions

@@ -62,7 +62,7 @@ Specifies a penalty in the range [0.0, 1.0] on interaction terms. A higher value
 Restricts the maximum number of terms in any of the underlying models trained to ***max_terms***. The default value of 0 means no limit. After the limit is reached, the remaining boosting steps are used to further update the coefficients of already included terms. An optional tuning objective could be to find the lowest positive value of ***max_terms*** that does not increase the prediction error significantly. Low positive values can speed up the training process significantly. Setting a limit with ***max_terms*** may require a higher learning rate for best results.
 
 
-## Method: fit(X:FloatMatrix, y:StrVector, sample_weight:FloatVector = [], X_names:StrVector = [], cv_observations:IntMatrix = [], prioritized_predictors_indexes:IntVector = [], monotonic_constraints:IntVector = [], interaction_constraints:List[List[int]] = [], predictor_learning_rates:FloatVector = [], predictor_penalties_for_non_linearity:FloatVector = [], predictor_penalties_for_interactions:FloatVector = [])
+## Method: fit(X:FloatMatrix, y:List[str], sample_weight:FloatVector = np.empty(0), X_names:List[str] = [], cv_observations:IntMatrix = np.empty([0, 0]), prioritized_predictors_indexes:List[int] = [], monotonic_constraints:List[int] = [], interaction_constraints:List[List[int]] = [], predictor_learning_rates:List[float] = [], predictor_penalties_for_non_linearity:List[float] = [], predictor_penalties_for_interactions:List[float] = [])
 
 ***This method fits the model to data.***
 
@@ -72,7 +72,7 @@ Restricts the maximum number of terms in any of the underlying models trained to
 A numpy matrix with predictor values.
 
 #### y
-A numpy vector with response values.
+A list of strings with response values (class names).
 
 #### sample_weight
 An optional numpy vector with sample weights. If not specified then the observations are weighted equally.
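The classification change above swaps ***y*** from a numpy vector to a list of strings and makes ***sample_weight*** default to an empty numpy array. A minimal sketch, using only numpy, of preparing arguments in the types the updated signature expects (no APLR model is constructed here; the data names are illustrative):

```python
import numpy as np

# X stays a numpy matrix of predictor values.
X = np.array([[0.1, 1.0], [0.2, 2.0], [0.3, 3.0]])

# y is now a list of strings (class names) rather than a numpy vector,
# so labels stored in a numpy array should be converted first.
y_raw = np.array(["cat", "dog", "cat"])
y = [str(label) for label in y_raw]

# sample_weight now defaults to np.empty(0); an empty array means
# "weight all observations equally".
sample_weight = np.empty(0)

assert isinstance(y, list) and all(isinstance(label, str) for label in y)
assert sample_weight.size == 0
```

These values could then be passed as `fit(X, y)` or `fit(X, y, sample_weight)` against the updated signature.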

API_REFERENCE_FOR_REGRESSION.md

Lines changed: 5 additions & 5 deletions

@@ -130,7 +130,7 @@ Specifies a penalty in the range [0.0, 1.0] on interaction terms. A higher value
 Restricts the maximum number of terms in any of the underlying models trained to ***max_terms***. The default value of 0 means no limit. After the limit is reached, the remaining boosting steps are used to further update the coefficients of already included terms. An optional tuning objective could be to find the lowest positive value of ***max_terms*** that does not increase the prediction error significantly. Low positive values can speed up the training process significantly. Setting a limit with ***max_terms*** may require a higher learning rate for best results.
 
 
-## Method: fit(X:FloatMatrix, y:FloatVector, sample_weight:FloatVector = [], X_names:StrVector = [], cv_observations:IntMatrix = [], prioritized_predictors_indexes:IntVector = [], monotonic_constraints:IntVector = [], group:FloatVector = [], interaction_constraints:List[List[int]] = [], other_data:FloatMatrix = [], predictor_learning_rates:FloatVector = [], predictor_penalties_for_non_linearity:FloatVector = [], predictor_penalties_for_interactions:FloatVector = [])
+## Method: fit(X:FloatMatrix, y:FloatVector, sample_weight:FloatVector = np.empty(0), X_names:List[str] = [], cv_observations:IntMatrix = np.empty([0, 0]), prioritized_predictors_indexes:List[int] = [], monotonic_constraints:List[int] = [], group:FloatVector = np.empty(0), interaction_constraints:List[List[int]] = [], other_data:FloatMatrix = np.empty([0, 0]), predictor_learning_rates:List[float] = [], predictor_penalties_for_non_linearity:List[float] = [], predictor_penalties_for_interactions:List[float] = [])
 
 ***This method fits the model to data.***
 
@@ -189,7 +189,7 @@ A numpy matrix with predictor values.
 If ***True*** then predictions are capped so that they are not less than the minimum and not greater than the maximum prediction or response in the training dataset. This is recommended especially if ***max_interaction_level*** is high. However, if you need the model to extrapolate then set this parameter to ***False***.
 
 
-## Method: set_term_names(X_names:StrVector)
+## Method: set_term_names(X_names:List[str])
 
 ***This method sets the names of terms based on X_names.***
 
@@ -199,7 +199,7 @@ If ***True*** then predictions are capped so that they are not less than the min
 A list of strings containing names for each predictor in the ***X*** matrix that the model was trained on.
 
 
-## Method: calculate_feature_importance(X:FloatMatrix, sample_weight:FloatVector = [])
+## Method: calculate_feature_importance(X:FloatMatrix, sample_weight:FloatVector = np.empty(0))
 
 ***Returns a numpy matrix containing estimated feature importance in X for each predictor.***
 
@@ -209,7 +209,7 @@ A list of strings containing names for each predictor in the ***X*** matrix that
 A numpy matrix with predictor values.
 
 
-## Method: calculate_term_importance(X:FloatMatrix, sample_weight:FloatVector = [])
+## Method: calculate_term_importance(X:FloatMatrix, sample_weight:FloatVector = np.empty(0))
 
 ***Returns a numpy matrix containing estimated term importance in X for each term in the model.***
 
@@ -239,7 +239,7 @@ A numpy matrix with predictor values.
 A numpy matrix with predictor values.
 
 
-## Method: calculate_local_contribution_from_selected_terms(X:FloatMatrix, predictor_indexes:IntVector)
+## Method: calculate_local_contribution_from_selected_terms(X:FloatMatrix, predictor_indexes:List[int])
 
 ***Returns a numpy vector containing the contribution to the linear predictor from an user specified combination of interacting predictors for each observation in X. This makes it easier to interpret interactions (or main effects if just one predictor is specified), for example by plotting predictor values against the term contribution.***
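The regression changes above replace the Python list `[]` default for optional array arguments with the sentinels `np.empty(0)` (vectors) and `np.empty([0, 0])` (matrices). A minimal sketch of that convention, with a hypothetical helper (`resolve_sample_weight` is not part of the APLR API; it only illustrates how an empty-array default can be detected and expanded):

```python
import numpy as np

def resolve_sample_weight(
    n_rows: int, sample_weight: np.ndarray = np.empty(0)
) -> np.ndarray:
    """Hypothetical helper: expand the empty-array sentinel to equal weights."""
    if sample_weight.size == 0:  # sentinel: weights were not supplied
        return np.ones(n_rows)
    return sample_weight

# Default sentinel -> equal weights for every observation.
assert np.array_equal(resolve_sample_weight(3), np.ones(3))

# Explicit weights pass through unchanged.
assert np.array_equal(
    resolve_sample_weight(3, np.array([1.0, 2.0, 3.0])),
    np.array([1.0, 2.0, 3.0]),
)

# Matrix-valued arguments such as cv_observations use a (0, 0) sentinel.
cv_observations = np.empty([0, 0])
assert cv_observations.shape == (0, 0)
```

Checking `size` rather than truthiness matters here: unlike `[]`, a numpy array raises `ValueError` when used in a boolean context with more than one element.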

aplr/aplr.py

Lines changed: 34 additions & 34 deletions

@@ -1,11 +1,11 @@
 from typing import List, Callable, Optional, Dict
+import numpy as np
 import aplr_cpp
 
-FloatVector = List[float]
-FloatMatrix = List[List[float]]
-IntVector = List[int]
-IntMatrix = List[List[int]]
-StrVector = List[str]
+FloatVector = np.ndarray
+FloatMatrix = np.ndarray
+IntVector = np.ndarray
+IntMatrix = np.ndarray
 
 
 class APLRRegressor:
@@ -184,17 +184,17 @@ def fit(
         self,
         X: FloatMatrix,
         y: FloatVector,
-        sample_weight: FloatVector = [],
-        X_names: StrVector = [],
-        cv_observations: IntMatrix = [],
-        prioritized_predictors_indexes: IntVector = [],
-        monotonic_constraints: IntVector = [],
-        group: FloatVector = [],
+        sample_weight: FloatVector = np.empty(0),
+        X_names: List[str] = [],
+        cv_observations: IntMatrix = np.empty([0, 0]),
+        prioritized_predictors_indexes: List[int] = [],
+        monotonic_constraints: List[int] = [],
+        group: FloatVector = np.empty(0),
         interaction_constraints: List[List[int]] = [],
-        other_data: FloatMatrix = [],
-        predictor_learning_rates: FloatVector = [],
-        predictor_penalties_for_non_linearity: FloatVector = [],
-        predictor_penalties_for_interactions: FloatVector = [],
+        other_data: FloatMatrix = np.empty([0, 0]),
+        predictor_learning_rates: List[float] = [],
+        predictor_penalties_for_non_linearity: List[float] = [],
+        predictor_penalties_for_interactions: List[float] = [],
     ):
         self.__set_params_cpp()
         self.APLRRegressor.fit(
@@ -222,16 +222,16 @@ def predict(
         )
         return self.APLRRegressor.predict(X, cap_predictions_to_minmax_in_training)
 
-    def set_term_names(self, X_names: StrVector):
+    def set_term_names(self, X_names: List[str]):
         self.APLRRegressor.set_term_names(X_names)
 
     def calculate_feature_importance(
-        self, X: FloatMatrix, sample_weight: FloatVector = []
+        self, X: FloatMatrix, sample_weight: FloatVector = np.empty(0)
    ) -> FloatVector:
        return self.APLRRegressor.calculate_feature_importance(X, sample_weight)
 
     def calculate_term_importance(
-        self, X: FloatMatrix, sample_weight: FloatVector = []
+        self, X: FloatMatrix, sample_weight: FloatVector = np.empty(0)
    ) -> FloatVector:
        return self.APLRRegressor.calculate_term_importance(X, sample_weight)
 
@@ -242,7 +242,7 @@ def calculate_local_term_contribution(self, X: FloatMatrix) -> FloatMatrix:
         return self.APLRRegressor.calculate_local_term_contribution(X)
 
     def calculate_local_contribution_from_selected_terms(
-        self, X: FloatMatrix, predictor_indexes: IntVector
+        self, X: FloatMatrix, predictor_indexes: List[int]
     ) -> FloatVector:
         return self.APLRRegressor.calculate_local_contribution_from_selected_terms(
             X, predictor_indexes
@@ -251,13 +251,13 @@ def calculate_local_contribution_from_selected_terms(
     def calculate_terms(self, X: FloatMatrix) -> FloatMatrix:
         return self.APLRRegressor.calculate_terms(X)
 
-    def get_term_names(self) -> StrVector:
+    def get_term_names(self) -> List[str]:
         return self.APLRRegressor.get_term_names()
 
-    def get_term_affiliations(self) -> StrVector:
+    def get_term_affiliations(self) -> List[str]:
         return self.APLRRegressor.get_term_affiliations()
 
-    def get_unique_term_affiliations(self) -> StrVector:
+    def get_unique_term_affiliations(self) -> List[str]:
         return self.APLRRegressor.get_unique_term_affiliations()
 
     def get_base_predictors_in_each_unique_term_affiliation(self) -> List[List[int]]:
@@ -433,16 +433,16 @@ def __set_params_cpp(self):
     def fit(
         self,
         X: FloatMatrix,
-        y: StrVector,
-        sample_weight: FloatVector = [],
-        X_names: StrVector = [],
-        cv_observations: IntMatrix = [],
-        prioritized_predictors_indexes: IntVector = [],
-        monotonic_constraints: IntVector = [],
+        y: List[str],
+        sample_weight: FloatVector = np.empty(0),
+        X_names: List[str] = [],
+        cv_observations: IntMatrix = np.empty([0, 0]),
+        prioritized_predictors_indexes: List[int] = [],
+        monotonic_constraints: List[int] = [],
         interaction_constraints: List[List[int]] = [],
-        predictor_learning_rates: FloatVector = [],
-        predictor_penalties_for_non_linearity: FloatVector = [],
-        predictor_penalties_for_interactions: FloatVector = [],
+        predictor_learning_rates: List[float] = [],
+        predictor_penalties_for_non_linearity: List[float] = [],
+        predictor_penalties_for_interactions: List[float] = [],
     ):
         self.__set_params_cpp()
         self.APLRClassifier.fit(
@@ -468,13 +468,13 @@ def predict_class_probabilities(
 
     def predict(
         self, X: FloatMatrix, cap_predictions_to_minmax_in_training: bool = False
-    ) -> StrVector:
+    ) -> List[str]:
         return self.APLRClassifier.predict(X, cap_predictions_to_minmax_in_training)
 
     def calculate_local_feature_contribution(self, X: FloatMatrix) -> FloatMatrix:
         return self.APLRClassifier.calculate_local_feature_contribution(X)
 
-    def get_categories(self) -> StrVector:
+    def get_categories(self) -> List[str]:
         return self.APLRClassifier.get_categories()
 
     def get_logit_model(self, category: str) -> APLRRegressor:
@@ -489,7 +489,7 @@ def get_cv_error(self) -> float:
     def get_feature_importance(self) -> FloatVector:
         return self.APLRClassifier.get_feature_importance()
 
-    def get_unique_term_affiliations(self) -> StrVector:
+    def get_unique_term_affiliations(self) -> List[str]:
         return self.APLRClassifier.get_unique_term_affiliations()
 
     def get_base_predictors_in_each_unique_term_affiliation(self) -> List[List[int]]:
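The `aplr/aplr.py` diff above retires `StrVector` and redefines the remaining aliases as `np.ndarray`. A minimal sketch of what those aliases are after the change: plain synonyms of `np.ndarray` that document intent for readers and type checkers but enforce nothing at runtime (`demo` below is a hypothetical function, not part of the APLR API):

```python
import numpy as np
from typing import List

# The aliases as defined after this commit; all are the same runtime type.
FloatVector = np.ndarray
FloatMatrix = np.ndarray
IntVector = np.ndarray
IntMatrix = np.ndarray

def demo(X: FloatMatrix, X_names: List[str]) -> FloatVector:
    """Hypothetical function using the aliases as the updated signatures do."""
    assert len(X_names) == X.shape[1]  # one name per predictor column
    return X.sum(axis=1)  # row sums: an np.ndarray, i.e. a FloatVector

result = demo(np.array([[1.0, 2.0], [3.0, 4.0]]), ["a", "b"])
assert isinstance(result, np.ndarray)
assert result.tolist() == [3.0, 7.0]

# Aliases carry no checking: FloatVector and IntMatrix are interchangeable
# at runtime, so dtype and shape remain the caller's responsibility.
assert FloatVector is IntMatrix is np.ndarray
```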

dependencies/pybind11/__init__.py

Lines changed: 1 addition & 1 deletion

@@ -1,6 +1,6 @@
 import sys
 
-if sys.version_info < (3, 6):
+if sys.version_info < (3, 6):  # noqa: UP036
     msg = "pybind11 does not support Python < 3.6. 2.9 was the last release supporting Python 2.7 and 3.5."
     raise ImportError(msg)

dependencies/pybind11/_version.py

Lines changed: 1 addition & 1 deletion

@@ -8,5 +8,5 @@ def _to_int(s: str) -> Union[int, str]:
     return s
 
 
-__version__ = "2.10.4"
+__version__ = "2.12.0"
 version_info = tuple(_to_int(s) for s in __version__.split("."))

dependencies/pybind11/commands.py

Lines changed: 1 addition & 1 deletion

@@ -3,7 +3,7 @@
 DIR = os.path.abspath(os.path.dirname(__file__))
 
 
-def get_include(user: bool = False) -> str:  # pylint: disable=unused-argument
+def get_include(user: bool = False) -> str:  # noqa: ARG001
     """
     Return the path to the pybind11 include directory. The historical "user"
     argument is unused, and may be removed.

dependencies/pybind11/include/pybind11/attr.h

Lines changed: 14 additions & 2 deletions

@@ -26,6 +26,9 @@ struct is_method {
     explicit is_method(const handle &c) : class_(c) {}
 };
 
+/// Annotation for setters
+struct is_setter {};
+
 /// Annotation for operators
 struct is_operator {};
 
@@ -188,8 +191,8 @@ struct argument_record {
 struct function_record {
     function_record()
         : is_constructor(false), is_new_style_constructor(false), is_stateless(false),
-          is_operator(false), is_method(false), has_args(false), has_kwargs(false),
-          prepend(false) {}
+          is_operator(false), is_method(false), is_setter(false), has_args(false),
+          has_kwargs(false), prepend(false) {}
 
     /// Function name
     char *name = nullptr; /* why no C++ strings? They generate heavier code.. */
@@ -230,6 +233,9 @@ struct function_record {
     /// True if this is a method
     bool is_method : 1;
 
+    /// True if this is a setter
+    bool is_setter : 1;
+
     /// True if the function has a '*args' argument
     bool has_args : 1;
 
@@ -426,6 +432,12 @@ struct process_attribute<is_method> : process_attribute_default<is_method> {
     }
 };
 
+/// Process an attribute which indicates that this function is a setter
+template <>
+struct process_attribute<is_setter> : process_attribute_default<is_setter> {
+    static void init(const is_setter &, function_record *r) { r->is_setter = true; }
+};
+
 /// Process an attribute which indicates the parent scope of a method
 template <>
 struct process_attribute<scope> : process_attribute_default<scope> {

dependencies/pybind11/include/pybind11/buffer_info.h

Lines changed: 16 additions & 1 deletion

@@ -37,6 +37,9 @@ inline std::vector<ssize_t> f_strides(const std::vector<ssize_t> &shape, ssize_t
     return strides;
 }
 
+template <typename T, typename SFINAE = void>
+struct compare_buffer_info;
+
 PYBIND11_NAMESPACE_END(detail)
 
 /// Information record describing a Python buffer object
@@ -150,6 +153,17 @@ struct buffer_info {
     Py_buffer *view() const { return m_view; }
     Py_buffer *&view() { return m_view; }
 
+    /* True if the buffer item type is equivalent to `T`. */
+    // To define "equivalent" by example:
+    // `buffer_info::item_type_is_equivalent_to<int>(b)` and
+    // `buffer_info::item_type_is_equivalent_to<long>(b)` may both be true
+    // on some platforms, but `int` and `unsigned` will never be equivalent.
+    // For the ground truth, please inspect `detail::compare_buffer_info<>`.
+    template <typename T>
+    bool item_type_is_equivalent_to() const {
+        return detail::compare_buffer_info<T>::compare(*this);
+    }
+
 private:
     struct private_ctr_tag {};
 
@@ -170,9 +184,10 @@ struct buffer_info {
 
 PYBIND11_NAMESPACE_BEGIN(detail)
 
-template <typename T, typename SFINAE = void>
+template <typename T, typename SFINAE>
 struct compare_buffer_info {
     static bool compare(const buffer_info &b) {
+        // NOLINTNEXTLINE(bugprone-sizeof-expression) Needed for `PyObject *`
         return b.format == format_descriptor<T>::format() && b.itemsize == (ssize_t) sizeof(T);
     }
 };
