
Commit 4e9bf81 (parent: 54c4e2b)

bugfix

File tree: 5 files changed (+16, -16 lines)


API_REFERENCE_FOR_REGRESSION.md

Lines changed: 1 addition & 1 deletion

```diff
@@ -17,7 +17,7 @@ Used to randomly split training observations into cv_folds if ***cv_observations
 Determines the loss function used. Allowed values are "mse", "binomial", "poisson", "gamma", "tweedie", "group_mse", "group_mse_cycle", "mae", "quantile", "negative_binomial", "cauchy", "weibull" and "custom_function". This is used together with ***link_function***. When ***loss_function*** is "group_mse", the "group" argument in the ***fit*** method must be provided; APLR will then try to minimize group MSE when training the model. When using "group_mse_cycle", ***group_mse_cycle_min_obs_in_bin*** controls the minimum number of observations in each group. For a description of "group_mse_cycle" see ***group_mse_cycle_min_obs_in_bin***. The ***loss_function*** "quantile" is used together with the ***quantile*** constructor parameter. When ***loss_function*** is "custom_function", the constructor parameters ***calculate_custom_loss_function*** and ***calculate_custom_negative_gradient_function***, both described below, must be provided.
 
 #### link_function (default = "identity")
-Determines how the linear predictor is transformed to predictions. Allowed values are "identity", "logit", "log" and "custom_function". For an ordinary regression model use ***loss_function*** "mse" and ***link_function*** "identity". For logistic regression use ***loss_function*** "binomial" and ***link_function*** "logit". For a multiplicative model use the "log" ***link_function***. The "log" ***link_function*** often works best with a "poisson", "gamma", "tweedie", "negative_binomial" or "weibull" ***loss_function***, depending on the data, and these loss functions should only be used with the "log" ***link_function***. Inappropriate combinations of ***loss_function*** and ***link_function*** may result in a warning message when fitting the model and/or a poor model fit. Please note that values other than "identity" typically require a significantly higher ***m*** (or ***v***) in order to converge. When ***link_function*** is "custom_function", the constructor parameters ***calculate_custom_transform_linear_predictor_to_predictions_function*** and ***calculate_custom_differentiate_predictions_wrt_linear_predictor_function***, both described below, must be provided.
+Determines how the linear predictor is transformed to predictions. Allowed values are "identity", "logit", "log" and "custom_function". For an ordinary regression model use ***loss_function*** "mse" and ***link_function*** "identity". For logistic regression use ***loss_function*** "binomial" and ***link_function*** "logit". For a multiplicative model use the "log" ***link_function***. The "log" ***link_function*** often works best with a "poisson", "gamma", "tweedie", "negative_binomial" or "weibull" ***loss_function***, depending on the data, and these loss functions should only be used with the "log" ***link_function***. Inappropriate combinations of ***loss_function*** and ***link_function*** may result in a warning message when fitting the model and/or a poor model fit. Please note that values other than "identity" may require a higher ***m*** (or ***v***) in order to converge. When ***link_function*** is "custom_function", the constructor parameters ***calculate_custom_transform_linear_predictor_to_predictions_function*** and ***calculate_custom_differentiate_predictions_wrt_linear_predictor_function***, both described below, must be provided.
 
 #### n_jobs (default = 0)
 Multi-threading parameter. If ***0***, all available cores are used for multi-threading. Any other positive integer specifies the number of cores to use (***1*** means single-threading).
```
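For orientation, the ***loss_function*** and ***link_function*** combinations described in this reference are passed to the APLRRegressor constructor in the Python API. Below is a minimal sketch of the logistic-regression combination ("binomial" with "logit"); the synthetic data and the chosen values of ***m*** and ***v*** are illustrative assumptions, not recommendations:

```python
import numpy as np
from aplr import APLRRegressor

# Synthetic binary-outcome data, purely for illustration.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = (X @ np.array([1.0, -0.5, 0.2]) + rng.normal(size=500) > 0).astype(float)

# Logistic regression: "binomial" loss with the "logit" link. As noted above,
# links other than "identity" may require a higher m (or v) to converge.
model = APLRRegressor(m=3000, v=0.1, loss_function="binomial",
                      link_function="logit", n_jobs=0)
model.fit(X, y)
predicted_probabilities = model.predict(X)  # predictions on the probability scale
```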

cpp/APLRRegressor.h

Lines changed: 1 addition & 1 deletion

```diff
@@ -1149,7 +1149,7 @@ VectorXd APLRRegressor::calculate_neg_gradient_current_for_group_mse(GroupData &
 VectorXd APLRRegressor::differentiate_predictions_wrt_linear_predictor()
 {
     if (link_function == "logit")
-        return 1.0 / 4.0 * (linear_predictor_current.array() / 2.0).cosh().array().pow(-2);
+        return 10.0 / 4.0 * (linear_predictor_current.array() / 2.0).cosh().array().pow(-2);
     else if (link_function == "log")
     {
        return linear_predictor_current.array().exp();
```
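A note on the expression being scaled here: for the "logit" link, predictions are the logistic transform of the linear predictor, and the removed coefficient 1/4 is its exact derivative, since

$$
p(z) = \frac{1}{1 + e^{-z}}, \qquad
\frac{dp}{dz} = p(z)\bigl(1 - p(z)\bigr) = \frac{1}{4}\cosh^{-2}\!\left(\frac{z}{2}\right),
$$

which is what `1.0 / 4.0 * (linear_predictor_current.array() / 2.0).cosh().array().pow(-2)` computes element-wise. The new coefficient scales this derivative tenfold; plausibly this is balanced by the tenfold reduction of the learning rate ***v*** from 0.5 to 0.05 in the tests below, though the commit message does not say so explicitly.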

cpp/tests.cpp

Lines changed: 13 additions & 13 deletions

```diff
@@ -1055,7 +1055,7 @@ class Tests
         // Model
         APLRRegressor model{APLRRegressor()};
         model.m = 100;
-        model.v = 0.5;
+        model.v = 0.05;
         model.bins = 300;
         model.n_jobs = 0;
         model.loss_function = "binomial";
@@ -1614,7 +1614,7 @@ class Tests
         // Model
         APLRClassifier model{APLRClassifier()};
         model.m = 100;
-        model.v = 0.5;
+        model.v = 0.05;
         model.bins = 300;
         model.n_jobs = 0;
         model.verbosity = 3;
@@ -1684,7 +1684,7 @@ class Tests
         // Model
         APLRClassifier model{APLRClassifier()};
         model.m = 100;
-        model.v = 0.5;
+        model.v = 0.05;
         model.bins = 300;
         model.n_jobs = 0;
         model.verbosity = 3;
@@ -1754,7 +1754,7 @@ class Tests
         // Model
         APLRClassifier model{APLRClassifier()};
         model.m = 100;
-        model.v = 0.5;
+        model.v = 0.05;
         model.bins = 300;
         model.n_jobs = 0;
         model.verbosity = 3;
@@ -1824,7 +1824,7 @@ class Tests
         // Model
         APLRClassifier model{APLRClassifier()};
         model.m = 100;
-        model.v = 0.5;
+        model.v = 0.05;
         model.bins = 300;
         model.n_jobs = 0;
         model.verbosity = 3;
@@ -1894,7 +1894,7 @@ class Tests
         // Model
         APLRClassifier model{APLRClassifier()};
         model.m = 100;
-        model.v = 0.5;
+        model.v = 0.05;
         model.bins = 300;
         model.n_jobs = 0;
         model.verbosity = 3;
@@ -1962,7 +1962,7 @@ class Tests
         // Model
         APLRClassifier model{APLRClassifier()};
         model.m = 100;
-        model.v = 0.5;
+        model.v = 0.05;
         model.bins = 300;
         model.n_jobs = 0;
         model.verbosity = 3;
@@ -2020,15 +2020,15 @@ class Tests
 
         std::cout << "cv_error\n"
                   << model.get_cv_error() << "\n\n";
-        tests.push_back(is_approximately_equal(model.get_cv_error(), 0.15984656957508173, 0.000001));
+        tests.push_back(is_approximately_equal(model.get_cv_error(), 0.15942686880196807, 0.000001));
 
         std::cout << "predicted_class_prob_mean\n"
                   << predicted_class_probabilities.mean() << "\n\n";
         tests.push_back(is_approximately_equal(predicted_class_probabilities.mean(), 0.5, 0.00001));
 
         std::cout << "local_feature_importance_mean\n"
                   << local_feature_contribution.mean() << "\n\n";
-        tests.push_back(is_approximately_equal(local_feature_contribution.mean(), 0.052181259967961045, 0.00001));
+        tests.push_back(is_approximately_equal(local_feature_contribution.mean(), 0.05891072116542774, 0.00001));
         tests.push_back(base_predictors_in_the_second_affiliation == correct_base_predictors_in_the_second_affiliation);
         tests.push_back(the_second_unique_term_affiliation == the_correct_second_unique_term_affiliation);
     }
@@ -2038,7 +2038,7 @@ class Tests
         // Model
         APLRClassifier model{APLRClassifier()};
         model.m = 100;
-        model.v = 0.5;
+        model.v = 0.05;
         model.bins = 300;
         model.n_jobs = 0;
         model.verbosity = 3;
@@ -2092,23 +2092,23 @@ class Tests
 
         std::cout << "cv_error\n"
                   << model.get_cv_error() << "\n\n";
-        tests.push_back(is_approximately_equal(model.get_cv_error(), 0.17250319103503037, 0.000001));
+        tests.push_back(is_approximately_equal(model.get_cv_error(), 0.14420733842494515, 0.000001));
 
         std::cout << "predicted_class_prob_mean\n"
                   << predicted_class_probabilities.mean() << "\n\n";
         tests.push_back(is_approximately_equal(predicted_class_probabilities.mean(), 0.5, 0.00001));
 
         std::cout << "local_feature_importance_mean\n"
                   << local_feature_contribution.mean() << "\n\n";
-        tests.push_back(is_approximately_equal(local_feature_contribution.mean(), 0.07920242388299352, 0.00001));
+        tests.push_back(is_approximately_equal(local_feature_contribution.mean(), 0.10357828243742498, 0.00001));
     }
 
     void test_aplrclassifier_two_class_max_terms()
     {
         // Model
         APLRClassifier model{APLRClassifier()};
         model.m = 100;
-        model.v = 0.5;
+        model.v = 0.05;
         model.bins = 300;
         model.n_jobs = 0;
         model.verbosity = 3;
```
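For readers working from the Python bindings, the updated classifier configuration corresponds to the sketch below. This is an illustration with synthetic data, assuming APLRClassifier exposes m, v, bins, n_jobs and verbosity as constructor parameters, accepts string class labels in fit, and provides a get_cv_error method mirroring the C++ tests above; math.isclose plays the role of the is_approximately_equal test helper:

```python
import math

import numpy as np
from aplr import APLRClassifier

# Synthetic two-class data, purely for illustration.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = np.where(X[:, 0] > 0, "positive", "negative").tolist()  # string labels (assumed API)

model = APLRClassifier(m=100, v=0.05, bins=300, n_jobs=0, verbosity=0)
model.fit(X, y)
cv_error = model.get_cv_error()


def is_approximately_equal(value, expected, tolerance):
    """Python analogue of the C++ test helper used above."""
    return math.isclose(value, expected, abs_tol=tolerance)


# The C++ tests compare cv_error against a precomputed constant for their fixed
# dataset; with synthetic data there is no such constant, so just inspect it.
print("cv_error:", cv_error)
```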
Binary file not shown.

setup.py

Lines changed: 1 addition & 1 deletion

```diff
@@ -27,7 +27,7 @@
 
 setuptools.setup(
     name="aplr",
-    version="10.5.0",
+    version="10.5.1",
     description="Automatic Piecewise Linear Regression",
     ext_modules=[sfc_module],
     author="Mathias von Ottenbreit",
```
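To pick up this fix, upgrade the PyPI package (for example with pip install --upgrade aplr) and confirm the installed version. The check below uses only the standard library; importlib.metadata is available from Python 3.8 onward:

```python
from importlib.metadata import version

# Expected to print "10.5.1" once this release is installed.
print(version("aplr"))
```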
