
Commit 9cb5177

990 updated
1 parent fe62bb6 commit 9cb5177

File tree

5 files changed: +9, -3 lines changed


API_REFERENCE_FOR_CLASSIFICATION.md

Lines changed: 1 addition & 1 deletion
@@ -59,7 +59,7 @@ Specifies a penalty in the range [0.0, 1.0] on terms that are not linear effects
 Specifies a penalty in the range [0.0, 1.0] on interaction terms. A higher value increases model interpretability but can hurt predictiveness. Values outside of the [0.0, 1.0] range are rounded to the nearest boundary within the range.

 #### max_terms (default = 0)
- Restricts the maximum number of terms in any of the underlying models trained to ***max_terms***. The default value of 0 means no limit. After the limit is reached, the remaining boosting steps are used to further update the coefficients of already included terms. A reason for using ***max_terms*** is to increase model interpretability by reducing the number of terms in the model. Please note that low non-zero values of ***max_terms*** may require a high ***v*** for best results, such as 1.0.
+ Restricts the maximum number of terms in any of the underlying models trained to ***max_terms***. The default value of 0 means no limit. After the limit is reached, the remaining boosting steps are used to further update the coefficients of already included terms. A potential tuning objective could be to find the lowest positive value of ***max_terms*** that does not increase the prediction error significantly. Low positive values can speed up the training process significantly.

## Method: fit(X:npt.ArrayLike, y:List[str], sample_weight:npt.ArrayLike = np.empty(0), X_names:List[str]=[], cv_observations: npt.ArrayLike = np.empty([0, 0]), prioritized_predictors_indexes:List[int]=[], monotonic_constraints:List[int]=[], interaction_constraints:List[List[int]]=[], predictor_learning_rates: List[float] = [], predictor_penalties_for_non_linearity: List[float] = [], predictor_penalties_for_interactions: List[float] = [])

API_REFERENCE_FOR_REGRESSION.md

Lines changed: 1 addition & 1 deletion
@@ -127,7 +127,7 @@ Specifies a penalty in the range [0.0, 1.0] on terms that are not linear effects
 Specifies a penalty in the range [0.0, 1.0] on interaction terms. A higher value increases model interpretability but can hurt predictiveness. Values outside of the [0.0, 1.0] range are rounded to the nearest boundary within the range.

 #### max_terms (default = 0)
- Restricts the maximum number of terms in any of the underlying models trained to ***max_terms***. The default value of 0 means no limit. After the limit is reached, the remaining boosting steps are used to further update the coefficients of already included terms. A reason for using ***max_terms*** is to increase model interpretability by reducing the number of terms in the model. Please note that low non-zero values of ***max_terms*** may require a high ***v*** for best results, such as 1.0.
+ Restricts the maximum number of terms in any of the underlying models trained to ***max_terms***. The default value of 0 means no limit. After the limit is reached, the remaining boosting steps are used to further update the coefficients of already included terms. A potential tuning objective could be to find the lowest positive value of ***max_terms*** that does not increase the prediction error significantly. Low positive values can speed up the training process significantly.

## Method: fit(X:npt.ArrayLike, y:npt.ArrayLike, sample_weight:npt.ArrayLike = np.empty(0), X_names:List[str]=[], cv_observations: npt.ArrayLike = np.empty([0, 0]), prioritized_predictors_indexes:List[int]=[], monotonic_constraints:List[int]=[], group:npt.ArrayLike = np.empty(0), interaction_constraints:List[List[int]]=[], other_data: npt.ArrayLike = np.empty([0, 0]), predictor_learning_rates: List[float] = [], predictor_penalties_for_non_linearity: List[float] = [], predictor_penalties_for_interactions: List[float] = [])
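
The tuning objective described for ***max_terms*** above (the lowest positive value that does not increase the prediction error significantly) can be sketched as a small selection helper. This is an illustrative sketch only: the helper name, the tolerance, and the made-up validation errors are assumptions, not part of the APLR API. In practice, each error would come from cross-validating an APLRRegressor or APLRClassifier trained with the corresponding ***max_terms*** value.

```python
# Hypothetical helper: pick the lowest positive max_terms whose
# validation error stays within `tolerance` of the unlimited model.
def lowest_acceptable_max_terms(errors, baseline_error, tolerance=0.01):
    """errors: dict mapping a candidate max_terms value -> validation error.
    baseline_error: validation error of the unrestricted model (max_terms=0).
    Returns the smallest candidate whose error is at most
    baseline_error * (1 + tolerance), or 0 (no limit) if none qualifies."""
    for max_terms in sorted(errors):
        if errors[max_terms] <= baseline_error * (1 + tolerance):
            return max_terms
    return 0

# Made-up validation errors for illustration (no APLR training here):
errors = {5: 0.31, 10: 0.27, 20: 0.252, 40: 0.251}
print(lowest_acceptable_max_terms(errors, baseline_error=0.25))  # -> 20
```

Here 20 is selected because it is the smallest candidate within 1% of the baseline error; if no candidate qualified, the helper would fall back to 0 (no limit), matching the parameter's default.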

documentation/APLR 9.9.0.pdf

-813 Bytes (binary file not shown)

examples/train_aplr_classification.py

Lines changed: 6 additions & 1 deletion
@@ -45,7 +45,12 @@
 best_model = None
 for params in param_grid:
     model = APLRClassifier(
-        random_state=random_state, verbosity=2, m=3000, v=0.1, **params
+        random_state=random_state,
+        verbosity=2,
+        m=3000,
+        v=0.1,
+        # max_terms=5, # max terms in each underlying model. Tune this to find a trade-off between interpretability and predictiveness.
+        **params
     )
     model.fit(
         data_train[predictors].values, data_train[response].values, X_names=predictors

examples/train_aplr_regression.py

Lines changed: 1 addition & 0 deletions
@@ -52,6 +52,7 @@
     v=0.1,
     loss_function=loss_function,
     link_function=link_function,
+    # max_terms=10, # max terms in each underlying model. Tune this to find a trade-off between interpretability and predictiveness.
     **params
 )
 model.fit(
