Commit 858f495: docs
1 parent: 4a0438d

File tree: 3 files changed (+2 −2 lines)


API_REFERENCE_FOR_CLASSIFICATION.md (1 addition, 1 deletion)

@@ -8,7 +8,7 @@
 The maximum number of boosting steps. If the validation error has not flattened out by the end of the ***m***th boosting step, try increasing ***m*** (or alternatively increase the learning rate).
 
 #### v (default = 0.5)
-The learning rate. Must be greater than zero and not more than one. The higher the learning rate, the faster the algorithm learns and the fewer boosting steps ***m*** are required, reducing computational cost potentially at the expense of predictiveness. Empirical evidence suggests that ***v <= 0.5*** gives good results for APLR.
+The learning rate. Must be greater than zero and not more than one. The higher the learning rate, the faster the algorithm learns and the fewer boosting steps ***m*** are required, reducing computational cost potentially at the expense of predictiveness. Empirical evidence suggests that ***v <= 0.5*** gives good results for APLR. For datasets with very weak signals or very small sizes, a low learning rate, such as 0.1, may be beneficial.
 
 #### random_state (default = 0)
 Used to randomly split training observations into cv_folds if ***cv_observations*** is not specified when fitting.

API_REFERENCE_FOR_REGRESSION.md (1 addition, 1 deletion)

@@ -8,7 +8,7 @@
 The maximum number of boosting steps. If the validation error has not flattened out by the end of the ***m***th boosting step, try increasing ***m*** (or alternatively increase the learning rate).
 
 #### v (default = 0.5)
-The learning rate. Must be greater than zero and not more than one. The higher the learning rate, the faster the algorithm learns and the fewer boosting steps ***m*** are required, reducing computational cost potentially at the expense of predictiveness. Empirical evidence suggests that ***v <= 0.5*** gives good results for APLR.
+The learning rate. Must be greater than zero and not more than one. The higher the learning rate, the faster the algorithm learns and the fewer boosting steps ***m*** are required, reducing computational cost potentially at the expense of predictiveness. Empirical evidence suggests that ***v <= 0.5*** gives good results for APLR. For datasets with very weak signals or very small sizes, a low learning rate, such as 0.1, may be beneficial.
 
 #### random_state (default = 0)
 Used to randomly split training observations into cv_folds if ***cv_observations*** is not specified when fitting.
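The tradeoff described in both hunks (a lower learning rate ***v*** requires more boosting steps ***m***) can be sketched with a toy boosting loop. This is a hypothetical illustration, not the APLR implementation: the base learner here is just a constant that fits the mean residual, so each shrunken update multiplies the remaining residual by (1 − v).

```python
def steps_to_converge(v, tol=1e-3):
    """Count boosting steps until the residual drops below tol.

    Toy setup: the target is the constant 1.0 and each step fits the
    current residual with a constant base learner, scaled by the
    learning rate v, so the residual shrinks by a factor (1 - v).
    """
    target, pred, steps = 1.0, 0.0, 0
    while abs(target - pred) > tol:
        pred += v * (target - pred)  # shrunken boosting update
        steps += 1
    return steps

print(steps_to_converge(0.5))  # 10 steps at v = 0.5
print(steps_to_converge(0.1))  # 66 steps at v = 0.1
```

In this sketch the smaller learning rate needs roughly 6–7 times as many steps, which mirrors the advice added in this commit: when lowering ***v*** (e.g. to 0.1 for very weak signals or very small datasets), budget a correspondingly larger ***m*** so that the validation error can still flatten out.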

documentation/APLR 10.6.2.pdf: binary file changed (646 bytes), not shown.
