#' See [Survival Analysis with Accelerated Failure Time](https://xgboost.readthedocs.io/en/latest/tutorials/aft_survival_analysis.html) for details.
#' - `"multi:softmax"`: set XGBoost to do multiclass classification using the softmax objective; you also need to set `num_class` (number of classes)
#' - `"multi:softprob"`: same as softmax, but outputs a vector of `ndata * nclass` elements, which can be further reshaped to an `ndata * nclass` matrix. The result contains the predicted probability of each data point belonging to each class.
#' - `"rank:ndcg"`: Use LambdaMART to perform pair-wise ranking where [Normalized Discounted Cumulative Gain (NDCG)](https://en.wikipedia.org/wiki/NDCG) is maximized. This objective supports position debiasing for click data.
#' - `"rank:map"`: Use LambdaMART to perform pair-wise ranking where [Mean Average Precision (MAP)](https://en.wikipedia.org/wiki/Mean_average_precision#Mean_average_precision) is maximized.
#' - `"rank:pairwise"`: Use LambdaRank to perform pair-wise ranking using the `ranknet` objective.
#' - `"reg:gamma"`: gamma regression with log-link. Output is the mean of the gamma distribution. It might be useful, e.g., for modeling insurance claims severity, or for any outcome that might be [gamma-distributed](https://en.wikipedia.org/wiki/Gamma_distribution#Occurrence_and_applications).
#' - `"reg:tweedie"`: Tweedie regression with log-link. It might be useful, e.g., for modeling total loss in insurance, or for any outcome that might be [Tweedie-distributed](https://en.wikipedia.org/wiki/Tweedie_distribution#Occurrence_and_applications).
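The difference between the two multiclass objectives above is only in the shape of the output: `"multi:softprob"` emits the full per-class probability vector, while `"multi:softmax"` emits the single most probable class. A minimal pure-Python sketch of that relationship (the scores below are made-up example values, not XGBoost output):

```python
import math

def softprob(scores):
    # What "multi:softprob" emits per row: class probabilities from raw scores.
    m = max(scores)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def softmax_class(scores):
    # What "multi:softmax" emits per row: the index of the most probable class.
    return max(range(len(scores)), key=lambda i: scores[i])

scores = [0.2, 1.5, -0.3]           # illustrative raw scores for 3 classes
probs = softprob(scores)
assert abs(sum(probs) - 1.0) < 1e-12
assert softmax_class(scores) == probs.index(max(probs))
```

The two objectives fit the same model; choosing between them is purely a question of whether you want probabilities or hard class labels from prediction.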
#' Note: only one of `alpha` or `reg_alpha` should be passed. Both refer to the same parameter, so there is no difference between them.
#' @param tree_method (for Tree Booster) (default= `"auto"`)
#' The tree construction algorithm used in XGBoost. See description in the [reference paper](https://arxiv.org/abs/1603.02754) and [Tree Methods](https://xgboost.readthedocs.io/en/latest/treemethod.html).
#'
#' Choices: `"auto"`, `"exact"`, `"approx"`, `"hist"`. Each of these is a combination of commonly
#' used updaters. For other updaters, like `"refresh"`, set the parameter `updater` directly.
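As a sketch of how these choices are passed, here is a hypothetical parameter set; the parameter names (`tree_method`, `updater`) mirror the ones documented here, but the values and surrounding structure are illustrative, not taken from this package's examples:

```python
# Hypothetical parameter set for an xgboost train-style interface.
params = {
    "tree_method": "hist",            # one of "auto", "exact", "approx", "hist"
    "objective": "reg:squarederror",  # illustrative objective choice
}

# For an updater not covered by tree_method, such as "refresh",
# set the "updater" parameter directly instead:
refresh_params = {"updater": "refresh", "refresh_leaf": 1}

assert params["tree_method"] in {"auto", "exact", "approx", "hist"}
```

`"hist"` is the usual fast choice; `"auto"` lets the library pick a method based on the data.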
#' - Evaluation metrics for validation data. A default metric will be assigned according to the objective (`rmse` for regression, `logloss` for classification, `mean average precision` for `rank:map`, etc.)
#' - User can add multiple evaluation metrics.
#' - The choices are listed below:
#' - `"rmse"`: [root mean square error](https://en.wikipedia.org/wiki/Root_mean_square_error)
#' - `"rmsle"`: root mean square log error: \eqn{\sqrt{\frac{1}{N}\sum_{i=1}^{N}[\log(pred_i + 1) - \log(label_i + 1)]^2}}. Default metric of the `"reg:squaredlogerror"` objective. This metric reduces errors generated by outliers in the dataset. But because the `log` function is employed, `"rmsle"` might output `nan` when a prediction value is less than -1. See `"reg:squaredlogerror"` for other requirements.
#' - `"error"`: Binary classification error rate. It is calculated as `#(wrong cases)/#(all cases)`. For the predictions, the evaluation will regard instances with a prediction value larger than 0.5 as positive instances, and the others as negative instances.
#' - `"error@t"`: a binary classification threshold different from 0.5 can be specified by providing a numerical value through `t`.
#' - `"merror"`: Multiclass classification error rate. It is calculated as `#(wrong cases)/#(all cases)`.
#' - `"auc"`: [Receiver Operating Characteristic Area under the Curve](https://en.wikipedia.org/wiki/Receiver_operating_characteristic#Area_under_the_curve).
#' Available for classification and learning-to-rank tasks.
#' - When used with binary classification, the objective should be `"binary:logistic"` or similar functions that work on probability.
#' After XGBoost 1.6, the requirements and restrictions for using `"aucpr"` in classification problems are similar to those for `"auc"`. For ranking tasks, only binary relevance labels \eqn{y \in [0, 1]} are supported. Different from `"map"` (mean average precision), `"aucpr"` calculates the *interpolated* area under the precision-recall curve using continuous interpolation.
#'
#' - `"pre"`: Precision at \eqn{k}. Supported only for learning-to-rank tasks.
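The `"rmsle"` metric above can be sketched in a few lines of plain Python. This is an illustration of the formula and of the `nan` behaviour for predictions below -1, not XGBoost's own implementation:

```python
import math

def rmsle(preds, labels):
    # Root mean square log error over N pairs:
    # sqrt( (1/N) * sum_i [log(pred_i + 1) - log(label_i + 1)]^2 )
    # log(pred + 1) is undefined for pred <= -1, hence the nan guard.
    if any(p <= -1 for p in preds):
        return float("nan")
    n = len(preds)
    return math.sqrt(
        sum((math.log(p + 1) - math.log(l + 1)) ** 2
            for p, l in zip(preds, labels)) / n
    )

assert rmsle([0.0, 3.0], [0.0, 3.0]) == 0.0   # perfect predictions
assert math.isnan(rmsle([-2.0], [1.0]))       # pred < -1 yields nan
```

Because errors enter on the log scale, large over-predictions on big labels are penalized far less than the plain `"rmse"` metric would penalize them, which is why this metric is less sensitive to outliers.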