
Commit fd4335d

[doc] Document the current status of some features. (dmlc#9469)
1 parent 801116c commit fd4335d


3 files changed: +22 −11 lines


demo/guide-python/quantile_regression.py

Lines changed: 5 additions & 0 deletions
@@ -7,6 +7,11 @@
 The script is inspired by this awesome example in sklearn:
 https://scikit-learn.org/stable/auto_examples/ensemble/plot_gradient_boosting_quantile.html
 
+.. note::
+
+    The feature is only supported using the Python package. In addition, quantile
+    crossing can happen due to limitation in the algorithm.
+
 """
 import argparse
 from typing import Dict
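The note added in this hunk warns about quantile crossing. As a minimal, library-free sketch (plain Python illustrating the concept, not XGBoost's implementation), the pinball loss below is the objective a quantile regressor minimizes for a target quantile ``alpha``, and ``has_quantile_crossing`` (a hypothetical helper) detects the artifact the note describes: a prediction for a higher quantile falling below one for a lower quantile on the same sample.

```python
def pinball_loss(y_true, y_pred, alpha):
    """Mean pinball (quantile) loss for target quantile ``alpha``."""
    total = 0.0
    for t, p in zip(y_true, y_pred):
        diff = t - p
        # Under-prediction is weighted by alpha, over-prediction by (1 - alpha).
        total += alpha * diff if diff >= 0 else (alpha - 1) * diff
    return total / len(y_true)


def has_quantile_crossing(pred_by_alpha):
    """True if any sample's higher-quantile prediction is below a lower one."""
    alphas = sorted(pred_by_alpha)
    for lo, hi in zip(alphas, alphas[1:]):
        for p_lo, p_hi in zip(pred_by_alpha[lo], pred_by_alpha[hi]):
            if p_hi < p_lo:
                return True
    return False


# Sample 1's 0.95-quantile prediction (1.5) dips below its 0.05-quantile one (2.0).
crossed = {0.05: [1.0, 2.0], 0.95: [3.0, 1.5]}
print(has_quantile_crossing(crossed))             # True
print(pinball_loss([2.0, 2.0], [1.0, 3.0], 0.5))  # 0.5
```

Because each quantile is fitted by minimizing its own pinball loss independently, nothing forces the fitted curves to stay ordered, which is exactly the limitation the note calls out.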

doc/tutorials/categorical.rst

Lines changed: 11 additions & 10 deletions
@@ -4,16 +4,17 @@ Categorical Data
 
 .. note::
 
-  As of XGBoost 1.6, the feature is experimental and has limited features
-
-Starting from version 1.5, XGBoost has experimental support for categorical data available
-for public testing. For numerical data, the split condition is defined as :math:`value <
-threshold`, while for categorical data the split is defined depending on whether
-partitioning or onehot encoding is used. For partition-based splits, the splits are
-specified as :math:`value \in categories`, where ``categories`` is the set of categories
-in one feature. If onehot encoding is used instead, then the split is defined as
-:math:`value == category`. More advanced categorical split strategy is planned for future
-releases and this tutorial details how to inform XGBoost about the data type.
+  As of XGBoost 1.6, the feature is experimental and has limited features. Only the
+  Python package is fully supported.
+
+Starting from version 1.5, the XGBoost Python package has experimental support for
+categorical data available for public testing. For numerical data, the split condition is
+defined as :math:`value < threshold`, while for categorical data the split is defined
+depending on whether partitioning or onehot encoding is used. For partition-based splits,
+the splits are specified as :math:`value \in categories`, where ``categories`` is the set
+of categories in one feature. If onehot encoding is used instead, then the split is
+defined as :math:`value == category`. More advanced categorical split strategy is planned
+for future releases and this tutorial details how to inform XGBoost about the data type.
 
 ************************************
 Training with scikit-learn Interface
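The tutorial text in this hunk defines the two categorical split types mathematically. A small sketch (plain Python for illustration, not XGBoost's tree code) makes the difference concrete: a partition-based split tests set membership, :math:`value \in categories`, while a one-hot style split tests equality against a single category, :math:`value == category`.

```python
def partition_split_goes_left(value, categories):
    """Partition-based split: the row goes left iff its category is in the set."""
    return value in categories


def onehot_split_goes_left(value, category):
    """One-hot style split: the row goes left iff it equals the split category."""
    return value == category


colors = ["red", "green", "blue", "green"]

# A single partition split can group several categories on one side of a node...
left = [c for c in colors if partition_split_goes_left(c, {"red", "blue"})]
print(left)  # ['red', 'blue']

# ...while a one-hot split can only peel off one category per node.
left = [c for c in colors if onehot_split_goes_left(c, "green")]
print(left)  # ['green', 'green']
```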

doc/tutorials/multioutput.rst

Lines changed: 6 additions & 1 deletion
@@ -11,6 +11,11 @@ can be simultaneously classified as both sci-fi and comedy. For detailed explan
 terminologies related to different multi-output models please refer to the
 :doc:`scikit-learn user guide <sklearn:modules/multiclass>`.
 
+.. note::
+
+  As of XGBoost 2.0, the feature is experimental and has limited features. Only the
+  Python package is tested.
+
 **********************************
 Training with One-Model-Per-Target
 **********************************
@@ -49,7 +54,7 @@ Training with Vector Leaf
 
 .. note::
 
-  This is still working-in-progress, and many features are missing.
+  This is still working-in-progress, and most features are missing.
 
 XGBoost can optionally build multi-output trees with the size of leaf equals to the number
 of targets when the tree method `hist` is used. The behavior can be controlled by the
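The hunks above contrast one-model-per-target training with vector-leaf trees, where each leaf stores one value per target. A toy sketch (the `VectorLeafStump` class is hypothetical, for illustration only, not XGBoost's internals) shows the structural difference: a single traversal of a vector-leaf tree yields predictions for all targets at once, whereas the baseline strategy traverses one independent model per target.

```python
class VectorLeafStump:
    """Depth-1 tree whose two leaves each store one value per target."""

    def __init__(self, threshold, left_values, right_values):
        self.threshold = threshold
        self.left_values = left_values    # leaf payload: one entry per target
        self.right_values = right_values

    def predict(self, x):
        # One traversal produces predictions for every target at once.
        leaf = self.left_values if x < self.threshold else self.right_values
        return list(leaf)


def predict_one_model_per_target(models, x):
    """Baseline strategy: k independent scalar models, one traversal each."""
    return [m(x) for m in models]


stump = VectorLeafStump(threshold=0.5, left_values=[1.0, 10.0], right_values=[2.0, 20.0])
print(stump.predict(0.3))  # [1.0, 10.0]

# Equivalent output from two separate single-target "models".
models = [lambda x: 1.0 if x < 0.5 else 2.0, lambda x: 10.0 if x < 0.5 else 20.0]
print(predict_one_model_per_target(models, 0.3))  # [1.0, 10.0]
```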
