[Link to notebook](https://github.com/microsoft/FLAML/blob/main/notebook/zeroshot_lightgbm.ipynb) | [Open in colab](https://colab.research.google.com/github/microsoft/FLAML/blob/main/notebook/zeroshot_lightgbm.ipynb)
## Flamlized LGBMClassifier

### Prerequisites

This example requires the `[autozero]` option:

```bash
pip install flaml[autozero] lightgbm openml
```

### Zero-shot AutoML

```python
from flaml.automl.data import load_openml_dataset
from flaml.default import LGBMClassifier
from flaml.automl.ml import sklearn_metric_loss_score

# Load an example dataset from OpenML
# (the linked notebook uses the Airlines dataset, id 1169)
X_train, X_test, y_train, y_test = load_openml_dataset(dataset_id=1169, data_dir="./")

# The "flamlized" LGBMClassifier chooses data-dependent hyperparameters
# at fit time, without running any hyperparameter search
clf = LGBMClassifier()
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
print("test accuracy loss:", sklearn_metric_loss_score("accuracy", y_pred, y_test))
```
## Task-Oriented AutoML

The following excerpt is from `website/docs/Use-Cases/Task-Oriented-AutoML.md`.
If users provide the minimal inputs only, `AutoML` uses the default settings. The optimization metric is specified via the `metric` argument. It can be either a string which refers to a built-in metric, or a user-defined function.

- Built-in metric.
  - 'accuracy': 1 - accuracy as the corresponding metric to minimize.
  - 'log_loss': default metric for multiclass classification.
  - 'r2': 1 - r2_score as the corresponding metric to minimize. Default metric for regression.
  - ...
  - 'ap': minimize 1 - average_precision_score.
  - 'ndcg': minimize 1 - ndcg_score.
  - 'ndcg@k': minimize 1 - ndcg_score@k, where k is an integer.
- User-defined function. A customized metric function requires the following (input) signature, and returns the input config's value in terms of the metric you want to minimize, together with a dictionary of auxiliary information of your choice:
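As a sketch of such a function (adapted from the custom-metric example in the FLAML documentation; the particular weighting of validation vs. training loss below is illustrative, not required):

```python
import time

from sklearn.metrics import log_loss


def custom_metric(
    X_val,
    y_val,
    estimator,
    labels,
    X_train,
    y_train,
    weight_val=None,
    weight_train=None,
    *args,
):
    # Measure validation loss and per-sample prediction time
    start = time.time()
    y_pred = estimator.predict_proba(X_val)
    pred_time = (time.time() - start) / len(X_val)
    val_loss = log_loss(y_val, y_pred, labels=labels, sample_weight=weight_val)
    # Also compute training loss, so the objective can penalize overfitting
    y_pred = estimator.predict_proba(X_train)
    train_loss = log_loss(y_train, y_pred, labels=labels, sample_weight=weight_train)
    alpha = 0.5  # illustrative trade-off coefficient
    # First return value: the scalar to minimize.
    # Second return value: auxiliary info that FLAML logs for you.
    return val_loss * (1 + alpha) - alpha * train_loss, {
        "val_loss": val_loss,
        "train_loss": train_loss,
        "pred_time": pred_time,
    }
```

The function is then passed as `metric=custom_metric` to `AutoML.fit`.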
To tune a custom estimator that is not built-in, you need to define a custom estimator class (for example, by deriving from `SKLearnEstimator`):

```python
from flaml.automl.model import SKLearnEstimator

# SKLearnEstimator is derived from BaseEstimator
import rgf


class MyRegularizedGreedyForest(SKLearnEstimator):
    def __init__(self, task="binary", **config):
        super().__init__(task, **config)

        if isinstance(task, str):
            from flaml.automl.task.factory import task_factory

            task = task_factory(task)

        # Pick the underlying RGF estimator class based on the task type
        if task.is_classification():
            from rgf.sklearn import RGFClassifier

            self.estimator_class = RGFClassifier
        else:
            from rgf.sklearn import RGFRegressor

            self.estimator_class = RGFRegressor
```