Commit 24e4541 (1 parent: d6e0932)

finalize docs and readme

6 files changed: +31 additions, -42 deletions


README.md

Lines changed: 15 additions & 13 deletions
@@ -38,7 +38,7 @@ pip install -e .
 
 ## 🎯 Getting Started
 
-The example below shows how to optimize hyperparameters for a RandomForest classifier.
+The example below shows how to optimize hyperparameters for a RandomForest classifier. You can find more examples in the [documentation](https://confopt.readthedocs.io/).
 
 ### Step 1: Import Required Libraries
 
@@ -50,7 +50,7 @@ from sklearn.datasets import load_wine
 from sklearn.model_selection import train_test_split
 from sklearn.metrics import accuracy_score
 ```
-We import the necessary libraries for tuning and model evaluation. The `load_wine` function is used to load the wine dataset, which serves as our example data for optimizing the hyperparameters of the RandomForest classifier.
+We import the necessary libraries for tuning and model evaluation. The `load_wine` function is used to load the wine dataset, which serves as our example data for optimizing the hyperparameters of the RandomForest classifier (the dataset is trivial and 100% accuracy is easily reached; it is used for example purposes only).
 
 ### Step 2: Define the Objective Function
 
@@ -72,18 +72,18 @@ def objective_function(configuration):
 
     return accuracy_score(y_test, predictions)
 ```
-This function defines the objective we want to optimize. It loads the wine dataset, splits it into training and testing sets, and trains a RandomForest model using the provided configuration. The function returns the accuracy score, which serves as the optimization metric.
+This function defines the objective we want to optimize. It loads the wine dataset, splits it into training and testing sets, and trains a RandomForest model using the provided configuration. The function returns test accuracy, which is the objective value ConfOpt will optimize.
 
 ### Step 3: Define the Search Space
 
 ```python
 search_space = {
-    'n_estimators': IntRange(50, 200),
-    'max_features': FloatRange(0.1, 1.0),
-    'criterion': CategoricalRange(['gini', 'entropy', 'log_loss'])
+    'n_estimators': IntRange(min_value=50, max_value=200),
+    'max_features': FloatRange(min_value=0.1, max_value=1.0),
+    'criterion': CategoricalRange(choices=['gini', 'entropy', 'log_loss'])
 }
 ```
-Here, we specify the search space for hyperparameters. This includes defining the range for the number of estimators, the proportion of features to consider when looking for the best split, and the criterion for measuring the quality of a split.
+Here, we specify the search space for hyperparameters. In this RandomForest example, it covers the range for the number of estimators, the proportion of features to consider when looking for the best split, and the criterion for measuring the quality of a split.
 
 ### Step 4: Create and Run the Tuner
 
@@ -95,7 +95,7 @@ tuner = ConformalTuner(
 )
 tuner.tune(max_searches=50, n_random_searches=10)
 ```
-We initialize the `ConformalTuner` with the objective function and search space. The tuner is then run to find the best hyperparameters by maximizing the accuracy score.
+We initialize the `ConformalTuner` with the objective function and search space. The `tune` method then starts the hyperparameter search and finds the hyperparameters that maximize test accuracy.
 
 ### Step 5: Retrieve and Display Results
 
@@ -106,7 +106,7 @@ best_score = tuner.get_best_value()
 print(f"Best accuracy: {best_score:.4f}")
 print(f"Best parameters: {best_params}")
 ```
-Finally, we retrieve the best parameters and score from the tuning process and print them to the console for review.
+Finally, we retrieve the optimization's best parameters and test accuracy score and print them to the console for review.
 
 For detailed examples and explanations see the [documentation](https://confopt.readthedocs.io/).
 
@@ -121,7 +121,7 @@ For detailed examples and explanations see the [documentation](https://confopt.r
 - **[API Reference](https://confopt.readthedocs.io/en/latest/api_reference.html)**:
   Complete reference for main classes, methods, and parameters.
 
-## 🤝 Contributing
+## 📈 Benchmarks
 
 TBI
 
@@ -135,9 +135,11 @@ ConfOpt implements surrogate models and acquisition functions from the following
 > **Optimizing Hyperparameters with Conformal Quantile Regression**
 > [PMLR, 2023](https://proceedings.mlr.press/v202/salinas23a/salinas23a.pdf)
 
-## 📈 Benchmarks
+## 🤝 Contributing
 
-TBI
+If you'd like to contribute, please email [r.doyle.edu@gmail.com](mailto:r.doyle.edu@gmail.com) with a quick summary of the feature you'd like to add, and we can discuss it before setting up a PR!
+
+If you want to contribute a fix for a new bug, first raise an [issue](https://github.com/rick12000/confopt/issues) on GitHub, then email [r.doyle.edu@gmail.com](mailto:r.doyle.edu@gmail.com) referencing the issue. Issues are monitored regularly; only send an email if you want to contribute a fix.
 
 ## 📄 License
 
@@ -148,5 +150,5 @@ TBI
 <div align="center">
   <strong>Ready to take your hyperparameter optimization to the next level?</strong><br>
   <a href="https://confopt.readthedocs.io/en/latest/getting_started.html">Get Started</a> |
-  <a href="https://confopt.readthedocs.io/en/latest/api_reference.html">API Docs</a> |
+  <a href="https://confopt.readthedocs.io/en/latest/api_reference.html">API Docs</a>
 </div>
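The README walkthrough above defines the search space as a dict of typed ranges from which the tuner draws candidate configurations. As a quick illustration of the semantics, the sketch below uses hypothetical stand-in classes (not ConfOpt's actual `IntRange`/`FloatRange`/`CategoricalRange` implementations) to show what one sampled candidate configuration looks like:

```python
import random

# Hypothetical stand-ins for ConfOpt's range types, for illustration only.
class IntRange:
    def __init__(self, min_value, max_value):
        self.min_value, self.max_value = min_value, max_value
    def sample(self):
        return random.randint(self.min_value, self.max_value)

class FloatRange:
    def __init__(self, min_value, max_value):
        self.min_value, self.max_value = min_value, max_value
    def sample(self):
        return random.uniform(self.min_value, self.max_value)

class CategoricalRange:
    def __init__(self, choices):
        self.choices = choices
    def sample(self):
        return random.choice(self.choices)

# The search space from the README, in the new keyword-argument style.
search_space = {
    'n_estimators': IntRange(min_value=50, max_value=200),
    'max_features': FloatRange(min_value=0.1, max_value=1.0),
    'criterion': CategoricalRange(choices=['gini', 'entropy', 'log_loss']),
}

def sample_configuration(space):
    """Draw one candidate configuration, as a tuner's random phase might."""
    return {name: rng.sample() for name, rng in space.items()}

config = sample_configuration(search_space)
print(config)  # e.g. {'n_estimators': 137, 'max_features': 0.42, 'criterion': 'gini'}
```

Each sampled dict has the same keys as the search space, which is exactly the `configuration` argument the objective function receives.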

confopt/tuning.py

Lines changed: 14 additions & 13 deletions
@@ -105,7 +105,7 @@ def __init__(
         objective_function: callable,
         search_space: Dict[str, ParameterRange],
         minimize: bool = True,
-        n_candidates: int = 3000,
+        n_candidates: int = 5000,
         warm_starts: Optional[List[Tuple[Dict, float]]] = None,
         dynamic_sampling: bool = True,
     ) -> None:
@@ -652,28 +652,29 @@ def tune(
         Example:
             Basic usage::
 
+                import numpy as np
                 from confopt.tuning import ConformalTuner
-                from confopt.wrapping import IntRange, FloatRange
+                from confopt.wrapping import FloatRange
+
+                def objective(configuration):
+                    x1 = configuration['x1']
+                    x2 = configuration['x2']
+                    A = 10
+                    n = 2
+                    return A * n + (x1**2 - A * np.cos(2 * np.pi * x1)) + (x2**2 - A * np.cos(2 * np.pi * x2))
 
                 search_space = {
-                    'lr': FloatRange(0.001, 0.1, log_scale=True),
-                    'units': IntRange(32, 512)
+                    'x1': FloatRange(min_value=-5.12, max_value=5.12),
+                    'x2': FloatRange(min_value=-5.12, max_value=5.12)
                 }
 
-                def objective(configuration):
-                    model = SomeModel(
-                        learning_rate=configuration['lr'],
-                        hidden_units=configuration['units']
-                    )
-                    return model.evaluate()
-
                 tuner = ConformalTuner(
                     objective_function=objective,
                     search_space=search_space,
-                    metric_optimization='maximize'
+                    minimize=True
                 )
 
-                tuner.tune(n_random_searches=10, max_searches=100)
+                tuner.tune(n_random_searches=10, max_searches=50)
 
                 best_config = tuner.get_best_params()
                 best_score = tuner.get_best_value()
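The docstring's new example minimizes the two-dimensional Rastrigin function, a standard optimization benchmark whose global minimum is 0 at the origin (hence `minimize=True`). The objective itself can be sanity-checked standalone with numpy alone, no ConfOpt required:

```python
import numpy as np

def objective(configuration):
    # Two-dimensional Rastrigin function, as in the new docstring example.
    x1 = configuration['x1']
    x2 = configuration['x2']
    A = 10
    n = 2
    return A * n + (x1**2 - A * np.cos(2 * np.pi * x1)) + (x2**2 - A * np.cos(2 * np.pi * x2))

# Global minimum is 0 at the origin; away from it the cosine terms add ripples.
print(objective({'x1': 0.0, 'x2': 0.0}))  # 0.0
```

Replacing the earlier `SomeModel` snippet with a closed-form benchmark makes the docstring self-contained and cheap to run.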

docs/advanced_usage.rst

Lines changed: 1 addition & 2 deletions
@@ -52,8 +52,7 @@ Let's use a ``QuantileConformalSearcher`` with a ``LowerBoundSampler`` and a Qua
         adapter="DtACI",  # Conformal adapter to use for calibration
         beta_decay="logarithmic_decay",  # Lower Bound Sampling decay function
         c=1.0  # Lower Bound Sampling decay rate
-    ),
-    n_pre_conformal_trials=32  # Minimum number of trials before conformal calibration kicks in
+    )
 )
 
 And pass our custom searcher to the tuner to use it:

docs/basic_usage/classification_example.rst

Lines changed: 0 additions & 7 deletions
@@ -194,13 +194,6 @@ After that runs, you can retrieve the best hyperparameters or the best score fou
     best_params = tuner.get_best_params()
     best_accuracy = tuner.get_best_value()
 
-Expected output:
-
-.. code-block:: text
-
-    Best accuracy: 0.9815
-    Best parameters: {'n_estimators': 187, 'max_features': 0.73, 'criterion': 'entropy'}
-
 Which you can use to instantiate a tuned version of your model:
 
 .. code-block:: python
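The retained context line notes that the best parameters "can be used to instantiate a tuned version of your model"; this works because `get_best_params` appears to return a plain dict keyed by hyperparameter name (as the removed expected-output block showed), which unpacks straight into an estimator constructor. A minimal sketch with a hypothetical stand-in estimator (in the docs the target would be sklearn's `RandomForestClassifier`; the parameter values below are illustrative):

```python
# Hypothetical stand-in estimator showing the dict-unpacking pattern; in the
# actual docs the target would be sklearn's RandomForestClassifier.
class StubClassifier:
    def __init__(self, n_estimators=100, max_features=1.0, criterion='gini'):
        self.n_estimators = n_estimators
        self.max_features = max_features
        self.criterion = criterion

# Shape of what tuner.get_best_params() returns (values here are illustrative).
best_params = {'n_estimators': 187, 'max_features': 0.73, 'criterion': 'entropy'}

tuned_model = StubClassifier(**best_params)
print(tuned_model.criterion)  # entropy
```

This only works cleanly when search-space keys match the estimator's constructor keyword arguments, which is why the README names them `n_estimators`, `max_features`, and `criterion`.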

docs/basic_usage/regression_example.rst

Lines changed: 0 additions & 7 deletions
@@ -187,13 +187,6 @@ After that runs, you can retrieve the best hyperparameters or the best score fou
     best_params = tuner.get_best_params()
     best_mse = tuner.get_best_value()
 
-Expected output:
-
-.. code-block:: text
-
-    Best MSE: 2847.32
-    Best parameters: {'n_estimators': 180, 'max_depth': 12, 'min_samples_split': 2}
-
 Which you can use to instantiate a tuned version of your model:
 
 .. code-block:: python

docs/roadmap.rst

Lines changed: 1 addition & 0 deletions
@@ -12,6 +12,7 @@ Functionality
 * **Multi Objective Support**: Allow searchers to optimize for more than one objective (eg. accuracy and runtime).
 * **Transfer Learning Support**: Allow searchers to use a pretrained model or an observation matcher as a starting point for tuning.
 * **Local Search**: Expected Improvement sampler currently only performs one off configuration scoring. Local search (where a local neighbourhood around the initial EI optimum is explored as a second pass refinement) can significantly improve performance.
+* **Hierarchical Hyperparameters**: Improved handling for hierarchical hyperparameter spaces (currently supported via flattening of the hyperparameters, but potentially suboptimal for surrogate learning).
 
 Resource Management
 ---------------------
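The new roadmap bullet says hierarchical spaces are currently handled "via flattening of the hyperparameters". As a rough sketch of what flattening can mean (the structure below is an assumption for illustration, not ConfOpt's internals): conditional branches are merged into one flat space, so parameters that are inactive for a given configuration still get sampled and must be ignored by the objective, which is part of why the note flags flattening as potentially suboptimal for surrogate learning:

```python
# Hypothetical hierarchical space: 'l1_ratio' only matters when solver == 'saga'.
hierarchical_space = {
    'solver': ['lbfgs', 'saga'],
    'conditional': {
        'saga': {'l1_ratio': (0.0, 1.0)},  # active only for the 'saga' branch
    },
}

def flatten(space):
    """Merge all conditional branches into one flat dict of independent ranges.

    After flattening, inactive parameters are still part of every sampled
    configuration; the surrogate model sees them even when they have no
    effect on the objective.
    """
    flat = {'solver': space['solver']}
    for branch in space['conditional'].values():
        flat.update(branch)
    return flat

flat_space = flatten(hierarchical_space)
print(sorted(flat_space))  # ['l1_ratio', 'solver']
```

A surrogate trained on the flat space must learn on its own that `l1_ratio` is irrelevant for `lbfgs` configurations, which is the inefficiency the roadmap item aims to address.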
