# ReHLine-Python: Efficient Solver for ERM with PLQ Loss and Linear Constraints <a href="https://github.com/softmin/ReHLine"><img src="doc/source/figs/logo.png" align="right" height="138" /></a>
> **Fast, scalable, and scikit-learn compatible optimization for machine learning**
**ReHLine-Python** is the official Python implementation of ReHLine, a powerful solver for large-scale **empirical risk minimization (ERM) problems** with **convex piecewise linear-quadratic (PLQ) loss functions** and **linear constraints**. Built on a high-performance C++ core with seamless Python integration, ReHLine delivers exceptional speed while remaining easy to use.
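Concretely, ReHLine targets problems of the following form (notation is a sketch following the [ReHLine paper](https://openreview.net/pdf?id=3pEBW2UPAD); see the documentation for the exact formulation):

$$
\min_{\beta \in \mathbb{R}^d} \ \sum_{i=1}^{n} L_i(\mathbf{x}_i^\top \beta) + \frac{1}{2}\|\beta\|_2^2, \qquad \text{s.t. } \mathbf{A}\beta + \mathbf{b} \geq \mathbf{0},
$$

where each $L_i$ is a convex piecewise linear-quadratic loss and $\mathbf{A} \in \mathbb{R}^{K \times d}$, $\mathbf{b} \in \mathbb{R}^{K}$ encode $K$ linear constraints on the parameter vector $\beta$.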
See more details in the [ReHLine documentation](https://rehline-python.readthedocs.io).
## ✨ Key Features
- **🚀 Blazing Fast**: Provable linear convergence with per-iteration cost linear in the sample size; scales to millions of samples
- **🎯 Versatile**: Supports any convex PLQ loss (hinge, check, Huber, and more)
- **🔒 Constrained Optimization**: Handles linear equality and inequality constraints
- **📊 Scikit-Learn Compatible**: Drop-in estimators with `GridSearchCV` and `Pipeline` support
- **🐍 Pythonic API**: Both low-level and high-level interfaces for flexibility
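Under the hood, ReHLine exploits the fact that any convex PLQ loss can be written as a finite sum of ReLU and ReHU (rectified Huber) terms. The sketch below implements these two building blocks in plain NumPy and shows the hinge loss as a single ReLU term; it is only an illustration of the decomposition, not the solver's internal code:

```python
import numpy as np

def relu(z):
    """ReLU(z) = max(z, 0)."""
    return np.maximum(z, 0.0)

def rehu(z, tau):
    """Rectified Huber: 0 for z <= 0, z^2/2 for 0 < z <= tau,
    tau * (z - tau/2) for z > tau."""
    z = np.asarray(z, dtype=float)
    return np.where(z <= 0, 0.0,
                    np.where(z <= tau, 0.5 * z**2, tau * (z - 0.5 * tau)))

# The hinge loss L(z) = max(1 - z, 0) is a single ReLU term with u = -1, v = 1
z = np.array([-1.0, 0.0, 0.5, 2.0])
hinge = relu(-1.0 * z + 1.0)
print(hinge)  # [2.  1.  0.5 0. ]
```

Smooth pieces (e.g., the quadratic region of the Huber loss) are covered by ReHU terms, which is what lets one solver handle the whole PLQ family.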
## 📦 Installation
### Quick Install
```bash
pip install rehline
```
## 🚀 Quick Start
### Scikit-Learn Style API (Recommended)
ReHLine provides `plq_Ridge_Classifier` and `plq_Ridge_Regressor` that work seamlessly with scikit-learn:
```python
from rehline import plq_Ridge_Classifier
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Generate dataset
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

# Build a scikit-learn pipeline around the ReHLine classifier
# (the loss/C arguments below are illustrative; see the rehline docs
# for the estimator's exact parameters)
pipe = Pipeline([
    ('scaler', StandardScaler()),
    ('clf', plq_Ridge_Classifier(loss={'name': 'svm'}, C=1.0)),
])

# Tune the regularization strength with standard cross-validation
search = GridSearchCV(pipe, {'clf__C': [0.1, 1.0, 10.0]}, cv=5)
search.fit(X_train, y_train)
print(search.score(X_test, y_test))
```
> **Note**: "∞" indicates the competing solver failed to produce a valid solution or exceeded time limits. Results from [NeurIPS 2023 paper](https://openreview.net/pdf?id=3pEBW2UPAD).
### Reproducible Benchmarks (powered by benchopt)
All benchmarks are reproducible via [benchopt](https://github.com/benchopt/benchopt) at our [ReHLine-benchmark](https://github.com/softmin/ReHLine-benchmark) repository.