skglm/experimental/quantile_huber.py (19 additions, 8 deletions)
@@ -17,7 +17,8 @@ class QuantileHuber(BaseDatafit):
     ----------
     delta : float, positive
         Width of the quadratic region around the origin. Larger values create
-        more smoothing. As delta approaches 0, this approaches the standard Pinball loss.
+        more smoothing. As delta approaches 0, this approaches the standard
+        Pinball loss.
 
     quantile : float, between 0 and 1
         The desired quantile level. For example, 0.5 corresponds to the median.
@@ -34,8 +35,8 @@ class QuantileHuber(BaseDatafit):
34
35
(1-\tau) (-r - \frac{\delta}{2}) & \text{if } r < -\delta
35
36
\end{cases}
36
37
37
-
where :math:`r = y - Xw` is the residual, :math:`\tau` is the target quantile,
38
-
and :math:`\delta` controls the smoothing region width.
38
+
where :math:`r = y - Xw` is the residual, :math:`\tau` is the target
39
+
quantile, and :math:`\delta` controls the smoothing region width.
39
40
40
41
The gradient is given by:
41
42
@@ -47,11 +48,15 @@ class QuantileHuber(BaseDatafit):
         -(1-\tau) & \text{if } r < -\delta
         \end{cases}
 
-    This formulation provides twice-differentiable smoothing while maintaining quantile estimation properties. The approach is similar to convolution smoothing with a uniform kernel.
+    This formulation provides twice-differentiable smoothing while maintaining
+    quantile estimation properties. The approach is similar to convolution
+    smoothing with a uniform kernel.
 
     Special cases:
-    - When :math:`\\tau = 0.5`, this reduces to the symmetric Huber loss used for median regression.
-    - As :math:`\\delta \\to 0`, it converges to the standard Pinball loss.
+    - When :math:`\\tau = 0.5`, this reduces to the symmetric Huber
+      loss used for median regression.
+    - As :math:`\\delta \\to 0`, it converges to the standard
+      Pinball loss.