Commit 1a619ee

use Pareto distribution for Level 1 Problem 99
With inputs sampled from Unif(0,1), every residual has magnitude below 1, so the Huber loss stays in its quadratic branch and is effectively MSE, which we know can be hacked via statistical properties of the loss function and its inputs. We use the Pareto distribution to sample inputs with finite mean and infinite variance to prevent hacking this way.
1 parent b15a6c9 commit 1a619ee
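
A quick sketch of the rationale above (not part of the commit; shapes are illustrative and the commit's scale factor is omitted): with both tensors drawn from Unif(0,1), every residual satisfies |prediction - target| < 1, so Smooth L1 with its default beta = 1 never leaves its quadratic branch and equals exactly 0.5 * MSE, while heavy-tailed Pareto samples routinely push residuals into the linear branch.

import torch
import torch.nn.functional as F
from torch.distributions import Pareto

# Unif(0,1) inputs: every residual is < 1, so Huber (beta=1) is exactly 0.5 * MSE
preds, targets = torch.rand(4096), torch.rand(4096)
print(torch.allclose(F.smooth_l1_loss(preds, targets), 0.5 * F.mse_loss(preds, targets)))  # True

# Pareto(scale=0.01, alpha=0.15) inputs: heavy tails push many residuals to |x| >= 1,
# so the linear branch of the loss is exercised as well
p = Pareto(0.01, 0.15).sample((4096,))
t = Pareto(0.01, 0.15).sample((4096,))
print(((p - t).abs() >= 1).float().mean())  # a sizeable fraction falls in the linear regime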

File tree

1 file changed (+5, -1 lines)

KernelBench/level1/96_HuberLoss.py

Lines changed: 5 additions & 1 deletion
@@ -1,6 +1,8 @@
 import torch
 import torch.nn as nn
 
+from torch.distributions import Pareto
+
 class Model(nn.Module):
     """
     A model that computes Smooth L1 (Huber) Loss for regression tasks.
@@ -20,7 +22,9 @@ def forward(self, predictions, targets):
 
 def get_inputs():
     scale = torch.rand(())
-    return [torch.rand(batch_size, *input_shape)*scale, torch.rand(batch_size, *input_shape)]
+    predictions = Pareto(0.01, 0.15).sample((batch_size, *input_shape))
+    targets = Pareto(0.01, 0.15).sample((batch_size, *input_shape))
+    return [predictions*scale, targets]
 
 def get_init_inputs():
     return []
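
A note on the parameterization used in the new get_inputs (a sanity-check sketch, not part of the diff): torch.distributions.Pareto takes the scale first and the shape alpha second, so Pareto(0.01, 0.15) has support [0.01, inf) and a very heavy tail.

import torch
from torch.distributions import Pareto

dist = Pareto(0.01, 0.15)        # Pareto(scale=0.01, alpha=0.15)
x = dist.sample((10_000,))
print(bool(x.min() >= 0.01))     # True: support is [scale, inf)
print(x.median().item())         # median is roughly 1 for these parameters
print(x.max().item())            # the heavy tail makes the largest draw enormous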
