Commit a3c7f8a

use Pareto Distribution for Level 1 Problem 100
With inputs sampled from Unif(0,1), the expected value of the output can be computed directly from the mean of the targets. We instead sample inputs from a Pareto distribution, which has finite mean and infinite variance, to prevent hacking the problem this way.
1 parent bb8ec90 commit a3c7f8a

File tree

1 file changed: +5 -1 lines changed

KernelBench/level1/100_HingeLoss.py

Lines changed: 5 additions & 1 deletion
@@ -1,6 +1,8 @@
 import torch
 import torch.nn as nn
 
+from torch.distributions import Pareto
+
 class Model(nn.Module):
     """
     A model that computes Hinge Loss for binary classification tasks.
@@ -19,7 +21,9 @@ def forward(self, predictions, targets):
 dim = 1
 
 def get_inputs():
-    return [torch.rand(batch_size, *input_shape), torch.randint(0, 2, (batch_size,)).float() * 2 - 1]
+    predictions = Pareto(0.01, 1.5).sample((batch_size, *input_shape))
+    targets = torch.randint(0, 2, (batch_size,)).float() * 2 - 1
+    return [predictions, targets]
 
 def get_init_inputs():
     return []
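
To illustrate the hack the commit message describes, here is a small sketch (not part of the repository; the 1-D shapes and the batch_size below are assumptions for illustration): with predictions drawn from Unif(0,1) and targets in {-1, +1}, the term 1 - p*t stays in [0, 2], so the clamp never fires and the batch hinge loss concentrates around 1 - 0.5 * mean(targets), a value a kernel could return without ever reading the predictions. Heavy-tailed Pareto predictions break that concentration.

import torch
from torch.distributions import Pareto

batch_size = 128  # illustrative only; the problem file defines its own sizes

def hinge_loss(predictions, targets):
    # The quantity the kernel must compute: mean(max(0, 1 - y * f(x))).
    return torch.mean(torch.clamp(1 - predictions * targets, min=0))

targets = torch.randint(0, 2, (batch_size,)).float() * 2 - 1

# Unif(0,1) predictions: 1 - p*t lies in [0, 2], the clamp is inactive, and
# E[loss] = 1 - 0.5 * mean(targets), so this prediction-free guess usually
# sits within the batch's sampling noise of the true loss.
uniform_preds = torch.rand(batch_size)
guess = 1 - 0.5 * targets.mean()
print(hinge_loss(uniform_preds, targets).item(), guess.item())

# Pareto(scale=0.01, alpha=1.5) predictions: finite mean, infinite variance,
# so the batch loss no longer concentrates tightly and the same
# prediction-free guess becomes unreliable across draws.
pareto_preds = Pareto(0.01, 1.5).sample((batch_size,))
print(hinge_loss(pareto_preds, targets).item(), guess.item())

With shape alpha = 1.5, Pareto(0.01, 1.5) has a finite mean (alpha * scale / (alpha - 1) = 0.03) but infinite variance, which is exactly the property the commit message relies on.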
