I found that the solution calculated by Glm is not correct. (Or am I using it wrong?) Here is an example of a simple least-squares problem:
```python
import numpy as np
from numpy.random import default_rng

from yaglm.Glm import Glm

rng = default_rng(seed=1)
num_features = 500
num_data = 2000

# y = X @ alpha + noise
X = rng.random((num_data, num_features))
alpha = rng.random(num_features)
y = X @ alpha + rng.standard_normal(num_data) / 100

model = Glm(fit_intercept=False)
model.fit(X, y)
alpha_yaglm = model.coef_
alpha_np = np.linalg.lstsq(X, y, rcond=None)[0]

loss_np = np.linalg.norm(y - X @ alpha_np)        # = 0.39
loss_yaglm = np.linalg.norm(y - X @ alpha_yaglm)  # = 0.70
```

As you can see, the loss of the numpy least-squares solution is much smaller, even though the two solutions alpha_yaglm and alpha_np agree to roughly the first two decimals. Maybe the yaglm solver terminates too early?
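For what it's worth, an early-terminating solver would produce exactly this pattern. Below is a numpy-only sketch (no yaglm) in which a fixed, small number of gradient-descent steps stands in for a prematurely stopped iterative solver; this is an assumption for illustration, not yaglm's actual algorithm. The exact lstsq residual norm lands near 0.39, matching the value above, while the truncated solver's loss stays noticeably larger:

```python
import numpy as np
from numpy.random import default_rng

# Same synthetic problem as in the report above.
rng = default_rng(seed=1)
num_features = 500
num_data = 2000
X = rng.random((num_data, num_features))
alpha = rng.random(num_features)
y = X @ alpha + rng.standard_normal(num_data) / 100

# Exact least-squares solution.
alpha_np = np.linalg.lstsq(X, y, rcond=None)[0]

# Hypothetical early-terminated solver: plain gradient descent on
# 0.5 * ||y - X @ a||^2, stopped after a fixed iteration budget.
# Step size 1/L with L = ||X||_2^2 (Lipschitz constant of the gradient).
alpha_gd = np.zeros(num_features)
step = 1.0 / np.linalg.norm(X, 2) ** 2
for _ in range(200):
    alpha_gd += step * X.T @ (y - X @ alpha_gd)

loss_np = np.linalg.norm(y - X @ alpha_np)   # residual of the exact solution
loss_gd = np.linalg.norm(y - X @ alpha_gd)   # residual after early termination
print(loss_np, loss_gd, np.abs(alpha_gd - alpha_np).max())
```

Because lstsq minimizes the residual norm exactly, any truncated iterate can only do worse; the gap between the two losses shrinks as the iteration budget grows, which is the behavior to check for in the yaglm fit (e.g. by inspecting or raising its stopping tolerance / iteration limit, wherever those are configured).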