The pyglmnet package gives different estimated coefficients for linear regression depending on which solver you select (e.g. cdfast or batch-gradient), and neither solver agrees with sklearn (I believe they should all agree!). The code below gives an example for both ElasticNet and vanilla Lasso.
Reproducible examples
```python
from sklearn.linear_model import ElasticNet, Lasso
from sklearn.datasets import make_regression
import numpy as np
from pyglmnet.pyglmnet import GLM

# sample some linear regression data
X, y, coef_ = make_regression(n_samples=200, n_features=20,
                              noise=1, n_informative=10, coef=True,
                              random_state=1)
```
ElasticNet
First, let's check elastic net:
```python
alpha = 0.9
l1_ratio = .5
kws = {'distr': 'gaussian',
       'alpha': l1_ratio,
       'reg_lambda': alpha}

# solve with coordinate descent
enet_cd = GLM(solver='cdfast', **kws)
enet_cd.fit(X, y)

# solve with gradient descent
enet_grad = GLM(solver='batch-gradient', **kws)
enet_grad.fit(X, y)

print('pyglmnet gradient vs cd', np.linalg.norm(enet_grad.beta_ - enet_cd.beta_))

# solve with sklearn
enet_sklearn = ElasticNet(alpha=alpha, l1_ratio=l1_ratio)
enet_sklearn.fit(X, y)

print('gradient vs sklearn norm diff', np.linalg.norm(enet_grad.beta_ - enet_sklearn.coef_))
print('cd vs sklearn norm diff', np.linalg.norm(enet_cd.beta_ - enet_sklearn.coef_))
```
And we see the solutions are different:
```
pyglmnet gradient vs cd 4.387709772237268
gradient vs sklearn norm diff 4.776458575567959
cd vs sklearn norm diff 2.198761137967092
```
I played around with the optimization parameters (e.g. tol, learning_rate, max_iter) and could not get any pair of solvers to agree; see the sketch below. Of course, "different" depends on what you think the numerical tolerance should be, but these norms seem large enough to cause concern.
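For concreteness, here is a minimal sketch of the kind of sweep I tried, reusing `kws` from the elastic net example above; the grid values are illustrative, not exhaustive:

```python
# Tighten tol and raise max_iter, then check whether the two pyglmnet
# solvers converge toward each other (tol is GLM's convergence
# threshold, max_iter its iteration cap).
for tol in [1e-4, 1e-6, 1e-8]:
    for max_iter in [1000, 10000]:
        betas = {}
        for solver in ['cdfast', 'batch-gradient']:
            glm = GLM(solver=solver, tol=tol, max_iter=max_iter, **kws)
            glm.fit(X, y)
            betas[solver] = glm.beta_
        diff = np.linalg.norm(betas['batch-gradient'] - betas['cdfast'])
        print('tol=%g, max_iter=%d: norm diff %.4f' % (tol, max_iter, diff))
```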
I should note here that if you do just ridge regression (i.e. l1_ratio=0) the answers do seem to agree.
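For reference, a minimal sketch of that ridge check (in pyglmnet the `alpha` keyword plays the role of `l1_ratio`, so `alpha=0` gives a pure L2 penalty; I'm assuming `cdfast` accepts `alpha=0`):

```python
# pure L2 penalty: alpha=0 in pyglmnet, l1_ratio=0 in sklearn
ridge_kws = {'distr': 'gaussian', 'alpha': 0.0, 'reg_lambda': alpha}

ridge_cd = GLM(solver='cdfast', **ridge_kws)
ridge_cd.fit(X, y)

ridge_grad = GLM(solver='batch-gradient', **ridge_kws)
ridge_grad.fit(X, y)

ridge_sklearn = ElasticNet(alpha=alpha, l1_ratio=0)
ridge_sklearn.fit(X, y)

print('ridge: gradient vs cd', np.linalg.norm(ridge_grad.beta_ - ridge_cd.beta_))
print('ridge: cd vs sklearn', np.linalg.norm(ridge_cd.beta_ - ridge_sklearn.coef_))
```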
Lasso
```python
lasso_tune_param = 0.9
kws = {'distr': 'gaussian',
       'alpha': 1,  # just solve using lasso
       'reg_lambda': lasso_tune_param,  # lasso penalty parameter
       }

# solve with coordinate descent
lasso_cd = GLM(solver='cdfast', **kws)
lasso_cd.fit(X, y)

# solve with gradient descent
lasso_grad = GLM(solver='batch-gradient', **kws)
lasso_grad.fit(X, y)

print('pyglmnet gradient vs cd norm diff', np.linalg.norm(lasso_grad.beta_ - lasso_cd.beta_))

# solve with sklearn
lasso_sklearn = Lasso(alpha=lasso_tune_param)
lasso_sklearn.fit(X, y)

print('gradient vs sklearn norm diff', np.linalg.norm(lasso_grad.beta_ - lasso_sklearn.coef_))
print('cd vs sklearn norm diff', np.linalg.norm(lasso_cd.beta_ - lasso_sklearn.coef_))
```
Again, we get different answers:
```
pyglmnet gradient vs cd norm diff 13.940132420883737
gradient vs sklearn norm diff 13.548769667913621
cd vs sklearn norm diff 0.5271678205389818
```
Perhaps coordinate descent is getting close to sklearn, but I think that norm is still large enough to be concerning.
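One way to make "concerning" precise is to evaluate sklearn's Lasso objective, `(1 / (2 * n)) * ||y - X @ w - b||**2 + alpha * ||w||_1`, at each fitted solution: for a fixed penalty, the coefficients with the lower value are closer to the true minimizer. A minimal sketch, assuming both libraries fit an intercept (their default) and that pyglmnet exposes it as `beta0_`:

```python
def lasso_objective(w, b, penalty):
    """sklearn's Lasso objective at coefficients w and intercept b."""
    resid = y - X @ w - b
    return resid @ resid / (2 * len(y)) + penalty * np.abs(w).sum()

for name, w, b in [('pyglmnet cd', lasso_cd.beta_, lasso_cd.beta0_),
                   ('pyglmnet gradient', lasso_grad.beta_, lasso_grad.beta0_),
                   ('sklearn', lasso_sklearn.coef_, lasso_sklearn.intercept_)]:
    print('objective (%s): %.6f' % (name, lasso_objective(w, b, lasso_tune_param)))
```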
Software versions
I am using Python 3.6.12, pyglmnet 1.1, and sklearn 0.24.1.
Do you mind checking on the development version? I get:
```
pyglmnet gradient vs cd 0.8964272943835351
gradient vs sklearn norm diff 0.0018012674728990078
cd vs sklearn norm diff 0.8961793549855578
```

```
pyglmnet gradient vs cd norm diff 1.390974023209033
gradient vs sklearn norm diff 0.0014402782631833914
cd vs sklearn norm diff 1.3908436488985736
```
We have made some fixes since the release, and I'm pretty confident the batch-gradient solver is correct. Regarding cdfast, I will defer to what @pavanramkumar says.
I installed version 1.2.dev0 from GitHub and get the same answers as you do above, i.e. batch-gradient seems correct, but there still seems to be an issue with cdfast.