Why is ridge regression with alpha=0 not the same as normal linear regression?
In theory, ridge regression with alpha=0 should produce the same coefficients as ordinary linear regression, but in scikit-learn they are slightly different. Does anyone have an idea why this happens? (The datasets are normalized.)
from sklearn import linear_model, metrics

# Plain least squares
linear_regressor = linear_model.LinearRegression(fit_intercept=False)
linear_regressor.fit(X_train, Y_train)
y_train_prime = linear_regressor.predict(X_train)
linear_regressor_MSE = metrics.mean_squared_error(y_train_prime, Y_train)

# Ridge with no regularization (alpha=0)
ridge_regressor = linear_model.Ridge(alpha=0, fit_intercept=False)
ridge_regressor.fit(X_train, Y_train)
y_train_prime_ridge = ridge_regressor.predict(X_train)
ridge_regressor_MSE = metrics.mean_squared_error(y_train_prime_ridge, Y_train)
machine-learning scikit-learn
1
Please update your question by showing exactly what the differences are; even better, use an openly available dataset and construct a minimal, complete, and verifiable example. As is, there is not much sense in showing us the commands for the MSE etc. for a dataset unknown to us without also sharing the actual results...
– desertnaut
Nov 9 at 15:25
It can be one of these parameters (from here: scikit-learn.org/stable/modules/generated/…): solver, random_state, tol, max_iter. Try solver='lsqr' first. It would also be better to add some information on how different the weights are.
– avchauzov
Nov 9 at 21:21
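As the comments suggest, the likely culprit is the solver: `LinearRegression` solves least squares via SVD, while `Ridge` defaults to a different iterative or Cholesky-based routine with its own tolerance, so with alpha=0 the two can disagree at roughly the solver's tolerance. A minimal reproducible sketch on an openly available dataset (`load_diabetes` is just an illustrative choice, not the original poster's data; forcing `solver='svd'` is one way to remove the solver as a source of difference):

```python
import numpy as np
from sklearn.datasets import load_diabetes
from sklearn.linear_model import LinearRegression, Ridge

# Public dataset so the comparison is reproducible.
X, y = load_diabetes(return_X_y=True)

# Ordinary least squares (SVD-based solve internally).
lr = LinearRegression(fit_intercept=False).fit(X, y)

# Ridge with alpha=0 is the same objective; pinning the solver to
# an SVD-based solve makes it numerically comparable to the above.
# With the default solver, small coefficient differences at the
# level of the solver tolerance are expected instead.
ridge = Ridge(alpha=0, fit_intercept=False, solver='svd').fit(X, y)

# Largest coefficient discrepancy; should be near machine precision here.
max_diff = np.max(np.abs(lr.coef_ - ridge.coef_))
print(max_diff)
```

Note that recent scikit-learn versions emit a warning for `Ridge(alpha=0)` recommending `LinearRegression` instead, precisely because ridge's solvers are not tuned for the unregularized case.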
asked Nov 9 at 15:17
shahriar
42