
With an outcome variable and two correlated regressors...

set.seed(321)

sampleSize <- 10000
x1 <- rnorm(sampleSize)
x2 <- 0.3*x1 + rnorm(sampleSize)          # correlated with x1
y  <- 1 + 2*x1 + 3*x2 + rnorm(sampleSize)

...I first compute the ridge estimates analytically...

$\hat{\boldsymbol{\beta}}_{ridge}=(\boldsymbol{X}'\boldsymbol{X}+\lambda \boldsymbol{I}_k)^{-1}\boldsymbol{X}'\boldsymbol{Y}$

xMatrix <- cbind(rep(1, sampleSize), x1, x2)
lambdaMatrix <- 3000 * diag(3)  # penalizes all three coefficients, including the intercept
ridgeEstimates <- solve(t(xMatrix) %*% xMatrix + lambdaMatrix) %*% t(xMatrix) %*% y

ridgeEstimates

#         [,1]
#    0.7654183
# x1 1.6649745
# x2 2.4254723

...then with glmnet:

ridgeFit <- glmnet::glmnet(xMatrix, y, alpha = 0, lambda = 3000)
coef(ridgeFit)

#4 x 1 sparse Matrix of class "dgCMatrix"
#                     s0
#(Intercept) 0.956700441
#            .          
#x1          0.004089457
#x2          0.005014845

Why are they so different?

suckrates

I think there is a factor of 1/n difference, i.e. glmnet is solving MSE + lambda while you are solving SSE + lambda. – seanv507 Dec 06 '18 at 11:24
You can have a look at these answers for a complete discussion: https://stats.stackexchange.com/questions/129179/why-is-glmnet-ridge-regression-giving-me-a-different-answer-than-manual-calculat – Gi_F. Dec 06 '18 at 11:29
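Following up on seanv507's comment, here is a base-R sketch of that reconciliation. It assumes (this is my reading of glmnet, not stated in the question) that glmnet minimizes (1/(2n))·RSS + λ·penalty, never penalizes the intercept, and standardizes the predictors by default, so the manual analogue of the call above centers the data, drops the intercept from the penalty, and uses λ_glmnet = 3000/n:

```r
# Sketch under the assumptions above: mimic glmnet(..., alpha = 0,
# lambda = 3000/n, standardize = FALSE) with the closed-form ridge solution.
set.seed(321)
n  <- 10000
x1 <- rnorm(n)
x2 <- 0.3*x1 + rnorm(n)
y  <- 1 + 2*x1 + 3*x2 + rnorm(n)

lambda_glmnet <- 3000 / n                                 # glmnet-scale lambda
Xc <- scale(cbind(x1, x2), center = TRUE, scale = FALSE)  # center; no intercept column
yc <- y - mean(y)

# On centered data the penalty enters as n * lambda_glmnet = 3000,
# matching the SSE-scale penalty used in the manual calculation:
beta  <- solve(crossprod(Xc) + n * lambda_glmnet * diag(2), crossprod(Xc, yc))
beta0 <- mean(y) - sum(colMeans(cbind(x1, x2)) * beta)    # intercept from the means
```

With `standardize = FALSE` and `lambda = 3000/n` in the glmnet call, `coef(ridgeFit)` should come close to `(beta0, beta)`; the default `standardize = TRUE` fits on unit-variance predictors and back-transforms, which is a further source of the discrepancy seen above.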

0 Answers