In Section 6.6.2 of An Introduction to Statistical Learning, the authors do the following:
A) Fit a lasso model
lasso.mod = glmnet(x[train, ], y[train], alpha = 1, lambda = grid)
B) Perform cross-validation
set.seed(1)
cv.out = cv.glmnet(x[train, ], y[train], alpha = 1)
plot(cv.out)
bestlam = cv.out$lambda.min
C) Compute the test error using the best value of $\lambda$ obtained in part B)
lasso.pred = predict(lasso.mod, s = bestlam, newx = x[test, ])
mean((lasso.pred - y.test)^2)
But it seems there is an error here. They are using the $\lambda$ from part B) with the model from part A). Surely they should be using the $\lambda$ from part B) together with the model fitted in part B), rather than mixing the results of A) and B)?
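Concretely, what I expected to see is something like the following sketch, which predicts directly from the cross-validated fit instead of passing bestlam to the model from part A) (glmnet's predict method on a cv.glmnet object accepts s = "lambda.min"):

```r
# What I expected: use the cv.glmnet object from part B) directly,
# so that the model and the chosen lambda come from the same fit
lasso.pred2 = predict(cv.out, s = "lambda.min", newx = x[test, ])
mean((lasso.pred2 - y.test)^2)
```

I am assuming here that the two approaches could give different test errors, which is why the mixing in the book's code looks suspicious to me.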
They do the same thing in the previous section (6.6.1) for ridge regression, so if there is a typo/mistake in the lasso section, there is also one in the ridge regression section.
So is it a typo/mistake, or am I mistaken?