I read the explanation by Ocram here about how to calculate the stddev of coefficients in linear regression.
I also ran an experiment with my own sample data. I have test1, which contains 1000 samples; I then created
test2 = pandas.concat([test1, test1])
and ran the same regression again. It is true that the standard deviation of each coefficient estimate ($\hat{\beta}$) decreased.
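For reference, here is a minimal sketch of the kind of experiment I ran, using synthetic data in place of my actual test1 and statsmodels to read off the coefficient standard errors (the column names x and y, and the true coefficients, are just placeholders):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic stand-in for test1 (1000 samples): y = 2 + 3*x + noise
rng = np.random.default_rng(0)
test1 = pd.DataFrame({"x": rng.normal(size=1000)})
test1["y"] = 2 + 3 * test1["x"] + rng.normal(scale=1.0, size=1000)

def fit(df):
    # OLS of y on a constant and x; .bse holds the coefficient standard errors
    X = sm.add_constant(df[["x"]])
    return sm.OLS(df["y"], X).fit()

test2 = pd.concat([test1, test1])  # duplicate every row

res1, res2 = fit(test1), fit(test2)
print(res1.bse)             # standard errors on the original data
print(res2.bse)             # smaller on the duplicated data
print(res1.bse / res2.bse)  # ratio is roughly sqrt(2) ~ 1.414
```

The coefficient estimates themselves come out identical on test1 and test2; only the standard errors shrink, by roughly a factor of $\sqrt{2}$.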
However, I cannot picture why this happens. Can anyone provide an intuitive, visual explanation of why the standard deviation goes down when I duplicate my samples?