I have data from a simple experiment where people put (a fixed number of) balls either to the left or to the right of them (each ball is identical with regard to the consequences of placing it left or right). The observation / dependent variable is the number of balls put to the right (out of the total number of balls given to them).
First, I fitted a linear regression trying to explain this count of balls put to the right from some predictors. Of course, a linear regression is not the best choice here, since the data are discrete rather than continuous and have natural lower and upper bounds.
Then I fitted a binomial regression to the data (i.e. a GLM with binomial family and logit link). The estimates are quite similar (at least in direction), but the standard errors of the predictors differ hugely: in the binomial regression they are about 26 times smaller than in the linear regression.
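To be explicit, writing $Y_i$ for the number of balls person $i$ puts to the right out of $n$ balls in total and $\mathbf{x}_i$ for the predictors, the two models I am comparing are roughly

$$Y_i = \beta_0 + \mathbf{x}_i^\top \boldsymbol\beta + \varepsilon_i, \qquad \varepsilon_i \sim \mathcal{N}(0, \sigma^2)$$

for the linear regression, and

$$Y_i \sim \mathrm{Binomial}(n, p_i), \qquad \operatorname{logit}(p_i) = \beta_0 + \mathbf{x}_i^\top \boldsymbol\beta$$

for the binomial GLM.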
I was wondering why that is. As outlined above, the linear regression is not the best model for these data, but is it really that much worse? Or am I violating critical assumptions, so that my data are actually not generated by a binomial distribution and the binomial GLM cannot be trusted?
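For concreteness, here is a minimal sketch of the kind of comparison I mean (not my actual code or data: it uses Python/statsmodels, a single made-up predictor `x`, and simulated counts):

```python
# Minimal sketch, not my actual analysis: simulated data with one
# hypothetical predictor x and a fixed number of balls per person.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_balls = 10                                  # fixed total per person (made up)
x = rng.normal(size=200)                      # hypothetical predictor
p = 1 / (1 + np.exp(-(0.3 + 0.8 * x)))        # probability of putting a ball right
right = rng.binomial(n_balls, p)              # number of balls put to the right

X = sm.add_constant(x)

# Linear regression on the raw count
ols = sm.OLS(right, X).fit()

# Binomial GLM with logit link: response is (successes, failures)
glm = sm.GLM(np.column_stack([right, n_balls - right]),
             X, family=sm.families.Binomial()).fit()

print(ols.bse)   # standard errors from the linear regression
print(glm.bse)   # standard errors from the binomial GLM
```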