I was thinking about bias in the context of simulation studies - defined as the average, across all simulations, of the difference between the estimated beta parameter and the true parameter value.
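To fix ideas, here is a minimal sketch of how that definition of bias is usually computed in a simulation study (all numbers and the OLS setup are invented for illustration; they are not from any particular study):

```python
import numpy as np

rng = np.random.default_rng(0)

true_beta = 2.0   # true parameter value (assumed for illustration)
n = 50            # sample size per simulated dataset
n_sims = 10_000   # number of simulation replicates

estimates = np.empty(n_sims)
for s in range(n_sims):
    # simulate y = true_beta * x + noise, then fit by OLS (no intercept)
    x = rng.normal(size=n)
    y = true_beta * x + rng.normal(size=n)
    estimates[s] = (x @ y) / (x @ x)

# bias: average of (estimate - true value) across all simulations
bias = np.mean(estimates - true_beta)
```

Since OLS is unbiased here, `bias` should hover near zero up to Monte Carlo error, regardless of `n`.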
If bias (unlike consistency, which is a large-sample property of an estimator) is completely unrelated to the sample size, then why evaluate bias as a metric in sample size studies at all? Should we expect bias to be a function of sample size? I was reading this post: What is the difference between a consistent estimator and an unbiased estimator? - and it seems to me that I was conflating the meaning of bias and consistency, thinking that an increase in sample size should diminish the degree of bias.
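One concrete case where finite-sample bias *is* a function of sample size is the maximum-likelihood variance estimator, which divides by n rather than n - 1. A small sketch (all numbers chosen for illustration), checking its simulated bias at two sample sizes:

```python
import numpy as np

rng = np.random.default_rng(1)
sigma2 = 4.0      # true variance (assumed for illustration)
n_sims = 20_000   # simulation replicates per sample size

biases = {}
for n in (10, 100):
    # n_sims datasets of size n drawn from N(0, sigma2)
    x = rng.normal(scale=np.sqrt(sigma2), size=(n_sims, n))
    est = x.var(axis=1)          # ddof=0: divides by n, the biased MLE
    biases[n] = est.mean() - sigma2

# theoretical bias is -sigma2 / n, so it shrinks toward 0 as n grows
print(biases)
```

So bias need not be unrelated to sample size: for this estimator it vanishes as n grows (the estimator is consistent), while for an unbiased estimator it is zero at every n.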
Although the post suggests that they are unrelated - would it be unfair to think that one of the benefits of an increased sample size is that it reduces confounding bias (by allowing one to control for more confounders), and hence the estimate of the parameter of interest would be closer to the population/true parameter?
Perhaps there is a difference between the bias of the estimator and the bias of the coefficient?