This is a theoretical question, inspired by a recent question and discussion on the bootstrap, where a constant estimator, i.e. the constant function
$$f(x) = \lambda$$
was used as an example of an estimator to show problems with estimating bias using the bootstrap. My question is not whether it is a "good" or "bad" estimator; since it is independent of the data, it is bound to be poor. However, while I agree with the definition that Larry Wasserman gives in his handbook "All of Statistics":
> A reasonable requirement for an estimator is that it should converge to the true parameter value as we collect more and more data. This requirement is quantified by the following definition:
>
> 6.7 Definition. A point estimator $\hat{\theta}_n$ of a parameter $\theta$ is consistent if $\hat{\theta}_n \overset{P}{\rightarrow} \theta$.
what bothers me is that a $\hat{\theta}_n$ given by a constant function does not approach $\theta$ even as $n \rightarrow \infty$, since it is constant.
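For illustration, here is a minimal simulation sketch contrasting the constant estimator with the sample mean as $n$ grows (the normal data model, the true value $\theta = 2$, and the constant $\lambda = 5$ are my own assumptions, not from the original discussion):

```python
import numpy as np

rng = np.random.default_rng(0)
theta = 2.0   # assumed true parameter
lam = 5.0     # the constant the estimator always returns

for n in [10, 1_000, 100_000]:
    x = rng.normal(theta, 1.0, size=n)
    print(f"n={n:>6}  sample mean={x.mean():.4f}  constant={lam:.4f}")

# The sample mean approaches theta = 2.0 as n grows (consistency),
# while the constant estimator stays at lambda = 5.0 regardless of n.
```
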
So my questions are: What makes a constant function an estimator? What justifies calling it one? What are its properties? What similarities does it share with other estimators? Could you also provide some references?
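For completeness, here is also a sketch of the bootstrap-bias issue that motivated this question: since the constant estimator returns $\lambda$ on every resample, the bootstrap bias estimate is identically zero, whatever the true bias $\lambda - \theta$ happens to be (again, the data model and the particular constants are assumptions of mine):

```python
import numpy as np

rng = np.random.default_rng(1)
theta, lam, n, B = 2.0, 5.0, 100, 2000  # assumed values for illustration

x = rng.normal(theta, 1.0, size=n)

def constant_estimator(sample):
    return lam  # ignores the data entirely

theta_hat = constant_estimator(x)
# Bootstrap bias estimate: mean over resampled estimates minus the
# original estimate.
boot = np.array([constant_estimator(rng.choice(x, size=n, replace=True))
                 for _ in range(B)])
print("bootstrap bias estimate:", boot.mean() - theta_hat)  # exactly 0.0
print("true bias:", lam - theta)                            # 3.0 here
```
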