The following definition of a complete statistic is from http://en.wikipedia.org/wiki/Completeness_(statistics)#Definition:
The statistic $s$ is said to be complete for the distribution of $X$ if, for every measurable function $g$ (which must be independent of $\theta$), the following implication holds:
$$E_\theta\big(g(s(X))\big) = 0 \ \text{for all } \theta \quad\Longrightarrow\quad P_\theta\big(g(s(X)) = 0\big) = 1 \ \text{for all } \theta.$$
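For concreteness, here is a standard example of the definition in action (my own illustration, not from the linked page): if $X_1, \dots, X_n$ are i.i.d. $\mathrm{Bernoulli}(\theta)$ with $\theta \in (0,1)$ and $s(X) = \sum_i X_i$, then
$$E_\theta\big(g(s(X))\big) = \sum_{t=0}^n g(t) \binom{n}{t} \theta^t (1-\theta)^{n-t} = (1-\theta)^n \sum_{t=0}^n g(t) \binom{n}{t} \left(\tfrac{\theta}{1-\theta}\right)^t,$$
and a polynomial in $\theta/(1-\theta)$ that vanishes for all $\theta \in (0,1)$ must have all zero coefficients, so $g(t) = 0$ for every $t$; hence $s$ is complete.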
If the codomain of the statistic $s$ is $\mathbb R^m$, is $g$ then a measurable mapping from $\mathbb R^m$ to $\mathbb R$?
Since $g$ acts on the codomain of $s$, does $g$ not know the sample size $n$ of $X = \{X_1, \dots, X_n\}$? Only $s$ acts on the sample $X$, so is it only $s$ that knows the sample size of $X$?
But in a solution to problem 6.15 in Casella and Berger's Statistical Inference, when proving that a statistic is not complete, $g$ is chosen to depend on $n$.
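To illustrate the kind of $g$ I mean (this is my sketch of the usual argument, assuming that exercise concerns the $n(\theta, a\theta^2)$ family with $a > 0$ known and $T = (\bar X, S^2)$): since
$$E_\theta[\bar X^2] = \frac{a\theta^2}{n} + \theta^2 = \frac{a+n}{n}\,\theta^2 \qquad\text{and}\qquad E_\theta[S^2] = a\theta^2,$$
the function
$$g(\bar X, S^2) = \bar X^2 - \frac{a+n}{an}\, S^2$$
satisfies $E_\theta\big(g(T)\big) = 0$ for all $\theta$ without $g(T)$ being almost surely zero, and its coefficient $\frac{a+n}{an}$ visibly involves the sample size $n$.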
What is $g$, actually? A measurable mapping from $\mathbb R^m$ to $\mathbb R$ that doesn't know the sample size, or something more like a statistic that does know the sample size?
Or does the statistic $s$ also output the sample size $n$ of its input $X$? That is, is the codomain of a statistic $s$ really $\mathbb R^m \times \mathbb N$, so that $g$ is defined on $\mathbb R^m \times \mathbb N$ and gets to know the sample size from the output of $s$?
In general (going beyond the concept of complete statistics), when we talk about a mapping $g$ applied to a statistic $s(X)$, i.e. $g(s(X))$, do we always assume that $g$ knows the sample size of $X$? In other words, is the sample size of $X$ always an input to $g$?
Even further, do we assume $g$ knows the entire sample $X$ (not just its size $n$) that was fed into $s$? E.g., if $s(X) = \sum_i X_i$, does it make sense to write $g(s(X)) = (\sum_i X_i) + (\sum_i X_i^2)$, given that $\sum_i X_i^2$ is not a function of $\sum_i X_i$ alone?
Thanks.