For large $N$ the idea is to rely on asymptotic normality via the Central Limit Theorem (CLT). Let $\hat p$ denote the natural estimator of $p$: the empirical distribution. We use the statistic $f(\hat p)$.
For large $N$, $f(\hat p)$ is approximately normally distributed. There are two ways to see this:
- $\hat p$ is asymptotically normal by the multidimensional CLT, with covariance matrix given by the multinomial distribution; $f(\hat p)$ is then asymptotically normal too, as a linear combination of the components of $\hat p$
- $f(\hat p)=\displaystyle\frac{1}{N}\sum_{j=1}^N f(1_{X_j=1},1_{X_j=2},\dots,1_{X_j=k})$, so the one-dimensional CLT applies to this sequence of i.i.d. variables (see the sketch below)
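A minimal Monte Carlo sketch of the second view, assuming a linear statistic $f(q)=\sum_i f_i q_i$; the category probabilities `p`, the coefficients `f` and the sample size `N` below are made-up values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: k = 4 categories and a linear statistic f(q) = sum_i f_i q_i.
p = np.array([0.1, 0.2, 0.3, 0.4])   # true distribution (assumption)
f = np.array([1.0, -2.0, 0.5, 3.0])  # arbitrary coefficients f_i
N = 1000                             # sample size
reps = 20_000                        # Monte Carlo replications

# Each row of counts is one multinomial sample; counts / N is p_hat.
counts = rng.multinomial(N, p, size=reps)
f_phat = counts @ f / N              # f(p_hat) = sum_i f_i * p_hat_i

# CLT prediction: f(p_hat) ~ Normal(f(p), (sum_i f_i^2 p_i - f(p)^2) / N)
print("empirical mean/var:", f_phat.mean(), f_phat.var())
print("CLT mean/var:      ", f @ p, (f**2 @ p - (f @ p)**2) / N)
```

A histogram of `f_phat` shows the familiar bell shape.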
The mean of $f(\hat p)$ is $f(p)$. You can use a z-test.
You need an estimator of the variance of $f(\hat p)$. You can either derive it from the covariance matrix of $\hat p$ or use the usual empirical variance (with divisor $N$) of the variable $f(1_{X=1},1_{X=2},\dots,1_{X=k})$. Both methods yield the same estimator: a quadratic form in $\hat p$, as in the sketch below.
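A sketch of both variance estimators and the resulting z-test, in the same hypothetical setup as above (the null distribution `p0`, the coefficients `f` and the sample size `N` are assumptions):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical test of H0: f(p) = f(p0) for a given null vector p0.
p0 = np.array([0.25, 0.25, 0.25, 0.25])
f = np.array([1.0, -2.0, 0.5, 3.0])
N = 1000
x = rng.choice(len(p0), size=N, p=p0)          # observed sample X_1, ..., X_N

p_hat = np.bincount(x, minlength=len(p0)) / N  # empirical distribution

# Method 1: quadratic form from the multinomial covariance of p_hat,
# Cov(p_hat) = (diag(p) - p p') / N, with p replaced by p_hat.
var1 = (f**2 @ p_hat - (f @ p_hat)**2) / N

# Method 2: empirical variance (divisor N) of the i.i.d. values f_{X_j}.
var2 = np.var(f[x]) / N                        # np.var uses ddof=0 by default

assert np.isclose(var1, var2)                  # the same quadratic form of p_hat

# z-test of H0: f(p) = f(p0), two-sided.
z = (f @ p_hat - f @ p0) / np.sqrt(var1)
print("z =", z, " p-value =", 2 * stats.norm.sf(abs(z)))
```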
Note about the z-test: formally this is not exactly a z-test, since the variance is not known but estimated. Some authors still call it a z-test. Some might prefer a t-test, but they are essentially the same: the statistic is the same, only the approximate distribution under $H_0$ differs. The two approximations are extremely close except for very small sample sizes, and for small samples it is unclear whether a t-test would actually be better. See this clarification. The focus was on large $N$ anyway.
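A quick numerical illustration of how close the two approximations are, with an arbitrary value of the statistic and $N=1000$:

```python
from scipy import stats

# Same statistic, two reference distributions under H0.
z, N = 1.96, 1000
print(2 * stats.norm.sf(z))         # z-test p-value
print(2 * stats.t.sf(z, df=N - 1))  # t-test p-value, df = N - 1: nearly identical
```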