I am looking again at a popular statistical testing method used in finance. I suspect it's a bit naughty, but I would like a more experienced eye to take a look as well.
The method is the following:

1. estimate a percentile (i.e. the "value at risk") of a history of n portfolio returns;
2. record whether the following return is above or below that percentile;
3. tot up the breaches and apply a chi-squared test of proportions (the binomial distribution tends towards the normal after a sufficient number of draws).
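To make the steps above concrete, here is a minimal sketch of the procedure in Python. The window length, the confidence level, and the simulated return series are all hypothetical choices, not taken from the question; the test at the end is the standard normal approximation to the binomial (squaring z gives the equivalent one-degree-of-freedom chi-squared statistic).

```python
import numpy as np
from scipy import stats

def var_breach_backtest(returns, window=250, alpha=0.01):
    """Rolling historical-simulation VaR backtest:
    estimate the alpha-quantile over the previous `window` returns,
    record whether the next return breaches it, then run a
    proportion test on the breach count.

    Note the dependence issue under discussion: adjacent rolling
    windows share window - 1 returns, so the VaR estimates overlap.
    """
    returns = np.asarray(returns, dtype=float)
    breaches = []
    for t in range(window, len(returns)):
        var_t = np.percentile(returns[t - window:t], 100 * alpha)
        breaches.append(returns[t] < var_t)  # breach = next return below VaR
    breaches = np.asarray(breaches)

    n = len(breaches)          # number of backtest observations
    x = int(breaches.sum())    # number of breaches
    # Normal approximation to Binomial(n, alpha): two-sided z-test.
    z = (x - n * alpha) / np.sqrt(n * alpha * (1 - alpha))
    p_value = 2 * stats.norm.sf(abs(z))
    return x, n, p_value

# Illustrative run on simulated i.i.d. normal returns (hypothetical data).
rng = np.random.default_rng(0)
x, n, p = var_breach_backtest(rng.standard_normal(1500))
```

Under i.i.d. returns one would expect roughly `n * alpha` breaches; the question is what the overlap does to the distribution of the test statistic when returns are sampled this way.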
The issue comes with the n portfolio returns overlapping each other; i.e. rather than using mutually exclusive returns, the windows overlap (adjacent value-at-risk figures often share n-1 of their n returns).
However, a 'conditional' test is then applied (there is no real consensus on a particular one) which tests for independence within the sample of breaches, in order to ameliorate this problem.
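The question does not name a particular conditional test; one widely cited example is Christoffersen's (1998) likelihood-ratio independence test, which compares a first-order Markov model for the breach indicator against the i.i.d. null. A sketch, for illustration only:

```python
import numpy as np
from scipy import stats

def christoffersen_independence(breaches):
    """LR test of breach independence (Christoffersen-style):
    H0: P(breach today) does not depend on yesterday's breach.
    `breaches` is a 0/1 sequence of VaR breach indicators.
    """
    b = np.asarray(breaches, dtype=int)
    # Transition counts n_ij: state i yesterday -> state j today.
    n00 = int(np.sum((b[:-1] == 0) & (b[1:] == 0)))
    n01 = int(np.sum((b[:-1] == 0) & (b[1:] == 1)))
    n10 = int(np.sum((b[:-1] == 1) & (b[1:] == 0)))
    n11 = int(np.sum((b[:-1] == 1) & (b[1:] == 1)))

    pi01 = n01 / (n00 + n01)                       # P(breach | no breach yesterday)
    pi11 = n11 / (n10 + n11) if (n10 + n11) else 0.0  # P(breach | breach yesterday)
    pi = (n01 + n11) / (n00 + n01 + n10 + n11)     # unconditional breach rate

    def loglik(p, n0, n1):
        # Bernoulli log-likelihood of n1 breaches, n0 non-breaches at rate p.
        if p == 0.0:
            return 0.0 if n1 == 0 else -np.inf
        if p == 1.0:
            return 0.0 if n0 == 0 else -np.inf
        return n0 * np.log(1 - p) + n1 * np.log(p)

    # LR statistic: restricted (i.i.d.) vs unrestricted (Markov) model, 1 df.
    lr = -2 * (loglik(pi, n00 + n10, n01 + n11)
               - loglik(pi01, n00, n01) - loglik(pi11, n10, n11))
    p_value = stats.chi2.sf(lr, df=1)
    return lr, p_value

# Illustrative: strongly clustered breaches (hypothetical sequence)
# should reject independence.
lr, p = christoffersen_independence([0]*50 + [1]*10 + [0]*50 + [1]*10)
```

Note that this tests for first-order clustering of the breach indicator, which is a much weaker property than the independence of the underlying overlapping VaR estimates; that gap is precisely the concern raised here.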
Is this sort of thing frowned upon? I.e. intentionally baking overlap and dependence into the data, then later trying to test the problem away (however flawed the tests may be)?
(There is never enough data; in that case I would rather hold my hands up than push on with a flawed analysis.)
I also posted the question here, without much luck:
https://quant.stackexchange.com/questions/15551/overlapping-value-at-risk-backtest-data-an-issue