Let's say I have a signal which I want to test for normality. I know that the mean of the theoretical distribution is zero, but its variance is unknown. If I knew the mean and variance a priori, I would run a one-sample Kolmogorov-Smirnov test. Since I don't know what the variance should be, I am thinking of generating random data from a normal distribution with zero mean and the same variance as the sample (signal), and then performing a two-sample K-S test, one sample being the true (original) signal and the other being the randomly generated one. Is this procedure correct/valid?
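In case it helps to make the idea concrete, here is a minimal sketch of what I have in mind (the use of numpy/scipy and all variable names are my own assumptions; I don't know what the published research actually used):

```python
import numpy as np
from scipy import stats

def ks_against_simulated_normal(signal, rng=None):
    """Two-sample K-S test between `signal` and synthetic N(0, s^2) data,
    where s^2 is the sample variance estimated from `signal` itself."""
    rng = np.random.default_rng() if rng is None else rng
    s = np.std(signal, ddof=1)                   # scale taken from the data
    simulated = rng.normal(loc=0.0, scale=s, size=len(signal))
    return stats.ks_2samp(signal, simulated)

# Toy example: a signal that really is zero-mean normal
rng = np.random.default_rng(0)
toy_signal = rng.normal(0.0, 2.0, size=1000)
print(ks_against_simulated_normal(toy_signal, rng))
```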
Some context:
- I know one should not perform a one-sample K-S test with the theoretical distribution's parameters estimated from the data; that's why I'm considering this "simulation" approach for the two-sample test, and I'm not confident of its validity;
- The signal is an acoustical room impulse response (its amplitude is proportional to the sound pressure). Normality is expected for the later part of the signal. It is also expected for the real and imaginary parts of its Fourier transform. The signal may also be a simulated impulse response, created by weighting Gaussian noise with a decaying exponential (see the sketch after this list);
- I'm not interested in other tests (like the Lilliefors test) because research using the procedure I describe here has already been published (applied to all the cases cited above). So I'd like to know whether the procedure is valid (and what its effects are) so I can better understand and evaluate the results. I'm not one of the authors, so I don't have access to the exact procedures or algorithms that were used.
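To illustrate the simulated impulse response mentioned in the second bullet, here is a minimal sketch under assumed values (the sampling rate and decay time below are illustrative, not taken from the published work):

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 44100                          # sampling rate in Hz (assumed)
T60 = 1.0                           # reverberation time in seconds (assumed)
t = np.arange(int(fs * T60)) / fs
decay = np.exp(-6.9 * t / T60)      # roughly 60 dB of decay over T60
simulated_ir = decay * rng.normal(0.0, 1.0, size=t.size)
```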