
Let's say I have a wave, with frequency $\omega$ and phase $\phi$, of the form:

$$y(t)=1+A\sin(\omega t+\phi)$$

where $A<1$.

I have $N$ measurements $(\hat{y}_i, \hat{t}_i)$, assumed to be evenly spaced over one period. For practical purposes, $\hat{t}_i=t_i$ is assumed to be exact, while $\hat{y}_i$ is measured with a normally distributed error: $\hat{y}_i= y_i + \epsilon_i$, with $\epsilon_i \sim \mathcal{N}(0, \sigma_y^2)$.

How well can I measure the phase of this sine wave? Edit: I'm looking for an analytic answer, to predict the precision of future measurements. The previous version of this question apparently read as if it were asking how to fit observed data, which is a different problem. I have since worked out the answer and posted it in the comments, for anyone else who needs it.

I've tried constructing a Fisher matrix for the problem, but Mathematica thinks it's singular, and I can't see how to invert it either.
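For reference, here is a minimal sketch of the simplest version of the calculation, under the assumption that the offset, $A$, $\omega$, and $\sigma_y$ are all known exactly and only $\phi$ is being estimated. For independent Gaussian errors the Fisher information for $\phi$ is

$$I_{\phi\phi}=\frac{1}{\sigma_y^2}\sum_{i=1}^{N}\left(\frac{\partial y(t_i)}{\partial\phi}\right)^2=\frac{A^2}{\sigma_y^2}\sum_{i=1}^{N}\cos^2(\omega t_i+\phi)\approx\frac{N A^2}{2\sigma_y^2},$$

since the $\cos^2$ terms average to $1/2$ over a full period. The Cramér–Rao bound on the phase is then

$$\sigma_\phi \ge I_{\phi\phi}^{-1/2} = \sqrt{\frac{2}{N}}\,\frac{\sigma_y}{A}$$

in radians; multiplying by $P/2\pi$ expresses the same uncertainty as an equivalent time shift.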

Thomas
  • Are you saying that you measure the value of $y(t)$ at evenly spaced times $t$ covering a duration of $2\pi/\omega$ (one period of the wave)? How do you happen to know $\sigma_y$ and $\omega$--have they perhaps been estimated from previous experiments and therefore are themselves uncertain? Do you know the value of $A$? Why does it matter that $A\lt 1$, since only the ratio $A/\sigma_y$ would seem to be relevant? – whuber Sep 03 '14 at 16:41
  • How are you trying to estimate $\phi$? Have you considered transforming this into a linear problem via $y(t)-1=(A\cos(\phi))\sin(\omega t)+(A\sin(\phi))\cos(\omega t)=\beta_1\sin(\omega t)+\beta_2\cos(\omega t)$, estimating the $\beta_i$, and back-transforming to an estimate of $\phi \bmod 2\pi$? (This is the spirit behind basic estimators in the field of [circular statistics](http://en.wikipedia.org/wiki/Directional_statistics#Distribution_of_the_mean).) – whuber Sep 03 '14 at 17:18
  • Whuber: I think nemo's edit may have cleared this up, but yes, there are $N$ observations that are evenly spaced over $2\pi/\omega$. For $\sigma_y$, let's assume (per nemo's edit) that the error on each point is Gaussian, white, and has the same $\sigma_y$ for all points (and $\sigma_y$ is known perfectly). In reality the amplitude $A$ and $\omega$ both have their own associated uncertainties. Ideally I'd like to find an answer that incorporates these uncertainties as well, but for a start I was attempting the simplified case where they are too small to be relevant. – Thomas Sep 03 '14 at 17:28
  • I have not tried that transformation. You mean try to compute a Fisher matrix for $\beta_1 \sin(\omega t)+\beta_2 \cos(\omega t)$? – Thomas Sep 03 '14 at 17:35
  • It's even simpler: you can estimate the betas with least squares. Because this is fast, that procedure can later be incorporated in a full-blown nonlinear estimation of all the parameters (including $A$ and $\omega$). Unless you have very few measurements, the Hessian will not be singular. – whuber Sep 03 '14 at 19:33
  • @whuber's suggestion is discussed in a little more detail [here](http://stats.stackexchange.com/questions/60500/how-to-find-a-good-fit-for-semi-sinusoidal-model-in-r/60504#60504) – Glen_b Sep 04 '14 at 03:49
  • Here's the answer: it turns out I was doing the integrals incorrectly for the Fisher matrix. If you ignore the covariance between your ability to measure the period and the phase of the sinusoid, then the uncertainty in your measurement of the phase will be: $$\sigma_\phi = \sqrt{\frac{2}{N}}\,\frac{\sigma_y}{A}\,\frac{P}{2\pi},$$ where $P$ is the period of the wave. (A numerical check of this, using the least-squares estimator whuber suggested, is sketched below these comments.) – Thomas Sep 09 '14 at 19:18
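A quick way to sanity-check that formula is a Monte Carlo simulation of whuber's linearised least-squares suggestion. The sketch below is illustrative only: the parameter values ($A=0.5$, $\omega=2\pi$, $\phi=0.7$, $\sigma_y=0.05$, $N=100$) are made up for the example, not taken from any actual measurement.

```python
import numpy as np

# Hypothetical parameters for illustration only (not from the question).
A = 0.5            # amplitude, with A < 1
omega = 2 * np.pi  # angular frequency, so the period P = 1
phi_true = 0.7     # true phase, in radians
sigma_y = 0.05     # standard deviation of the Gaussian measurement noise
N = 100            # number of evenly spaced samples over one period
P = 2 * np.pi / omega

t = np.arange(N) * P / N  # evenly spaced sample times covering one period
rng = np.random.default_rng(0)

# Design matrix for the linearised model suggested in the comments:
#   y - 1 = beta1 * sin(omega t) + beta2 * cos(omega t),
# where beta1 = A cos(phi) and beta2 = A sin(phi).
X = np.column_stack([np.sin(omega * t), np.cos(omega * t)])

n_trials = 20000
phi_hat = np.empty(n_trials)
for k in range(n_trials):
    y = 1 + A * np.sin(omega * t + phi_true) + rng.normal(0, sigma_y, N)
    beta, *_ = np.linalg.lstsq(X, y - 1, rcond=None)  # offset assumed known here
    phi_hat[k] = np.arctan2(beta[1], beta[0])          # back-transform to a phase

print("simulated sigma_phi (rad):", phi_hat.std())
print("predicted sigma_phi (rad):", np.sqrt(2 / N) * sigma_y / A)
print("predicted sigma as a time shift:", np.sqrt(2 / N) * sigma_y / A * P / (2 * np.pi))
```

With these numbers the predicted scatter is $\sqrt{2/100}\times 0.05/0.5 \approx 0.014$ rad, and the simulated standard deviation of $\hat{\phi}$ should land very close to that; the $P/2\pi$ factor in the comment's formula only rescales radians to an equivalent time offset.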

0 Answers