
I.e., if I have position (p) and velocity (v) as state variables and make low-frequency measurements of p, those measurements also indirectly give me information about v (since v is the derivative of p). What is the best way to handle such a relationship?

A) At the update step, should I say only that I've measured p, and rely on the filtering process and my accumulated state covariance matrix (P) to correct v?

B) Should I create an "extra" prediction step, either after or before my update step for the measurement of p, that uses my measured p and (relatively large) delta-time to make a high-variance prediction of v?

C) In my update/measurement step, should I say I've made a measurement of both p and v, and then somehow encode information about their interdependence into the measurement covariance matrix (R)?


For a little more background, here's the specific situation in which I've run into the problem:

I'm working with a system where I want to estimate the position (p) of an object, and I make frequent measurements of acceleration (a) and infrequent, high-noise measurements of p.

I'm currently working with a codebase that does this with an Extended Kalman Filter, keeping p and v as state variables. It runs a "prediction" step after every acceleration measurement, in which it uses the measured a and the delta-time to integrate and predict new p and v. It then runs an "update"/"measurement" step for every (infrequent) p measurement.
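
For concreteness, here's a minimal sketch of that loop in NumPy. The state layout matches the above, but the function names and all noise values here are invented for illustration, not taken from my actual codebase:

```python
import numpy as np

def predict(x, P, a_meas, dt, q_acc=0.5):
    """Prediction step, run after every acceleration measurement."""
    F = np.array([[1.0, dt],
                  [0.0, 1.0]])          # p_{k+1} = p_k + v_k * dt
    B = np.array([0.5 * dt**2, dt])     # integrate the measured a
    Q = q_acc**2 * np.outer(B, B)       # process noise from noisy a
    x = F @ x + B * a_meas
    P = F @ P @ F.T + Q
    return x, P

def update(x, P, p_meas, r_var=4.0):
    """Update step, run for every (infrequent, noisy) p measurement."""
    H = np.array([[1.0, 0.0]])          # we measure p only
    R = np.array([[r_var]])             # measurement variance of p
    y = p_meas - H @ x                  # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)      # 2x1 gain; K[1] is what fixes v
    x = x + (K @ y).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P
```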

The problem is this: I get occasional high-error measurements of a, which result in a highly erroneous v. Obviously, further measurements of a will never correct this, but measurements of p should. And, in fact, this does seem to happen... but VERY slowly.

I was thinking that this may be partly because the only way p affects v in this system is through the covariance matrix P (i.e., method A above), which seems fairly indirect. I was wondering if there is a better way to incorporate our knowledge of this relationship between p and v into the model, so that measurements of p would correct v faster.
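
If I write out the standard Kalman gain for this case ($ H = [1 \;\; 0] $, so only p is observed), the mechanism becomes explicit:

$$ K = P H^T \left( H P H^T + R \right)^{-1} = \frac{1}{P_{pp} + R} \begin{bmatrix} P_{pp} \\ P_{pv} \end{bmatrix}, $$

so the velocity correction at each p update is $ \frac{P_{pv}}{P_{pp} + R} \, (p_{meas} - \hat{p}) $. If the accumulated cross-covariance $ P_{pv} $ is small, v gets nudged only slightly per update, which would explain the slow convergence I'm seeing.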

Thanks!

David
  • I'll try to come back with a longer answer later, but my immediate reactions to your questions would be A) Yes, B and C) Probably not. Are you able to detect the high-error measurements of $\mathbf{a}$ in some way? If you could detect the outliers you could throw them out to mitigate their effects. You might be hard-pressed to get great performance if your sample rate of the system's position is too low compared to its dynamics. – Jason R Sep 14 '12 at 12:52
  • One other thing; there should be an implicit relationship between $\mathbf{p}$ and $\mathbf{v}$ expressed in your state transition matrix. Specifically, it should express that $\mathbf{p_{k+1}} = \mathbf{p_k} + \mathbf{v_k} \Delta t$ or similar. – Jason R Sep 14 '12 at 14:57

1 Answer


In an ideal world you'd have the correct model and use it.
In your case, the model isn't perfect.

Still, the steps you're suggesting are all based on knowledge you have about the process, and that knowledge should be incorporated into your process equation through your dynamic model matrix:

  1. Option A is the classic and correct way, provided your F matrix is built correctly according to your knowledge of the process (i.e., it encodes $ p_{k+1} = p_k + v_k \Delta t $).

  2. "Extra" prediction step will yield nothing, since $ {F}_{ik} = {F}_{ij} {F}_{jk} $ and if you reduce the time frame you should alter $ Q $ and $ R $ accordingly which should get you at the end of the chain of small steps the same $P_{k \mid k - 1}$.

  3. Since you don't measure v (option C), you would have to "estimate" it somehow before treating it as a measurement. Yet by definition, if your case falls under the Kalman filter's assumptions, the filter itself already yields the best estimate of v.
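
To see point 2 concretely for the constant-velocity model: composing two sub-interval transitions reproduces the full-interval one,

$$ \begin{bmatrix} 1 & \Delta t_1 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & \Delta t_2 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} 1 & \Delta t_1 + \Delta t_2 \\ 0 & 1 \end{bmatrix}, $$

so splitting the prediction into smaller steps buys you nothing, as long as the per-step $ Q $ is scaled consistently.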

All in all, stick with the "classic" approach (option A).
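
As a sanity check, here's a toy NumPy simulation of the constant-velocity model from the question (all parameter values are invented): a single bad acceleration sample corrupts v, and each infrequent position update pulls it back through the gain's velocity component, exactly as in option A:

```python
import numpy as np

rng = np.random.default_rng(0)
dt, r_pos, q_acc = 0.01, 2.0, 0.2

F = np.array([[1.0, dt],
              [0.0, 1.0]])              # constant-velocity transition
B = np.array([0.5 * dt**2, dt])         # acceleration input integration
Q = q_acc**2 * np.outer(B, B)           # process noise from noisy a
H = np.array([[1.0, 0.0]])              # we observe p only
R = np.array([[r_pos**2]])

x_true = np.array([0.0, 1.0])           # true [p, v]; true a is zero
x, P = np.zeros(2), np.eye(2)

for k in range(2000):
    a_meas = 50.0 if k == 500 else 0.0  # a single large outlier in a
    x_true = F @ x_true                 # truth evolves with clean a
    x = F @ x + B * a_meas              # predict with measured a
    P = F @ P @ F.T + Q
    if k % 100 == 0:                    # infrequent, noisy position fix
        z = x_true[0] + rng.normal(0.0, r_pos)
        y = z - H @ x                   # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)  # K[1] corrects v via P_pv
        x = x + (K @ y).ravel()
        P = (np.eye(2) - K @ H) @ P
        print(f"k={k:4d}  v error = {x[1] - x_true[1]:+.3f}")
```

Watch the printed velocity error decay across the position fixes; how fast it decays is governed entirely by the cross-covariance $ P_{pv} $ that the F matrix builds up between updates.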

Royi