This article is rated Start-class on Wikipedia's content assessment scale.
I am thinking over the details of a major rewrite for this page. Anybody objecting, having suggestions, willing to help...? ParaTechNoid ( talk) 03:54, 9 November 2008 (UTC)
The very first line is not quite accurate:
"An alpha beta filter is a simplified form of Kalman filter which has static weighting constants instead of using co-variance matrices."
It is not completely correct to compare the correction gains of the alpha beta filter to the noise model matrices of the Kalman filter. Kalman filtering uses formally computed, time-varying Kalman gains, while alpha beta filtering uses informally selected alpha beta gains; both have gain terms, and this can be a point of confusion. A better wording might be to the effect that it uses fixed correction gains instead of computing time-varying gains from a covariance model. But that is really only a secondary part of the story.
If you fix the Kalman gains and adjust them manually what you get is essentially a State observer, an intermediate form between Kalman filters and alpha beta filters. Explaining the alpha beta filters in terms of the state observers rather than Kalman filters makes some things easier. The more complicated relationship to Kalman filters could be discussed later.
The two really important differences are (1) that the Kalman and observer filters use a detailed dynamic model, while alpha beta filters assume a generic, simplified model for system dynamics; (2) the gain matrix for Kalman and observer filters in general maps multiple prediction errors (innovation terms, residuals) into corrections for multiple state estimates. The alpha beta filter uses a two-term gain matrix to map one prediction error into corrections for two simplified states. But of course, all of this is too much to say in one introductory line.
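To make that concrete, here is a rough sketch (my own illustration, not material from the article) of an alpha beta filter written as a fixed-gain observer on the generic constant-velocity model; the gain values, the time step, and the variable names are all placeholders:

```python
import numpy as np

# Alpha beta filter viewed as a fixed-gain (observer-style) filter on a
# generic constant-velocity model.  alpha, beta, dt are illustrative values.
alpha, beta = 0.85, 0.005
dt = 0.1

A = np.array([[1.0, dt],          # simplified dynamics: x advances by v*dt,
              [0.0, 1.0]])        # v assumed constant between samples
H = np.array([[1.0, 0.0]])        # only position is measured
L = np.array([alpha,              # fixed gain vector: maps the single
              beta / dt])         # prediction error into both state corrections

def observer_step(state, measurement):
    """One predict/correct cycle with fixed gains (no covariance update)."""
    predicted = A @ state                          # project ahead
    residual = measurement - (H @ predicted)[0]    # one prediction error
    return predicted + L * residual                # correct both states

state = np.array([0.0, 0.0])       # [position, velocity] estimate
for z in [1.1, 2.0, 2.9, 4.2]:     # noisy position measurements
    state = observer_step(state, z)
```

The point is only that the gain vector L is picked by hand and never changes, whereas a Kalman filter would recompute it from a covariance model at every step.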
What to do about this? Well, I am working on it, really I am... but so far the revisions appear incompatible with the rest of the text.
ParaTechNoid ( talk) 05:54, 10 November 2008 (UTC)
I'm boldly proposing the following outline for this article. This could seriously affect major linkages and that has me concerned... But the damage should not be extensive.
I rather like showing the application of the filter in a pseudo-coded algorithmic style, as currently done at the end of the implementation section, in addition to the update equation form. It helps to tie all of the pieces together. I just haven't found the obvious good place for it.
ParaTechNoid ( talk) 06:33, 10 November 2008 (UTC)
These are some technical details to correct in the next revision of the page. Consider this a working checklist.
Describing the variables is good, but it is important to clearly describe which variables refer to values prior to a state update, and which refer to values after a state update (see the sketch after the variable list below).
xs is the current estimate of state x,
vs is the current estimate of state v,
xp is the predicted value of state x at the next step, projected from the current value xs
xm is the measured value at the next step, corresponding to the time of the prediction xp
xs, vs, or both can be used as filter outputs.
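As one possible way to present this on the page, here is a short sketch using exactly that naming, with prediction and correction kept as separate steps so the before-update and after-update values are unmistakable (the alpha, beta, and dt values are arbitrary):

```python
# Sketch of one filter cycle using the variable naming above
# (xs, vs = post-update estimates; xp = prediction; xm = measurement).
alpha, beta, dt = 0.85, 0.005, 1.0

def alpha_beta_step(xs, vs, xm):
    xp = xs + dt * vs            # prediction for the next step (pre-update)
    r = xm - xp                  # prediction error at the time of xp
    xs = xp + alpha * r          # corrected (post-update) position estimate
    vs = vs + (beta / dt) * r    # corrected (post-update) velocity estimate
    return xs, vs                # either or both can serve as filter outputs

xs, vs = 0.0, 0.0
for xm in [1.2, 1.9, 3.1, 4.0]:  # noisy measurements of a ramp
    xs, vs = alpha_beta_step(xs, vs, xm)
```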
To avoid misinterpretation, a clearer display might be:
α, β >= 0
α, β > 0
Untrue. History is the previous estimate. State estimates always start from the previous state estimates, plus incremental projection, plus incremental correction. Making larger corrections does not change this.
ParaTechNoid ( talk) 07:50, 10 November 2008 (UTC)
One final addition to the checklist. In the pseudocode description,
x=search true position around x
is extraneous. You can't search because you can't trust your measurements: they are noisy. The point of the alpha-beta algorithm is that it is a gradient process. The alpha and beta give a push (you hope!) in the right direction, and many such small pushes should average out to the corrections you want. No searching.
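A tiny numerical illustration of that point (all values invented, not from the article): the loop below never searches around x, it only applies the alpha and beta corrections, yet over many noisy samples the estimates settle close to the true ramp.

```python
import random

# "Many small pushes average out": no searching, just fixed-gain corrections.
alpha, beta, dt = 0.5, 0.1, 1.0
random.seed(0)

xs, vs = 0.0, 0.0
for k in range(1, 51):
    true_x = 2.0 * k * dt                     # true target: constant velocity 2
    xm = true_x + random.gauss(0.0, 1.0)      # noisy measurement of position
    xp = xs + dt * vs                         # predict
    r = xm - xp                               # prediction error
    xs, vs = xp + alpha * r, vs + (beta / dt) * r   # small corrective pushes

print(xs, vs)   # ends near 100 and 2 despite the noise
```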
ParaTechNoid ( talk) 08:09, 10 November 2008 (UTC)
I just read this article (in its simple form) and it was exactly what I was looking for. Please be careful not to kill its simplicity and clarity with rigour and too much thoroughness. Also, I don't agree with your "no searching" comments; of course searching is optional and the effectiveness depends on noise, but the filter's prediction can help at least in some cases. (For example, in my application I'm using it to track an object in a video feed, and it suggests a good place to start the search from).-- 41.157.12.3 ( talk) 19:53, 10 November 2008 (UTC)
This looks about the same as "double exponential smoothing" in the time series/forecasting field, in that it updates estimates of both position and velocity (derivative), and makes the prediction portion on the assumption that the velocity remains unchanged before the measurement-based correction of both position and velocity. See, for example, the section on double exponential smoothing in the Wikipedia article on Exponential smoothing. This is also discussed, with comparison to the Kalman filter, at [1].
This link is broken, maybe you mean this paper? http://cs.brown.edu/people/jlaviola/pubs/kfvsexp_final_laviola.pdf — Preceding unsigned comment added by 111.223.77.82 ( talk) 01:55, 2 January 2020 (UTC)
If it is basically the same thing, the article should say so. If it is different, the article should say why. Gmstanley ( talk) 18:42, 26 September 2012 (UTC)
Maybe the best way to address this is to have a short section on "related filters". This could include the comment that the general goal and approach are the same between alpha-beta filters and double exponential smoothing. We would cite the Wikipedia article on double exponential smoothing. Thus, this isn't original research, and is not making the claim that they are identical. They are simply related because they accomplish the same thing, and might be considered competitors. (We'd do the complementary comments under double exponential smoothing.) But a "related filters" section would also be good because we should have a place to compare to other competitive filters such as least squares filters, especially the Savitzky-Golay filter, which is very simple to implement. A Savitzky-Golay filter can be thought of as a finite-memory (FIR - finite impulse response) competitor for these IIR (infinite impulse response) filters. All of these filters have the goal of tracking a ramp input (and doing it exactly after a transient period), accomplishing it by simultaneously estimating and using the derivative. People reading about any one of these filters should be made aware of competitive approaches, and all of them already have articles written in Wikipedia. I could write up a short paragraph on this - is that agreeable? Gmstanley ( talk) 18:47, 21 June 2013 (UTC)
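As a sanity check for such a section, here is a quick side-by-side sketch (my own, and worth double-checking) suggesting that Holt's double exponential smoothing and an alpha-beta filter with a unit time step perform the same level/trend update when the parameters are matched as alpha = a and beta = a*g:

```python
# Side-by-side comparison: Holt double exponential smoothing vs alpha-beta.
# The parameter mapping below is the claim such a section would need to verify.
a, g = 0.5, 0.3          # Holt level/trend smoothing parameters (illustrative)
alpha, beta = a, a * g   # candidate alpha-beta equivalents, dt = 1

measurements = [1.2, 2.1, 2.8, 4.3, 5.1]

l, b = measurements[0], 0.0    # Holt: level l, trend b
xs, vs = measurements[0], 0.0  # alpha-beta: position xs, velocity vs

for y in measurements[1:]:
    # Holt form
    l_prev = l
    l = a * y + (1 - a) * (l_prev + b)
    b = g * (l - l_prev) + (1 - g) * b
    # alpha-beta form
    xp = xs + vs
    r = y - xp
    xs, vs = xp + alpha * r, vs + beta * r
    print(l - xs, b - vs)   # differences should stay at (numerically) zero
```

If that mapping holds up, the "related filters" paragraph could state the relationship without claiming the two are identical in general.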
I am not sure if this is an error or not because I did not do the math :-(
The article currently says for alpha-beta-gamma filter:
The book "Tracking and Kalman Filtering Made Easy" by Eli Brookner, p. 51, and http://www.comp.nus.edu.sg/~cs6240/lecture/tracking.pdf
say:
Should I edit myself? — Preceding unsigned comment added by Jamjamandcheese ( talk • contribs) 16:27, 4 May 2020 (UTC)
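For whoever checks this, a sketch of the alpha-beta-gamma cycle may help line the two versions up; everything here is my own illustration, and GAMMA_SCALE marks the acceleration-correction factor that the article and the cited sources seem to disagree about, so verify it against both before editing.

```python
# Sketch of one alpha-beta-gamma (g-h-k) predict/correct cycle.
# Values are placeholders; GAMMA_SCALE stands in for the disputed factor
# multiplying gamma/dt**2 in the acceleration correction.
alpha, beta, gamma, dt = 0.5, 0.4, 0.1, 1.0
GAMMA_SCALE = 1.0   # the term in question; check against the book and article

def abg_step(xs, vs, a_s, xm):
    # predict forward one step assuming constant acceleration
    xp = xs + dt * vs + 0.5 * dt * dt * a_s
    vp = vs + dt * a_s
    r = xm - xp                                          # prediction error
    xs = xp + alpha * r
    vs = vp + (beta / dt) * r
    a_s = a_s + GAMMA_SCALE * (gamma / (dt * dt)) * r    # the disputed term
    return xs, vs, a_s

xs, vs, a_s = 0.0, 0.0, 0.0
for xm in [1.0, 2.1, 3.5]:
    xs, vs, a_s = abg_step(xs, vs, a_s, xm)
```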