In statistics, the Hájek–Le Cam convolution theorem states that any regular estimator in a parametric model is asymptotically equivalent to a sum of two independent random variables, one of which is normal with asymptotic variance equal to the inverse of the Fisher information, and the other of which has an arbitrary distribution.
An immediate corollary of this theorem is that the “best” regular estimators are those whose second component is identically zero. Such estimators are called efficient, and they are known to always exist in regular parametric models.
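The corollary can be made precise with a short variance computation (a sketch, using Z_θ for the normal component and Δ_θ for the extra component of the limit, notation not in the text above): because the two components are independent, a nonzero second component can only inflate the limiting covariance.

```latex
% Limiting law of a regular estimator T_n of q(\theta):
%   \sqrt{n}\,\bigl(T_n - q(\theta)\bigr) \xrightarrow{d} Z_\theta + \Delta_\theta,
% where Z_\theta \sim N\bigl(0,\ \dot q_\theta I_\theta^{-1} \dot q_\theta^{\mathsf T}\bigr)
% is independent of \Delta_\theta.  By independence,
\operatorname{Cov}(Z_\theta + \Delta_\theta)
  = \operatorname{Cov}(Z_\theta) + \operatorname{Cov}(\Delta_\theta)
  \succeq \dot q_\theta\, I_\theta^{-1}\, \dot q_\theta^{\mathsf T},
% in the positive-semidefinite order, with equality iff Cov(\Delta_\theta) = 0.
```

So the asymptotic covariance of any regular estimator is bounded below by the inverse-information bound, and the bound is attained exactly when the second component degenerates.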
Let ℘ = {P_θ | θ ∈ Θ ⊂ ℝ^k} be a regular parametric model, and let q(θ): Θ → ℝ^m be a parameter in this model (typically a parameter is just one of the components of the vector θ). Assume that the function q is differentiable on Θ, with its m × k matrix of derivatives denoted q̇_θ. Define

    ψ_{q(θ)}(x) = q̇_θ I_θ^{−1} ℓ̇_θ(x)

— the efficient influence function for q, where I_θ is the Fisher information matrix and ℓ̇_θ is the score function.

Theorem (Hájek–Le Cam convolution theorem; see Bickel, Klaassen, Ritov & Wellner 1998): if T_n is a regular estimator of q(θ), then there exist independent random m-vectors Z_θ ~ N(0, q̇_θ I_θ^{−1} q̇_θᵀ) and Δ_θ such that

    √n (T_n − q(θ)) →_d Z_θ + Δ_θ,   (A)

where →_d denotes convergence in distribution.
If the map θ ↦ q̇_θ is continuous, then the convergence in (A) holds uniformly on compact subsets of Θ. Moreover, in that case Δ_θ = 0 for all θ if and only if T_n is uniformly (locally) asymptotically linear with influence function ψ_{q(θ)}.
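As an illustration (not part of the source), a small simulation can check the efficient case: in the normal location model N(θ, 1) the Fisher information is I_θ = 1, the sample mean is the efficient estimator of θ, and √n (T_n − θ) should have limiting variance I_θ^{−1} = 1 with no extra convolution component. The function and parameter names below are illustrative.

```python
import numpy as np

def scaled_errors(n_obs=400, n_reps=2000, theta=0.7, seed=0):
    """Draw n_reps samples of size n_obs from N(theta, 1) and return
    the scaled estimation errors sqrt(n) * (mean - theta)."""
    rng = np.random.default_rng(seed)
    samples = rng.normal(loc=theta, scale=1.0, size=(n_reps, n_obs))
    estimates = samples.mean(axis=1)  # sample mean = MLE of theta
    return np.sqrt(n_obs) * (estimates - theta)

errors = scaled_errors()
# For an efficient estimator the limiting variance equals 1/I(theta) = 1,
# so the empirical variance of the scaled errors should be close to 1.
print(round(float(errors.var()), 2))
```

With an inefficient regular estimator (e.g. the sample median in the same model), the same experiment would show a strictly larger empirical variance, reflecting a nondegenerate Δ_θ component.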
References
Bickel, Peter J.; Klaassen, Chris A. J.; Ritov, Ya'acov; Wellner, Jon A. (1998). Efficient and Adaptive Estimation for Semiparametric Models. New York: Springer. ISBN 0-387-98473-9.