Fitting of a noisy curve by an asymmetrical peak model, by minimizing the sum of squared residuals at grid points, using the Gauss–Newton algorithm. Top: raw data and model. Bottom: evolution of the normalised sum of squared errors.
The Gauss–Newton algorithm is used to solve
non-linear least squares problems, which is equivalent to minimizing a sum of squared function values. It is an extension of
Newton's method for finding a
minimum of a non-linear
function. Since a sum of squares must be nonnegative, the algorithm can be viewed as using Newton's method to iteratively approximate
zeroes of the components of the sum, and thus minimizing the sum. In this sense, the algorithm is also an effective method for
solving overdetermined systems of equations. It has the advantage that second derivatives, which can be challenging to compute, are not required.[1]
Non-linear least squares problems arise, for instance, in
non-linear regression, where parameters in a model are sought such that the model is in good agreement with available observations.
The method is named after the mathematicians
Carl Friedrich Gauss and
Isaac Newton, and first appeared in Gauss's 1809 work Theoria motus corporum coelestium in sectionibus conicis solem ambientium.[2]
Description
Given $m$ functions $\mathbf{r} = (r_1, \ldots, r_m)$ (often called residuals) of $n$ variables $\boldsymbol{\beta} = (\beta_1, \ldots, \beta_n)$, with $m \ge n$, the Gauss–Newton algorithm iteratively finds the value of the variables that minimize the sum of squares[3]

$$S(\boldsymbol{\beta}) = \sum_{i=1}^m r_i(\boldsymbol{\beta})^2.$$

Starting with an initial guess $\boldsymbol{\beta}^{(0)}$ for the minimum, the method proceeds by the iterations

$$\boldsymbol{\beta}^{(s+1)} = \boldsymbol{\beta}^{(s)} - \left(\mathbf{J_r}^\mathsf{T} \mathbf{J_r}\right)^{-1} \mathbf{J_r}^\mathsf{T} \mathbf{r}\left(\boldsymbol{\beta}^{(s)}\right),$$

where the entries of the Jacobian matrix $\mathbf{J_r}$ are $(\mathbf{J_r})_{ij} = \partial r_i(\boldsymbol{\beta}^{(s)}) / \partial \beta_j$.

At each iteration, the update $\Delta = \boldsymbol{\beta}^{(s+1)} - \boldsymbol{\beta}^{(s)}$ can be found by rearranging the previous equation in the following two steps:

$$\Delta = -\left(\mathbf{J_r}^\mathsf{T} \mathbf{J_r}\right)^{-1} \mathbf{J_r}^\mathsf{T} \mathbf{r}\left(\boldsymbol{\beta}^{(s)}\right),$$
$$\mathbf{J_r}^\mathsf{T} \mathbf{J_r} \Delta = -\mathbf{J_r}^\mathsf{T} \mathbf{r}\left(\boldsymbol{\beta}^{(s)}\right).$$

With substitutions $A = \mathbf{J_r}^\mathsf{T} \mathbf{J_r}$, $\mathbf{b} = -\mathbf{J_r}^\mathsf{T} \mathbf{r}\left(\boldsymbol{\beta}^{(s)}\right)$, and $\mathbf{x} = \Delta$, this turns into the conventional matrix equation $A\mathbf{x} = \mathbf{b}$, which can then be solved by a variety of methods (see
Notes).
If m = n, the iteration simplifies to

$$\boldsymbol{\beta}^{(s+1)} = \boldsymbol{\beta}^{(s)} - \mathbf{J_r}^{-1} \mathbf{r}\left(\boldsymbol{\beta}^{(s)}\right),$$
which is a direct generalization of
Newton's method in one dimension.
In data fitting, where the goal is to find the parameters $\boldsymbol{\beta}$ such that a given model function $f(x, \boldsymbol{\beta})$ best fits some data points $(x_i, y_i)$, the functions $r_i$ are the
residuals:

$$r_i(\boldsymbol{\beta}) = y_i - f(x_i, \boldsymbol{\beta}).$$

Then, the Gauss–Newton method can be expressed in terms of the Jacobian $\mathbf{J_f}$ of the function $f$ as

$$\boldsymbol{\beta}^{(s+1)} = \boldsymbol{\beta}^{(s)} + \left(\mathbf{J_f}^\mathsf{T} \mathbf{J_f}\right)^{-1} \mathbf{J_f}^\mathsf{T} \mathbf{r}\left(\boldsymbol{\beta}^{(s)}\right).$$

Note that $\left(\mathbf{J_f}^\mathsf{T} \mathbf{J_f}\right)^{-1} \mathbf{J_f}^\mathsf{T}$ is the left pseudoinverse of $\mathbf{J_f}$.

The assumption m ≥ n in the algorithm statement is necessary, as otherwise the matrix $\mathbf{J_r}^\mathsf{T} \mathbf{J_r}$ is not invertible and the normal equations cannot be solved (at least uniquely).
The Gauss–Newton algorithm can be derived by
linearly approximating the vector of functions $r_i$. Using
Taylor's theorem, we can write at every iteration:

$$\mathbf{r}(\boldsymbol{\beta}) \approx \mathbf{r}\left(\boldsymbol{\beta}^{(s)}\right) + \mathbf{J_r}\left(\boldsymbol{\beta}^{(s)}\right)\Delta,$$

with $\Delta = \boldsymbol{\beta} - \boldsymbol{\beta}^{(s)}$. The task of finding $\Delta$ minimizing the sum of squares of the right-hand side, i.e.,

$$\min_{\Delta} \left\| \mathbf{r}\left(\boldsymbol{\beta}^{(s)}\right) + \mathbf{J_r}\left(\boldsymbol{\beta}^{(s)}\right)\Delta \right\|_2^2,$$
is a
linear least-squares problem, which can be solved explicitly, yielding the normal equations in the algorithm.
The normal equations are n simultaneous linear equations in the unknown increments $\Delta$. They may be solved in one step, using
Cholesky decomposition, or, better, the
QR factorization of $\mathbf{J_r}$. For large systems, an
iterative method, such as the
conjugate gradient method, may be more efficient. If there is a linear dependence between columns of $\mathbf{J_r}$, the iterations will fail, as $\mathbf{J_r}^\mathsf{T} \mathbf{J_r}$ becomes singular.
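As a concrete illustration, here is a minimal Julia sketch (not part of the original article) of the two direct approaches named above for computing the increment Δ from a Jacobian Jr and residual vector r; the function names are illustrative.

using LinearAlgebra

# Normal-equations route: form JᵀJ and factor it with Cholesky.
solve_normal(Jr, r) = -(cholesky(Symmetric(Jr' * Jr)) \ (Jr' * r))

# QR route: factor Jr itself, avoiding the squaring of the
# condition number that forming JᵀJ entails.
solve_qr(Jr, r) = -(qr(Jr) \ r)

Jr = [1.0 0.0; 1.0 1.0; 1.0 2.0]; r = [0.1, -0.2, 0.3]
solve_normal(Jr, r) ≈ solve_qr(Jr, r)   # true away from ill-conditioning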
When $\mathbf{r}$ is complex, $\mathbf{r}: \mathbb{C}^n \to \mathbb{C}$, the conjugate form should be used: $\left(\overline{\mathbf{J_r}}^\mathsf{T} \mathbf{J_r}\right)^{-1} \overline{\mathbf{J_r}}^\mathsf{T}$.
Example
Calculated curve obtained with $\hat\beta_1 = 0.362$ and $\hat\beta_2 = 0.556$ (in blue) versus the observed data (in red)
In this example, the Gauss–Newton algorithm will be used to fit a model to some data by minimizing the sum of squares of errors between the data and the model's predictions.
In a biology experiment studying the relation between substrate concentration S and reaction rate in an enzyme-mediated reaction, the data in the following table were obtained.
i      1      2      3      4      5      6      7
S      0.038  0.194  0.425  0.626  1.253  2.500  3.740
Rate   0.050  0.127  0.094  0.2122 0.2729 0.2665 0.3317
It is desired to find a curve (model function) of the form

$$\text{rate} = \frac{V_\text{max} \cdot [S]}{K_M + [S]}$$

that best fits the data in the least-squares sense, with the parameters $V_\text{max}$ and $K_M$ to be determined.
Denote by $x_i$ and $y_i$ the values of S and rate respectively, with $i = 1, \dots, 7$. Let $\beta_1 = V_\text{max}$ and $\beta_2 = K_M$. We will find $\beta_1$ and $\beta_2$ such that the sum of squares of the residuals

$$r_i = y_i - \frac{\beta_1 x_i}{\beta_2 + x_i}, \quad i = 1, \dots, 7,$$

is minimized.

The Jacobian $\mathbf{J_r}$ of the vector of residuals $r_i$ with respect to the unknowns $\beta_j$ is a $7 \times 2$ matrix with the $i$-th row having the entries

$$\frac{\partial r_i}{\partial \beta_1} = -\frac{x_i}{\beta_2 + x_i}, \qquad \frac{\partial r_i}{\partial \beta_2} = \frac{\beta_1 x_i}{\left(\beta_2 + x_i\right)^2}.$$
Starting with the initial estimates of $\beta_1 = 0.9$ and $\beta_2 = 0.2$, after five iterations of the Gauss–Newton algorithm, the optimal values $\hat\beta_1 = 0.362$ and $\hat\beta_2 = 0.556$ are obtained. The sum of squares of residuals decreased from the initial value of 1.445 to 0.00784 after the fifth iteration. The plot in the figure on the right shows the curve determined by the model for the optimal parameters with the observed data.
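The iteration is short enough to carry out directly. The following Julia sketch (illustrative, not part of the original article) reproduces this fit using the residual and Jacobian formulas of this section:

x = [0.038, 0.194, 0.425, 0.626, 1.253, 2.500, 3.740]     # substrate concentration S
y = [0.050, 0.127, 0.094, 0.2122, 0.2729, 0.2665, 0.3317] # observed rate

r(β) = y .- β[1] .* x ./ (β[2] .+ x)             # residuals rᵢ = yᵢ − β₁xᵢ/(β₂+xᵢ)
J(β) = hcat(-x ./ (β[2] .+ x),                   # ∂rᵢ/∂β₁
            β[1] .* x ./ (β[2] .+ x) .^ 2)       # ∂rᵢ/∂β₂

β = [0.9, 0.2]                                   # initial estimate
for s in 1:5
    Jβ = J(β)
    global β -= (Jβ' * Jβ) \ (Jβ' * r(β))        # Gauss–Newton step
end
# Expect β ≈ [0.362, 0.556] and sum(abs2, r(β)) ≈ 0.00784.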
Convergence properties
The Gauss–Newton iteration is guaranteed to converge toward a local minimum point $\hat{\boldsymbol\beta}$ under four conditions:[4] the functions $r_1, \dots, r_m$ are twice continuously differentiable in an open convex set $D \ni \hat{\boldsymbol\beta}$, the Jacobian $\mathbf{J_r}(\hat{\boldsymbol\beta})$ is of full column rank, the initial iterate $\boldsymbol\beta^{(0)}$ is near $\hat{\boldsymbol\beta}$, and the local minimum value $|S(\hat{\boldsymbol\beta})|$ is small. The convergence is quadratic if $|S(\hat{\boldsymbol\beta})| = 0$.
It can be shown[5] that the increment Δ is a
descent direction for S, and, if the algorithm converges, then the limit is a
stationary point of S. For large minimum value $|S(\hat{\boldsymbol\beta})|$, however, convergence is not guaranteed, not even
local convergence as in
Newton's method, or convergence under the usual Wolfe conditions.[6]
The rate of convergence of the Gauss–Newton algorithm can approach
quadratic.[7] The algorithm may converge slowly or not at all if the initial guess is far from the minimum or the matrix $\mathbf{J_r}^\mathsf{T} \mathbf{J_r}$ is
ill-conditioned. For example, consider the problem with $m = 2$ equations and $n = 1$ variable, given by

$$r_1(\beta) = \beta + 1,$$
$$r_2(\beta) = \lambda \beta^2 + \beta - 1.$$

The optimum is at $\beta = 0$. (Actually the optimum is at $\beta = -1$ for $\lambda = 2$, because $S(0) = 1^2 + (-1)^2 = 2$, but $S(-1) = 0$.) If $\lambda = 0$, then the problem is in fact linear and the method finds the optimum in one iteration. If $|\lambda| < 1$, then the method converges linearly and the error decreases asymptotically with a factor $|\lambda|$ at every iteration. However, if $|\lambda| > 1$, then the method does not even converge locally.[8]
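This behaviour can be observed numerically. A short Julia sketch (illustrative, not from the article) of the Gauss–Newton iteration on this one-variable problem:

function gn_demo(λ; β = 0.1, iters = 8)
    for s in 1:iters
        r  = [β + 1, λ*β^2 + β - 1]            # residuals r₁, r₂
        Jr = reshape([1.0, 2λ*β + 1], 2, 1)    # 2×1 Jacobian
        β += (-(Jr' * Jr) \ (Jr' * r))[1]      # Gauss–Newton step
        println("iter $s: β = $β")
    end
    return β
end

gn_demo(0.5)   # the error shrinks roughly by a factor |λ| = 0.5 per iteration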
Solving overdetermined systems of equations
The Gauss–Newton iteration

$$\mathbf{x}^{(k+1)} = \mathbf{x}^{(k)} - J\left(\mathbf{x}^{(k)}\right)^\dagger \mathbf{f}\left(\mathbf{x}^{(k)}\right), \quad k = 0, 1, \ldots$$

is an effective method for solving
overdetermined systems of equations in the form of $\mathbf{f}(\mathbf{x}) = \mathbf{0}$ with $\mathbf{f}: \mathbb{R}^n \to \mathbb{R}^m$ and $m \ge n$, where $J(\mathbf{x})^\dagger$ is the Moore–Penrose inverse (also known as pseudoinverse) of the Jacobian matrix $J(\mathbf{x})$ of $\mathbf{f}(\mathbf{x})$.
If the solution doesn't exist but the initial iterate $\mathbf{x}^{(0)}$ is near a point $\hat{\mathbf{x}}$ at which the sum of squares $\|\mathbf{f}(\mathbf{x})\|_2^2$ reaches a small local minimum, the Gauss–Newton iteration linearly converges to $\hat{\mathbf{x}}$. The point $\hat{\mathbf{x}}$ is often called a
least squares solution of the overdetermined system.
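A minimal Julia sketch (illustrative, not from the article) of this use of the iteration, applying the pseudoinverse step to a small overdetermined system of three equations in two unknowns that has no exact solution:

using LinearAlgebra

f(x)  = [x[1]^2 + x[2]^2 - 1,      # three equations...
         x[2] - x[1]^2,
         x[1] + x[2] - 1]
Jf(x) = [2x[1] 2x[2];              # ...and their Jacobian
         -2x[1] 1.0;
         1.0 1.0]

x = [0.5, 0.5]
for k in 1:10
    global x -= pinv(Jf(x)) * f(x)   # x ← x − J(x)† f(x)
end
# x now approximates a least squares solution of f(x) = 0.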
Derivation from Newton's method
In what follows, the Gauss–Newton algorithm will be derived from
Newton's method for function optimization via an approximation. As a consequence, the rate of convergence of the Gauss–Newton algorithm can be quadratic under certain regularity conditions. In general (under weaker conditions), the convergence rate is linear.[9]
The recurrence relation for Newton's method for minimizing a function S of parameters $\boldsymbol\beta$ is

$$\boldsymbol\beta^{(s+1)} = \boldsymbol\beta^{(s)} - \mathbf{H}^{-1} \mathbf{g},$$

where $\mathbf{g}$ denotes the gradient vector of S, and $\mathbf{H}$ denotes the Hessian matrix of S. Since $S = \sum_{i=1}^m r_i^2$, the gradient is given by

$$g_j = 2 \sum_{i=1}^m r_i \frac{\partial r_i}{\partial \beta_j}.$$

Elements of the Hessian are calculated by differentiating the gradient elements, $g_j$, with respect to $\beta_k$:

$$H_{jk} = 2 \sum_{i=1}^m \left( \frac{\partial r_i}{\partial \beta_j} \frac{\partial r_i}{\partial \beta_k} + r_i \frac{\partial^2 r_i}{\partial \beta_j \partial \beta_k} \right).$$

The Gauss–Newton method is obtained by ignoring the second-order derivative terms (the second term in this expression). That is, the Hessian is approximated by

$$H_{jk} \approx 2 \sum_{i=1}^m J_{ij} J_{ik},$$

where $J_{ij} = \partial r_i / \partial \beta_j$ are entries of the Jacobian $\mathbf{J_r}$. Note that when the exact Hessian is evaluated near an exact fit we have near-zero $r_i$, so the second term becomes near-zero as well, which justifies the approximation. The gradient and the approximate Hessian can be written in matrix notation as

$$\mathbf{g} = 2 \mathbf{J_r}^\mathsf{T} \mathbf{r}, \qquad \mathbf{H} \approx 2 \mathbf{J_r}^\mathsf{T} \mathbf{J_r}.$$

These expressions are substituted into the recurrence relation above to obtain the operational equations

$$\boldsymbol\beta^{(s+1)} = \boldsymbol\beta^{(s)} + \Delta, \qquad \Delta = -\left(\mathbf{J_r}^\mathsf{T} \mathbf{J_r}\right)^{-1} \mathbf{J_r}^\mathsf{T} \mathbf{r}.$$
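To see the quality of the approximation concretely, here is a small Julia sketch (illustrative, not from the article) comparing the exact Hessian of S with the Gauss–Newton approximation 2JᵀJ on a toy residual; at an exact fit the neglected term vanishes:

r(β) = [β[1]^2 - β[2], β[1] - 1.0]   # toy residuals
J(β) = [2β[1] -1.0; 1.0 0.0]         # their Jacobian

H_gn(β) = 2 * J(β)' * J(β)           # Gauss–Newton approximation
# The exact Hessian adds 2Σᵢ rᵢ∇²rᵢ; here ∇²r₁ = [2 0; 0 0] and ∇²r₂ = 0.
H_exact(β) = H_gn(β) + 2 * (β[1]^2 - β[2]) * [2.0 0.0; 0.0 0.0]

βhat = [1.0, 1.0]                    # exact fit: r(βhat) = 0
H_exact(βhat) == H_gn(βhat)          # true: the second-order term vanishes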
Convergence of the Gauss–Newton method is not guaranteed in all instances. The approximation

$$\left| r_i \frac{\partial^2 r_i}{\partial \beta_j \partial \beta_k} \right| \ll \left| \frac{\partial r_i}{\partial \beta_j} \frac{\partial r_i}{\partial \beta_k} \right|$$

that needs to hold to be able to ignore the second-order derivative terms may be valid in two cases, for which convergence is to be expected:[10]
The function values are small in magnitude, at least around the minimum.
The functions $r_i$ are only "mildly" nonlinear, so that $\frac{\partial^2 r_i}{\partial \beta_j \partial \beta_k}$ is relatively small in magnitude.
Improved versions
With the Gauss–Newton method the sum of squares of the residuals S may not decrease at every iteration. However, since Δ is a descent direction, unless $S(\boldsymbol\beta^{(s)})$ is a stationary point, it holds that $S(\boldsymbol\beta^{(s)} + \alpha\Delta) < S(\boldsymbol\beta^{(s)})$ for all sufficiently small $\alpha > 0$. Thus, if divergence occurs, one solution is to employ a fraction $\alpha$ of the increment vector Δ in the updating formula:

$$\boldsymbol\beta^{(s+1)} = \boldsymbol\beta^{(s)} + \alpha \Delta.$$

In other words, the increment vector is too long, but it still points "downhill", so going just a part of the way will decrease the objective function S. An optimal value for $\alpha$ can be found by using a
line search algorithm, that is, the magnitude of $\alpha$ is determined by finding the value that minimizes S, usually using a
direct search method in the interval $0 < \alpha < 1$ or a
backtracking line search such as
Armijo line search. Typically, $\alpha$ should be chosen such that it satisfies the
Wolfe conditions or the
Goldstein conditions.[11]
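A minimal Julia sketch (illustrative, not from the article) of one damped Gauss–Newton update with a backtracking Armijo-style line search; r and J are residual and Jacobian functions as before:

using LinearAlgebra   # for dot

function damped_step(r, J, β; τ = 0.5, c = 1e-4)
    Jβ, rβ = J(β), r(β)
    Δ = -(Jβ' * Jβ) \ (Jβ' * rβ)        # full Gauss–Newton increment
    g = 2 .* (Jβ' * rβ)                 # gradient of S(β) = ‖r(β)‖²
    S(b) = sum(abs2, r(b))
    α = 1.0
    while S(β + α .* Δ) > S(β) + c * α * dot(g, Δ)   # Armijo condition
        α *= τ                          # shrink the step fraction
    end
    return β + α .* Δ
end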
In cases where the direction of the shift vector is such that the optimal fraction $\alpha$ is close to zero, an alternative method for handling divergence is the use of the
Levenberg–Marquardt algorithm, a
trust region method.[3] The normal equations are modified in such a way that the increment vector is rotated towards the direction of
steepest descent,

$$\left(\mathbf{J}^\mathsf{T}\mathbf{J} + \lambda \mathbf{D}\right) \Delta = -\mathbf{J}^\mathsf{T} \mathbf{r},$$

where D is a positive diagonal matrix. Note that when D is the identity matrix I and $\lambda \to +\infty$, then $\lambda \Delta \to -\mathbf{J}^\mathsf{T} \mathbf{r}$, therefore the
direction of Δ approaches the direction of the negative gradient $-\mathbf{J}^\mathsf{T} \mathbf{r}$.
The so-called Marquardt parameter $\lambda$ may also be optimized by a line search, but this is inefficient, as the shift vector must be recalculated every time $\lambda$ is changed. A more efficient strategy is this: When divergence occurs, increase the Marquardt parameter until there is a decrease in S. Then retain the value from one iteration to the next, but decrease it if possible until a cut-off value is reached, when the Marquardt parameter can be set to zero; the minimization of S then becomes a standard Gauss–Newton minimization.
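A minimal Julia sketch (illustrative, not from the article) of this strategy with D = I: the damping parameter is increased until the step decreases S, then relaxed for the next iteration:

using LinearAlgebra   # for I

function lm_step(r, J, β; λ = 1e-3)
    Jβ, rβ = J(β), r(β)
    S(b) = sum(abs2, r(b))
    while true
        Δ = -(Jβ' * Jβ + λ * I) \ (Jβ' * rβ)   # modified normal equations
        S(β + Δ) < S(β) && return β + Δ, λ / 2 # accept step, relax damping
        λ *= 10                                # reject step, increase damping
    end
end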
Large-scale optimization
For large-scale optimization, the Gauss–Newton method is of special interest because it is often (though certainly not always) true that the matrix $\mathbf{J_r}$ is more
sparse than the approximate Hessian $\mathbf{J_r}^\mathsf{T} \mathbf{J_r}$. In such cases, the step calculation itself will typically need to be done with an approximate iterative method appropriate for large and sparse problems, such as the
conjugate gradient method.
In order to make this kind of approach work, one needs at least an efficient method for computing the product

$$\mathbf{J_r}^\mathsf{T} \mathbf{J_r} \mathbf{p}$$

for some vector $\mathbf{p}$. With
sparse matrix storage, it is in general practical to store the rows of $\mathbf{J_r}$ in a compressed form (e.g., without zero entries), making a direct computation of the above product tricky due to the transposition. However, if one defines $\mathbf{c}_i$ as row $i$ of the matrix $\mathbf{J_r}$, the following simple relation holds:

$$\mathbf{J_r}^\mathsf{T} \mathbf{J_r} \mathbf{p} = \sum_i \mathbf{c}_i \left(\mathbf{c}_i \cdot \mathbf{p}\right),$$
so that every row contributes additively and independently to the product. In addition to respecting a practical sparse storage structure, this expression is well suited for
parallel computations. Note that every row $\mathbf{c}_i$ is the gradient of the corresponding residual $r_i$; with this in mind, the formula above emphasizes the fact that residuals contribute to the problem independently of each other.
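A minimal Julia sketch (illustrative, not from the article) of this row-wise accumulation, where rows is a collection holding the (possibly sparse) rows cᵢ of Jr:

using LinearAlgebra   # for dot

function jtjp(rows, p)
    q = zero(p)
    for c in rows                  # each row contributes independently,
        q .+= c .* dot(c, p)       # so this loop is easy to parallelize
    end
    return q                       # q = JᵀJp
end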
Related algorithms
In a
quasi-Newton method, such as that due to
Davidon, Fletcher and Powell or Broyden–Fletcher–Goldfarb–Shanno (
BFGS method) an estimate of the full Hessian is built up numerically using first derivatives only, so that after n refinement cycles the method closely approximates Newton's method in performance. Note that quasi-Newton methods can minimize general real-valued functions, whereas Gauss–Newton, Levenberg–Marquardt, etc. apply only to non-linear least-squares problems.
Another method for solving minimization problems using only first derivatives is
gradient descent. However, this method does not take into account the second derivatives even approximately. Consequently, it is highly inefficient for many functions, especially if the parameters have strong interactions.
Example implementations
Julia
The following implementation in
Julia provides one method that uses a provided Jacobian and another that computes the Jacobian with
automatic differentiation.

"""
    gaussnewton(r, J, β₀, maxiter, tol)

Perform Gauss-Newton optimization to minimize the residual function `r`
with Jacobian `J` starting from `β₀`. The algorithm terminates when the
norm of the step is less than `tol` or after `maxiter` iterations.
"""
function gaussnewton(r, J, β₀, maxiter, tol)
    β = copy(β₀)
    for _ in 1:maxiter
        Jβ = J(β)
        Δ = -(Jβ' * Jβ) \ (Jβ' * r(β))
        β += Δ
        if sqrt(sum(abs2, Δ)) < tol
            break
        end
    end
    return β
end

import AbstractDifferentiation as AD, Zygote
backend = AD.ZygoteBackend()   # other backends are available

"""
    gaussnewton(r, β₀, maxiter, tol)

Perform Gauss-Newton optimization to minimize the residual function `r`
starting from `β₀`. The relevant Jacobian is calculated using automatic
differentiation. The algorithm terminates when the norm of the step is
less than `tol` or after `maxiter` iterations.
"""
function gaussnewton(r, β₀, maxiter, tol)
    β = copy(β₀)
    for _ in 1:maxiter
        rβ, Jβ = AD.value_and_jacobian(backend, r, β)
        Δ = -(Jβ[1]' * Jβ[1]) \ (Jβ[1]' * rβ)
        β += Δ
        if sqrt(sum(abs2, Δ)) < tol
            break
        end
    end
    return β
end
Notes
^ Mittelhammer, Ron C.; Miller, Douglas J.; Judge, George G. (2000). Econometric Foundations. Cambridge: Cambridge University Press. pp. 197–198. ISBN 0-521-62394-4.
^ a b Dennis, J. E., Jr.; Schnabel, R. B. (1983). Numerical Methods for Unconstrained Optimization and Nonlinear Equations. SIAM 1996 reproduction of Prentice-Hall 1983 edition. p. 222.
Nocedal, Jorge; Wright, Stephen (1999). Numerical Optimization. New York: Springer. ISBN 0-387-98793-2.
External links
Probability, Statistics and Estimation: the algorithm is detailed and applied to the biology experiment discussed as an example in this article (page 84, with the uncertainties on the estimated values).
Implementations
Artelys Knitro is a non-linear solver with an implementation of the Gauss–Newton method. It is written in C and has interfaces to C++/C#/Java/Python/MATLAB/R.