In statistics, the projection matrix $(\mathbf{P})$,[1] sometimes also called the influence matrix[2] or hat matrix $(\mathbf{H})$, maps the vector of response values (dependent variable values) to the vector of fitted values (or predicted values). It describes the influence each response value has on each fitted value.[3][4] The diagonal elements of the projection matrix are the leverages, which describe the influence each response value has on the fitted value for that same observation.
Definition
If the vector of response values is denoted by $\mathbf{y}$ and the vector of fitted values by $\hat{\mathbf{y}}$, then
$$\hat{\mathbf{y}} = \mathbf{P}\mathbf{y}.$$
As $\hat{\mathbf{y}}$ is usually pronounced "y-hat", the projection matrix $\mathbf{P}$ is also named the hat matrix, as it "puts a hat on $\mathbf{y}$".
Application for residuals
The formula for the vector of residuals $\mathbf{r}$ can also be expressed compactly using the projection matrix:
$$\mathbf{r} = \mathbf{y} - \hat{\mathbf{y}} = \mathbf{y} - \mathbf{P}\mathbf{y} = (\mathbf{I} - \mathbf{P})\mathbf{y},$$
where $\mathbf{I}$ is the identity matrix. The matrix $\mathbf{M} := \mathbf{I} - \mathbf{P}$ is sometimes referred to as the residual maker matrix or the annihilator matrix.
Intuition
(Figure: a matrix $\mathbf{A}$ has its column space depicted as the green line; the projection of some vector $\mathbf{b}$ onto the column space of $\mathbf{A}$ is the vector $\mathbf{A}\hat{\mathbf{x}}$.)
From the figure, it is clear that the closest point from the vector $\mathbf{b}$ onto the column space of $\mathbf{A}$ is $\mathbf{A}\hat{\mathbf{x}}$, which is the point where we can draw a line orthogonal to the column space of $\mathbf{A}$. A vector that is orthogonal to the column space of a matrix lies in the null space of the matrix transpose, so
$$\mathbf{A}^{\mathsf{T}}(\mathbf{b} - \mathbf{A}\hat{\mathbf{x}}) = \mathbf{0}.$$
From there, one rearranges, so
$$\mathbf{A}^{\mathsf{T}}\mathbf{b} = \mathbf{A}^{\mathsf{T}}\mathbf{A}\hat{\mathbf{x}} \quad\Longrightarrow\quad \hat{\mathbf{x}} = (\mathbf{A}^{\mathsf{T}}\mathbf{A})^{-1}\mathbf{A}^{\mathsf{T}}\mathbf{b}.$$
Therefore, since $\mathbf{A}\hat{\mathbf{x}}$ lies in the column space of $\mathbf{A}$, the projection matrix, which maps $\mathbf{b}$ onto $\mathbf{A}\hat{\mathbf{x}}$, is just $\mathbf{A}(\mathbf{A}^{\mathsf{T}}\mathbf{A})^{-1}\mathbf{A}^{\mathsf{T}}$, i.e. $\mathbf{P} = \mathbf{A}(\mathbf{A}^{\mathsf{T}}\mathbf{A})^{-1}\mathbf{A}^{\mathsf{T}}$.
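A minimal numerical sketch of this construction (the matrix $\mathbf{A}$ and vector $\mathbf{b}$ below are arbitrary examples, not from the article): it forms $\mathbf{P} = \mathbf{A}(\mathbf{A}^{\mathsf{T}}\mathbf{A})^{-1}\mathbf{A}^{\mathsf{T}}$, projects $\mathbf{b}$, and checks that the residual is orthogonal to the column space of $\mathbf{A}$.

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])                # example 3 x 2 matrix with full column rank
b = np.array([1.0, 2.0, 4.0])             # example vector to project

P = A @ np.linalg.inv(A.T @ A) @ A.T      # P = A (A^T A)^{-1} A^T
b_hat = P @ b                             # projection of b onto the column space of A

# the residual b - b_hat is orthogonal to col(A), i.e. it lies in the null space of A^T
print(A.T @ (b - b_hat))                  # approximately [0, 0]
```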
Linear model
Suppose that we wish to estimate a linear model using linear least squares. The model can be written as
$$\mathbf{y} = \mathbf{X}\boldsymbol{\beta} + \boldsymbol{\varepsilon},$$
where $\mathbf{X}$ is a matrix of explanatory variables (the design matrix), $\boldsymbol{\beta}$ is a vector of unknown parameters to be estimated, and $\boldsymbol{\varepsilon}$ is the error vector.
When the errors are uncorrelated and have equal variance, the ordinary least squares estimate is $\hat{\boldsymbol{\beta}} = (\mathbf{X}^{\mathsf{T}}\mathbf{X})^{-1}\mathbf{X}^{\mathsf{T}}\mathbf{y}$, so the fitted values are $\hat{\mathbf{y}} = \mathbf{X}\hat{\boldsymbol{\beta}} = \mathbf{P}\mathbf{y}$ with hat matrix
$$\mathbf{P} = \mathbf{X}(\mathbf{X}^{\mathsf{T}}\mathbf{X})^{-1}\mathbf{X}^{\mathsf{T}}.$$
The above may be generalized to the cases where the weights are not identical and/or the errors are correlated. Suppose that the covariance matrix of the errors is $\boldsymbol{\Sigma}$. Then since
$$\hat{\boldsymbol{\beta}} = (\mathbf{X}^{\mathsf{T}}\boldsymbol{\Sigma}^{-1}\mathbf{X})^{-1}\mathbf{X}^{\mathsf{T}}\boldsymbol{\Sigma}^{-1}\mathbf{y},$$
the hat matrix is thus
$$\mathbf{H} = \mathbf{X}(\mathbf{X}^{\mathsf{T}}\boldsymbol{\Sigma}^{-1}\mathbf{X})^{-1}\mathbf{X}^{\mathsf{T}}\boldsymbol{\Sigma}^{-1},$$
and again it may be seen that $\mathbf{H}^2 = \mathbf{H}$, though now $\mathbf{H}$ is no longer symmetric.
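As an illustration (a sketch with simulated data, not part of the original article), the following builds the ordinary least squares hat matrix and the generalized version for a chosen error covariance $\boldsymbol{\Sigma}$, and checks that the latter is idempotent but not symmetric.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 20, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])   # design matrix with intercept
y = rng.normal(size=n)

# ordinary least squares: P = X (X^T X)^{-1} X^T
P = X @ np.linalg.inv(X.T @ X) @ X.T
y_hat = P @ y                                                     # fitted values

# generalized case: correlated errors with covariance Sigma (an AR(1)-style example)
Sigma = 0.5 ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
Sigma_inv = np.linalg.inv(Sigma)
H = X @ np.linalg.inv(X.T @ Sigma_inv @ X) @ X.T @ Sigma_inv

print(np.allclose(P, P.T), np.allclose(P @ P, P))   # True True: P is symmetric and idempotent
print(np.allclose(H @ H, H), np.allclose(H, H.T))   # True False: H is idempotent but not symmetric
```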
Properties
The projection matrix has a number of useful algebraic properties.[5][6] In the language of linear algebra, the projection matrix is the orthogonal projection onto the column space of the design matrix $\mathbf{X}$.[4] (Note that $(\mathbf{X}^{\mathsf{T}}\mathbf{X})^{-1}\mathbf{X}^{\mathsf{T}}$ is the pseudoinverse of $\mathbf{X}$.) Some facts of the projection matrix in this setting are summarized as follows:[4]
The residual vector satisfies $\mathbf{r} = (\mathbf{I} - \mathbf{P})\mathbf{y}$ and $\mathbf{r} = (\mathbf{I} - \mathbf{P})\boldsymbol{\varepsilon}$, since $(\mathbf{I} - \mathbf{P})\mathbf{X}\boldsymbol{\beta} = \mathbf{0}$.
$\mathbf{P}$ is symmetric, and so is $\mathbf{M} := \mathbf{I} - \mathbf{P}$.
$\mathbf{P}$ is idempotent: $\mathbf{P}^2 = \mathbf{P}$, and so is $\mathbf{M}$.
If $\mathbf{X}$ is an $n \times r$ matrix with $\operatorname{rank}(\mathbf{X}) = r$, then $\operatorname{tr}(\mathbf{P}) = r$.
The eigenvalues of $\mathbf{P}$ consist of $r$ ones and $n - r$ zeros, while the eigenvalues of $\mathbf{M} = \mathbf{I} - \mathbf{P}$ consist of $n - r$ ones and $r$ zeros.[7]
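These facts are easy to verify numerically; the sketch below (with an arbitrary full-column-rank design matrix, not from the article) checks symmetry, idempotence, the trace, and the eigenvalue pattern.

```python
import numpy as np

rng = np.random.default_rng(1)
n, r = 10, 4
X = rng.normal(size=(n, r))                  # random design; full column rank almost surely
P = X @ np.linalg.inv(X.T @ X) @ X.T
M = np.eye(n) - P                            # residual maker / annihilator matrix

print(np.allclose(P, P.T), np.allclose(M, M.T))        # both symmetric
print(np.allclose(P @ P, P), np.allclose(M @ M, M))    # both idempotent
print(np.isclose(np.trace(P), r))                      # tr(P) = r
eig = np.sort(np.linalg.eigvalsh(P))
print(np.allclose(eig, np.r_[np.zeros(n - r), np.ones(r)], atol=1e-8))  # r ones, n - r zeros
```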
For linear models, the trace of the projection matrix is equal to the rank of $\mathbf{X}$, which is the number of independent parameters of the linear model.[8] For other models such as LOESS that are still linear in the observations $\mathbf{y}$, the projection matrix can be used to define the effective degrees of freedom of the model.
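As a sketch of the effective-degrees-of-freedom idea (a simple Gaussian kernel smoother is used here as a stand-in for LOESS, with an arbitrarily chosen bandwidth; none of this is from the article): the fit is linear in the observations, $\hat{\mathbf{y}} = \mathbf{S}\mathbf{y}$, and $\operatorname{tr}(\mathbf{S})$ plays the role of the number of parameters.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 50)                         # design points
h = 0.1                                               # bandwidth (arbitrary choice)
K = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2)
S = K / K.sum(axis=1, keepdims=True)                  # smoother matrix: y_hat = S @ y

edf = np.trace(S)                                     # effective degrees of freedom
print(edf)
```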
Practical applications of the projection matrix in regression analysis include leverage and Cook's distance, which are concerned with identifying influential observations, i.e. observations which have a large effect on the results of a regression.
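For example, the leverages are the diagonal entries of the hat matrix, and a common form of Cook's distance combines them with the residuals. The following sketch uses simulated data and the standard textbook formula $D_i = \frac{e_i^2}{p\,s^2}\cdot\frac{h_{ii}}{(1-h_{ii})^2}$, rather than anything specific to this article.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 30, 2
X = np.column_stack([np.ones(n), rng.normal(size=n)])     # intercept plus one regressor
y = X @ np.array([1.0, 2.0]) + rng.normal(size=n)

P = X @ np.linalg.inv(X.T @ X) @ X.T
leverage = np.diag(P)                                      # h_ii, the leverages
resid = y - P @ y
s2 = resid @ resid / (n - p)                               # residual variance estimate

# Cook's distance D_i = e_i^2 / (p s^2) * h_ii / (1 - h_ii)^2
cooks_d = resid**2 / (p * s2) * leverage / (1.0 - leverage)**2
print(np.argsort(cooks_d)[-3:])                            # three most influential observations
```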
Blockwise formula
Suppose the design matrix can be decomposed by columns as $\mathbf{X} = [\mathbf{A} \ \mathbf{B}]$.
Define the hat or projection operator as $P\{\mathbf{X}\} := \mathbf{X}(\mathbf{X}^{\mathsf{T}}\mathbf{X})^{-1}\mathbf{X}^{\mathsf{T}}$. Similarly, define the residual operator as $M\{\mathbf{X}\} := \mathbf{I} - P\{\mathbf{X}\}$.
Then the projection matrix can be decomposed as follows:[9]
$$P\{\mathbf{X}\} = P\{\mathbf{A}\} + P\{M\{\mathbf{A}\}\mathbf{B}\},$$
where, e.g., $P\{\mathbf{A}\} = \mathbf{A}(\mathbf{A}^{\mathsf{T}}\mathbf{A})^{-1}\mathbf{A}^{\mathsf{T}}$ and $M\{\mathbf{A}\} = \mathbf{I} - P\{\mathbf{A}\}$.
There are a number of applications of such a decomposition. In the classical application $\mathbf{A}$ is a column of all ones, which allows one to analyze the effects of adding an intercept term to a regression. Another use is in the fixed effects model, where $\mathbf{A}$ is a large sparse matrix of the dummy variables for the fixed effect terms. One can use this partition to compute the hat matrix of $\mathbf{X}$ without explicitly forming the matrix $\mathbf{X}$, which might be too large to fit into computer memory.
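The decomposition is straightforward to check numerically. In the sketch below (an illustrative example, not from the article) $\mathbf{A}$ is a column of ones, corresponding to the intercept case mentioned above, and $\mathbf{B}$ holds two arbitrary regressors.

```python
import numpy as np

def proj(Z):
    """Projection onto the column space of Z."""
    return Z @ np.linalg.inv(Z.T @ Z) @ Z.T

rng = np.random.default_rng(3)
n = 12
A = np.ones((n, 1))                    # intercept column
B = rng.normal(size=(n, 2))            # two arbitrary regressors
X = np.hstack([A, B])

M_A = np.eye(n) - proj(A)              # residual maker for A

# blockwise identity: P{[A B]} = P{A} + P{M{A} B}
print(np.allclose(proj(X), proj(A) + proj(M_A @ B)))   # True
```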
History
The hat matrix was introduced by John Wilder Tukey in 1972. An article by Hoaglin, D.C. and Welsch, R.E. (1978) gives the properties of the matrix and also many examples of its application.