In numerical analysis and linear algebra, lower–upper (LU) decomposition or factorization factors a matrix as the product of a lower triangular matrix and an upper triangular matrix (see matrix decomposition); the product sometimes includes a permutation matrix as well. LU decomposition can be viewed as the matrix form of Gaussian elimination. Computers usually solve square systems of linear equations using LU decomposition, and it is also a key step when inverting a matrix or computing the determinant of a matrix. The LU decomposition was introduced by the Polish mathematician Tadeusz Banachiewicz in 1938.
1 Definitions
Let A be a square matrix. An LU factorization refers to the factorization of A, with proper row and/or column orderings or permutations, into two factors: a lower triangular matrix L and an upper triangular matrix U:

{\displaystyle A=LU.}
In the lower triangular matrix all elements above the diagonal are zero; in the upper triangular matrix, all the elements below the diagonal are zero. For example, for a 3 × 3 matrix A, its LU decomposition looks like this:

{\displaystyle {\begin{bmatrix}a_{11}&a_{12}&a_{13}\\a_{21}&a_{22}&a_{23}\\a_{31}&a_{32}&a_{33}\end{bmatrix}}={\begin{bmatrix}\ell _{11}&0&0\\\ell _{21}&\ell _{22}&0\\\ell _{31}&\ell _{32}&\ell _{33}\end{bmatrix}}{\begin{bmatrix}u_{11}&u_{12}&u_{13}\\0&u_{22}&u_{23}\\0&0&u_{33}\end{bmatrix}}.}
Without a proper ordering or permutations in the matrix, the factorization may fail to materialize. For example, it is easy to verify (by expanding the matrix multiplication) that {\textstyle a_{11}=\ell _{11}u_{11}}. If {\textstyle a_{11}=0}, then at least one of {\textstyle \ell _{11}} and {\textstyle u_{11}} has to be zero, which implies that either L or U is singular. This is impossible if A is nonsingular (invertible). It is a procedural problem, and it can be removed by simply reordering the rows of A so that the first element of the permuted matrix is nonzero. The same problem in subsequent factorization steps can be removed the same way; see the basic procedure below.
[Figure: LDU decomposition of a Walsh matrix]
1.1 LU factorization with partial pivoting
It turns out that a proper permutation in rows (or columns) is sufficient for LU factorization. LU factorization with partial pivoting (LUP) often refers to LU factorization with row permutations only:

{\displaystyle PA=LU,}
where L and U are again lower and upper triangular matrices, and P is a permutation matrix which, when left-multiplied to A, reorders the rows of A. It turns out that all square matrices can be factorized in this form, and the factorization is numerically stable in practice, which makes LUP decomposition a useful technique.
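The row-pivoted elimination behind PA = LU can be sketched in plain Python with list-of-lists matrices. This is an illustrative sketch, not a production routine; the name `lu_partial_pivot` and the index-list representation of the permutation are my own choices.

```python
def lu_partial_pivot(A):
    """Return (P, L, U) with P @ A == L @ U; P is a permutation matrix."""
    n = len(A)
    U = [row[:] for row in A]            # working copy; becomes upper triangular
    L = [[0.0] * n for _ in range(n)]
    perm = list(range(n))                # row permutation, stored as an index list

    for k in range(n):
        # Partial pivoting: bring the largest |entry| of column k (rows k..n-1)
        # up to row k, swapping the corresponding rows of L and the permutation.
        p = max(range(k, n), key=lambda i: abs(U[i][k]))
        U[k], U[p] = U[p], U[k]
        L[k], L[p] = L[p], L[k]          # only columns < k of L are filled, so this is safe
        perm[k], perm[p] = perm[p], perm[k]

        L[k][k] = 1.0                    # Doolittle convention: unit diagonal in L
        if U[k][k] == 0.0:
            continue                     # column already zero (A is singular here)
        for i in range(k + 1, n):
            m = U[i][k] / U[k][k]        # elimination multiplier
            L[i][k] = m
            U[i][k] = 0.0                # eliminated entry, set exactly to zero
            for j in range(k + 1, n):
                U[i][j] -= m * U[k][j]

    # Build P so that (P @ A) row i equals A row perm[i].
    P = [[1.0 if j == perm[i] else 0.0 for j in range(n)] for i in range(n)]
    return P, L, U
```

Scanning the remaining rows for the largest pivot keeps the multipliers bounded by 1 in absolute value, which is what gives partial pivoting its practical stability.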
1.2 LU factorization with full pivoting
An LU factorization with full pivoting involves both row and column permutations:
{\displaystyle PAQ=LU,}
where L, U and P are defined as before, and Q is a permutation matrix that reorders the columns of A.
1.3 Lower-diagonal-upper (LDU) decomposition
A Lower-diagonal-upper (LDU) decomposition is a decomposition of the form
{\displaystyle A=LDU,}
where D is a diagonal matrix, and L and U are unitriangular matrices, meaning that all the entries on the diagonals of L and U are one.
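When a Doolittle factorization A = LU already exists (L unit lower triangular) and U has a nonzero diagonal, the LDU form can be read off directly: D is the diagonal of U, and the unitriangular factor is obtained by dividing each row of U by its diagonal entry. A minimal sketch under those assumptions (`ldu_from_lu` is an illustrative name):

```python
def ldu_from_lu(L, U):
    """Split U = D @ U1, D diagonal and U1 unit upper triangular,
    so that L @ U == L @ D @ U1. Assumes U[i][i] != 0 for all i."""
    n = len(U)
    D = [[U[i][i] if i == j else 0.0 for j in range(n)] for i in range(n)]
    U1 = [[U[i][j] / U[i][i] for j in range(n)] for i in range(n)]
    return L, D, U1
```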
1.4 Rectangular matrices
Above we required that A be a square matrix, but these decompositions can all be generalized to rectangular matrices as well. In that case, L and D are square matrices both of which have the same number of rows as A, and U has exactly the same dimensions as A. "Upper triangular" should be interpreted as having only zero entries below the main diagonal, which starts at the upper left corner. Similarly, the more precise term for U is that it is the row echelon form of the matrix A.
2 Example
We factorize the following 2-by-2 matrix:
{\displaystyle {\begin{bmatrix}4&3\\6&3\end{bmatrix}}={\begin{bmatrix}\ell _{11}&0\\\ell _{21}&\ell _{22}\end{bmatrix}}{\begin{bmatrix}u_{11}&u_{12}\\0&u_{22}\end{bmatrix}}.}
One way to find the LU decomposition of this simple matrix would be to simply solve the linear equations by inspection. Expanding the matrix multiplication gives
{\displaystyle {\begin{aligned}\ell _{11}\cdot u_{11}+0\cdot 0&=4\\\ell _{11}\cdot u_{12}+0\cdot u_{22}&=3\\\ell _{21}\cdot u_{11}+\ell _{22}\cdot 0&=6\\\ell _{21}\cdot u_{12}+\ell _{22}\cdot u_{22}&=3.\end{aligned}}}
This system of equations is underdetermined. In this case any two non-zero elements of the L and U matrices are parameters of the solution and can be set arbitrarily to any non-zero value. Therefore, to find the unique LU decomposition, it is necessary to put some restriction on the L and U matrices. For example, we can conveniently require the lower triangular matrix L to be a unit triangular matrix (i.e. set all the entries of its main diagonal to ones). Then the system of equations has the following solution:
{\displaystyle {\begin{aligned}\ell _{11}=\ell _{22}&=1\\\ell _{21}&=1.5\\u_{11}&=4\\u_{12}&=3\\u_{22}&=-1.5\end{aligned}}}
Substituting these values into the LU decomposition above yields
{\displaystyle {\begin{bmatrix}4&3\\6&3\end{bmatrix}}={\begin{bmatrix}1&0\\1.5&1\end{bmatrix}}{\begin{bmatrix}4&3\\0&-1.5\end{bmatrix}}.}
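The worked example is easy to check numerically: multiplying the two computed factors back together should reproduce the original matrix exactly.

```python
# Factors from the worked 2-by-2 example above.
L = [[1.0, 0.0],
     [1.5, 1.0]]
U = [[4.0, 3.0],
     [0.0, -1.5]]

# Plain triple-sum matrix product L @ U.
A = [[sum(L[i][k] * U[k][j] for k in range(2)) for j in range(2)]
     for i in range(2)]

print(A)  # → [[4.0, 3.0], [6.0, 3.0]]
```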
3 Existence and uniqueness
3.1 Square matrices
Any square matrix A admits LUP and PLU factorizations. If A is invertible, then it admits an LU (or LDU) factorization if and only if all its leading principal minors are nonzero (for example, {\displaystyle {\begin{bmatrix}0&1\\1&0\end{bmatrix}}} does not admit an LU or LDU factorization). If A is a singular matrix of rank k, then it admits an LU factorization if the first k leading principal minors are nonzero, although the converse is not true.
If a square, invertible matrix has an LDU factorization (with all diagonal entries of L and U equal to 1), then the factorization is unique. In that case, the LU factorization is also unique if we require that the diagonal of L (or U) consists of ones.
In general, any square matrix {\textstyle A_{n\times n}} could have one of the following:
- a unique LU factorization (as mentioned above);
- infinitely many LU factorizations, if two or more of the first n − 1 columns are linearly dependent or any of the first n − 1 columns is zero;
- no LU factorization, if the first n − 1 columns are non-zero and linearly independent and at least one leading principal minor is zero.
In the third case, one can approximate an LU factorization by changing a diagonal entry {\textstyle a_{jj}} to {\textstyle a_{jj}\pm \varepsilon } to avoid a zero leading principal minor.
3.2 Symmetric positive-definite matrices
If A is a symmetric (or Hermitian, if A is complex) positive-definite matrix, we can arrange matters so that U is the conjugate transpose of L. That is, we can write A as

{\displaystyle A=LL^{*}.}
This decomposition is called the Cholesky decomposition. The Cholesky decomposition always exists and is unique, provided the matrix is positive definite. Furthermore, computing the Cholesky decomposition is more efficient and numerically more stable than computing some other LU decompositions.
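For the real symmetric positive-definite case, the Cholesky factor can be computed column by column from the standard inner-product formulas. A minimal sketch under that assumption (the function name and list-of-lists representation are illustrative):

```python
import math

def cholesky(A):
    """Return lower-triangular L with A == L @ L^T.
    Assumes A is real, symmetric, and positive definite."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(A[i][i] - s)    # diagonal entry
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]   # below-diagonal entry
    return L
```

Positive definiteness guarantees that every argument of the square root is positive, which is why the factorization never needs pivoting.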
3.3 General matrices
For a (not necessarily invertible) matrix over any field, the exact necessary and sufficient conditions under which it has an LU factorization are known. The conditions are expressed in terms of the ranks of certain submatrices. The Gaussian elimination algorithm for obtaining LU decomposition has also been extended to this most general case.