In linear algebra, an n-by-n square matrix A is called invertible (also nonsingular or nondegenerate) if there exists an n-by-n square matrix B such that

AB = BA = I_n

where I_n denotes the n-by-n identity matrix and the multiplication used is ordinary matrix multiplication. If this is the case, then the matrix B is uniquely determined by A, and is called the (multiplicative) inverse of A, denoted by A^{−1}. Matrix inversion is the process of finding the matrix B that satisfies the prior equation for a given invertible matrix A.
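As a quick numerical illustration of the definition, the sketch below (using NumPy; the specific matrix is chosen arbitrarily for the example) computes an inverse and checks the defining identity AB = BA = I_n:

```python
import numpy as np

# A small invertible 2-by-2 matrix (chosen for illustration; det = 1).
A = np.array([[2.0, 1.0],
              [1.0, 1.0]])

# numpy.linalg.inv computes the multiplicative inverse B = A^{-1}.
B = np.linalg.inv(A)

# Verify the defining property AB = BA = I_n (up to floating-point rounding).
I = np.eye(2)
print(np.allclose(A @ B, I))  # True
print(np.allclose(B @ A, I))  # True
```

`np.linalg.inv` raises `LinAlgError` when its argument is singular, mirroring the fact that only nonsingular matrices have an inverse.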
A square matrix that is not invertible is called singular or degenerate. A square matrix is singular if and only if its determinant is zero. Singular matrices are rare in the sense that if a square matrix's entries are randomly selected from any finite region on the number line or complex plane, the probability that the matrix is singular is 0, that is, it will "almost never" be singular. Non-square matrices (m-by-n matrices for which m ≠ n) do not have an inverse. However, in some cases such a matrix may have a left inverse or right inverse. If A is m-by-n and the rank of A is equal to n (n ≤ m), then A has a left inverse, an n-by-m matrix B such that BA = I_n. If A has rank m (m ≤ n), then it has a right inverse, an n-by-m matrix B such that AB = I_m.
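The left-inverse case can be sketched as follows. For a full-column-rank matrix, one standard left inverse is (AᵀA)^{−1}Aᵀ (this particular formula, the Moore–Penrose pseudoinverse of such a matrix, is one choice among many; the example matrix is arbitrary):

```python
import numpy as np

# A 3-by-2 matrix with rank 2 (n = 2 <= m = 3), so a left inverse exists.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])

# One left inverse is B = (A^T A)^{-1} A^T; then B A = I_2.
B = np.linalg.inv(A.T @ A) @ A.T

print(np.allclose(B @ A, np.eye(2)))  # True: B is a left inverse
# A B is generally NOT the identity: a strictly tall matrix has no right inverse.
print(np.allclose(A @ B, np.eye(3)))  # False
```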
While the most common case is that of matrices over the real or complex numbers, all these definitions can be given for matrices over any ring. However, when the ring is commutative, the condition for a square matrix to be invertible is that its determinant is invertible in the ring, which in general is a stricter requirement than being nonzero. For a noncommutative ring, the usual determinant is not defined. The conditions for the existence of a left inverse or right inverse are more complicated, since a notion of rank does not exist over rings.
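The determinant condition over a ring can be illustrated with the integers Z, whose only units are ±1 (the matrices below are arbitrary examples; floating-point arithmetic stands in for exact computation):

```python
import numpy as np

# Over the ring Z, a matrix is invertible iff det = +1 or -1 (a unit in Z).
A = np.array([[2, 1],
              [1, 1]])          # det = 1, a unit in Z
A_inv = np.linalg.inv(A)
print(np.allclose(A_inv, np.round(A_inv)))  # True: the inverse has integer entries

B = np.array([[2, 0],
              [0, 1]])          # det = 2: nonzero, but not a unit in Z
B_inv = np.linalg.inv(B)        # exists over the rationals Q ...
print(np.allclose(B_inv, np.round(B_inv)))  # False: the entry 1/2 is not an integer
```

So B is invertible as a matrix over Q but not as a matrix over Z, showing that "determinant is a unit" is strictly stronger than "determinant is nonzero".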
The set of n × n invertible matrices with entries from a ring R, together with the operation of matrix multiplication, forms a group, the general linear group of degree n, denoted GL_n(R).
Contents
- 1 Properties
- 2 Examples
- 3 Methods of matrix inversion
- 4 Derivative of the matrix inverse
- 5 Generalized inverse
- 6 Applications
- 7 See also
- 8 References
- 9 Further reading
- 10 External links
1 Properties
1.1 The invertible matrix theorem
Let A be a square n-by-n matrix over a field K (e.g., the field R of real numbers). The following statements are equivalent (i.e., they are either all true or all false for any given matrix):
- There is an n-by-n matrix B such that AB = I_n = BA.
- The matrix A has a left inverse (that is, there exists a B such that BA = I) or a right inverse (that is, there exists a C such that AC = I), in which case both left and right inverses exist and B = C = A^{−1}.
- A is invertible; that is, A has an inverse, is nonsingular, and is nondegenerate.
- A is row-equivalent to the n-by-n identity matrix I_n.
- A is column-equivalent to the n-by-n identity matrix I_n.
- A has n pivot positions.
- A has full rank; that is, rank A = n.
- The equation Ax = 0 has only the trivial solution x = 0, and the equation Ax = b has exactly one solution for each b in K^n.
- The kernel of A is trivial; that is, it contains only the null vector as an element: ker(A) = {0}.
- The columns of A are linearly independent.
- The columns of A span K^n; that is, Col A = K^n.
- The columns of A form a basis of K^n.
- The linear transformation mapping x to Ax is a bijection from K^n to K^n.
- det A ≠ 0. In general, a square matrix over a commutative ring is invertible if and only if its determinant is a unit in that ring.
- The number 0 is not an eigenvalue of A.
- The transpose A^T is an invertible matrix (hence the rows of A are linearly independent, span K^n, and form a basis of K^n).
- The matrix A can be expressed as a finite product of elementary matrices.
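Several of the equivalent conditions above can be checked numerically at once; this sketch (the example matrix is arbitrary) tests full rank, nonzero determinant, the absence of a zero eigenvalue, and the triviality of the kernel:

```python
import numpy as np

A = np.array([[4.0, 7.0],
              [2.0, 6.0]])   # det = 10, so A is invertible
n = A.shape[0]

print(np.linalg.matrix_rank(A) == n)                     # True: full rank
print(not np.isclose(np.linalg.det(A), 0))               # True: det A != 0
print(not np.any(np.isclose(np.linalg.eigvals(A), 0)))   # True: 0 is not an eigenvalue

# Ax = 0 has only the trivial solution: the least-squares solution is x = 0.
x, *_ = np.linalg.lstsq(A, np.zeros(n), rcond=None)
print(np.allclose(x, 0))                                 # True: trivial kernel
```

For an invertible matrix all four checks succeed together, as the theorem asserts; for a singular matrix all four would fail together.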
1.2 Other properties
Furthermore, the following properties hold for an invertible matrix A:
- (A^{−1})^{−1} = A;
- (kA)^{−1} = k^{−1}A^{−1} for any nonzero scalar k;
- (Ax)^+ = x^+A^{−1} if A has orthonormal columns, where ^+ denotes the Moore–Penrose inverse and x is a vector;
- (A^T)^{−1} = (A^{−1})^T;
- For any invertible n-by-n matrices A and B, (AB)^{−1} = B^{−1}A^{−1}. More generally, if A_1, ..., A_k are invertible n-by-n matrices, then (A_1 A_2 ⋯ A_{k−1} A_k)^{−1} = A_k^{−1} A_{k−1}^{−1} ⋯ A_1^{−1};
- det A^{−1} = (det A)^{−1}.
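These identities are easy to verify numerically. The sketch below uses random Gaussian matrices (which are invertible with probability 1, per the "almost never singular" remark earlier; the seed and scalar are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))
inv = np.linalg.inv

# (AB)^{-1} = B^{-1} A^{-1}  -- note the reversed order
print(np.allclose(inv(A @ B), inv(B) @ inv(A)))   # True

# (A^T)^{-1} = (A^{-1})^T
print(np.allclose(inv(A.T), inv(A).T))            # True

# det A^{-1} = (det A)^{-1}
print(np.isclose(np.linalg.det(inv(A)), 1 / np.linalg.det(A)))  # True

# (kA)^{-1} = k^{-1} A^{-1} for a nonzero scalar k
k = 2.5
print(np.allclose(inv(k * A), inv(A) / k))        # True
```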
The rows of the inverse matrix V of a matrix U are orthonormal to the columns of U (and vice versa, interchanging rows for columns). To see this, suppose that UV = VU = I, where the rows of V are denoted v_i^T and the columns of U are denoted u_j, for 1 ≤ i, j ≤ n. Then clearly, the Euclidean inner product of any two satisfies v_i^T u_j = δ_{i,j}. This property can also be useful in constructing the inverse of a square matrix in some instances, where a set of vectors orthogonal (but not necessarily orthonormal) to the columns of U is known. In that case, one can apply the iterative Gram–Schmidt process to this initial set to determine the rows of the inverse V.
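The relation v_i^T u_j = δ_{i,j} can be checked directly, since it is just an entrywise reading of VU = I (the matrix below is an arbitrary example):

```python
import numpy as np

U = np.array([[1.0, 2.0],
              [3.0, 5.0]])   # det = -1, so U is invertible
V = np.linalg.inv(U)

# Inner product of row i of V with column j of U is the Kronecker delta:
# these are exactly the entries of V U = I.
delta = np.array([[V[i] @ U[:, j] for j in range(2)] for i in range(2)])
print(np.allclose(delta, np.eye(2)))  # True
```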
A matrix that is its own inverse (i.e., a matrix A such that A = A^{−1}, and consequently A^2 = I) is called an involutory matrix.