Linear Algebra Handnote (1)
If $L$ is lower triangular with 1's on the diagonal, so is $L^{-1}$.

Elimination = Factorization: $A = LU$
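A quick numeric sketch of the factorization, assuming NumPy and SciPy are available (the example matrix is arbitrary; SciPy's `lu` also returns a permutation $P$ for row exchanges, so $A = PLU$):

```python
# A = LU via SciPy; L comes back lower triangular with 1's on the diagonal.
import numpy as np
from scipy.linalg import lu

A = np.array([[2.0, 1.0, 1.0],
              [4.0, -6.0, 0.0],
              [-2.0, 7.0, 2.0]])

P, L, U = lu(A)                    # A = P @ L @ U
assert np.allclose(P @ L @ U, A)
print(L)                           # unit lower triangular
print(U)                           # upper triangular (pivots on the diagonal)
```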
$A^T$ is the matrix that makes these two inner products equal for every $x$ and $y$:

$$(Ax)^T y = x^T (A^T y)$$

Inner product of $Ax$ with $y$ = inner product of $x$ with $A^T y$.

DEFINITION: The space $\mathbb{R}^n$ consists of all column vectors $v$ with $n$ components.

DEFINITION: A subspace of a vector space is a set of vectors (including $0$) that satisfies two requirements: (1) $v + w$ is in the subspace, (2) $cv$ is in the subspace.
The column space consists of all linear combinations of the columns. The combinations are all possible vectors $Ax$. They fill the column space $C(A)$.
The system $Ax = b$ is solvable if and only if $b$ is in the column space of $A$.

The nullspace of $A$ consists of all solutions to $Ax = 0$. These vectors $x$ are in $\mathbb{R}^n$. The nullspace containing all solutions of $Ax = 0$ is denoted by $N(A)$.
- The nullspace is a subspace of $\mathbb{R}^n$; the column space is a subspace of $\mathbb{R}^m$
- The nullspace consists of all combinations of the special solutions
Nullspace (plane) perpendicular to row space (line).
$Ax = 0$ has $r$ pivots and $n - r$ free variables: $n$ columns minus $r$ pivot columns. The nullspace matrix $N$ (containing all special solutions) has the $n - r$ special solutions as columns; then $AN = 0$. $Ax = 0$ has $r$ independent equations, so it has $n - r$ independent solutions.

- $x_{particular}$: the particular solution solves $Ax_p = b$
- $x_{nullspace}$: the $n - r$ special solutions solve $Ax_n = 0$

Complete solution: one $x_p$, many $x_n$: $x = x_p + x_n$ (a numeric sketch follows).
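A small sketch of $x = x_p + x_n$, assuming SciPy for the nullspace basis (the matrix and right-hand side are made-up examples):

```python
# Particular solution plus nullspace: every x_p + N c solves A x = b.
import numpy as np
from scipy.linalg import null_space

A = np.array([[1.0, 2.0, 2.0],
              [2.0, 4.0, 5.0]])          # rank 2, so n - r = 1 special solution
b = np.array([3.0, 7.0])

x_p, *_ = np.linalg.lstsq(A, b, rcond=None)  # one particular solution
N = null_space(A)                            # columns solve A x = 0

c = np.array([2.5])                          # any coefficients work
x = x_p + N @ c
assert np.allclose(A @ x, b)
```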
The four possibilities for linear equations depend on the rank $r$:

- $r = m$ and $r = n$: square and invertible, $Ax = b$ has 1 solution
- $r = m$ and $r < n$: short and wide, $Ax = b$ has $\infty$ solutions
- $r < m$ and $r = n$: tall and thin, $Ax = b$ has 0 or 1 solutions
- $r < m$ and $r < n$: not full rank, $Ax = b$ has 0 or $\infty$ solutions
- Independent vectors (no extra vectors)
- Spanning a space (enough vectors to produce the rest)
- Basis for a space (not too many or too few)
- Dimension of a space (the number of vectors in a basis)
Any set of $n$ vectors in $\mathbb{R}^m$ must be linearly dependent if $n > m$.

The columns span the column space. The rows span the row space.

- The column space / row space of a matrix is the subspace of $\mathbb{R}^m$ / $\mathbb{R}^n$ spanned by the columns / rows.
A basis for a vector space is a sequence of vectors with two properties: the vectors are linearly independent and they span the space.
- The basis is not unique. But the combination that produces the vector is unique.
- The columns of an $n \times n$ invertible matrix are a basis for $\mathbb{R}^n$.
- The pivot columns of A are a basis for its column space.
DEFINITION: The dimension of a space is the number of vectors in every basis.
The space $Z$ contains only the zero vector. The dimension of this space is zero. The empty set (containing no vectors) is a basis for $Z$. We can never allow the zero vector into a basis, because then linear independence is lost.
Four Fundamental Subspaces
1. The row space $C(A^T)$, a subspace of $\mathbb{R}^n$
2. The column space $C(A)$, a subspace of $\mathbb{R}^m$
3. The nullspace $N(A)$, a subspace of $\mathbb{R}^n$
4. The left nullspace $N(A^T)$, a subspace of $\mathbb{R}^m$

- $A$ has the same row space as $R$: same dimension $r$ and same basis.
- The column space of $A$ has dimension $r$. The number of independent columns equals the number of independent rows.
- $A$ has the same nullspace as $R$: same dimension $n - r$ and same basis.
- The left nullspace of $A$ (the nullspace of $A^T$) has dimension $m - r$.
Fundamental Theorem of Linear Algebra, Part 1
- The column space and row space both have dimension $r$.
- The nullspaces have dimensions $n - r$ and $m - r$.

Every rank one matrix has the special form $A = uv^T$ = column $\times$ row.
The nullspace $N(A)$ and the row space $C(A^T)$ are orthogonal subspaces of $\mathbb{R}^n$.

DEFINITION: The orthogonal complement of a subspace $V$ contains every vector that is perpendicular to $V$.
Fundamental Theorem of Linear Algebra, Part 2
* $N(A)$ is the orthogonal complement of the row space $C(A^T)$ (in $\mathbb{R}^n$)
* $N(A^T)$ is the orthogonal complement of the column space $C(A)$ (in $\mathbb{R}^m$)

Projection Onto a Line

* The projection matrix onto the line through $a$: $P = \dfrac{aa^T}{a^Ta}$
* The projection: $p = \bar{x}a = \dfrac{a^Tb}{a^Ta}\,a$

Projection Onto a Subspace

Problem: find the combination $p = \bar{x}_1 a_1 + \cdots + \bar{x}_n a_n$ closest to a given vector $b$. The $n$ vectors $a_1, \ldots, a_n$ in $\mathbb{R}^m$ span the column space of $A$, so the problem is to find the particular combination $p = A\bar{x}$ (the projection) that is closest to $b$. When $n = 1$, the best choice is $\dfrac{a^Tb}{a^Ta}$. In general, $A^T(b - A\bar{x}) = 0$, or $A^TA\bar{x} = A^Tb$.
The symmetric matrix $A^TA$ is $n \times n$. It is invertible if the $a$'s are independent.
- The solution is $\bar{x} = (A^TA)^{-1}A^Tb$
- The projection of $b$ onto the subspace is $p = A\bar{x} = A(A^TA)^{-1}A^Tb$
- The projection matrix is $P = A(A^TA)^{-1}A^T$

$A^TA$ is invertible if and only if $A$ has linearly independent columns (a numeric check of the formulas follows).
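A small NumPy sketch of the projection formulas above; the matrix $A$ and vector $b$ are just example data:

```python
# Projection onto the column space of A: P = A (A^T A)^{-1} A^T.
# Assumes A has independent columns, so A^T A is invertible.
import numpy as np

A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])
b = np.array([6.0, 0.0, 0.0])

P = A @ np.linalg.inv(A.T @ A) @ A.T    # projection matrix
p = P @ b                               # projection of b onto C(A)

assert np.allclose(P @ P, P)            # projecting twice changes nothing
assert np.allclose(A.T @ (b - p), 0)    # error b - p is perpendicular to C(A)
```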
Least Squares Approximations
When $Ax = b$ has no solution, multiply by $A^T$ and solve $A^TA\bar{x} = A^Tb$.

The least squares solution $\bar{x}$ minimizes $E = \|Ax - b\|^2$. This is the sum of squares of the errors in the $m$ equations ($m > n$).

- The best $\bar{x}$ comes from the normal equations $A^TA\bar{x} = A^Tb$ (compared against a library solver in the sketch below).
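A sketch of least squares on the classic line-fitting problem (the data points are an example; `np.linalg.lstsq` solves the same problem by a more stable method internally):

```python
# Fit the best line C + D t through three points via the normal equations.
import numpy as np

t = np.array([0.0, 1.0, 2.0])
b = np.array([6.0, 0.0, 0.0])
A = np.column_stack([np.ones_like(t), t])        # columns: 1 and t

x_normal = np.linalg.solve(A.T @ A, A.T @ b)     # A^T A x = A^T b
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)  # minimizes ||Ax - b||^2
assert np.allclose(x_normal, x_lstsq)
print(x_normal)                                  # best C and D: here 5 and -3
```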
Orthogonal Bases and Gram-Schmidt
Orthonormal vectors

- A matrix with orthonormal columns is assigned the special letter $Q$. The matrix $Q$ is easy to work with because $Q^TQ = I$.
- When $Q$ is square, $Q^TQ = I$ means that $Q^T = Q^{-1}$: transpose = inverse.
- If the columns are only orthogonal (not unit vectors), dot products give a diagonal matrix (not the identity matrix).
Every permutation matrix is an orthogonal matrix.
If $Q$ has orthonormal columns ($Q^TQ = I$), it leaves lengths unchanged. Orthogonal is good.
Use Gram-Schmidt for the Factorization A=QR
$$\begin{bmatrix} a & b & c \end{bmatrix} = \begin{bmatrix} q_1 & q_2 & q_3 \end{bmatrix} \begin{bmatrix} q_1^Ta & q_1^Tb & q_1^Tc \\ & q_2^Tb & q_2^Tc \\ & & q_3^Tc \end{bmatrix}$$
(Gram-Schmidt) From independent vectors $a_1, \ldots, a_n$, Gram-Schmidt constructs orthonormal vectors $q_1, \ldots, q_n$. The matrices with these columns satisfy $A = QR$. Then $R = Q^TA$ is upper triangular because later $q$'s are orthogonal to earlier $a$'s.

Least squares: $R^TR\bar{x} = R^TQ^Tb$, or $R\bar{x} = Q^Tb$, or $\bar{x} = R^{-1}Q^Tb$.
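A sketch of classical Gram-Schmidt producing $A = QR$ (this is the textbook variant, not the numerically preferred one; NumPy's `np.linalg.qr` uses Householder reflections instead):

```python
# Classical Gram-Schmidt: orthonormalize the columns of A, recording R.
import numpy as np

def gram_schmidt(A):
    m, n = A.shape
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    for j in range(n):
        v = A[:, j].copy()
        for i in range(j):
            R[i, j] = Q[:, i] @ A[:, j]   # component along earlier q_i
            v -= R[i, j] * Q[:, i]        # subtract it off
        R[j, j] = np.linalg.norm(v)
        Q[:, j] = v / R[j, j]             # normalize to a unit vector
    return Q, R

A = np.array([[1.0, 1.0], [1.0, 0.0], [0.0, 1.0]])
Q, R = gram_schmidt(A)
assert np.allclose(Q @ R, A)              # A = QR
assert np.allclose(Q.T @ Q, np.eye(2))    # orthonormal columns
```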
Determinants
- The determinant is zero when the matrix has no inverse
- The product of the pivots is the determinant
- The determinant changes sign when two rows (or two columns) are exchanged
- Determinants give $A^{-1}$ and $A^{-1}b$ (this formula is called Cramer's Rule)
- When the edges of a box are the rows of $A$, the volume is $|\det A|$
- For $n$ special numbers $\lambda$, called eigenvalues, the determinant of $A - \lambda I$ is zero
The properties of the determinant
- The determinant of the n×n identity matrix is 1.
- The determinant changes sign when two rows are exchanged
- The determinant is a linear function of each row separately (all other rows stay fixed!)
- If two rows of $A$ are equal, then $\det A = 0$
- Subtracting a multiple of one row from another row leaves $\det A$ unchanged:

$$\begin{vmatrix} a & b \\ c - \ell a & d - \ell b \end{vmatrix} = \begin{vmatrix} a & b \\ c & d \end{vmatrix}$$
- A matrix with a row of zeros has $\det A = 0$
- If $A$ is triangular, then $\det A = a_{11}a_{22}\cdots a_{nn}$ = product of diagonal entries
- If $A$ is singular, then $\det A = 0$. If $A$ is invertible, then $\det A \neq 0$
- Elimination goes from $A$ to $U$, and $\det A = \pm\det U = \pm$ (product of the pivots)
- The determinant of $AB$ is $\det A \times \det B$
- The transpose $A^T$ has the same determinant as $A$

Every rule for the rows can apply to columns. A few of these rules are checked numerically below.
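A quick NumPy sketch verifying the product, transpose, and row-exchange rules on random matrices (the matrices themselves are arbitrary):

```python
# Numerical checks of the determinant rules above.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

assert np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B))
assert np.isclose(np.linalg.det(A.T), np.linalg.det(A))

# A row exchange flips the sign:
A_swapped = A[[1, 0, 2, 3], :]
assert np.isclose(np.linalg.det(A_swapped), -np.linalg.det(A))
```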
Cramer’s Rule
- If $\det A$ is not zero, $Ax = b$ is solved by determinants:

$$x_1 = \frac{\det B_1}{\det A}, \quad x_2 = \frac{\det B_2}{\det A}, \quad \ldots, \quad x_n = \frac{\det B_n}{\det A}$$

- The matrix $B_j$ has the $j$th column of $A$ replaced by the vector $b$
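A direct sketch of Cramer's Rule (fine for tiny systems; the cost of all those determinants makes it impractical next to elimination for anything large):

```python
# Cramer's Rule: x_j = det(B_j) / det(A), with B_j = A with column j <- b.
import numpy as np

def cramer(A, b):
    d = np.linalg.det(A)
    x = np.empty(len(b))
    for j in range(len(b)):
        Bj = A.copy()
        Bj[:, j] = b                      # replace the j-th column of A by b
        x[j] = np.linalg.det(Bj) / d
    return x

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([3.0, 5.0])
assert np.allclose(cramer(A, b), np.linalg.solve(A, b))
```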
Cross Product
$$\|u \times v\| = \|u\|\,\|v\|\,|\sin\theta| \qquad |u \cdot v| = \|u\|\,\|v\|\,|\cos\theta|$$

The length of $u \times v$ equals the area of the parallelogram with sides $u$ and $v$. It points by the right hand rule (along your right thumb when the fingers curl from $u$ to $v$).
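A small check of both facts with NumPy (the vectors are example data, placed in the plane so the area is easy to verify by hand):

```python
# Cross product: perpendicular to u and v, length = parallelogram area.
import numpy as np

u = np.array([1.0, 2.0, 0.0])
v = np.array([3.0, 1.0, 0.0])

w = np.cross(u, v)                             # perpendicular to both
assert np.isclose(w @ u, 0) and np.isclose(w @ v, 0)

# Area of the parallelogram: |1*1 - 2*3| = 5
assert np.isclose(np.linalg.norm(w), 5.0)
```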
Eigenvalues and Eigenvectors
The basic equation is $Ax = \lambda x$. The number $\lambda$ is an eigenvalue of $A$.
- When $A$ is squared, the eigenvectors stay the same. The eigenvalues are squared.

The projection matrix has eigenvalues $\lambda = 1$ and $\lambda = 0$:

- $P$ is singular, so $\lambda = 0$ is an eigenvalue
- Each column of $P$ adds to 1, so $\lambda = 1$ is an eigenvalue
- $P$ is symmetric, so its eigenvectors are perpendicular

Permutations have all $|\lambda| = 1$. The reflection matrix has eigenvalues 1 and $-1$.
Solve the eigenvalue problem for an $n \times n$ matrix (sketched in code below):

- Compute the determinant of $A - \lambda I$. It is a polynomial in $\lambda$ of degree $n$.
- Find the roots of this polynomial.
- For each eigenvalue $\lambda$, solve $(A - \lambda I)x = 0$ to find an eigenvector $x$.
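The three steps for a $2 \times 2$ example, assuming NumPy (the nullspace of $A - \lambda I$ is extracted here via the last right singular vector, one of several ways to do it numerically):

```python
# Characteristic polynomial -> roots -> eigenvectors, step by step.
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# Steps 1-2: det(A - lambda I) = lambda^2 - trace*lambda + det for 2x2
coeffs = [1.0, -np.trace(A), np.linalg.det(A)]
lams = np.roots(coeffs)                            # eigenvalues: 5 and 2

# Step 3: solve (A - lambda I) x = 0 for each eigenvalue
for lam in lams:
    x = np.linalg.svd(A - lam * np.eye(2))[2][-1]  # nullspace direction
    assert np.allclose(A @ x, lam * x)
```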
- Bad news: elimination does not preserve the $\lambda$'s.
- Good news: the product of the eigenvalues equals the determinant; the sum of the eigenvalues equals the sum of the diagonal entries (the trace).
Diagonalizing a Matrix
Suppose the $n \times n$ matrix $A$ has $n$ linearly independent eigenvectors $x_1, \ldots, x_n$. Put them into the columns of an eigenvector matrix $S$. Then $S^{-1}AS$ is the eigenvalue matrix $\Lambda$:

$$S^{-1}AS = \Lambda = \begin{bmatrix} \lambda_1 & & \\ & \ddots & \\ & & \lambda_n \end{bmatrix}$$
There is no connection between invertibility and diagonalizability:
- Invertibility is concerned with the eigenvalues ($\lambda = 0$ or $\lambda \neq 0$)
- Diagonalizability is concerned with the eigenvectors (too few or enough for $S$)
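A NumPy sketch of $S^{-1}AS = \Lambda$, reusing the $2 \times 2$ example from above (NumPy's `eig` returns the eigenvector matrix directly):

```python
# Diagonalization: S holds the eigenvectors, Lambda the eigenvalues.
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
lams, S = np.linalg.eig(A)           # columns of S are the eigenvectors

Lambda = np.linalg.inv(S) @ A @ S
assert np.allclose(Lambda, np.diag(lams))

# Powers become easy: A^k = S Lambda^k S^{-1}
A5 = S @ np.diag(lams**5) @ np.linalg.inv(S)
assert np.allclose(A5, np.linalg.matrix_power(A, 5))
```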
Applications to differential equations
One equation $\dfrac{du}{dt} = \lambda u$ has the solution $u(t) = Ce^{\lambda t}$.

$n$ equations $\dfrac{du}{dt} = Au$ start from the vector $u(0)$ at $t = 0$. Solve linear constant-coefficient equations by exponentials $e^{\lambda t}x$, when $Ax = \lambda x$ (see the sketch below).
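A sketch of the eigenvalue method for $du/dt = Au$, assuming SciPy for the matrix exponential used as a cross-check (the matrix, initial vector, and time are example values):

```python
# Expand u(0) in eigenvectors; each coefficient grows by e^(lambda t).
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [1.0, 0.0]])
u0 = np.array([2.0, 0.0])
t = 1.5

lams, S = np.linalg.eig(A)
c = np.linalg.solve(S, u0)            # u(0) = S c
u_t = S @ (np.exp(lams * t) * c)      # u(t) = sum of c_i e^(lambda_i t) x_i

assert np.allclose(u_t, expm(A * t) @ u0)   # matches the matrix exponential
```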
Symmetric Matrices
- A symmetric matrix has only real eigenvalues. The eigenvectors can be chosen orthonormal.

(Spectral Theorem) Every symmetric matrix has the factorization $A = Q\Lambda Q^T$ with real eigenvalues in $\Lambda$ and orthonormal eigenvectors in the columns of $Q$:

- Symmetric diagonalization: $A = Q\Lambda Q^{-1} = Q\Lambda Q^T$ with $Q^{-1} = Q^T$

(Orthogonal Eigenvectors) Eigenvectors of a real symmetric matrix (when they correspond to different $\lambda$'s) are always perpendicular.
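A spectral-theorem check with NumPy's symmetric eigensolver (the matrix is an example; `eigh` is the routine specialized for symmetric matrices and returns orthonormal eigenvectors):

```python
# A = Q Lambda Q^T for a symmetric matrix.
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
lams, Q = np.linalg.eigh(A)

assert np.allclose(Q.T @ Q, np.eye(2))            # orthonormal eigenvectors
assert np.allclose(Q @ np.diag(lams) @ Q.T, A)    # A = Q Lambda Q^T
```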
product of pivots = determinant = product of eigenvalues
Eigenvalues vs. Pivots
For symmetric matrices the pivots and the eigenvalues have the same signs:
- The number of positive eigenvalues of A=AT equals the number of positive pivots.
All symmetric matrices are diagonalizable
Positive Definite Matrices
Symmetric matrices that have positive eigenvalues.

For 2 × 2 symmetric matrices $A = \begin{bmatrix} a & b \\ b & c \end{bmatrix}$:

- The eigenvalues of $A$ are positive if and only if $a > 0$ and $ac - b^2 > 0$.
- The eigenvalues of $A$ are positive if and only if $x^TAx$ is positive for all nonzero vectors $x$.
- If $A$ and $B$ are symmetric positive definite, so is $A + B$.
When a symmetric matrix has one of these five properties, it has them all (checked numerically in the sketch after this list):

- All $n$ pivots are positive
- All $n$ upper left determinants are positive
- All $n$ eigenvalues are positive
- $x^TAx$ is positive except at $x = 0$. This is the energy-based definition
- $A$ equals $R^TR$ for a matrix $R$ with independent columns
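A NumPy sketch testing several of the equivalent conditions on one example matrix (the energy test is sampled at random points rather than proved; Cholesky supplies one valid $R$, and it succeeds exactly when the pivots are positive):

```python
# Checking the positive-definite tests on a 3x3 example.
import numpy as np

A = np.array([[2.0, -1.0, 0.0],
              [-1.0, 2.0, -1.0],
              [0.0, -1.0, 2.0]])

# All upper left determinants positive:
assert all(np.linalg.det(A[:k, :k]) > 0 for k in range(1, 4))

# All eigenvalues positive:
assert np.all(np.linalg.eigvalsh(A) > 0)

# Energy x^T A x > 0 for a random sample of nonzero x:
rng = np.random.default_rng(0)
for _ in range(100):
    x = rng.standard_normal(3)
    assert x @ A @ x > 0

# A = R^T R with independent columns: Cholesky gives one such R.
R = np.linalg.cholesky(A).T
assert np.allclose(R.T @ R, A)
```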
Positive Semidefinite Matrices
Similar Matrices
DEFINITION: Let $M$ be any invertible matrix. Then $B = M^{-1}AM$ is similar to $A$.

(No change in $\lambda$'s) Similar matrices $A$ and $M^{-1}AM$ have the same eigenvalues. If $x$ is an eigenvector of $A$, then $M^{-1}x$ is an eigenvector of $B$. But two matrices can have the same repeated $\lambda$ and fail to be similar.
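A quick numeric check that similarity preserves eigenvalues (the matrices are random examples; $A$ is built symmetric so all eigenvalues are real and easy to compare after sorting):

```python
# B = M^{-1} A M shares the eigenvalues of A.
import numpy as np

rng = np.random.default_rng(1)
C = rng.standard_normal((3, 3))
A = C + C.T                              # symmetric, so real eigenvalues
M = rng.standard_normal((3, 3))          # almost surely invertible

B = np.linalg.inv(M) @ A @ M
assert np.allclose(np.sort(np.linalg.eigvals(A)),
                   np.sort(np.linalg.eigvals(B)))
```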
Jordan Form
- What is "Jordan Form"?
- For every $A$, we want to choose $M$ so that $M^{-1}AM$ is as nearly diagonal as possible.
- $J^T$ is similar to $J$: the matrix $M$ that produces the similarity happens to be the reverse identity.

(Jordan form) If $A$ has $s$ independent eigenvectors, it is similar to a matrix $J$ that has $s$ Jordan blocks on its diagonal. Some matrix $M$ puts $A$ into Jordan form:

$$M^{-1}AM = \begin{bmatrix} J_1 & & \\ & \ddots & \\ & & J_s \end{bmatrix} = J$$

- Jordan block: the eigenvalue is on the diagonal with 1's just above it. Each block in $J$ has one eigenvalue $\lambda_i$, one eigenvector, and 1's above the diagonal:

$$J_i = \begin{bmatrix} \lambda_i & 1 & & \\ & \ddots & \ddots & \\ & & \ddots & 1 \\ & & & \lambda_i \end{bmatrix}$$

- $A$ is similar to $B$ if they share the same Jordan form $J$, and not otherwise.
Singular Value Decomposition (SVD)
Two sets of singular vectors, $u$'s and $v$'s: the $u$'s are eigenvectors of $AA^T$ and the $v$'s are eigenvectors of $A^TA$.

The singular vectors $v_1, \ldots, v_r$ are in the row space of $A$. The outputs $u_1, \ldots, u_r$ are in the column space of $A$. The singular values $\sigma_1, \ldots, \sigma_r$ are all positive numbers. The equations $Av_i = \sigma_i u_i$ tell us:

$$A\begin{bmatrix} v_1 & \cdots & v_r \end{bmatrix} = \begin{bmatrix} u_1 & \cdots & u_r \end{bmatrix} \begin{bmatrix} \sigma_1 & & \\ & \ddots & \\ & & \sigma_r \end{bmatrix}$$

We need $n - r$ more $v$'s and $m - r$ more $u$'s, from the nullspace $N(A)$ and the left nullspace $N(A^T)$. They can be orthonormal bases for those two nullspaces. Include all the $v$'s and $u$'s in $V$ and $U$, so these matrices become square:

$$A\begin{bmatrix} v_1 & \cdots & v_r & \cdots & v_n \end{bmatrix} = \begin{bmatrix} u_1 & \cdots & u_r & \cdots & u_m \end{bmatrix} \begin{bmatrix} \sigma_1 & & & \\ & \ddots & & \\ & & \sigma_r & \\ & & & \end{bmatrix}$$

$V$ is now a square orthogonal matrix, with $V^{-1} = V^T$. So $AV = U\Sigma$ becomes $A = U\Sigma V^T$. This is the Singular Value Decomposition:

$$A = U\Sigma V^T = u_1\sigma_1v_1^T + \cdots + u_r\sigma_rv_r^T$$

The orthonormal columns of $U$ and $V$ are eigenvectors of $AA^T$ and $A^TA$.
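A NumPy sketch of the SVD and its rank-one expansion (the matrix is an example; `np.linalg.svd` returns the singular values in decreasing order):

```python
# A = U Sigma V^T = sum of sigma_i * u_i * v_i^T.
import numpy as np

A = np.array([[3.0, 0.0],
              [4.0, 5.0]])
U, s, Vt = np.linalg.svd(A)

# Rebuild A one rank-one piece at a time:
A_rebuilt = sum(s[i] * np.outer(U[:, i], Vt[i, :]) for i in range(len(s)))
assert np.allclose(A_rebuilt, A)

# v's are eigenvectors of A^T A, with eigenvalues sigma^2:
assert np.allclose(np.linalg.eigvalsh(A.T @ A)[::-1], s**2)
```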
- $A$ has the same row space as