These pages are a collection of my personal review notes on Matrix Analysis, mainly about matrices and the concepts surrounding them, such as spaces, norms, etc. These are the things that really matter in data science and in almost all machine learning algorithms. Hence, I have collected them in this form for the convenience of anyone who wants a quick desktop or mobile reference.
1 Algebraic and Analytic Structures
1.1 Group
1.2 Abelian Group
1.3 Ring (R, +, ⋅)
1.4 Equivalence Relation ≡
1.5 Partial Order ⪯
1.6 Majorization and Weak Majorization
1) Majorization

For x, y ∈ R^n, we say that x is majorized by y, written x ≺ y, if

$$\sum_{i=1}^{k} x_{[i]} \le \sum_{i=1}^{k} y_{[i]}, \quad k = 1, \dots, n-1, \qquad \text{and} \qquad \sum_{i=1}^{n} x_{[i]} = \sum_{i=1}^{n} y_{[i]},$$

where x_{[1]} ≥ x_{[2]} ≥ ⋯ ≥ x_{[n]} denotes the entries of x arranged in decreasing order.
2) Weak Majorization

For x, y ∈ R^n, we say that x is weakly majorized by y, written x ≺_w y, if

$$\sum_{i=1}^{k} x_{[i]} \le \sum_{i=1}^{k} y_{[i]}, \quad k = 1, \dots, n,$$

i.e., the partial-sum inequalities hold but the total sums need not be equal.
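As a quick numerical sanity check, here is a minimal Python sketch of these two definitions (the function names `majorizes` and `weakly_majorizes` are my own, not from any library):

```python
import numpy as np

def weakly_majorizes(y, x):
    """Return True if x is weakly majorized by y (x ≺_w y):
    every partial sum of the decreasingly sorted x is <= that of y."""
    xs = np.sort(x)[::-1]          # entries of x in decreasing order
    ys = np.sort(y)[::-1]          # entries of y in decreasing order
    return bool(np.all(np.cumsum(xs) <= np.cumsum(ys) + 1e-12))

def majorizes(y, x):
    """Return True if x is majorized by y (x ≺ y):
    weak majorization plus equal total sums."""
    return weakly_majorizes(y, x) and np.isclose(np.sum(x), np.sum(y))

# Classic example: the "flat" vector is majorized by any vector with the same sum.
x = np.array([2.0, 2.0, 2.0])
y = np.array([3.0, 2.0, 1.0])
print(majorizes(y, x))            # True: x ≺ y
print(weakly_majorizes(y, x))     # True as well
```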
1.7 Supremum and Infimum
Let T be a subset of a poset (S, ⪯). An element a ∈ S is said to be an infimum (greatest lower bound) of T if a ⪯ t for all t ∈ T, and every b ∈ S with b ⪯ t for all t ∈ T satisfies b ⪯ a. Dually, a supremum (least upper bound) of T is an upper bound of T that precedes every other upper bound.
1.8 Lattice
Let a, b ∈ S. Then inf{a, b} is also denoted by a ∧ b, called the meet of a and b; and sup{a, b} is denoted by a ∨ b, called the join of a and b. A poset (S, ⪯) is called a lattice if a ∧ b and a ∨ b exist for all a, b ∈ S.
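A concrete instance, sketched in Python with only the standard library: the positive integers ordered by divisibility form a lattice, with gcd as the meet and lcm as the join.

```python
from math import gcd

def meet(a, b):
    # In the divisibility lattice, inf{a, b} is the greatest common divisor.
    return gcd(a, b)

def join(a, b):
    # sup{a, b} is the least common multiple.
    return a * b // gcd(a, b)

print(meet(12, 18))  # 6
print(join(12, 18))  # 36
```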
2 Linear Spaces
2.1 Linear Space
A set χ is said to be a linear space (or vector space) over a field F if it is closed under vector addition and scalar multiplication, (χ, +) is an abelian group, and for all α, β ∈ F and x, y ∈ χ:

α(x + y) = αx + αy, (α + β)x = αx + βx, (αβ)x = α(βx), 1x = x.
2.2 Dimension and Basis
Vectors x_1, x_2, ..., x_m ∈ χ are said to be linearly independent if

$$\alpha_1 x_1 + \alpha_2 x_2 + \cdots + \alpha_m x_m = 0$$

implies α_1 = α_2 = ⋯ = α_m = 0. If m is the largest number of linearly independent vectors in χ, then m is the dimension of χ, and any m linearly independent vectors x_1, x_2, ..., x_m form a basis of χ.
2.3 Null Space and Range Space

For A ∈ C^{m×n}, the null space is N(A) = { x ∈ C^n : Ax = 0 } and the range space is R(A) = { Ax : x ∈ C^n }.
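As a numerical sketch, orthonormal bases for N(A) and R(A) can be read off from the singular vectors of A (the SVD appears in section 3.7); `null_and_range` is an illustrative name, not a library routine:

```python
import numpy as np

def null_and_range(A, tol=1e-10):
    """Orthonormal bases for the null space and range space of A,
    read off from the SVD A = U S V*."""
    U, s, Vh = np.linalg.svd(A)
    rank = int(np.sum(s > tol))
    null_basis = Vh[rank:].conj().T   # right-singular vectors for zero singular values
    range_basis = U[:, :rank]         # left-singular vectors for nonzero singular values
    return null_basis, range_basis

A = np.array([[1.0, 2.0], [2.0, 4.0]])    # rank 1
N, R = null_and_range(A)
print(np.allclose(A @ N, 0))              # True: A maps the null space to zero
print(R.shape)                            # (2, 1): the range is one-dimensional
```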
2.4 Normed Linear Space
For vectors:

$$\|x\|_p = \Big(\sum_{i=1}^{n} |x_i|^p\Big)^{1/p} \ (p \ge 1), \qquad \|x\|_\infty = \max_i |x_i|.$$

For matrices:

$$\|A\|_1 = \max_j \sum_i |a_{ij}|, \quad \|A\|_\infty = \max_i \sum_j |a_{ij}|, \quad \|A\|_2 = \sigma_1, \quad \|A\|_F = \Big(\sum_{i,j} |a_{ij}|^2\Big)^{1/2},$$

where σ_1 is the maximum singular value of A.
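All of these can be checked with `numpy.linalg.norm`; a small sketch:

```python
import numpy as np

x = np.array([3.0, -4.0])
print(np.linalg.norm(x, 1))       # 7.0  (sum of absolute values)
print(np.linalg.norm(x, 2))       # 5.0  (Euclidean norm)
print(np.linalg.norm(x, np.inf))  # 4.0  (max absolute value)

A = np.array([[1.0, 2.0], [3.0, 4.0]])
print(np.linalg.norm(A, 1))       # 6.0  (max column sum)
print(np.linalg.norm(A, np.inf))  # 7.0  (max row sum)
print(np.linalg.norm(A, 'fro'))   # Frobenius norm
# The spectral norm equals the largest singular value sigma_1:
print(np.isclose(np.linalg.norm(A, 2),
                 np.linalg.svd(A, compute_uv=False)[0]))  # True
```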
2.5 Inner Product Space
An inner product ⟨·, ·⟩ : χ × χ → F is a map that is linear in its first argument, conjugate symmetric (⟨x, y⟩ = ⟨y, x⟩*), and positive definite (⟨x, x⟩ > 0 for x ≠ 0). A linear space with an inner product is called an inner product space.
2.6 Gram-Schmidt Orthonormalization
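The procedure orthonormalizes linearly independent vectors by subtracting, from each vector, its projections onto the previously produced ones, then normalizing. A minimal Python sketch of the modified variant (the function name `gram_schmidt` is my own):

```python
import numpy as np

def gram_schmidt(X):
    """Orthonormalize the columns of X using modified Gram-Schmidt.
    Returns Q with orthonormal columns spanning the same space."""
    X = X.astype(float)
    n, m = X.shape
    Q = np.zeros((n, m))
    for j in range(m):
        v = X[:, j].copy()
        for i in range(j):
            v -= (Q[:, i] @ v) * Q[:, i]   # subtract projection onto q_i
        Q[:, j] = v / np.linalg.norm(v)    # normalize
    return Q

X = np.array([[1.0, 1.0], [0.0, 1.0], [1.0, 0.0]])
Q = gram_schmidt(X)
print(np.allclose(Q.T @ Q, np.eye(2)))  # True: the columns are orthonormal
```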
3 Matrix Factorizations and Decompositions
3.1 Eigenvalues and Eigenvectors
The characteristic polynomial of A is defined to be

$$C_A(\lambda) = \det(\lambda I - A).$$
A complex number λ satisfying C_A(λ) = 0 is called an eigenvalue of A, and a nonzero vector x ∈ C^n such that Ax = λx is called a right eigenvector of A corresponding to the eigenvalue λ.
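Numerically, eigenvalues and right eigenvectors can be computed with `numpy.linalg.eig`; a quick check of Ax = λx:

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 2.0]])
eigvals, eigvecs = np.linalg.eig(A)     # columns of eigvecs are right eigenvectors
print(eigvals)                          # eigenvalues 3 and 1 (order may vary)
for lam, x in zip(eigvals, eigvecs.T):
    print(np.allclose(A @ x, lam * x))  # True, True
```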
3.2 Spectrum
The spectrum σ(A) is the set of eigenvalues of A.
The spectral radius ρ(A) is the maximum modulus of the eigenvalues of A, i.e., ρ(A) = max_i |λ_i|.
3.3 Diagonalization
The characteristic polynomial factors as

$$C_A(\lambda) = \prod_{i=1}^{l} (\lambda - \lambda_i)^{n_i},$$

where n_i ≥ 1 and ∑_{i=1}^{l} n_i = n; n_i is the algebraic multiplicity of λ_i.
Eigenspace: ε_i = N(A − λ_i I).

Generalized eigenspace: ε̃_i = N[(A − λ_i I)^{n_i}].

A is diagonalizable if and only if dim ε_i = n_i for every i.
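When A has n linearly independent eigenvectors, stacking them as the columns of P makes P⁻¹AP diagonal; a quick NumPy check on a matrix with distinct eigenvalues:

```python
import numpy as np

A = np.array([[4.0, 1.0], [2.0, 3.0]])   # eigenvalues 5 and 2, so diagonalizable
eigvals, P = np.linalg.eig(A)            # columns of P are eigenvectors
D = np.linalg.inv(P) @ A @ P             # P^{-1} A P
print(np.allclose(D, np.diag(eigvals)))  # True: D is diagonal with the eigenvalues
```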
3.4 Jordan Canonical Form
Choosing a suitable basis from each generalized eigenspace ε̃_i (a set of Jordan chains) to form P, and transforming A by P⁻¹AP, we obtain a Jordan canonical form. We can also get P from the Jordan chains themselves, i.e., vectors satisfying (A − λ_i I) v_{j+1} = v_j with v_1 an eigenvector.
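NumPy has no Jordan-form routine (the computation is numerically unstable), but SymPy computes it exactly; a small sketch:

```python
import sympy as sp

# A non-diagonalizable matrix: eigenvalue 2 with algebraic multiplicity 2
# but geometric multiplicity 1.
A = sp.Matrix([[2, 1], [0, 2]])
P, J = A.jordan_form()                    # A = P J P^{-1}
print(J)                                  # Matrix([[2, 1], [0, 2]]): one Jordan block
print(sp.simplify(P * J * P.inv() - A))   # zero matrix
```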
3.5 QR Factorization
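The standard statement here is that any A can be factored as A = QR, with Q having orthonormal columns (e.g., produced by Gram-Schmidt, section 2.6) and R upper triangular; NumPy computes it directly:

```python
import numpy as np

A = np.random.default_rng(0).normal(size=(4, 3))
Q, R = np.linalg.qr(A)                    # reduced QR by default
print(np.allclose(Q.T @ Q, np.eye(3)))    # True: orthonormal columns
print(np.allclose(np.triu(R), R))         # True: R is upper triangular
print(np.allclose(Q @ R, A))              # True: A = QR
```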
3.6 Schur Factorization
Any A ∈ C^{n×n} with eigenvalues λ_1, ..., λ_n can be factored as

$$A = U T U^*,$$

where U is a unitary matrix and T is an upper triangular matrix whose diagonal entries are λ_1, ..., λ_n.
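SciPy computes the Schur factorization directly; note that `output='complex'` is needed to get the upper triangular T above, since the default real Schur form is only quasi-triangular:

```python
import numpy as np
from scipy.linalg import schur

A = np.random.default_rng(1).normal(size=(4, 4))
T, U = schur(A, output='complex')          # A = U T U^*
print(np.allclose(U @ T @ U.conj().T, A))  # True
print(np.allclose(np.triu(T), T))          # True: T is upper triangular
print(np.diag(T))                          # the eigenvalues of A on the diagonal
```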
3.7 Singular Value Decomposition (SVD)
Any A ∈ C^{m×n} can be factored as

$$A = U S V^*,$$

where U ∈ C^{m×m} and V ∈ C^{n×n} are unitary and S ∈ R^{m×n} is diagonal with non-negative entries.

The left-singular vectors of A (columns of U) are a set of orthonormal eigenvectors of AA^*.

The right-singular vectors of A (columns of V) are a set of orthonormal eigenvectors of A^*A.

The diagonal entries of S are the square roots of the non-negative eigenvalues of both A^*A and AA^*, known as the singular values.
E.g., for a square matrix these relations can be verified directly.
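A minimal NumPy sketch, using a random 3×3 matrix chosen only for illustration (the properties hold for rectangular A as well):

```python
import numpy as np

A = np.random.default_rng(2).normal(size=(3, 3))
U, s, Vh = np.linalg.svd(A)                     # A = U S V*
# Singular values are the square roots of the eigenvalues of A* A:
eig_AstarA = np.sort(np.linalg.eigvalsh(A.T @ A))[::-1]
print(np.allclose(s**2, eig_AstarA))            # True
# Columns of V (rows of Vh) are eigenvectors of A* A:
for sv, v in zip(s, Vh):
    print(np.allclose(A.T @ (A @ v), sv**2 * v))  # True for each singular pair
```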
3.8 Spectral Decomposition
For a diagonalizable A with eigenvalues λ_1, ..., λ_n,

$$A = \sum_{i=1}^{n} \lambda_i G_i,$$

where G_i = α_i β_i^T, with α_i the right eigenvectors of A and β_i^T the corresponding left eigenvectors (the rows of the inverse of the matrix whose columns are the α_i). There are some properties of G_i: G_i G_j = 0 for i ≠ j, G_i² = G_i, and ∑_{i=1}^{n} G_i = I.
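A NumPy sketch constructing the projectors G_i from the right eigenvectors (columns of X) and left eigenvectors (rows of X⁻¹):

```python
import numpy as np

A = np.array([[4.0, 1.0], [2.0, 3.0]])        # diagonalizable, eigenvalues 5 and 2
lams, X = np.linalg.eig(A)                    # columns of X: right eigenvectors
Y = np.linalg.inv(X)                          # rows of Y: left eigenvectors
G = [np.outer(X[:, i], Y[i, :]) for i in range(len(lams))]

print(np.allclose(sum(l * g for l, g in zip(lams, G)), A))  # A = sum lambda_i G_i
print(np.allclose(G[0] @ G[1], 0))            # G_i G_j = 0 for i != j
print(np.allclose(G[0] @ G[0], G[0]))         # G_i^2 = G_i
print(np.allclose(G[0] + G[1], np.eye(2)))    # sum G_i = I
```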
3.9 Matrix Functions
For an analytic function f(z) = ∑_k c_k z^k, the matrix function f(A) = ∑_k c_k A^k is defined by the same power series; e.g., e^A = ∑_{k=0}^{∞} A^k / k!. The identity

$$e^{A+B} = e^A e^B$$

holds when AB = BA, A, B ∈ C^{n×n}.
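SciPy's `scipy.linalg.expm` computes the matrix exponential; the identity can be checked on commuting matrices (below, B is a polynomial in A, so AB = BA) and seen to fail for non-commuting ones:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[1.0, 2.0], [0.0, 3.0]])
B = 2 * A + np.eye(2)                                # B commutes with A
print(np.allclose(A @ B, B @ A))                     # True
print(np.allclose(expm(A + B), expm(A) @ expm(B)))   # True: identity holds

# For non-commuting matrices it generally fails:
C = np.array([[0.0, 1.0], [0.0, 0.0]])
D = np.array([[0.0, 0.0], [1.0, 0.0]])
print(np.allclose(expm(C + D), expm(C) @ expm(D)))   # False
```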
4 Matrix Analysis
4.1 Positive Definite
For a Hermitian matrix A ∈ C^{n×n}, the following three statements are equivalent:

a) x^*Ax > 0 for all nonzero x ∈ C^n (A is positive definite);

b) all eigenvalues of A are positive;

c) A = B^*B for some nonsingular B.
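In practice, a fast test for positive definiteness of a Hermitian matrix is to attempt a Cholesky factorization, which succeeds exactly when A is positive definite; a small sketch:

```python
import numpy as np

def is_positive_definite(A):
    """Test positive definiteness of a Hermitian matrix via Cholesky."""
    try:
        np.linalg.cholesky(A)
        return True
    except np.linalg.LinAlgError:
        return False

print(is_positive_definite(np.array([[2.0, 1.0], [1.0, 2.0]])))  # True  (eigenvalues 3, 1)
print(is_positive_definite(np.array([[1.0, 2.0], [2.0, 1.0]])))  # False (eigenvalue -1)
```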
4.2 Rayleigh Quotient
For a Hermitian A, the Rayleigh quotient is R(x) = x^*Ax / x^*x. Let λ_min = λ_1 ≤ λ_2 ≤ ⋯ ≤ λ_n = λ_max, let 1 ≤ i_1 < i_2 < ⋯ < i_k ≤ n be integers, and let x_{i_1}, x_{i_2}, ..., x_{i_k} be orthonormal vectors such that A x_{i_p} = λ_{i_p} x_{i_p}. With S = span{x_{i_1}, x_{i_2}, ..., x_{i_k}}, we have

$$\lambda_{i_1} \le \frac{x^* A x}{x^* x} \le \lambda_{i_k} \quad \text{for all nonzero } x \in S.$$

In particular, λ_min ≤ R(x) ≤ λ_max for all nonzero x ∈ C^n.
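A numerical illustration of the outer bound λ_min ≤ R(x) ≤ λ_max with NumPy:

```python
import numpy as np

rng = np.random.default_rng(3)
M = rng.normal(size=(4, 4))
A = (M + M.T) / 2                      # a real symmetric (Hermitian) matrix
lams = np.linalg.eigvalsh(A)           # sorted ascending: lambda_min ... lambda_max

def rayleigh(A, x):
    return (x @ A @ x) / (x @ x)

for _ in range(5):
    x = rng.normal(size=4)
    r = rayleigh(A, x)
    print(lams[0] - 1e-12 <= r <= lams[-1] + 1e-12)   # True every time
```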
4.3 Hermitian Matrix
Hermitian matrix: A = A^*. Skew-Hermitian matrix: A = −A^*.
Theorem: If A is a Hermitian matrix, then

a) x^*Ax is real for every x ∈ C^n;

b) the eigenvalues λ(A) are real;

c) S^*AS is Hermitian for any S ∈ C^{n×m}.
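A quick NumPy check of a) and b) on a random Hermitian matrix:

```python
import numpy as np

rng = np.random.default_rng(4)
M = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
A = (M + M.conj().T) / 2                          # Hermitian: A = A*
print(np.allclose(A, A.conj().T))                 # True
# b) the eigenvalues are real:
print(np.allclose(np.linalg.eigvals(A).imag, 0))  # True
# a) x* A x is real for any x:
x = rng.normal(size=3) + 1j * rng.normal(size=3)
print(np.isclose((x.conj() @ A @ x).imag, 0))     # True
```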
5 Special Topics
5.1 Stochastic Matrix
A nonnegative matrix S ∈ R^{n×n} is said to be a stochastic matrix if each of its row sums is equal to one. S satisfies Se = e, which means that 1 is an eigenvalue of S with corresponding eigenvector e = (1, 1, ..., 1)^T.
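A small NumPy check that a row-stochastic matrix fixes e = (1, ..., 1)^T and has spectral radius 1:

```python
import numpy as np

S = np.array([[0.5, 0.5, 0.0],
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])              # nonnegative, each row sums to 1
e = np.ones(3)
print(np.allclose(S @ e, e))                 # True: S e = e, so 1 is an eigenvalue
print(np.max(np.abs(np.linalg.eigvals(S))))  # approximately 1.0: the spectral radius
```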