Vector Spaces
- Rules of a Vector Space
- The Null Space of a Matrix
- The Column Space of a Matrix
- Kernel and Range of a Linear Transformation
- Coordinate Systems
- The Coordinate Mapping
- Dimension of $\mathrm{Nul}\,A$ and $\mathrm{Col}\,A$
- Row Space
- The Rank Theorem
- Rank and the Invertible Matrix Theorem
- Applications to Linear Difference Equations
- Application to Markov Chains
Rules of a Vector Space

For each $u, v$ in a vector space $V$ and each scalar $c$, the vectors $u+v$ and $cu$ are in $V$ (closure under addition and scalar multiplication).
The Null Space of a Matrix
The set of all $x$ that satisfy $Ax=0$ is the null space of the matrix $A$, written $\mathrm{Nul}\,A$.
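As a quick numerical check (a minimal numpy sketch; the matrix is made up for illustration):

```python
import numpy as np

# A hypothetical 2x3 matrix; its null space is the solution set of Ax = 0.
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])  # second row is twice the first, so rank is 1

# x = (1, 1, -1) satisfies Ax = 0: 1 + 2 - 3 = 0 and 2 + 4 - 6 = 0,
# so x is in Nul A.
x = np.array([1.0, 1.0, -1.0])
print(A @ x)  # → [0. 0.]
```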
The Column Space of a Matrix

The column space of an $m \times n$ matrix $A$, written as $\mathrm{Col}\,A$, is the set of all linear combinations of the columns of $A$. If $A=[a_1 \cdots a_n]$, then $\mathrm{Col}\,A = \mathrm{span}\{a_1, \cdots, a_n\}$.
Note: the column space of an $m \times n$ matrix $A$ is all of $\mathbb{R}^m$ if and only if the equation $Ax=b$ has a solution for each $b$ in $\mathbb{R}^m$.
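This can be checked numerically: $\mathrm{Col}\,A = \mathbb{R}^m$ exactly when $A$ has a pivot in every row, i.e. $\mathrm{rank}\,A = m$. A small numpy sketch (matrix chosen for illustration):

```python
import numpy as np

# A 2x3 matrix with a pivot in every row.
A = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 3.0]])  # m = 2 rows

m = A.shape[0]
# rank A = m means the columns span all of R^m,
# so Ax = b is solvable for every b in R^m.
print(np.linalg.matrix_rank(A) == m)  # → True
```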
Kernel and Range of a Linear Transformation

For a linear transformation $T: V \rightarrow W$, the kernel of $T$ is the set of all $u$ in $V$ such that $T(u)=0$; the range of $T$ is the set of all vectors in $W$ of the form $T(x)$ for some $x$ in $V$.
If the transformation is a matrix transformation $x \mapsto Ax$, then the kernel of $T$ is $\mathrm{Nul}\,A$ and the range of $T$ is $\mathrm{Col}\,A$.
$\beta=\{b_1, b_2, \cdots, b_n\}$ is a basis of $H$ if $\beta$ is a linearly independent set and $H=\mathrm{span}\{b_1, b_2, \cdots, b_n\}$.
Coordinate Systems
Suppose $\beta=\{b_1, b_2, \cdots, b_n\}$ is a basis for $V$ and $x$ is in $V$. The coordinates of $x$ relative to the basis $\beta$ are the weights $c_1, \cdots, c_n$ such that $x = c_1 b_1 + \cdots + c_n b_n$.
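Finding the weights amounts to solving a linear system: stack the basis vectors as columns of a matrix $B$ and solve $Bc = x$. A minimal sketch with a made-up basis of $\mathbb{R}^2$:

```python
import numpy as np

# Hypothetical basis b1 = (1, 0), b2 = (1, 2), stacked as columns of B.
B = np.array([[1.0, 1.0],
              [0.0, 2.0]])
x = np.array([3.0, 4.0])

# The coordinate vector [x]_beta solves B c = x, i.e. x = c1*b1 + c2*b2.
c = np.linalg.solve(B, x)
print(c)  # → [1. 2.]
```

Indeed $1 \cdot (1,0) + 2 \cdot (1,2) = (3,4)$.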
The Coordinate Mapping
Let $\beta=\{b_1, b_2, \cdots, b_n\}$ be a basis of a vector space $V$. Then the coordinate mapping $x \mapsto [x]_{\beta}$ is a one-to-one linear transformation from $V$ onto $\mathbb{R}^n$.
Dimension of $\mathrm{Nul}\,A$ and $\mathrm{Col}\,A$

The pivot columns of a matrix $A$ form a basis for $\mathrm{Col}\,A$, so the dimension of $\mathrm{Col}\,A$ is the number of pivot columns in $A$. The dimension of $\mathrm{Nul}\,A$ is the number of free variables in the equation $Ax=0$.
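Since the number of pivot columns equals the rank, both dimensions can be read off numerically: $\dim \mathrm{Col}\,A = \mathrm{rank}\,A$ and $\dim \mathrm{Nul}\,A = n - \mathrm{rank}\,A$ (the free variables). A sketch with an illustrative matrix:

```python
import numpy as np

A = np.array([[1.0, 2.0, 0.0, 1.0],
              [0.0, 0.0, 1.0, 2.0],
              [1.0, 2.0, 1.0, 3.0]])  # row 3 = row 1 + row 2, so only 2 pivots

n = A.shape[1]
dim_col = np.linalg.matrix_rank(A)  # number of pivot columns
dim_nul = n - dim_col               # number of free variables in Ax = 0
print(dim_col, dim_nul)  # → 2 2
```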
Row Space

If two matrices $A$ and $B$ are row equivalent, then their row spaces are the same. If $B$ is in echelon form, the nonzero rows of $B$ form a basis for the row space of $A$ as well as for that of $B$.
The Rank Theorem

The rank of $A$ is the dimension of the column space of $A$. The dimensions of $\mathrm{Col}\,A$ and $\mathrm{Row}\,A$ for an $m \times n$ matrix $A$ are equal, and

$$\mathrm{rank}\,A + \dim \mathrm{Nul}\,A = n$$
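Both halves of the theorem can be verified numerically; here $\dim \mathrm{Nul}\,A$ is obtained by counting the (numerically) zero singular values, and row rank is compared to column rank via the transpose. A sketch with a made-up matrix:

```python
import numpy as np

A = np.array([[2.0, 4.0],
              [1.0, 2.0],
              [3.0, 6.0]])  # 3x2 matrix whose second column is twice the first

n = A.shape[1]
rank = np.linalg.matrix_rank(A)           # dim Col A
s = np.linalg.svd(A, compute_uv=False)
dim_nul = n - int(np.sum(s > 1e-10))      # dim Nul A: count of zero singular values
print(rank, dim_nul, rank + dim_nul == n)  # → 1 1 True
print(rank == np.linalg.matrix_rank(A.T))  # row rank equals column rank → True
```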
Rank and the Invertible Matrix Theorem

Let $A$ be an $n \times n$ matrix. Then the following statements are each equivalent to the statement that $A$ is an invertible matrix.

m. The columns of $A$ form a basis of $\mathbb{R}^n$
n. $\mathrm{Col}\,A = \mathbb{R}^n$
o. $\dim \mathrm{Col}\,A = n$
p. $\mathrm{rank}\,A = n$
q. $\mathrm{Nul}\,A = \{0\}$
r. $\dim \mathrm{Nul}\,A = 0$
In practice, the effective rank of a matrix $A$ is often determined from a singular value decomposition of $A$.
Let $\beta=\{b_1, b_2, \cdots, b_n\}$ and $C=\{c_1, \cdots, c_n\}$ be bases of a vector space $V$. Then there is a unique $n \times n$ matrix $\mathop{P}\limits_{C\leftarrow B}$ such that

$$[x]_C = \mathop{P}\limits_{C\leftarrow B} [x]_B$$

The columns of $\mathop{P}\limits_{C\leftarrow B}$ are the $C$-coordinate vectors of the vectors in the basis $\beta$. That is,

$$\mathop{P}\limits_{C\leftarrow B} = [\,[b_1]_C \ [b_2]_C \ \cdots \ [b_n]_C\,]$$
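In $\mathbb{R}^n$ the columns $[b_j]_C$ can be computed by solving $C p = b_j$ for each $j$, which for all columns at once gives $\mathop{P}\limits_{C\leftarrow B} = C^{-1}B$ when the basis vectors are stacked as matrix columns. A sketch with made-up bases of $\mathbb{R}^2$:

```python
import numpy as np

# Hypothetical bases B = {b1, b2} and C = {c1, c2} of R^2, as matrix columns.
B = np.array([[1.0, 1.0],
              [0.0, 1.0]])
C = np.array([[1.0, 0.0],
              [1.0, 2.0]])

# Column j of P_{C<-B} is [b_j]_C, i.e. the solution of C p = b_j;
# solving for all columns at once gives P = C^{-1} B.
P = np.linalg.solve(C, B)

# Check on a sample vector: [x]_C should equal P [x]_B.
x_B = np.array([2.0, 3.0])   # coordinates of some x relative to B
x = B @ x_B                  # the vector itself, in standard coordinates
x_C = np.linalg.solve(C, x)  # its coordinates relative to C
print(np.allclose(x_C, P @ x_B))  # → True
```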
Applications to Linear Difference Equations

If $a_n \ne 0$ and if $\{z_k\}$ is given, the equation

$$y_{k+n} + a_1 y_{k+n-1} + a_2 y_{k+n-2} + \cdots + a_n y_k = z_k \quad \text{for all } k$$

has a unique solution whenever $y_0, \cdots, y_{n-1}$ are specified.
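The uniqueness is constructive: once $y_0, \cdots, y_{n-1}$ are given, the equation can be rearranged to $y_{k+n} = z_k - a_1 y_{k+n-1} - \cdots - a_n y_k$ and iterated. A sketch (the helper name and the Fibonacci example are illustrative, not from the text):

```python
def solve_difference_eq(coeffs, y_init, z, steps):
    """Iterate y_{k+n} = z(k) - a_1*y_{k+n-1} - ... - a_n*y_k.

    coeffs = [a_1, ..., a_n], y_init = [y_0, ..., y_{n-1}].
    """
    y = list(y_init)
    n = len(coeffs)
    for k in range(steps):
        # coeffs[i] is a_{i+1}, which multiplies y_{k+n-1-i}.
        y.append(z(k) - sum(coeffs[i] * y[k + n - 1 - i] for i in range(n)))
    return y

# Second-order example: y_{k+2} - y_{k+1} - y_k = 0 (a_1 = a_2 = -1, z_k = 0).
# With y_0 = 0, y_1 = 1 the unique solution is the Fibonacci sequence.
print(solve_difference_eq([-1, -1], [0, 1], lambda k: 0, 8))
# → [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```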
The set $H$ of all solutions of the $n$th-order homogeneous linear difference equation $y_{k+n} + a_1 y_{k+n-1} + a_2 y_{k+n-2} + \cdots + a_n y_k = 0$ for all $k$ is an $n$-dimensional vector space.
Application to Markov Chains

A vector with nonnegative entries that add up to 1 is called a probability vector. A stochastic matrix is a square matrix whose columns are probability vectors. A Markov chain is a sequence of probability vectors $x_0, x_1, x_2, \cdots$, together with a stochastic matrix $P$, such that

$$x_1 = P x_0, \quad x_2 = P x_1, \quad \cdots, \quad x_n = P x_{n-1}$$