Linear Algebra Outline (Year 1, Semester 1)

Hello, fellow XJTLU students! With finals just around the corner, I put in a lot of effort and compiled this outline to the best of my ability, in the hope of not failing. The content is elementary, but if it reaches you and helps your study even a little, nothing could be better. Also, I organized it following the MTH007 module, so depending on your module some knowledge points may be missing; everyone is welcome to add to it in the comments. My knowledge is limited, so if anything is wrong, corrections are welcome. ^_^

This blog is for study and exchange only; it will be taken down upon request.




Chapter 1 : Linear Equations in Linear Algebra


1.1 Systems of Linear Equations


1.1.1 Linear Equation


  • A linear equation in the variables $x_1, x_2, \cdots, x_n$ is an equation that can be written in the form $a_1x_1 + a_2x_2 + \cdots + a_nx_n = b$, where $b$ and the coefficients $a_1, a_2, \cdots, a_n$ are real or complex numbers.
  • A system of linear equations is a collection of one or more linear equations involving the same variables.
  • A solution of the system is a list $(s_1, s_2, \cdots, s_n)$ of numbers that makes each equation a true statement when the values $s_1, s_2, \cdots, s_n$ are substituted for $x_1, x_2, \cdots, x_n$ respectively.
  • The set of all possible solutions is called the solution set of the linear system.
  • Two linear systems are called equivalent if they have the same solution set.

A system of linear equations has :

  1. no solution
  2. exactly one solution
  3. infinitely many solutions

A system of linear equations is said to be consistent (相容的) if it has either one solution or infinitely many solutions.

A system is inconsistent (不相容) if it has no solution.


1.1.2 Matrix


With the coefficients of each variable aligned in columns, the resulting matrix is called the coefficient matrix (系数矩阵) of the system.


An augmented matrix (增广矩阵) of a system consists of the coefficient matrix with an added column containing the constants from the right sides of the equations.


1.1.3 Elementary Row Operations


  1. Replacement (倍加变换) : Replace one row by the sum of itself and a multiple of another row.
  2. Interchange (对换变换) : Interchange two rows.
  3. Scaling (倍乘变换) : Multiply all entries in a row by a nonzero constant.

Two matrices are called row equivalent if there is a sequence of elementary row operations that transforms one matrix into the other.
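To make these operations concrete, here is a minimal sketch in Python with SymPy (the matrix is a made-up example; the library choice is mine, and hand computation works just as well):

```python
import sympy as sp

# A made-up 3x3 matrix to demonstrate the three elementary row operations
A = sp.Matrix([[1, 2, 1],
               [2, 4, 0],
               [0, 1, 3]])

B = A.copy()
B[1, :] = B[1, :] - 2 * B[0, :]          # Replacement: R2 <- R2 - 2*R1
B.row_swap(1, 2)                         # Interchange: R2 <-> R3
B[2, :] = sp.Rational(-1, 2) * B[2, :]   # Scaling: R3 <- (-1/2)*R3

# A and B are row equivalent, so they share the same reduced echelon form
assert A.rref() == B.rref()
```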



1.2 Row Reduction and Echelon Forms


1.2.1 Echelon Form


A nonzero row or column in a matrix means a row or column that contains at least one nonzero entry, and a leading entry (先导元素) of a row refers to the leftmost nonzero entry in a nonzero row.


A rectangular matrix is in echelon form if it has the following three properties:

  1. All nonzero rows are above any rows of all zeros.
  2. Each leading entry of a row is in a column to the right of the leading entry of the
    row above it.
  3. All entries in a column below a leading entry are zeros.

If a matrix in echelon form satisfies the following additional conditions, then it is in reduced echelon form :

  1. The leading entry in each nonzero row is 1.
  2. Each leading 1 is the only nonzero entry in its column.

The reduced echelon form one obtains from a matrix is unique.


Theorem 1 : Uniqueness of the Reduced Echelon Form

Each matrix is row equivalent to one and only one reduced echelon matrix.


1.2.2 Pivot Position

A pivot position in a matrix $A$ is a location in $A$ that corresponds to a leading 1 in the reduced echelon form of $A$. A pivot column is a column of $A$ that contains a pivot position.


1.2.3 Solutions of Linear Systems


The variables corresponding to pivot columns in the matrix are called basic variables (基本变量), and the remaining variables are called free variables (自由变量). (For instance, if $x_1$ and $x_2$ correspond to pivot columns, the other variable $x_3$ is free.)


Theorem 2 : Existence and Uniqueness Theorem

A linear system is consistent if and only if the rightmost column of the augmented matrix is not a pivot column. If a linear system is consistent, then the solution set contains either

(i) a unique solution, when there are no free variables, or

(ii) infinitely many solutions, when there is at least one free variable.
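In practice, one applies this theorem by row reducing the augmented matrix; a small sketch with SymPy (the system is a made-up example with one free variable):

```python
import sympy as sp

# Augmented matrix [A | b] of a made-up system
aug = sp.Matrix([[1, 2, 1, 4],
                 [0, 2, -1, 1]])

rref_form, pivots = aug.rref()
n_vars = aug.cols - 1

consistent = n_vars not in pivots               # rightmost column must not be a pivot column
unique = consistent and len(pivots) == n_vars   # consistent and no free variables
print(consistent, unique)                       # True False: infinitely many solutions
```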



1.3 Vector Equations


A matrix with only one column is called a column vector, or simply a vector.

The set of all vectors with $n$ entries is denoted by $\mathbb{R}^n$.


Definition

If $\vec{v}_1, \cdots, \vec{v}_p$ are in $\mathbb{R}^n$, then the set of all linear combinations of $\vec{v}_1, \cdots, \vec{v}_p$ is denoted by $\mathrm{Span}\{\vec{v}_1, \cdots, \vec{v}_p\}$ and is called the subset of $\mathbb{R}^n$ spanned by $\vec{v}_1, \cdots, \vec{v}_p$.



1.4 Matrix Equation

If $A$ is an $m \times n$ matrix with columns $\vec{a}_1, \cdots, \vec{a}_n$, and if $\vec{x}$ is in $\mathbb{R}^n$, then the product of $A$ and $\vec{x}$, denoted by $A\vec{x}$, is the linear combination of the columns of $A$ using the corresponding entries in $\vec{x}$ as weights.


$A\vec{x}$ is defined only if the number of columns of $A$ equals the number of entries in $\vec{x}$.
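A quick numerical check of this definition (NumPy; the entries are made up): $A\vec{x}$ agrees with the linear combination of the columns of $A$ weighted by the entries of $\vec{x}$.

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [0.0, 1.0],
              [3.0, -1.0]])    # 3x2, so x must have exactly 2 entries
x = np.array([2.0, -1.0])

by_product = A @ x
by_columns = x[0] * A[:, 0] + x[1] * A[:, 1]   # weights are the entries of x
assert np.allclose(by_product, by_columns)
```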


Theorem 3

Let $A$ be an $m \times n$ matrix. Then the following statements are logically equivalent.

  1. For each $\vec{b}$ in $\mathbb{R}^m$, the equation $A\vec{x} = \vec{b}$ has a solution.
  2. Each $\vec{b}$ in $\mathbb{R}^m$ is a linear combination of the columns of $A$.
  3. The columns of $A$ span $\mathbb{R}^m$.
  4. $A$ has a pivot position in every row.

The square matrix with 1's on the diagonal and 0's elsewhere is called an identity matrix and is denoted by $I$ (or $I_n$ when the size $n$ matters).



1.5 Homogeneous Linear Systems (齐次线性系统)


A system of linear equations is said to be homogeneous if it can be written in the form $A\vec{x} = \vec{0}$, where $A$ is an $m \times n$ matrix and $\vec{0}$ is the zero vector in $\mathbb{R}^m$.


Such a system $A\vec{x} = \vec{0}$ always has at least one solution, namely $\vec{x} = \vec{0}$; this solution is usually called the trivial solution. The important question is whether there exists a nontrivial solution, that is, a nonzero vector $\vec{x}$ that satisfies $A\vec{x} = \vec{0}$.


An equation of the form $\vec{x} = s\vec{u} + t\vec{v}$ is called a parametric vector equation (参数向量方程) of the plane. Whenever a solution set is described explicitly with vectors in this way, we say that the solution is in parametric vector form.


Theorem 4

Suppose the equation $A\vec{x} = \vec{b}$ is consistent for some given $\vec{b}$, and let $\vec{p}$ be a solution. Then the solution set of $A\vec{x} = \vec{b}$ is the set of all vectors of the form $\vec{w} = \vec{p} + \vec{v}_h$, where $\vec{v}_h$ is any solution of the homogeneous equation $A\vec{x} = \vec{0}$.
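SymPy can return the solution set in exactly this form, a particular solution plus the homogeneous part; a sketch on a made-up consistent system:

```python
import sympy as sp

A = sp.Matrix([[1, 2, 1],
               [0, 1, -1]])
b = sp.Matrix([4, 1])

# General solution in parametric vector form: particular solution + free parameters
x_general, params = A.gauss_jordan_solve(b)
print(x_general)    # entries involve the free parameter(s) in params
print(params)       # e.g. Matrix([[tau0]])

# The homogeneous part v_h ranges over the null space of A
print(A.nullspace())
```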



1.7 Linear Independence

An indexed set of vectors $\{\vec{v}_1, \cdots, \vec{v}_p\}$ in $\mathbb{R}^n$ is said to be linearly independent if the vector equation $x_1\vec{v}_1 + x_2\vec{v}_2 + \cdots + x_p\vec{v}_p = \vec{0}$ has only the trivial solution. The set $\{\vec{v}_1, \cdots, \vec{v}_p\}$ is said to be linearly dependent if there exists a nontrivial solution.


Theorem 5 : Characterization of Linearly Dependent Sets

An indexed set $S = \{\vec{v}_1, \cdots, \vec{v}_p\}$ of two or more vectors is linearly dependent if and only if at least one of the vectors in $S$ is a linear combination of the others.


Theorem 6

Any set $\{\vec{v}_1, \cdots, \vec{v}_p\}$ in $\mathbb{R}^n$ is linearly dependent if $p > n$ or if it contains the zero vector.
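A common computational test (a sketch with NumPy; the vectors are made up): the set is independent exactly when the matrix having them as columns has full column rank.

```python
import numpy as np

v1 = np.array([1.0, 0.0, 2.0])
v2 = np.array([0.0, 1.0, 1.0])
v3 = np.array([1.0, 1.0, 3.0])   # v3 = v1 + v2, so the set is dependent

V = np.column_stack([v1, v2, v3])
independent = np.linalg.matrix_rank(V) == V.shape[1]
print(independent)               # False
```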



1.8 Introduction to Linear Transformations (线性变换)


  • A transformation (or function or mapping) $T$ from $\mathbb{R}^n$ to $\mathbb{R}^m$ is a rule that assigns to each vector $\vec{x}$ in $\mathbb{R}^n$ a vector $T(\vec{x})$ in $\mathbb{R}^m$.
  • The set $\mathbb{R}^n$ is called the domain of $T$, and $\mathbb{R}^m$ is called the codomain of $T$.
  • For $\vec{x}$ in $\mathbb{R}^n$, the vector $T(\vec{x})$ in $\mathbb{R}^m$ is called the image of $\vec{x}$, and the set of all images $T(\vec{x})$ is called the range of $T$.
  • A transformation $T:\mathbb{R}^2 \to \mathbb{R}^2$ defined by $T(\vec{x}) = A\vec{x}$ with $A$ of the form $\begin{bmatrix} 1 & k \\ 0 & 1 \end{bmatrix}$ (or its transpose) is called a shear transformation (剪切变换).

Definition : Linear Transformation

A transformation $T$ is linear if:
(i) $T(\vec{u} + \vec{v}) = T(\vec{u}) + T(\vec{v})$ for all $\vec{u}, \vec{v}$ in the domain of $T$;
(ii) $T(c\vec{u}) = cT(\vec{u})$ for all scalars $c$ and all $\vec{u}$ in the domain of $T$.

Given a scalar $r$, define $T:\mathbb{R}^2 \to \mathbb{R}^2$ by $T(\vec{x}) = r\vec{x}$. Then $T$ is called a contraction when $0 \le r \le 1$ and a dilation when $r > 1$.



1.9 The Matrix of a Linear Transformation


Theorem 7

Let $T:\mathbb{R}^n \to \mathbb{R}^m$ be a linear transformation. Then there exists a unique matrix $A$ such that $T(\vec{x}) = A\vec{x}$ for all $\vec{x}$ in $\mathbb{R}^n$. In fact, $A$ is the $m \times n$ matrix whose $j$th column is the vector $T(\vec{e}_j)$, where $\vec{e}_j$ is the $j$th column of the identity matrix in $\mathbb{R}^n$; $A$ is called the standard matrix for the linear transformation $T$.
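Theorem 7 is also a recipe: apply $T$ to each column of the identity matrix and collect the results as columns. A sketch with NumPy, where the transformation (a rotation of $\mathbb{R}^2$ by 90°) is a made-up example:

```python
import numpy as np

def T(x):
    # A hypothetical linear map: rotate R^2 counterclockwise by 90 degrees
    return np.array([-x[1], x[0]])

n = 2
E = np.eye(n)
A = np.column_stack([T(E[:, j]) for j in range(n)])   # columns are T(e_j)
print(A)                                              # [[0, -1], [1, 0]]

x = np.array([3.0, 4.0])
assert np.allclose(T(x), A @ x)                       # T(x) = A x for all x
```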


Definition : onto (满射)

A mapping $T:\mathbb{R}^n \to \mathbb{R}^m$ is said to be onto $\mathbb{R}^m$ if each $\vec{b}$ in $\mathbb{R}^m$ is the image of at least one $\vec{x}$ in $\mathbb{R}^n$. That is, $T$ maps $\mathbb{R}^n$ onto $\mathbb{R}^m$ if for each $\vec{b}$ in the codomain $\mathbb{R}^m$ there exists at least one solution of $T(\vec{x}) = \vec{b}$; equivalently, the columns of the standard matrix $A$ span $\mathbb{R}^m$.


Definition : one-to-one

A mapping $T:\mathbb{R}^n \to \mathbb{R}^m$ is said to be one-to-one if each $\vec{b}$ in $\mathbb{R}^m$ is the image of at most one $\vec{x}$ in $\mathbb{R}^n$. That is, $T$ is one-to-one if and only if the equation $T(\vec{x}) = \vec{0}$ has only the trivial solution; equivalently, the columns of the standard matrix $A$ are linearly independent.



Chapter 2 : Matrix Algebra


2.1 Matrix Operations


Definition : Diagonal Matrix (对角矩阵)

A diagonal matrix is a square $n \times n$ matrix whose nondiagonal entries are zero.


Two matrices are equal if they have the same size and if their corresponding columns are equal.


2.1.1 Sums and Scalar Multiples


(1) $A + B = B + A$

(2) $(A + B) + C = A + (B + C)$

(3) $A + 0 = A$

(4) $r(A + B) = rA + rB$

(5) $(r + s)A = rA + sA$

(6) $r(sA) = (rs)A$


2.1.2 Matrix Multiplication


If $A$ is an $m \times n$ matrix and $B$ is an $n \times p$ matrix with columns $\vec{b}_1, \cdots, \vec{b}_p$, then the product $AB$ is the $m \times p$ matrix whose columns are $A\vec{b}_1, \cdots, A\vec{b}_p$.


(1) $A(BC) = (AB)C$

(2) $A(B + C) = AB + AC$

(3) $(B + C)A = BA + CA$

(4) $r(AB) = (rA)B = A(rB)$

(5) $I_mA = A = AI_n$

(6) In general, $AB \ne BA$

(7) $AB = AC \ \nRightarrow\ B = C$

(8) $AB = 0 \ \nRightarrow\ B = 0$ or $A = 0$


2.1.3 The Transpose of a Matrix


(1) $(A^T)^T = A$

(2) $(A + B)^T = A^T + B^T$

(3) $(rA)^T = rA^T$

(4) $(AB)^T = B^TA^T$
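These rules are easy to sanity-check numerically (a sketch with NumPy on random matrices of my own choosing):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))
B = rng.standard_normal((4, 2))

assert np.allclose(A.T.T, A)              # (A^T)^T = A
assert np.allclose((A @ B).T, B.T @ A.T)  # (AB)^T = B^T A^T, note the reversed order
```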



2.2 The Inverse of a Matrix


Definition : Invertible

An $n \times n$ matrix $A$ is said to be invertible if there is an $n \times n$ matrix $C$ such that
$CA = I$ and $AC = I$.
We say that $C$ is an inverse of $A$, denoted by $A^{-1}$.


Theorem 1

Let $A = \begin{bmatrix} a & c \\ b & d \end{bmatrix}$. If $ad - bc \ne 0$, then $A$ is invertible and $A^{-1} = \frac{1}{ad - bc}\begin{bmatrix} d & -c \\ -b & a \end{bmatrix}$.

$ad - bc$ is called the determinant (行列式) of $A$; that is, $\det A = ad - bc$.
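A sketch checking the $2 \times 2$ formula against NumPy's general inverse (the entries are made up, chosen so that $ad - bc \ne 0$; note the theorem above labels the matrix column-wise, $A = \begin{bmatrix} a & c \\ b & d \end{bmatrix}$):

```python
import numpy as np

a, b, c, d = 2.0, 1.0, 5.0, 3.0
A = np.array([[a, c],
              [b, d]])

det = a * d - b * c                      # det A = ad - bc, here 1
assert det != 0
A_inv = (1.0 / det) * np.array([[d, -c],
                                [-b, a]])
assert np.allclose(A_inv, np.linalg.inv(A))
```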


Theorem 2

If $A$ is an invertible $n \times n$ matrix, then for each $\vec{b}$ in $\mathbb{R}^n$, the equation $A\vec{x} = \vec{b}$ has the unique solution $\vec{x} = A^{-1}\vec{b}$.


Theorem 3

(1) $(A^{-1})^{-1} = A$

(2) $(AB)^{-1} = B^{-1}A^{-1}$

(3) $(A^T)^{-1} = (A^{-1})^T$


Definition : Elementary Matrix

An elementary matrix is one that is obtained by performing a single elementary row operation on an identity matrix.


Theorem 4

An $n \times n$ matrix $A$ is invertible if and only if $A$ is row equivalent to $I_n$, and in this case, any sequence of elementary row operations that reduces $A$ to $I_n$ also transforms $I_n$ into $A^{-1}$.
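Theorem 4 gives the standard algorithm: row reduce $[A \mid I]$; if the left block becomes $I$, the right block is $A^{-1}$. A sketch with SymPy (made-up matrix):

```python
import sympy as sp

A = sp.Matrix([[2, 1],
               [5, 3]])
n = A.rows

aug = A.row_join(sp.eye(n))       # form [A | I]
reduced, _ = aug.rref()           # row reduce

if reduced[:, :n] == sp.eye(n):   # A ~ I, so A is invertible
    A_inv = reduced[:, n:]
    assert A * A_inv == sp.eye(n)
```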



2.3 Characterizations of Invertible Matrices


Let $A$ be a square $n \times n$ matrix. Then the following statements are equivalent.

(1) $A$ is an invertible matrix.

(2) $A$ is row equivalent to the $n \times n$ identity matrix.

(3) $A$ has $n$ pivot positions.

(4) The equation $A\vec{x} = \vec{0}$ has only the trivial solution.

(5) The columns of $A$ span $\mathbb{R}^n$.

(6) The columns of $A$ form a linearly independent set.

(7) The linear transformation $\vec{x} \mapsto A\vec{x}$ is one-to-one.

(8) The equation $A\vec{x} = \vec{b}$ has at least one solution for each $\vec{b}$ in $\mathbb{R}^n$.

(9) The linear transformation $\vec{x} \mapsto A\vec{x}$ maps $\mathbb{R}^n$ onto $\mathbb{R}^n$.

(10) $A^T$ is an invertible matrix.


Theorem 5

Let $T:\mathbb{R}^n \to \mathbb{R}^n$ be a linear transformation and let $A$ be the standard matrix for $T$. Then $T$ is invertible if and only if $A$ is an invertible matrix.



2.4 Partitioned Matrices


  • If matrices $A$ and $B$ are the same size and are partitioned in exactly the same way, then it is natural to make the same partition of the ordinary matrix sum $A + B$. In this case, each block of $A + B$ is the sum of the corresponding blocks of $A$ and $B$.
  • Multiplication of a partitioned matrix by a scalar is also computed block by block.
  • Partitioned matrices can be multiplied by the usual row-column rule as if the block entries were scalars, provided that for a product $AB$, the column partition of $A$ matches the row partition of $B$. (We say that the partitions of $A$ and $B$ are conformable for block multiplication.)

Theorem : Column-Row Expansion of $AB$

If $A$ is $m \times n$ and $B$ is $n \times p$, then

$$AB = \begin{bmatrix} \mathrm{col}_1(A) & \mathrm{col}_2(A) & \cdots & \mathrm{col}_n(A) \end{bmatrix} \begin{bmatrix} \mathrm{row}_1(B) \\ \mathrm{row}_2(B) \\ \vdots \\ \mathrm{row}_n(B) \end{bmatrix} = \mathrm{col}_1(A)\mathrm{row}_1(B) + \cdots + \mathrm{col}_n(A)\mathrm{row}_n(B)$$
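A numerical sketch of the column-row (outer product) expansion, on random matrices of my own choosing:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 4))
B = rng.standard_normal((4, 2))

# Sum of n outer products col_k(A) row_k(B), each an m x p matrix
outer_sum = sum(np.outer(A[:, k], B[k, :]) for k in range(A.shape[1]))
assert np.allclose(outer_sum, A @ B)
```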



Chapter 3 : Determinants


3.1 Introduction To Determinants


For $n \ge 2$, the determinant of an $n \times n$ matrix $A = [a_{ij}]$ is the sum of $n$ terms of the form $\pm a_{1j}\det A_{1j}$, where $A_{1j}$ denotes the submatrix of $A$ obtained by deleting row $1$ and column $j$:

$$\det A = \sum_{j=1}^{n} (-1)^{j+1} a_{1j} \det A_{1j}$$


Theorem 1

Let $C_{ij} = (-1)^{i+j}\det A_{ij}$; then $\det A = \sum_{j=1}^{n} a_{1j}C_{1j}$.

This formula is called a cofactor (余子式) expansion across the first row of $A$. The determinant of an $n \times n$ matrix $A$ can be computed by a cofactor expansion across any row or down any column, that is:

$$\det A = \sum_{j=1}^{n} a_{ij}C_{ij} \ \text{(across row } i\text{)} \qquad \text{or} \qquad \det A = \sum_{i=1}^{n} a_{ij}C_{ij} \ \text{(down column } j\text{)}$$
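The definition translates directly into a recursive procedure; a sketch in plain Python (exponential time, so only sensible for small matrices, and checked here against NumPy):

```python
import numpy as np

def det_cofactor(A):
    """Cofactor expansion across the first row; A is a list of lists."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j+1:] for row in A[1:]]  # delete row 1 and column j
        total += (-1) ** j * A[0][j] * det_cofactor(minor)
    return total

A = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 10]]
print(det_cofactor(A))   # -3
assert np.isclose(det_cofactor(A), np.linalg.det(np.array(A)))
```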


Theorem 2

If $A$ is a triangular matrix, then $\det A$ is the product of the entries on the main diagonal of $A$.



3.2 Properties of Determinants


Theorem 3

Let $A$ be a square matrix.

(1) If a multiple of one row of $A$ is added to another row to produce a matrix $B$, then $\det B = \det A$.

(2) If two rows of $A$ are interchanged to produce $B$, then $\det B = -\det A$.

(3) If one row of $A$ is multiplied by $k$ to produce $B$, then $\det B = k \cdot \det A$.


Theorem 4

$\det A^T = \det A$

$\det AB = (\det A)(\det B)$



3.3 Cramer’s Rule, Volume and Linear Transformations


Theorem 1 : Cramer’s Rule

Let $A$ be an invertible $n \times n$ matrix. For any $\vec{b}$ in $\mathbb{R}^n$, the unique solution $\vec{x}$ of $A\vec{x} = \vec{b}$ has entries given by

$$x_i = \frac{\det A_i(\vec{b})}{\det A}, \quad i = 1, 2, \cdots, n$$

where $A_i(\vec{b})$ denotes the matrix obtained from $A$ by replacing its $i$th column with the vector $\vec{b}$.
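A direct implementation of Cramer's rule (a sketch; numerically, np.linalg.solve is preferable, but this mirrors the theorem):

```python
import numpy as np

def cramer(A, b):
    det_A = np.linalg.det(A)
    x = np.empty(len(b))
    for i in range(len(b)):
        A_i = A.copy()
        A_i[:, i] = b                        # A_i(b): replace column i with b
        x[i] = np.linalg.det(A_i) / det_A
    return x

A = np.array([[2.0, 1.0],
              [5.0, 3.0]])
b = np.array([4.0, 7.0])
assert np.allclose(cramer(A, b), np.linalg.solve(A, b))
```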


Theorem 2

Let $A$ be an invertible $n \times n$ matrix. Then $A^{-1} = \frac{1}{\det A}\,\mathrm{adj}\,A$, where $\mathrm{adj}\,A$ is called the adjugate (伴随矩阵) of $A$:

$$\mathrm{adj}\,A = \begin{bmatrix} C_{11} & C_{21} & \cdots & C_{n1} \\ C_{12} & C_{22} & \cdots & C_{n2} \\ \vdots & & & \vdots \\ C_{1n} & C_{2n} & \cdots & C_{nn} \end{bmatrix}$$


Theorem 3 : Determinants as Area or Volume

If $A$ is a $2 \times 2$ matrix, the area of the parallelogram determined by the columns of $A$ is $|\det A|$. If $A$ is a $3 \times 3$ matrix, the volume of the parallelepiped determined by the columns of $A$ is $|\det A|$.


Theorem 4 : Linear Transformations

Let $T:\mathbb{R}^2 \to \mathbb{R}^2$ be the linear transformation determined by a $2 \times 2$ matrix $A$. If $S$ is a parallelogram in $\mathbb{R}^2$ and $\vec{p}$ is a vector, then

$$\{\text{area of } T(\vec{p} + S)\} = |\det A| \cdot \{\text{area of } S\}$$



Chapter 4 : Vector Spaces


4.1 Vector Spaces and Subspaces


Definition : Vector Spaces

A nonempty set $V$ equipped with an addition $V \times V \to V$ (加法) and a scalar multiplication $\mathbb{R} \times V \to V$ (纯量乘法) obeying the rules $\vec{u} + \vec{v} = \vec{v} + \vec{u}$, $(\vec{u} + \vec{v}) + \vec{w} = \vec{u} + (\vec{v} + \vec{w})$, $c(\vec{u} + \vec{v}) = c\vec{u} + c\vec{v}$, $1\vec{u} = \vec{u}$, $(c + d)\vec{u} = c\vec{u} + d\vec{u}$, $(cd)\vec{u} = c(d\vec{u})$ for all $c, d \in \mathbb{R}$ and $\vec{u}, \vec{v}, \vec{w} \in V$ is called a vector space, and the objects in $V$ are called vectors. Denote $\vec{0} = 0\vec{u}$ and $-\vec{u} = (-1)\vec{u}$.


Definition : Subspaces

A subspace of a vector space $V$ is a subset $H$ of $V$ that has three properties:
(1) The zero vector $\vec{0}$ of $V$ is also in $H$.
(2) $H$ is closed under vector addition.
(3) $H$ is closed under scalar multiplication.


Definition : linear variety

Let $H$ be a subspace of the vector space $V$. Translating $H$ by a vector $\vec{u} \in V$ gives $\vec{u} + H := \{\vec{u} + \vec{v} \ |\ \vec{v} \in H\}$, called a linear variety (线性簇).

  • In $\mathbb{R}^3$, linear varieties can be planes, lines, and points, whether or not they pass through the origin.
  • The solution set of a consistent linear system $A\vec{x} = \vec{b}$ is a linear variety $\vec{u} + H$, with $\vec{u}$ a particular solution of $A\vec{x} = \vec{b}$ and $H := \{\vec{x}_h \ |\ A\vec{x}_h = \vec{0}\}$ a subspace.

If $H, K$ are subspaces of $V$, then $H + K := \{\vec{h} + \vec{k} \ |\ \vec{h} \in H, \vec{k} \in K\}$ is also a subspace of $V$, and $H \cap K$ is again a subspace of $V$. However, $H \cup K$ is in general not a subspace of $V$.


Theorem 1

If $V$ is a vector space and $\vec{v}_1, \cdots, \vec{v}_p \in V$, then $H = \mathrm{Span}\{\vec{v}_1, \cdots, \vec{v}_p\}$ is a subspace of $V$, and $\{\vec{v}_1, \cdots, \vec{v}_p\}$ is a spanning set (生成集) of $H$.


$\{\vec{v}_1, \cdots, \vec{v}_p\}$ is a spanning set (生成集) of $H$ if and only if

for every $\vec{b} \in H$ there exists $\vec{x} = [x_1, \cdots, x_p]^T \in \mathbb{R}^p$ such that $x_1\vec{v}_1 + \cdots + x_p\vec{v}_p = \vec{b}$.



4.2 Null Space and Column Space


4.2.1 Null Space (零空间)


Let $A \in M_{m \times n}$. We call $\mathrm{Nul}\,A := \{\vec{x} \ |\ \vec{x} \in \mathbb{R}^n \text{ and } A\vec{x} = \vec{0}\}$ the null space of $A$.


Theorem 2

Let $A \in M_{m \times n}$. Then $\mathrm{Nul}\,A$ is a subspace of $\mathbb{R}^n$.


Properties of Null Space

Let $A \in M_{m \times n}$.

  • $\mathrm{Nul}\,A = \{\vec{0}\}$ is equivalent to: $A\vec{x} = \vec{0}$ has only the trivial solution $\vec{x} = \vec{0}$.
  • $\mathrm{Nul}\,A = \{\vec{0}\}$ is equivalent to: $\vec{x} \mapsto A\vec{x}$ is a one-to-one map from $\mathbb{R}^n$ to $\mathbb{R}^m$.
  • $\mathrm{Nul}\,A = \{\vec{0}\}$ implies $m \ge n$.
  • $\mathrm{Nul}\,A = \mathbb{R}^n$ is equivalent to $A = 0$.

Definition : nullity (零化度)

The dimension of the null space, that is, $\dim \mathrm{Nul}\,A$.


4.2.2 Column Space (列空间)


Let $A = [\vec{c}_1, \cdots, \vec{c}_n] \in M_{m \times n}$. We call $\mathrm{Col}\,A := \mathrm{Span}\{\vec{c}_1, \cdots, \vec{c}_n\}$ the column space of $A$.


Properties of Column Space

Let $A \in M_{m \times n}$.

  • $\mathrm{Col}\,A = \{\vec{b} \in \mathbb{R}^m \ |\ \exists\, \vec{x} \in \mathbb{R}^n \text{ s.t. } A\vec{x} = \vec{b}\}$
  • $\mathrm{Col}\,A = \mathrm{range}\,A$, where $A$ is viewed as the map $\mathbb{R}^n \to \mathbb{R}^m,\ \vec{x} \mapsto A\vec{x}$
  • $\mathrm{Col}\,A$ is a subspace of $\mathbb{R}^m$
  • $\mathrm{Col}\,A = \mathbb{R}^m$ if and only if for every $\vec{b} \in \mathbb{R}^m$ there exists $\vec{x} \in \mathbb{R}^n$ such that $A\vec{x} = \vec{b}$
  • $\mathrm{Col}\,A = \mathbb{R}^m$ if and only if $A:\mathbb{R}^n \to \mathbb{R}^m,\ \vec{x} \mapsto A\vec{x}$ is onto
  • $\mathrm{Col}\,A = \mathbb{R}^m$ if and only if the number of pivot rows equals $m$
  • $\mathrm{Col}\,A = \mathbb{R}^m$ implies $m \le n$
  • If $m = n$, then $\mathrm{Col}\,A = \mathbb{R}^n \Leftrightarrow \mathrm{Nul}\,A = \{\vec{0}\}$ (see the sketch below)
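A sketch computing bases of both spaces with SymPy (made-up rank-1 matrix):

```python
import sympy as sp

A = sp.Matrix([[1, 2, 3],
               [2, 4, 6]])     # second row = 2 * first row, so rank 1

print(A.nullspace())     # basis of Nul A: two vectors in R^3
print(A.columnspace())   # basis of Col A: one vector in R^2
print(A.rank())          # 1, and 1 + dim Nul A = 3 = number of columns
```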


4.3 Basis


Theorem 1

Let $V$ be a vector space, $\{\vec{v}_1, \cdots, \vec{v}_p\} \subset V$ with $p \ge 2$ and $\vec{v}_1 \ne \vec{0}$. Then $\{\vec{v}_1, \cdots, \vec{v}_p\}$ is linearly dependent if and only if there exists $j \in \{2, \cdots, p\}$ such that $\vec{v}_j$ is a linear combination of $\{\vec{v}_1, \cdots, \vec{v}_{j-1}\}$.


Theorem 2

Let $S = \{\vec{v}_1, \cdots, \vec{v}_p\}$ be a subset of a vector space $V$, and let $H = \mathrm{Span}\,S$.

(1) If some $\vec{v}_k \in S$ satisfies $\vec{v}_k \in \mathrm{Span}(S \backslash \{\vec{v}_k\})$, then $\mathrm{Span}(S \backslash \{\vec{v}_k\}) = H$.

(2) If $S$ is linearly independent, then every proper subset (真子集) of $S$ generates a proper subspace (真子空间) of $H$.


Definition : Basis

Let $H$ be a subspace of the vector space $V$. A set $B = \{\vec{b}_1, \cdots, \vec{b}_p\} \subset H$ is said to be a basis for $H$ if:
(i) $B$ is linearly independent, and (ii) $\mathrm{Span}\,B = H$.


Theorem 3

Let $S = \{\vec{v}_1, \cdots, \vec{v}_p\}$ be a subset of a vector space $V$ and $H = \mathrm{Span}\,S$. If $H \ne \{\vec{0}\}$, then some subset $B \subset S$ is a basis of $H$.


Theorem 4

The pivot columns of $A$ form a basis of $H = \mathrm{Col}\,A$.

  • A basis for a vector space is a spanning set that is as small as possible.
  • A basis for a vector space is a linearly independent set that is as large as possible.


4.4 Coordinate Systems

Suppose $B = \{\vec{b}_1, \cdots, \vec{b}_n\}$ is a basis for a vector space $V$ and let $\vec{x} \in V$. The coordinates of $\vec{x}$ relative to the basis $B$ are the weights $c_1, \cdots, c_n$ such that $\vec{x} = c_1\vec{b}_1 + \cdots + c_n\vec{b}_n$. We denote by $[\vec{x}]_B = \begin{bmatrix} c_1 \\ \vdots \\ c_n \end{bmatrix}$ the $B$-coordinate vector of $\vec{x}$. The mapping $\vec{x} \mapsto [\vec{x}]_B$ is the coordinate mapping.


Theorem 1

Let $B = \{\vec{b}_1, \cdots, \vec{b}_n\}$ be a basis for a vector space $V$. Then the coordinate mapping $\vec{x} \mapsto [\vec{x}]_B$ is a one-to-one linear transformation from $V$ onto $\mathbb{R}^n$.
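For a subspace of $\mathbb{R}^n$, finding $[\vec{x}]_B$ amounts to solving $P_B\vec{c} = \vec{x}$ with $P_B = [\vec{b}_1, \cdots, \vec{b}_n]$; a sketch with NumPy (made-up basis):

```python
import numpy as np

b1 = np.array([1.0, 0.0])
b2 = np.array([1.0, 1.0])
P_B = np.column_stack([b1, b2])   # change-of-coordinates matrix

x = np.array([3.0, 2.0])
c = np.linalg.solve(P_B, x)       # c = [x]_B
print(c)                          # [1. 2.], i.e. x = 1*b1 + 2*b2
assert np.allclose(P_B @ c, x)
```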


4.4.1 Isomorphic Vector Spaces (同构的向量空间)

If there exists a one-to-one linear map $T$ from a vector space $V$ onto a vector space $W$, then we say $V$ and $W$ are isomorphic and $T$ is an isomorphism from $V$ onto $W$.

  • Coordinate mapping is an isomorphism.
  • Every vector space with a basis of $n$ vectors is isomorphic to $\mathbb{R}^n$.

If the vector spaces $V$ and $W$ are isomorphic with an isomorphism $T$, then

  • a linearly (in)dependent set in $V$ is mapped to a linearly (in)dependent set in $W$ by $T$;
  • linear dependence relations in $V$ are carried over by $T$ to $W$.

4.4.2 Coordinate Mapping in Subspaces of $\mathbb{R}^n$


  • Let $H$ be a subspace of $\mathbb{R}^n$ and $B = \{\vec{b}_1, \cdots, \vec{b}_p\}$ be a basis for $H$. Then we must have $1 \le p \le n$.
  • Let $P_B = [\vec{b}_1, \cdots, \vec{b}_p]$. Then $P_B[\vec{x}]_B = \vec{x}$ for every $\vec{x} \in H$, so the coordinate mapping $\vec{x} \mapsto [\vec{x}]_B$ is 'inverse' to $\vec{c} \mapsto P_B\vec{c}$ in this sense.

4.5 Dimension of a Vector Space


Theorem 1

If a vector space $V$ has a basis $B = \{\vec{b}_1, \cdots, \vec{b}_n\}$, then any set in $V$ containing more than $n$ vectors must be linearly dependent.


Corollary

A linearly independent set in $V$ must have $\le n$ vectors.


Theorem 2

If a vector space $V$ has a basis of $n$ vectors, then every basis of $V$ must consist of exactly $n$ vectors.


Definition : Dimension of a Vector Space

If $V$ is spanned by a finite set, then $V$ is said to be finite-dimensional, and the dimension of $V$, written as $\dim V$, is the number of vectors in a basis for $V$. The dimension of the zero vector space $\{\vec{0}\}$ is defined to be zero. If $V$ is not spanned by a finite set, then $V$ is said to be infinite-dimensional.

  • Isomorphic vector spaces have the same dimension.
  • Dimension is the cardinality of a basis.
  • Bases for $\mathrm{Nul}\,A$ and $\mathrm{Col}\,A$ can be found from an echelon form of $A$.
  • The number of free variables equals $\dim \mathrm{Nul}\,A$.
  • The number of pivot columns equals $\dim \mathrm{Col}\,A$.
  • $\dim \mathrm{Nul}\,A + \dim \mathrm{Col}\,A =$ total number of columns of $A$.

Theorem 3 : Expand a Linearly Independent Set to a Basis

Let V V V be a vector space of finite dimension, H H H a nonzero subspace of V V V.

  • If we expand a linearly independent set in $H$ with more vectors from $H$ until inserting any further vector would make the set linearly dependent, then this maximal linearly independent set must be a basis for $H$.
  • $\dim H \le \dim V$

Theorem 4

Let $V$ be a $p$-dimensional vector space, $p \ge 1$.

(1) Any linearly independent set of exactly $p$ vectors in $V$ is automatically a basis for $V$.

(2) Any spanning set of exactly $p$ vectors is also a basis for $V$.


cardinality of any linearly independent set $\le$ cardinality of any basis $= \dim V \le$ cardinality of any spanning set



4.6 Row Space and Rank of a Matrix


4.6.1 Row Space


For $A \in M_{m \times n}$ with rows $\vec{r}_1, \cdots, \vec{r}_m$, the set $\mathrm{Row}\,A := \mathrm{Span}\{\vec{r}_1, \cdots, \vec{r}_m\}$ is called the row space of $A$.

  • $\mathrm{Row}\,A$ is a subspace of $\mathbb{R}^n$
  • $\mathrm{Row}\,A$ can be identified with $\mathrm{Col}\,A^T$

Theorem 1

(1) If $A$ and $B$ are row equivalent, then $\mathrm{Row}\,A = \mathrm{Row}\,B$.

(2) If $B$ is an echelon form of $A$, then the nonzero rows of $B$ form a basis for $\mathrm{Row}\,A$.


4.6.2 Rank

$\mathrm{rank}\,A := \dim \mathrm{Col}\,A$


Rank Theorem

(1) $\mathrm{rank}\,A = \dim \mathrm{Row}\,A$

(2) $\mathrm{rank}\,A + \dim \mathrm{Nul}\,A = n$ for $A \in M_{m \times n}$
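A quick check of both parts of the Rank Theorem with SymPy (made-up $3 \times 4$ matrix whose third row is the sum of the first two):

```python
import sympy as sp

A = sp.Matrix([[1, 2, 0, 1],
               [0, 1, 1, 0],
               [1, 3, 1, 1]])   # 3x4, rank 2

rank = A.rank()
nullity = len(A.nullspace())     # dim Nul A
assert rank == A.T.rank()        # dim Col A = dim Row A
assert rank + nullity == A.cols  # rank A + dim Nul A = n
```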



Chapter 5 : Eigenvalues and Eigenvectors


5.1 Eigenvalues and Eigenvectors


An eigenvector of an $n \times n$ matrix $A$ is a nonzero vector $\vec{x}$ such that $A\vec{x} = \lambda\vec{x}$ for some scalar $\lambda$; the scalar $\lambda$ is called an eigenvalue of $A$.

$\lambda$ is an eigenvalue of an $n \times n$ matrix $A$ if and only if the equation $(A - \lambda I)\vec{x} = \vec{0}$ has a nontrivial solution. The set of all solutions of this equation is a subspace of $\mathbb{R}^n$ and is called the eigenspace of $A$ corresponding to $\lambda$.
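Numerically, eigenpairs come from np.linalg.eig; a sketch on a made-up matrix (the columns of the returned matrix are the eigenvectors):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

w, V = np.linalg.eig(A)                  # eigenvalues w, eigenvectors as columns of V
for lam, v in zip(w, V.T):
    assert np.allclose(A @ v, lam * v)   # A x = lambda x
print(w)                                 # 5 and 2 (order may vary)
```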


Theorem 1

The eigenvalues of a triangular matrix are the entries on its main diagonal.


Theorem 2

If $\vec{v}_1, \cdots, \vec{v}_r$ are eigenvectors that correspond to distinct eigenvalues $\lambda_1, \cdots, \lambda_r$ of an $n \times n$ matrix $A$, then the set $\{\vec{v}_1, \cdots, \vec{v}_r\}$ is linearly independent.

If $\lambda$ is an eigenvalue of $A$ with eigenvector $\vec{x}$, then $\lambda^k$ is an eigenvalue of $A^k$ with the same eigenvector $\vec{x}$.



5.2 The Characteristic Equation


Let $A$ be an $n \times n$ matrix, let $U$ be any echelon form obtained from $A$ by row replacements and row interchanges, and let $r$ be the number of such row interchanges. Then the determinant of $A$ is $(-1)^r$ times the product of the diagonal entries of $U$.


Theorem 1

Let $A$ be an $n \times n$ matrix. Then $A$ is invertible if and only if:

(1) The number 0 is not an eigenvalue of $A$.

(2) The determinant of $A$ is not zero.


Definition : Characteristic Equation

The scalar equation $\det(A - \lambda I) = 0$ is called the characteristic equation of $A$. A scalar $\lambda$ is an eigenvalue of an $n \times n$ matrix $A$ if and only if $\lambda$ satisfies the characteristic equation.
$\det(A - \lambda I)$ is a polynomial of degree $n$ called the characteristic polynomial of $A$.
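SymPy computes the characteristic polynomial symbolically; a sketch on a made-up matrix (SymPy's charpoly uses $\det(\lambda I - A)$, which has the same roots as $\det(A - \lambda I) = 0$):

```python
import sympy as sp

lam = sp.symbols("lamda")      # SymPy renders this name as the Greek letter
A = sp.Matrix([[4, 1],
               [2, 3]])

p = A.charpoly(lam)
print(p.as_expr())                  # lamda**2 - 7*lamda + 10
print(sp.roots(p.as_expr(), lam))   # {5: 1, 2: 1}: eigenvalues with multiplicities
print(A.eigenvals())                # same result directly
```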


Definition : Similarity

If $A$ and $B$ are $n \times n$ matrices, then $A$ is similar to $B$ if there is an invertible matrix $P$ such that $P^{-1}AP = B$, or, equivalently, $A = PBP^{-1}$.
Changing $A$ into $P^{-1}AP$ is called a similarity transformation.


Theorem 2

If $n \times n$ matrices $A$ and $B$ are similar, then they have the same characteristic polynomial and hence the same eigenvalues.


Warnings :

  1. The matrices $\begin{bmatrix} 2 & 1 \\ 0 & 2 \end{bmatrix}$ and $\begin{bmatrix} 2 & 0 \\ 0 & 2 \end{bmatrix}$ are not similar even though they have the same eigenvalues.
  2. Similarity is not the same as row equivalence. Row operations on a matrix usually change its eigenvalues.




Some twenty thousand characters in all: I did my best to write down every knowledge point. I don't know how much use it will be, but if it did help you, please be sure to follow, like, and bookmark. I would be deeply grateful; writing this, I hardly know what more to say.
