Introduction to Linear Algebra (7): Symmetric Matrices and Quadratic Forms

@[TOC](Introduction to Linear Algebra (7): Symmetric Matrices and Quadratic Forms)

Diagonalization of Symmetric Matrices

If $A$ is symmetric, then any two eigenvectors from different eigenspaces are orthogonal.
PROOF: $\lambda_1 v_1 \cdot v_2 = (\lambda_1 v_1)^T v_2 = (Av_1)^T v_2 = (v_1^T A^T) v_2 = v_1^T (Av_2) = v_1^T (\lambda_2 v_2) = \lambda_2\, v_1 \cdot v_2$. Hence $(\lambda_1 - \lambda_2)\, v_1 \cdot v_2 = 0$, and since $\lambda_1 \ne \lambda_2$, $v_1 \cdot v_2 = 0$.
An $n \times n$ matrix $A$ is orthogonally diagonalizable if and only if $A$ is a symmetric matrix.
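
A quick numerical check of this fact (a minimal NumPy sketch; the matrix $A$ below is an arbitrary symmetric example, not one from the text): `numpy.linalg.eigh` is designed for symmetric matrices and returns orthonormal eigenvectors, so the eigenvector matrix $P$ satisfies $P^T A P = D$.

```python
import numpy as np

# An arbitrary symmetric example matrix (not from the text).
A = np.array([[6.0, -2.0, -1.0],
              [-2.0, 6.0, -1.0],
              [-1.0, -1.0, 5.0]])

# eigh is specialized for symmetric matrices and returns
# orthonormal eigenvectors as the columns of P.
eigenvalues, P = np.linalg.eigh(A)
D = np.diag(eigenvalues)

print(np.allclose(P.T @ P, np.eye(3)))  # True: P is orthogonal
print(np.allclose(P.T @ A @ P, D))      # True: P^T A P = D
```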
The Spectral Theorem
An $n \times n$ symmetric matrix $A$ has the following properties:
a. $A$ has $n$ real eigenvalues, counting multiplicities.
b. The dimension of the eigenspace for each eigenvalue $\lambda$ equals the multiplicity of $\lambda$ as a root of the characteristic equation.
c. The eigenspaces are mutually orthogonal, in the sense that eigenvectors corresponding to different eigenvalues are orthogonal.
d. $A$ is orthogonally diagonalizable.
Spectral Decomposition
If $A$ is a symmetric matrix with orthogonal diagonalization $A = PDP^T$, where the columns $u_1, \dots, u_n$ of $P$ are orthonormal eigenvectors with eigenvalues $\lambda_1, \dots, \lambda_n$, then $A$ can be written as $A = \lambda_1 u_1 u_1^T + \lambda_2 u_2 u_2^T + \cdots + \lambda_n u_n u_n^T$.
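
As a sketch of this decomposition (reusing the same arbitrary example matrix as above), summing the rank-one matrices $\lambda_i u_i u_i^T$ built from the output of `eigh` reproduces $A$:

```python
import numpy as np

# Same arbitrary symmetric example as above.
A = np.array([[6.0, -2.0, -1.0],
              [-2.0, 6.0, -1.0],
              [-1.0, -1.0, 5.0]])
eigenvalues, P = np.linalg.eigh(A)

# Sum of rank-one projections lambda_i * u_i u_i^T over the
# orthonormal eigenvectors u_i (the columns of P).
A_rebuilt = sum(lam * np.outer(u, u) for lam, u in zip(eigenvalues, P.T))
print(np.allclose(A, A_rebuilt))  # True
```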

Quadratic Forms

The Principal Axes Theorem
Let $A$ be an $n \times n$ symmetric matrix. Then there is an orthogonal change of variable, $x = Py$, that transforms the quadratic form $x^T A x$ into a quadratic form $y^T D y$ with no cross-product term.
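
For a concrete illustration (a hypothetical example of my own choosing), take $Q(x) = x_1^2 - 8x_1x_2 - 5x_2^2$, whose matrix is $A = \begin{bmatrix} 1 & -4 \\ -4 & -5 \end{bmatrix}$; the change of variable $x = Py$ with $P$ from `eigh` turns $Q$ into $-7y_1^2 + 3y_2^2$, with no cross-product term:

```python
import numpy as np

# Matrix of Q(x) = x1^2 - 8*x1*x2 - 5*x2^2; the cross term -8*x1*x2
# is split symmetrically into the two off-diagonal entries -4.
A = np.array([[1.0, -4.0],
              [-4.0, -5.0]])

eigenvalues, P = np.linalg.eigh(A)  # columns of P are the principal axes

# Under x = P y the form becomes y^T D y, i.e. -7*y1^2 + 3*y2^2.
print(eigenvalues)                                      # [-7.  3.]
print(np.allclose(P.T @ A @ P, np.diag(eigenvalues)))   # True
```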

Classifying Quadratic Forms

A quadratic form $Q$ is:
a. positive definite if $Q(x) > 0$ for all $x \ne 0$,
b. negative definite if $Q(x) < 0$ for all $x \ne 0$,
c. indefinite if $Q(x)$ assumes both positive and negative values.
Quadratic Forms and Eigenvalues
Let $A$ be an $n \times n$ symmetric matrix. Then a quadratic form $x^T A x$ is:
a. positive definite if and only if the eigenvalues of $A$ are all positive,
b. negative definite if and only if the eigenvalues of $A$ are all negative,
c. indefinite if and only if $A$ has both positive and negative eigenvalues.
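
This theorem translates directly into a small classifier (a sketch under my own naming; the tolerance is needed because floating-point eigenvalues of a singular matrix are only approximately zero):

```python
import numpy as np

def classify_quadratic_form(A, tol=1e-10):
    """Classify x^T A x by the signs of the eigenvalues of symmetric A."""
    eigs = np.linalg.eigvalsh(A)
    if np.all(eigs > tol):
        return "positive definite"
    if np.all(eigs < -tol):
        return "negative definite"
    if np.any(eigs > tol) and np.any(eigs < -tol):
        return "indefinite"
    return "semidefinite (some eigenvalues are zero)"

print(classify_quadratic_form(np.array([[3.0, 2.0], [2.0, 7.0]])))    # positive definite
print(classify_quadratic_form(np.array([[1.0, -4.0], [-4.0, -5.0]]))) # indefinite
```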
Let $A$ be a symmetric matrix, and define $m = \min\{x^T A x : \|x\| = 1\}$ and $M = \max\{x^T A x : \|x\| = 1\}$. Then $M$ is the greatest eigenvalue $\lambda_1$ of $A$ and $m$ is the least eigenvalue of $A$. The value of $x^T A x$ is $M$ when $x$ is a unit eigenvector $u_1$ corresponding to $M$, and the value of $x^T A x$ is $m$ when $x$ is a unit eigenvector corresponding to $m$.
Let $A$, $\lambda_1$, and $u_1$ be as in the theorem above. Then the maximum value of $x^T A x$ subject to the constraints $x^T x = 1$, $x^T u_1 = 0$
is the second greatest eigenvalue, $\lambda_2$, and this maximum is attained when $x$ is a unit eigenvector $u_2$ corresponding to $\lambda_2$. The statement extends to $\lambda_k$: the maximum of $x^T A x$ subject to $x^T x = 1$ and $x^T u_1 = \cdots = x^T u_{k-1} = 0$ is the $k$-th greatest eigenvalue $\lambda_k$.
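
A numerical illustration of these constrained extremes (a heuristic sketch with an arbitrary matrix; random sampling only suggests, rather than proves, the bound): the value of $x^T A x$ at a top unit eigenvector equals the largest eigenvalue $M$, and random unit vectors never escape the interval $[m, M]$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary symmetric example matrix.
A = np.array([[3.0, 2.0, 1.0],
              [2.0, 3.0, 1.0],
              [1.0, 1.0, 4.0]])

eigenvalues, P = np.linalg.eigh(A)   # ascending order
m, M = eigenvalues[0], eigenvalues[-1]
u1 = P[:, -1]                        # unit eigenvector for M

# x^T A x attains M at the top unit eigenvector.
print(np.isclose(u1 @ A @ u1, M))    # True

# Random unit vectors stay inside [m, M] (up to rounding).
xs = rng.normal(size=(10000, 3))
xs /= np.linalg.norm(xs, axis=1, keepdims=True)
vals = np.einsum('ij,jk,ik->i', xs, A, xs)   # all the x^T A x values
print(m - 1e-12 <= vals.min(), vals.max() <= M + 1e-12)  # True True
```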

The Singular Value Decomposition

Singular value decomposition: although diagonalization $A = PDP^{-1}$ requires a square matrix, a factorization of the same form $A = QDP^{-1}$ is possible for any $m \times n$ matrix $A$.
Decomposition
Let $A$ be an $m \times n$ matrix with rank $r$. Then there exists an $m \times n$ matrix $\Sigma = \begin{bmatrix} D & 0 \\ 0 & 0 \end{bmatrix}$ for which the diagonal entries in $D$ are the first $r$ singular values of $A$, $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_r > 0$, and there exist an $m \times m$ orthogonal matrix $U$ and an $n \times n$ orthogonal matrix $V$ such that $A = U \Sigma V^T$.
PROOF:
Let $\lambda_i$ and $v_i$ be the eigenvalues and orthonormal eigenvectors of $A^T A$, respectively, so that $\{Av_1, \cdots, Av_r\}$ is an orthogonal basis for $\mathrm{Col}\,A$. Normalize each $Av_i$ to obtain an orthonormal basis $\{u_1, \cdots, u_r\}$, where $u_i = \frac{Av_i}{\|Av_i\|} = \frac{Av_i}{\sigma_i}$, so that $Av_i = \sigma_i u_i$ for $1 \le i \le r$.
Now extend $\{u_1, \cdots, u_r\}$ to an orthonormal basis $\{u_1, \cdots, u_m\}$ of $\mathbb{R}^m$, and let $U = [u_1 \quad u_2 \quad \cdots \quad u_m]$ and $V = [v_1 \quad v_2 \quad \cdots \quad v_n]$.
By construction, $U$ and $V$ are orthogonal matrices, and $AV = [Av_1 \ \cdots \ Av_r \quad 0 \ \cdots \ 0] = [\sigma_1 u_1 \ \cdots \ \sigma_r u_r \quad 0 \ \cdots \ 0] = U\Sigma$.
Thus $A = U \Sigma V^{-1} = U \Sigma V^T$.
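
In practice the factorization is computed numerically. A brief sketch with NumPy (the $4 \times 3$ matrix is an arbitrary example): `np.linalg.svd` returns $U$, the singular values, and $V^T$, and padding the singular values into an $m \times n$ matrix $\Sigma$ recovers $A$.

```python
import numpy as np

# An arbitrary 4x3 example matrix.
A = np.array([[1.0, 2.0, 0.0],
              [2.0, 0.0, 2.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 1.0]])
m, n = A.shape

U, s, Vt = np.linalg.svd(A)   # full SVD: U is m x m, Vt is n x n

# Pad the singular values into the m x n "diagonal" matrix Sigma.
Sigma = np.zeros((m, n))
Sigma[:len(s), :len(s)] = np.diag(s)

print(np.allclose(A, U @ Sigma @ Vt))   # True: A = U Sigma V^T
# The squared singular values are the eigenvalues of A^T A.
print(np.allclose(s**2, np.sort(np.linalg.eigvalsh(A.T @ A))[::-1]))  # True
```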
The Invertible Matrix Theorem (concluded)
Let $A$ be an $n \times n$ matrix. Then the following statements are each equivalent to the statement that $A$ is an invertible matrix.
u. $(\mathrm{Col}\,A)^{\perp} = \{0\}$.
v. $(\mathrm{Nul}\,A)^{\perp} = \mathbb{R}^n$.
w. $\mathrm{Row}\,A = \mathbb{R}^n$.
x. $A$ has $n$ nonzero singular values.
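
Statement (x) gives a practical invertibility test (a minimal sketch; the helper name and tolerance are my own):

```python
import numpy as np

def is_invertible_by_svd(A, tol=1e-12):
    """Statement (x): square A is invertible iff all n singular values are nonzero."""
    s = np.linalg.svd(A, compute_uv=False)
    return bool(np.all(s > tol))

print(is_invertible_by_svd(np.array([[1.0, 2.0], [3.0, 4.0]])))  # True
print(is_invertible_by_svd(np.array([[1.0, 2.0], [2.0, 4.0]])))  # False: rank 1
```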
Reduced SVD and the Pseudoinverse of $A$
Let $r = \mathrm{rank}\,A$, and partition $U = [U_r \quad U_{m-r}]$ and $V = [V_r \quad V_{n-r}]$, where $U_r$ and $V_r$ contain the first $r$ columns. Then $A = \begin{bmatrix} U_r & U_{m-r} \end{bmatrix} \begin{bmatrix} D & 0 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} V_r & V_{n-r} \end{bmatrix}^T = U_r D V_r^T$
This factorization of $A$ is called a reduced singular value decomposition of $A$. The following matrix is called the pseudoinverse of $A$: $A^+ = V_r D^{-1} U_r^T$
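
A sketch of both constructions (the rank-2 matrix below is an arbitrary example): truncating the full SVD to the first $r$ columns gives $A = U_r D V_r^T$, and $V_r D^{-1} U_r^T$ agrees with `np.linalg.pinv`:

```python
import numpy as np

# An arbitrary 4x3 example of rank 2 (the second row is twice the first).
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])

U, s, Vt = np.linalg.svd(A)
r = int(np.sum(s > 1e-12))           # numerical rank of A

# Reduced SVD: keep only the first r columns of U and V.
Ur, D, Vr = U[:, :r], np.diag(s[:r]), Vt[:r, :].T

print(np.allclose(A, Ur @ D @ Vr.T))            # True: A = U_r D V_r^T
# Pseudoinverse A+ = V_r D^{-1} U_r^T agrees with NumPy's pinv.
A_plus = Vr @ np.linalg.inv(D) @ Ur.T
print(np.allclose(A_plus, np.linalg.pinv(A)))   # True
```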

Principal Component Analysis

For simplicity, assume that the matrix $[X_1 \ \cdots \ X_N]$ of observation vectors is already in mean-deviation form. The goal of principal component analysis is to find an orthogonal $p \times p$ matrix $P = [u_1 \ \cdots \ u_p]$ that determines a change of variable, $x = Py$, or $\begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_p \end{bmatrix} = \begin{bmatrix} u_1 & u_2 & \cdots & u_p \end{bmatrix} \begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_p \end{bmatrix}$, with the property that the new variables $y_1, \dots, y_p$ are uncorrelated and arranged in order of decreasing variance.
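
A minimal end-to-end sketch (synthetic data of my own; $N = 200$ observations of $p = 3$ variables stored as columns): diagonalizing the sample covariance matrix with `eigh` yields the principal components, and in the new variables the covariance matrix is diagonal.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: N = 200 observations of p = 3 variables, one per column.
X = rng.normal(size=(3, 200))
X[1] += 0.8 * X[0]                    # make two variables correlated

# Mean-deviation form: subtract each variable's (row's) mean.
B = X - X.mean(axis=1, keepdims=True)

# Sample covariance matrix S = B B^T / (N - 1); it is symmetric,
# so eigh returns orthonormal eigenvectors (the principal components).
S = (B @ B.T) / (B.shape[1] - 1)
eigenvalues, P = np.linalg.eigh(S)
eigenvalues, P = eigenvalues[::-1], P[:, ::-1]   # decreasing variance

# The change of variable y = P^T x decorrelates the variables:
# the covariance matrix of Y is the diagonal matrix of eigenvalues.
Y = P.T @ B
S_Y = (Y @ Y.T) / (Y.shape[1] - 1)
print(np.allclose(S_Y, np.diag(eigenvalues)))    # True
```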
