Introduction to Linear Algebra (6): Orthogonality and Least Squares

Inner Product, Length, and Orthogonality

Two vectors $u$ and $v$ in $\mathbb{R}^n$ are orthogonal if $u \cdot v = 0$.
Two vectors $u$ and $v$ are orthogonal if and only if $\|u+v\|^2 = \|u\|^2 + \|v\|^2$ (the Pythagorean theorem).
Let $A$ be an $m \times n$ matrix. The orthogonal complement of the row space of $A$ is the null space of $A$, and the orthogonal complement of the column space of $A$ is the null space of $A^T$:
$$(\operatorname{Row} A)^{\perp} = \operatorname{Nul} A, \qquad (\operatorname{Col} A)^{\perp} = \operatorname{Nul} A^T$$
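As a quick numerical illustration (a minimal sketch assuming NumPy; the matrix is an arbitrary example), a null-space basis read off the SVD is orthogonal to every row of $A$:

```python
import numpy as np

# Arbitrary rank-deficient example: the second row is twice the first,
# so Nul A is 2-dimensional.
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])

# Right-singular vectors whose singular values are (numerically) zero
# span the null space of A.
_, s, Vt = np.linalg.svd(A)
rank = int(np.sum(s > 1e-10))
null_basis = Vt[rank:].T        # columns form a basis for Nul A

# Every row of A is orthogonal to every null-space basis vector.
print(np.round(A @ null_basis, 10))   # ~ zero matrix
```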

Orthogonal Sets

A set of vectors $\{u_1, \cdots, u_p\}$ in $\mathbb{R}^n$ is said to be an orthogonal set if each pair of distinct vectors from the set is orthogonal, that is, if $u_i \cdot u_j = 0$ whenever $i \ne j$.
Let $\{u_1, \cdots, u_p\}$ be an orthogonal basis for a subspace $W$ of $\mathbb{R}^n$. For each $y$ in $W$, the weights in the linear combination $y = c_1 u_1 + \cdots + c_p u_p$
are given by
$$c_j = \frac{y \cdot u_j}{u_j \cdot u_j} \qquad (j = 1, \cdots, p)$$
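A minimal sketch of this weight formula (assuming NumPy; the orthogonal pair $u_1, u_2$ is an arbitrary example):

```python
import numpy as np

# An arbitrary orthogonal pair in R^3 (u1 . u2 = 0).
u1 = np.array([3.0, 1.0, 1.0])
u2 = np.array([-1.0, 2.0, 1.0])

# Any y in W = Span{u1, u2} is recovered by c_j = (y . u_j)/(u_j . u_j).
y = 2.0 * u1 - 3.0 * u2
c1 = (y @ u1) / (u1 @ u1)
c2 = (y @ u2) / (u2 @ u2)
print(c1, c2)   # 2.0 -3.0
```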

An Orthogonal Projection

The orthogonal projection of $y$ onto the line $L = \operatorname{Span}\{u\}$ is
$$\operatorname{proj}_L y = \frac{y \cdot u}{u \cdot u}\, u$$
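For example (a minimal sketch assuming NumPy; the vectors are arbitrary):

```python
import numpy as np

# proj_L y = (y . u / u . u) u for the line L = Span{u}.
u = np.array([4.0, 2.0])
y = np.array([7.0, 6.0])

proj = (y @ u) / (u @ u) * u
print(proj)              # [8. 4.]
print((y - proj) @ u)    # 0.0: the residual y - proj is orthogonal to u
```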

Orthonormal Sets

A set $\{u_1, \cdots, u_p\}$ is an orthonormal set if it is an orthogonal set of unit vectors.
An $m \times n$ matrix $U$ has orthonormal columns if and only if $U^T U = I$.
Let $U$ be an $m \times n$ matrix with orthonormal columns, and let $x$ and $y$ be in $\mathbb{R}^n$. Then
a. $\|Ux\| = \|x\|$
b. $(Ux) \cdot (Uy) = x \cdot y$
c. $(Ux) \cdot (Uy) = 0$ if and only if $x \cdot y = 0$
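The sketch below (assuming NumPy; `U` is an arbitrary matrix with orthonormal columns) checks $U^T U = I$ and these properties numerically:

```python
import numpy as np

# An arbitrary 3x2 matrix with orthonormal columns.
U = np.array([[1 / np.sqrt(2),  2 / 3],
              [1 / np.sqrt(2), -2 / 3],
              [0.0,             1 / 3]])
x = np.array([2.0, 3.0])
y = np.array([-1.0, 4.0])

print(np.round(U.T @ U, 10))                     # identity: U^T U = I
print(np.linalg.norm(U @ x), np.linalg.norm(x))  # equal lengths (a)
print((U @ x) @ (U @ y), x @ y)                  # equal dot products (b), hence (c)
```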
The Best Approximation Theorem
Let $W$ be a subspace of $\mathbb{R}^n$, let $y$ be any vector in $\mathbb{R}^n$, and let $\hat{y}$ be the orthogonal projection of $y$ onto $W$. Then $\hat{y}$ is the closest point in $W$ to $y$, in the sense that
$$\|y - \hat{y}\| < \|y - v\|$$
for all $v$ in $W$ distinct from $\hat{y}$.
If $\{u_1, \cdots, u_p\}$ is an orthonormal basis for a subspace $W$ of $\mathbb{R}^n$, then
$$\operatorname{proj}_W y = (y \cdot u_1)u_1 + (y \cdot u_2)u_2 + \cdots + (y \cdot u_p)u_p$$
If $U = [\,u_1\; u_2\; \cdots\; u_p\,]$, then $\operatorname{proj}_W y = U U^T y$ for all $y$ in $\mathbb{R}^n$.
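A short sketch of the matrix form of the projection (assuming NumPy; `U` and `y` are arbitrary examples):

```python
import numpy as np

# Orthonormal basis of W as the columns of U (same arbitrary example as above).
U = np.array([[1 / np.sqrt(2),  2 / 3],
              [1 / np.sqrt(2), -2 / 3],
              [0.0,             1 / 3]])
y = np.array([1.0, 2.0, 3.0])

y_hat = U @ (U.T @ y)      # proj_W y = U U^T y
print(y_hat)
print(np.round((y - y_hat) @ U, 10))   # ~0: residual is orthogonal to W,
                                       # so y_hat is the closest point of W to y
```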

The Gram-Schmidt Process

Given a basis $\{x_1, \cdots, x_p\}$ for a nonzero subspace $W$ of $\mathbb{R}^n$, define
$$v_1 = x_1$$
$$v_2 = x_2 - \frac{x_2 \cdot v_1}{v_1 \cdot v_1} v_1$$
$$v_3 = x_3 - \frac{x_3 \cdot v_1}{v_1 \cdot v_1} v_1 - \frac{x_3 \cdot v_2}{v_2 \cdot v_2} v_2$$
$$\vdots$$
$$v_p = x_p - \frac{x_p \cdot v_1}{v_1 \cdot v_1} v_1 - \cdots - \frac{x_p \cdot v_{p-1}}{v_{p-1} \cdot v_{p-1}} v_{p-1}$$
Then $\{v_1, \cdots, v_p\}$ is an orthogonal basis for $W$. In addition,
$$\operatorname{Span}\{v_1, \cdots, v_k\} = \operatorname{Span}\{x_1, \cdots, x_k\} \quad \text{for } 1 \le k \le p$$
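A direct transcription of the process into code (a minimal sketch assuming NumPy, not a numerically robust implementation; the matrix `X` is an arbitrary example):

```python
import numpy as np

def gram_schmidt(X):
    """Orthogonalize the (linearly independent) columns of X.

    Returns V such that the columns of V are orthogonal and
    Span{V[:, :k]} = Span{X[:, :k]} for every k, as in the theorem above.
    """
    V = X.astype(float)
    for j in range(V.shape[1]):
        for i in range(j):
            # Subtract the projection of column j onto the earlier v_i.
            V[:, j] -= (V[:, j] @ V[:, i]) / (V[:, i] @ V[:, i]) * V[:, i]
    return V

X = np.array([[1.0, 0.0, 0.0],
              [1.0, 1.0, 0.0],
              [1.0, 1.0, 1.0],
              [1.0, 1.0, 1.0]])
V = gram_schmidt(X)
print(np.round(V.T @ V, 10))   # diagonal matrix: the columns are orthogonal
```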

QR Factorization of Matrices

If $A$ is an $m \times n$ matrix with linearly independent columns, then $A$ can be factored as $A = QR$, where $Q$ is an $m \times n$ matrix whose columns form an orthonormal basis for $\operatorname{Col} A$ and $R$ is an $n \times n$ upper triangular invertible matrix with positive entries on its diagonal.
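NumPy's built-in QR produces this reduced factorization (a sketch with an arbitrary example matrix; note that NumPy may flip signs, so some diagonal entries of $R$ can come out negative, unlike the textbook convention):

```python
import numpy as np

A = np.array([[1.0, 0.0, 0.0],
              [1.0, 1.0, 0.0],
              [1.0, 1.0, 1.0],
              [1.0, 1.0, 1.0]])
Q, R = np.linalg.qr(A)                    # default 'reduced' mode: m x n Q, n x n R
print(np.allclose(A, Q @ R))              # True
print(np.allclose(Q.T @ Q, np.eye(3)))    # True: orthonormal columns
print(np.round(R, 10))                    # upper triangular
```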

Least-Squares Problems

If $A$ is $m \times n$ and $b$ is in $\mathbb{R}^m$, a least-squares solution of $Ax = b$ is an $\hat{x}$ in $\mathbb{R}^n$ such that
$$\|b - A\hat{x}\| \le \|b - Ax\|$$
for all $x$ in $\mathbb{R}^n$.
Note that $A\hat{x}$ is in $\operatorname{Col} A$; by the Best Approximation Theorem, $A\hat{x}$ is the point of $\operatorname{Col} A$ closest to $b$.
Let $A$ be an $m \times n$ matrix. The following statements are logically equivalent:
a. The equation $Ax = b$ has a unique least-squares solution for each $b$ in $\mathbb{R}^m$.
b. The columns of $A$ are linearly independent.
c. The matrix $A^T A$ is invertible.
When these statements are true, the least-squares solution $\hat{x}$ is given by
$$\hat{x} = (A^T A)^{-1} A^T b$$
The set of least-squares solutions of $Ax = b$ coincides with the nonempty set of solutions of the normal equations $A^T A x = A^T b$.
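A sketch of the normal equations in code (assuming NumPy; the system is an arbitrary inconsistent example), cross-checked against `np.linalg.lstsq`:

```python
import numpy as np

# An arbitrary inconsistent system: 3 equations, 2 unknowns.
A = np.array([[4.0, 0.0],
              [0.0, 2.0],
              [1.0, 1.0]])
b = np.array([2.0, 0.0, 11.0])

# Solve the normal equations A^T A x = A^T b.
x_hat = np.linalg.solve(A.T @ A, A.T @ b)
print(x_hat)                                  # [1. 2.]

# Cross-check with NumPy's least-squares routine.
print(np.linalg.lstsq(A, b, rcond=None)[0])   # [1. 2.]
```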
Alternative Calculations of Least-Squares Solutions
Given an $m \times n$ matrix $A$ with linearly independent columns, let $A = QR$ be a QR factorization of $A$. Then for each $b$ in $\mathbb{R}^m$, the equation $Ax = b$ has a unique least-squares solution, given by
$$\hat{x} = R^{-1} Q^T b$$
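In code one solves the triangular system $Rx = Q^T b$ rather than forming $R^{-1}$ explicitly (same arbitrary example as above, assuming NumPy):

```python
import numpy as np

A = np.array([[4.0, 0.0],
              [0.0, 2.0],
              [1.0, 1.0]])
b = np.array([2.0, 0.0, 11.0])

Q, R = np.linalg.qr(A)
x_hat = np.linalg.solve(R, Q.T @ b)   # R is upper triangular and invertible
print(x_hat)                          # [1. 2.]
```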

Least-Squares Lines

For fitting a least-squares line $y = \beta_0 + \beta_1 x$ to data points $(x_1, y_1), \cdots, (x_n, y_n)$: the predicted $y$-value at $x_j$ is $\beta_0 + \beta_1 x_j$, and the observed $y$-value is $y_j$. We can write this system as $X\beta = y$, where
$$X = \begin{bmatrix} 1 & x_1 \\ \vdots & \vdots \\ 1 & x_n \end{bmatrix}, \qquad \beta = \begin{bmatrix} \beta_0 \\ \beta_1 \end{bmatrix}, \qquad y = \begin{bmatrix} y_1 \\ \vdots \\ y_n \end{bmatrix},$$
and the least-squares line is obtained from a least-squares solution of $X\beta = y$.
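A minimal fitting sketch (assuming NumPy; the data points are an arbitrary example):

```python
import numpy as np

# Arbitrary example data points (x_j, y_j).
x = np.array([2.0, 5.0, 7.0, 8.0])
y = np.array([1.0, 2.0, 3.0, 3.0])

X = np.column_stack([np.ones_like(x), x])     # design matrix: columns 1, x_j
beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # least-squares solution of X beta = y
print(beta)   # [beta_0, beta_1] ~ [0.2857, 0.3571], i.e. y = 2/7 + (5/14) x
```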

Inner Product Spaces

An inner product on a vector space $V$ is a function that, to each pair of vectors $u$ and $v$ in $V$, associates a real number $\langle u, v \rangle$ and satisfies the following axioms, for all $u$, $v$, and $w$ in $V$ and all scalars $c$:
1. $\langle u, v \rangle = \langle v, u \rangle$
2. $\langle u + v, w \rangle = \langle u, w \rangle + \langle v, w \rangle$
3. $\langle cu, v \rangle = c\langle u, v \rangle$
4. $\langle u, u \rangle \ge 0$, and $\langle u, u \rangle = 0$ if and only if $u = 0$
A vector space with an inner product is called an inner product space.
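For instance, $\langle u, v \rangle = 4u_1v_1 + 5u_2v_2$ defines an inner product on $\mathbb{R}^2$; the sketch below (assuming NumPy; the weights 4 and 5 are an arbitrary choice) spot-checks the four axioms numerically:

```python
import numpy as np

# <u, v> = 4*u1*v1 + 5*u2*v2: positive weights make axiom 4 hold.
w = np.array([4.0, 5.0])

def inner(u, v):
    return float(np.sum(w * u * v))

u = np.array([1.0, -2.0])
v = np.array([3.0, 0.5])
z = np.array([-1.0, 4.0])
c = 2.5

print(np.isclose(inner(u, v), inner(v, u)))                       # axiom 1
print(np.isclose(inner(u + v, z), inner(u, z) + inner(v, z)))     # axiom 2
print(np.isclose(inner(c * u, v), c * inner(u, v)))               # axiom 3
print(inner(u, u) >= 0 and inner(np.zeros(2), np.zeros(2)) == 0)  # axiom 4
```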

Trend Analysis of Data

$$\hat{g} = c_0 p_0 + c_1 p_1 + c_2 p_2 + c_3 p_3$$
Here $\hat{g}$ is called the cubic trend function and $c_0, \cdots, c_3$ are the trend coefficients of the data. The Gram-Schmidt process can be used to construct the orthogonal polynomials $p_0, \cdots, p_3$ (orthogonal with respect to evaluation at the data points), and then each coefficient is $c_j = \langle g, p_j \rangle / \langle p_j, p_j \rangle$.
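A sketch of the whole pipeline (assuming NumPy; the data is an arbitrary example): orthogonalize $1, t, t^2, t^3$ over the sample points, then read off the trend coefficients with the projection formula:

```python
import numpy as np

# Arbitrary example: data g observed at sample points t.
t = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
g = np.array([3.0, 5.0, 5.0, 4.0, 3.0])

# Columns of P: the monomials 1, t, t^2, t^3 evaluated at the data points.
P = np.vander(t, 4, increasing=True).astype(float)

# Gram-Schmidt on the columns gives orthogonal polynomials p_0, ..., p_3
# with respect to the data-point inner product <f, h> = sum_j f(t_j) h(t_j).
for j in range(P.shape[1]):
    for i in range(j):
        P[:, j] -= (P[:, j] @ P[:, i]) / (P[:, i] @ P[:, i]) * P[:, i]

# Trend coefficients c_j = <g, p_j> / <p_j, p_j>.
c = (P.T @ g) / np.sum(P * P, axis=0)
print(c)
```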

Fourier Series

Any function in $C[0, 2\pi]$ can be approximated as closely as desired (in the least-squares sense) by a function of the form
$$F = \frac{a_0}{2} + a_1\cos t + \cdots + a_n\cos nt + b_1\sin t + \cdots + b_n\sin nt$$
The set $\{1, \cos t, \cos 2t, \cdots, \cos nt, \sin t, \sin 2t, \cdots, \sin nt\}$ is orthogonal with respect to the inner product
$$\langle f, g \rangle = \int_0^{2\pi} f(t)\,g(t)\,dt$$
Thus
$$a_k = \frac{\langle f, \cos kt \rangle}{\langle \cos kt, \cos kt \rangle}, \qquad b_k = \frac{\langle f, \sin kt \rangle}{\langle \sin kt, \sin kt \rangle}, \qquad k \ge 1$$
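A numerical sketch of these coefficient formulas (assuming NumPy; $f(t) = t$ is an arbitrary example whose series is known to have $a_k \approx 0$ and $b_k \approx -2/k$), with the integrals approximated by a Riemann sum:

```python
import numpy as np

# Approximate the inner-product integrals on [0, 2*pi] with a Riemann sum.
t = np.linspace(0.0, 2 * np.pi, 10001)
dt = t[1] - t[0]
f = t   # the function being expanded: f(t) = t

def inner(u, v):
    return float(np.sum(u * v) * dt)   # ~ integral of u*v over [0, 2*pi]

for k in range(1, 4):
    a_k = inner(f, np.cos(k * t)) / inner(np.cos(k * t), np.cos(k * t))
    b_k = inner(f, np.sin(k * t)) / inner(np.sin(k * t), np.sin(k * t))
    print(k, round(a_k, 3), round(b_k, 3))   # a_k ~ 0, b_k ~ -2/k
```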
