Thoughts on Linear Algebra

Linear Algebra

1. Linear Transformation
1) What is a Linear Transformation?
https://blog.csdn.net/zhaodongh/article/details/79432521
2) How can we describe a motion?
Every linear transformation is matched with a group of matrices. When we want to describe a motion in physics, we must choose a reference frame first. For example, if we want to describe a person walking 10 meters along a train, we can choose infinitely many reference frames, such as the train, the ground, other passengers, and so on. Similarly, when we deal with a linear transformation, we may choose different coordinate systems, and each choice gives a different matrix for the same transformation.
3) Meaning of C^(-1)AB
If the input vector is expressed in a basis with matrix B and the output in a basis with matrix C, then C^(-1)AB is the matrix of the transformation A with respect to those coordinates: B converts input coordinates into standard coordinates, A acts, and C^(-1) converts the result back.
4) Similar transformation
Naturally, a question comes to our mind: how can we tell whether two matrices describe the same motion? Let's zoom in!
[Figures: T and T' = R T R^(-1) describe the same transformation under coordinates related by R]
Therefore, we can see that T and T’, basically, are the same transformation under different coordinate systems.
5) Eigenvalues & Eigenvectors of similar matrices
T x = λ x
T' = R T R^(-1)
T' (R x) = R T R^(-1) R x = R T x = λ (R x)
So we can see that a similarity transformation preserves the eigenvalues, while the eigenvectors get multiplied by R, which means the eigenvectors undergo the same change of coordinates R as the basis does. A short numerical check follows.
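A minimal sketch of this fact (NumPy; the matrices T and R are arbitrary illustrations, not from the original post):

```python
import numpy as np

# An arbitrary transformation T and an invertible change of basis R.
T = np.array([[2.0, 1.0],
              [0.0, 3.0]])
R = np.array([[1.0, 1.0],
              [0.0, 1.0]])

# The same transformation expressed in the new coordinates.
T_prime = R @ T @ np.linalg.inv(R)

# Eigenvalues are preserved under similarity.
print(np.allclose(np.sort(np.linalg.eigvals(T)),
                  np.sort(np.linalg.eigvals(T_prime))))  # True

# If T x = lambda x, then T' (R x) = lambda (R x).
lam, X = np.linalg.eig(T)
for i in range(len(lam)):
    x = X[:, i]
    print(np.allclose(T_prime @ (R @ x), lam[i] * (R @ x)))  # True
```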

2. Determinant
a) Meaning
The determinant is the scaling ratio of the transformation: it tells us by what factor the transformation scales areas (in 2D) or volumes (in 3D), with the sign recording whether orientation is flipped. The sketch below checks this on the unit square.
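A small sketch of this interpretation (NumPy; the matrix A is an arbitrary illustration):

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])

# Area of the unit square after transformation = |cross product of the
# images of the two standard basis vectors| = |det A|.
e1, e2 = A @ np.array([1.0, 0.0]), A @ np.array([0.0, 1.0])
area = abs(e1[0] * e2[1] - e1[1] * e2[0])

print(area, np.linalg.det(A))  # both 5.0
```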

3. Eigenvalues & Eigenvectors
On the one hand, if we take a matrix as a box that contains a bunch of information, the eigenvectors represent the characteristics excavated from the whole of that information. The number of eigenvectors indicates the quantity of characteristics, and each eigenvalue reveals the importance of its eigenvector. That is the reason why we can use them to do principal component analysis and many other fancy things.
On the other hand, if we treat matrices as transformations, there is another fancy story. As we mentioned before, every matrix can represent a transformation, and when we do a matrix multiplication we are applying that transformation to some vector: after the transformation, the multiplicand vector moves to another place. Under most circumstances, however, the motion looks disorganized and is a hard nut to crack. So does there exist a method that can break those nuts (I mean, transformations) apart? With eigenvectors, the answer is yes. The eigenvectors of a matrix perfectly delineate the transformation behind it, and the respective eigenvalues describe what happens along each eigenvector: real eigenvalues represent stretching motions, while complex eigenvalues represent rotations. To some extent, gathering all the eigenvectors of one matrix is just like choosing a group of basis vectors, and when we want to multiply the matrix with another vector, we can simply break the vector up and rebuild it in the eigenvector basis. After the complex transformation goes through this prism, it looks much neater and more beautiful. If we think a little more about this process: isn't eigendecomposition exactly the hunt for a similar transformation? What we obtain is just the neatest similar form of the transformation, a diagonal matrix Λ with A = S Λ S^(-1). By using this process, we can do lots of fancy things again, such as Markov chains, PageRank, and so on. A sketch of the diagonalization trick follows.
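A minimal sketch of the "prism" idea (NumPy; the Markov matrix is an arbitrary illustration, not from the original post): diagonalize A = S Λ S^(-1), after which powers of A become trivial, which is exactly what Markov-chain steady states and PageRank exploit.

```python
import numpy as np

# A small column-stochastic Markov matrix (each column sums to 1).
A = np.array([[0.9, 0.2],
              [0.1, 0.8]])

# Eigendecomposition: A S = S Lambda, i.e. A = S Lambda S^(-1).
lam, S = np.linalg.eig(A)
S_inv = np.linalg.inv(S)

# Powers in the eigenbasis: A^k = S Lambda^k S^(-1).
k = 50
A_k = S @ np.diag(lam ** k) @ S_inv
print(np.allclose(A_k, np.linalg.matrix_power(A, k)))  # True

# The steady state is the eigenvector with eigenvalue 1, normalized.
i = np.argmin(np.abs(lam - 1.0))
steady = np.real(S[:, i]) / np.real(S[:, i]).sum()
print(steady)  # approximately [2/3, 1/3]
```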

4. Singular Value Decomposition

  1. Intuition
As we can see, eigendecomposition is extremely useful to solve some problems, such as PCA, Markov chains, and so on. However, the prerequisite of eigendecomposition is kind of strict: the matrix must be square (n×n) and own n independent eigenvectors. Does there exist some way to do a decomposition for more general m×n matrices? A guy called SVD saves us.
  2. Process
    A'A = Q1 Λ1 Q1' = V Σ² V' => get V and Σ
    A A' = Q2 Λ2 Q2' = U Σ² U' => get U (or u_i = A v_i / σ_i)
    Note:
    The Λ of the two decompositions may not be the same: A'A is n×n while A A' is m×m, so only their nonzero eigenvalues (the σ_i²) coincide. Also, since eigenvector signs are arbitrary, computing U independently may break A = U Σ V'; taking u_i = A v_i / σ_i avoids this. A numerical sketch appears at the end of this section.
  3. Essence
    Basically, the SVD process for a matrix is just to find a group of orthogonal basis vectors in its row space such that, after the transformation, they land in its column space while staying orthogonal, which can be represented by the following picture.
    [Figure: an orthonormal basis v_1..v_r of row(A) is mapped by A to the orthogonal vectors σ_1 u_1..σ_r u_r in col(A)]
    a) Reasons:
    Theorem 1
    AS = SΛ means A s_i = λ_i s_i for each column, so every λ_i s_i lies in the column space of A. Hence the eigenvectors of A (those with nonzero eigenvalues) lie in the column space of A.
    Theorem 2
    N(A'A) = N(A) => A'A and A have the same row space.
    Theorem 3
    A x = y
    After the transformation, y must belong to col(A).
    By Theorem 1, the eigenvectors of A'A lie in its column space. Because A'A is always symmetric, its eigenvectors also lie in its row space.
    By Theorem 2, those eigenvectors (the v_i) therefore lie in the row space of A.
    By Theorem 3, after multiplying by A, they move to the column space of A.
    They also stay orthogonal there, since (A v_i)'(A v_j) = v_i' A'A v_j = σ_j² v_i' v_j = 0 for i ≠ j. Therefore, the picture above is obvious.
    Note:
    What if we do not have enough vectors in row(A) and col(A)?
    Because we need V to be a full n×n orthogonal matrix, after we choose all possible vectors in row(A), any remaining columns of V can simply be taken from an orthonormal basis of N(A).
    Similarly, we use N(A') to help col(A) fill up U.
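A minimal sketch of the process above (NumPy; the matrix A is an arbitrary illustration): build V and Σ from the eigendecomposition of A'A, take u_i = A v_i / σ_i, and check A = U Σ V'.

```python
import numpy as np

A = np.array([[3.0, 1.0, 1.0],
              [-1.0, 3.0, 1.0]])       # 2 x 3, rank 2
m, n = A.shape

# Eigendecomposition of A'A gives V and the squared singular values.
lam, V = np.linalg.eigh(A.T @ A)        # ascending eigenvalues
order = np.argsort(lam)[::-1]           # re-sort descending
lam, V = lam[order], V[:, order]
sigma = np.sqrt(np.clip(lam, 0.0, None))

# u_i = A v_i / sigma_i for the nonzero singular values.
r = int(np.sum(sigma > 1e-10))          # rank of A
U = (A @ V[:, :r]) / sigma[:r]

# Reassemble and compare with A.
Sigma = np.zeros((m, n))
Sigma[:r, :r] = np.diag(sigma[:r])
print(np.allclose(U @ Sigma[:r, :] @ V.T, A))  # True
```

When r < n the remaining columns of V come from N(A), and when r < m the remaining columns of U come from N(A'), exactly as in the note above.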

Important facts
1) rank(A)
= # of nonzero eigenvalues of A'A
= # of nonzero singular values of A
2) If B is invertible => B is a product of elementary matrices (a set of elementary operations), so multiplying by an invertible matrix preserves rank:
rank(A) = rank(AB) = rank(BA) = rank(BAB)
A quick numerical check follows.
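A sketch of both facts (NumPy; all matrices are arbitrary illustrations, and since A is rectangular here, separate invertible matrices are used on the left and right):

```python
import numpy as np

rng = np.random.default_rng(0)

# A rank-2 matrix built as a product of rank-2 factors (4 x 3).
A = rng.standard_normal((4, 2)) @ rng.standard_normal((2, 3))

# Fact 1: rank(A) = # of nonzero singular values of A
#                 = # of nonzero eigenvalues of A'A.
svals = np.linalg.svd(A, compute_uv=False)
evals = np.linalg.eigvalsh(A.T @ A)
print(np.linalg.matrix_rank(A),
      np.sum(svals > 1e-10),
      np.sum(evals > 1e-10))            # 2 2 2

# Fact 2: multiplying by invertible matrices preserves rank.
B = rng.standard_normal((3, 3))         # generic, hence invertible
C = rng.standard_normal((4, 4))
print(np.linalg.matrix_rank(A @ B),
      np.linalg.matrix_rank(C @ A))     # 2 2
```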

Deep Thinking

  1. The determinant and the eigenvalues are intrinsic characteristics of a transformation: they do not change as the coordinate system changes, and the eigenvectors, as geometric objects, stay the same too (only their coordinates get multiplied by R, as we saw above). Just like a ball in space: no matter which way we choose to represent it, it is always a ball.
  2. Symmetric matrix
    A) Why are its eigenvalues real?
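The post leaves this question open. The standard one-line argument, sketched below as a LaTeX fragment (a well-known proof, not spelled out in the original), uses conjugate transposes:

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
Let $A$ be real symmetric ($\bar{A}=A$, $A^{T}=A$) and $Ax=\lambda x$
with $x\neq 0$. Multiply by $\bar{x}^{T}$ on the left, in two ways:
\begin{align*}
\bar{x}^{T}(Ax) &= \lambda\,\bar{x}^{T}x,\\
(\bar{x}^{T}A)x = (A\bar{x})^{T}x = (\bar{\lambda}\bar{x})^{T}x
                &= \bar{\lambda}\,\bar{x}^{T}x.
\end{align*}
Since $\bar{x}^{T}x=\sum_i\lvert x_i\rvert^{2}>0$, we get
$\lambda=\bar{\lambda}$, so every eigenvalue of a real symmetric
matrix is real.
\end{document}
```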

References

https://blog.csdn.net/xiaocong1990/article/details/54909126
https://blog.csdn.net/zhongkejingwang/article/details/43053513
The 3Blue1Brown channel
Gilbert Strang's video lectures
