Understanding the Essence of Linear Algebra in Ten Minutes: "Essence of Linear Algebra" Notes, Part 3

In linear algebra, a matrix describes where a vector lands after a linear transformation, and the inverse matrix corresponds to the reverse transformation that recovers the original vector. When the determinant is nonzero the matrix is invertible, meaning the spatial transformation can be undone; when the determinant is zero the transformation collapses space, and solutions may not exist or are restricted. The column space and the null space describe, respectively, the set of all possible outputs and the set of vectors that land on the origin. Non-square matrices represent mappings between spaces of different dimensions.

Lecture 6: Inverse matrices, column space and null space

The very first lecture of MIT's linear algebra course already covers the correspondence between Ax = b and systems of linear equations (linear systems of equations). This video, by contrast, reads linear algebra from the vector point of view from the very start: a matrix is a distinctive way of expressing a linear transformation, and Ax represents where the vector x lands after the linear transformation. So here, Ax = b, read geometrically, says that the vector x lands, after the linear transformation, on the vector b. And this matches up exactly with a system of linear equations. Therefore:

It sheds light on a pretty cool geometric interpretation for the problem. The matrix A corresponds with some linear transformation, so solving

means we're looking for a vector x, which after applying the transformation lands on b.

So how do we solve this new equation, Ax = b?

The picture makes it intuitive. Since the vector b is what the vector x deforms into under the matrix A, applying the reverse deformation to b recovers the original vector x. This reverse linear transformation is the inverse matrix. That is:

x = A⁻¹b.

Likewise, the geometric meaning of A⁻¹A is: first apply the linear transformation, then apply a second transformation that deforms space by the same amount but in the opposite direction. The result, taking the basis vectors as an example, is that after the two deformations everything returns to where it started. A inverse times A equals the matrix that corresponds to "doing nothing". The transformation that "does nothing" is called the "identity transformation", and its matrix is the identity matrix.

That is the geometric reading of x = A⁻¹b.
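The inverse-transformation picture above can be sketched numerically with NumPy. The 2x2 matrix A and vector b below are hypothetical examples, not from the notes; they just illustrate that x = A⁻¹b lands back on b and that A⁻¹A is the identity.

```python
import numpy as np

# Hypothetical invertible 2x2 transformation A and target vector b.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([5.0, 10.0])

# det(A) != 0, so the reverse transformation A^{-1} exists.
assert np.linalg.det(A) != 0
x = np.linalg.inv(A) @ b        # x = A^{-1} b

# Applying A to x lands exactly on b, and A^{-1} A "does nothing".
print(np.allclose(A @ x, b))                          # True
print(np.allclose(np.linalg.inv(A) @ A, np.eye(2)))   # True
```

In practice `np.linalg.solve(A, b)` is preferred over forming the inverse explicitly, but the inverse makes the geometric story visible.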

However, when the determinant is 0, the transformation collapses space, and a collapsed space cannot be undone. That is the intuitive link between the determinant and invertibility.

You cannot "unsquish" a line into a plane. At least that's not something that a function can do. That would require transforming each individual vector into a whole line full of vectors. But functions can only take a single input to a single output.

But even when the determinant is 0, Ax = b can still have a solution.

It's just that when your transformation squishes space onto, say, a line, you have to be lucky enough that the vector b lives somewhere on that line. You might notice that some of these zero determinant cases feel a lot more restrictive than others. Given a 3-by-3 matrix, for example, it seems a lot harder for a solution to exist when it squishes space onto a line compared to when it squishes things onto a plane, even though both of those are zero determinant.
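A quick NumPy sketch of this "lucky b" situation, using a hypothetical singular 2x2 matrix (not from the notes) whose columns are parallel, so it squishes the plane onto a line:

```python
import numpy as np

# Singular matrix: both columns lie along (1, 2), so the output is a line.
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])
print(np.linalg.det(A))   # effectively zero: A has no inverse

# b1 lies on the output line span{(1, 2)}, so Ax = b1 still has solutions.
b1 = np.array([3.0, 6.0])
x1, *_ = np.linalg.lstsq(A, b1, rcond=None)
print(np.allclose(A @ x1, b1))   # True

# b2 is off that line: no vector x is ever mapped onto it.
b2 = np.array([3.0, 5.0])
x2, *_ = np.linalg.lstsq(A, b2, rcond=None)
print(np.allclose(A @ x2, b2))   # False
```

`np.linalg.lstsq` finds the best-fit x; when b sits in the column space the fit is exact, otherwise it can only get close.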

Related terms:

When the output of a transformation is a line, meaning it's one-dimensional, we say the transformation has a rank of 1... So the word "rank" means the number of dimensions in the output of a transformation.

This set of all possible outputs for your matrix, whether it's a line, a plane, 3-D space, whatever, is called the "column space" of your matrix...The columns of your matrix tell you where the basis vectors land, and the span of those transformed basis vectors gives you all possible outputs.

So a more precise definition of rank would be that it's the number of dimensions in the column space. When this rank is as high as it can be, meaning it equals the number of columns, we call the matrix "full rank".
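NumPy computes this rank directly. The two small matrices below are hypothetical illustrations: one full rank (column space is the whole plane), one rank 1 (column space collapses to a line).

```python
import numpy as np

# Full-rank 2x2 matrix: its columns span the whole plane.
full = np.array([[1.0, 0.0],
                 [0.0, 1.0]])

# Rank-1 matrix: both columns lie on the same line.
line = np.array([[1.0, 2.0],
                 [2.0, 4.0]])

print(np.linalg.matrix_rank(full))  # 2, rank equals number of columns: full rank
print(np.linalg.matrix_rank(line))  # 1, column space is just a line
```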

Notice, the zero vector will always be included in the column space, since linear transformations must keep the origin fixed in place. For a full rank transformation, the only vector that lands at the origin is the zero vector itself. But for matrices that aren't full rank, which squish to a smaller dimension, you can have a whole bunch of vectors that land on zero. If a 2-D transformation squishes space onto a line, for example, there is a separate line in a different direction full of vectors that get squished onto the origin...This set of vectors that lands on the origin is called the "null space" or the "kernel" of your matrix. It's the space of all vectors that become null in the sense that they land on the zero vector.

In terms of the linear system of equations, when b happens to be the zero vector, the null space gives you all of the possible solutions to the equations.
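One way to compute that null space numerically, sketched here with a hypothetical rank-1 matrix: the right singular vectors of A whose singular values are (numerically) zero form a basis for the kernel.

```python
import numpy as np

# Rank-1 matrix squishing the 2-D plane onto a line; a whole line of
# vectors gets squished onto the origin.
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])

# Singular vectors with zero singular value span the null space (kernel).
_, s, Vt = np.linalg.svd(A)
null_vectors = Vt[s < 1e-10]
print(null_vectors.shape)        # (1, 2): a one-dimensional kernel

# Every null-space vector lands on the zero vector.
v = null_vectors[0]
print(np.allclose(A @ v, 0))     # True
```

SciPy users can get the same basis from `scipy.linalg.null_space(A)`.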

So that's a very high-level overview of how to think about linear systems of equations geometrically. Each system has some kind of linear transformation associated with it, and when that transformation has an inverse, you can use that inverse to solve your system. Otherwise, the idea of column space lets us understand when a solution even exists, and the idea of a null space helps us to understand what the set of all possible solutions can look like.

Note that every matrix the video has discussed up to this point has actually been square. I used to think of square matrices as a special case, with rectangular matrices being the normal ones; this video made me feel instead that square matrices are the general, normal case. Next come non-square matrices.

Take a tall, skinny matrix, say a 3-by-2 one, as an example: you can tell it has the geometric interpretation of mapping two dimensions to three dimensions, since the two columns indicate that the input has two basis vectors, and the three rows indicate that the landing spot for each of those basis vectors is described with three separate coordinates.

Likewise, if you see a 2-by-3 matrix with two rows and three columns...Well, the three columns indicate that you're starting in a space that has three basis vectors, so we're starting in three dimensions; and the two rows indicate that the landing spot for each of those three basis vectors is described with only two coordinates, so they must be landing in two dimensions. So it's a transformation from 3-D space onto the 2-D plane.
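Both directions can be seen by shape alone in NumPy; the concrete matrices below are hypothetical examples chosen only to show the dimensions of input and output.

```python
import numpy as np

# 3x2 matrix: 2 columns mean 2 input basis vectors; 3 rows mean each
# landing spot needs 3 coordinates. It maps the 2-D plane into 3-D space.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
x = np.array([2.0, 3.0])   # a 2-D input vector
print(A @ x)               # [2. 3. 5.], a point in 3-D space

# 2x3 matrix goes the other way: from 3-D down onto the 2-D plane.
B = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
y = np.array([1.0, 2.0, 3.0])
print(B @ y)               # [4. 5.], a point in the 2-D plane
```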
