Normal Matrix

Normals are funny.  They're vec3's, since you don't want perspective applied to normals.  And they don't actually scale quite right--a 45-degree surface has a 45-degree normal, but scaling by glScalef(1,0.1,1) drops the surface down to near 0 degrees while tilting the correct normal *up*, in the opposite direction from the surface, to near 90 degrees.
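A quick numeric check of that claim (a sketch in NumPy rather than OpenGL; the vectors below are just the 45-degree example made concrete):

```python
import numpy as np

# A 45-degree surface: a direction along the surface, and its unit normal.
tangent = np.array([1.0, 1.0, 0.0])                # surface rises at 45 degrees
normal = np.array([-1.0, 1.0, 0.0]) / np.sqrt(2)   # 45-degree normal, pointing up

M = np.diag([1.0, 0.1, 1.0])  # glScalef(1, 0.1, 1) as a matrix

# The surface flattens toward 0 degrees...
scaled_tangent = M @ tangent   # (1, 0.1, 0): nearly horizontal

# ...but naively transforming the normal by the same M flattens it too,
# so it is no longer perpendicular to the scaled surface:
naive_normal = M @ normal
print(np.dot(naive_normal, scaled_tangent))  # clearly nonzero
```

The dot product of the naively scaled normal with the scaled surface direction is far from zero, so transforming normals by M itself is wrong.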

Mathematically, if between two points a and b on the surface, dot(n,b-a)==0, then after applying a matrix M to the points, you want the normal to still be perpendicular.  The question is, what matrix N do you have to apply to the normal to make this happen?  In other words, find N such that
    dot( N * n , M * b - M * a) == 0

We can solve this by noting that a dot product can be expressed as matrix multiplication--dot(x,y) = transpose(x) * y, where we treat an ordinary column vector as a little one-column matrix, and transposing flips it into a row.  So
   transpose(N * n) * (M*b - M*a) == 0         (as above, but written using transpose and matrix multiplication)
   transpose(N * n) * M * (b-a) == 0              (collect both copies of M)
   transpose(n) * transpose(N) * M * (b-a) == 0    (transpose-of-product is product-of-transposes in opposite order)

OK.  This looks really similar to our assumption that the original normal was perpendicular to the surface--that dot(n,b-a) == transpose(n) * (b-a) == 0.  In fact, the only difference is the new matrices wedged in the middle.  If we pick N to make the term in the middle the identity, then our new normal will be perpendicular to the transformed surface too:
    transpose(N) * M == I   (the identity matrix)
This is the definition of the matrix inverse, so the "normal matrix" N = transpose(inverse(M)).
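A sanity check of this formula on the scaling example from above (again a NumPy sketch, not OpenGL itself):

```python
import numpy as np

M = np.diag([1.0, 0.1, 1.0])  # glScalef(1, 0.1, 1)
N = np.linalg.inv(M).T        # the normal matrix: transpose(inverse(M))

# Two points on a 45-degree surface, and the surface's unit normal.
a = np.array([0.0, 0.0, 0.0])
b = np.array([1.0, 1.0, 0.0])
n = np.array([-1.0, 1.0, 0.0]) / np.sqrt(2)

print(np.dot(n, b - a))              # 0: perpendicular before transforming
print(np.dot(N @ n, M @ b - M @ a))  # 0 (up to rounding): still perpendicular
```

Note that for a pure rotation M is orthogonal, so inverse(M) == transpose(M) and N == M; it's non-uniform scaling and shear that make the normal matrix differ from M.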

If you look up the GLSL definition for "gl_NormalMatrix", it's defined as "the transpose of the inverse of the gl_ModelViewMatrix".  Now you know why!

http://www.cs.uaf.edu/2007/spring/cs481/lecture/01_23_matrices.html

### PyTorch `torch.normal` Explained

#### When to use it

In machine learning and deep learning, the normal (Gaussian) distribution is widely used for initializing weights, generating random noise, and similar tasks. PyTorch provides several ways to create tensors drawn from normal distributions with given parameters.

#### Signatures

`torch.normal(mean, std, *, generator=None, out=None)`

- If one argument is a scalar and the other a tensor, the result takes the shape of the tensor argument, with each element drawn independently from a normal distribution using the scalar for the other parameter.
- If both arguments are tensors, they should have matching (or, in recent PyTorch versions, broadcastable) shapes, and sampling is performed element-wise on these inputs[^1].

There is also a `torch.normal(mean, std, size, *, generator=None, out=None)` overload where both `mean` and `std` are floats and `size` specifies the output shape.

#### Parameters

- **mean**: a float, or a tensor of per-element means of any shape;
- **std**: a float, or a tensor of per-element standard deviations (not variances);
- **generator** (optional): a pseudo-random number generator state object; by default the global default PRNG state is used;
- **out** (optional): an output tensor; if specified, the result is written into it instead of allocating a new tensor.

#### Examples

The snippets below show how to use `torch.normal()` to build data matching each calling pattern (note that the scalar-mean, scalar-std overload requires an explicit `size`):

```python
import torch

# A single sample from N(0, 1): the float/float overload needs a size
single_sample = torch.normal(0., 1., size=(1,))
print(f'Single sample drawn from N(0,1):\n{single_sample}\n')

# A (2,3) matrix whose elements come from different normal distributions
matrix_samples = torch.normal(
    torch.tensor([[0., 0., 0.], [1., 1., 1.]]),    # per-element means
    torch.tensor([[1., 2., 3.], [4., 5., 6.]]))    # per-element stds
print('Matrix with elements sampled independently:\n', matrix_samples)

# Fill an existing tensor with normally distributed data
existing_tensor = torch.empty((2, 2))
torch.normal(-1., .5, size=(2, 2), out=existing_tensor)
print('\nExisting tensor filled with samples from N(-1, .5^2):\n', existing_tensor)
```

These snippets cover three common cases: producing a single value, combining multiple mean/standard-deviation pairs into one result, and writing into existing storage rather than allocating new memory.