PyTorch Matrix Operations

1. Constructing a diagonal matrix

a = torch.rand(2)
print(a)
# diag builds a diagonal matrix from a 1-D tensor; diagonal=0 places the values on the main diagonal
x = torch.diag(a, diagonal=0)
print(x)
# diagonal > 0 places the values that many positions above the main diagonal
x = torch.diag(a,diagonal=1)
print(x)
x = torch.diag(a,diagonal=2)
print(x)
# diagonal < 0 places the values that many positions below the main diagonal
x = torch.diag(a,diagonal=-1)
print(x)
------------------------------------------------------------------------------
result:
tensor([0.6151, 0.7596])
tensor([[0.6151, 0.0000],
        [0.0000, 0.7596]])
tensor([[0.0000, 0.6151, 0.0000],
        [0.0000, 0.0000, 0.7596],
        [0.0000, 0.0000, 0.0000]])
tensor([[0.0000, 0.0000, 0.6151, 0.0000],
        [0.0000, 0.0000, 0.0000, 0.7596],
        [0.0000, 0.0000, 0.0000, 0.0000],
        [0.0000, 0.0000, 0.0000, 0.0000]])
tensor([[0.0000, 0.0000, 0.0000],
        [0.6151, 0.0000, 0.0000],
        [0.0000, 0.7596, 0.0000]])
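Worth adding (a small sketch beyond the original snippet): torch.diag works in both directions. Given a 2-D tensor, it extracts the chosen diagonal as a 1-D tensor instead of building a matrix:

```python
import torch

# diag on a 2-D input extracts a diagonal rather than building a matrix
m = torch.tensor([[1., 2., 3.],
                  [4., 5., 6.],
                  [7., 8., 9.]])
main = torch.diag(m)                # main diagonal -> tensor([1., 5., 9.])
upper = torch.diag(m, diagonal=1)   # one above     -> tensor([2., 6.])
lower = torch.diag(m, diagonal=-1)  # one below     -> tensor([4., 8.])
print(main, upper, lower)
```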

2. Trace of a matrix

# the sum of the main-diagonal elements of a 2-D matrix
x = torch.randn(2,4)
print(x)
print(torch.trace(x))     # trace = sum of diagonal entries
x = torch.randn(2,2)
print(x)
print(torch.trace(x))
------------------------------------------------------------------------------
result:
tensor([[ 0.0316,  0.8957,  0.6893,  1.3485],
        [-1.7997, -0.6455, -0.3539,  0.0214]])
tensor(-0.6139)
tensor([[ 0.1226,  0.2207],
        [-0.5384,  3.7622]])
tensor(3.8848)
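As a quick sanity check (a sketch, not part of the original), the trace can be reproduced by combining diag and sum:

```python
import torch

torch.manual_seed(0)
x = torch.randn(3, 3)
# the trace is simply the sum of the main-diagonal entries
assert torch.isclose(torch.trace(x), torch.diag(x).sum())
print(torch.trace(x))
```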

3. Upper- and lower-triangular matrices

# Lower-triangular matrix
# diagonal selects the boundary diagonal: everything on and below it is kept
# default (diagonal=0): the main diagonal and below
x = torch.randn(2,5)
print(torch.tril(x))
print(torch.tril(x,diagonal=2))  # tril = lower triangle
print(torch.tril(x,diagonal=-1))
# Upper triangle
print(torch.triu(x))
print(torch.triu(x,diagonal=4))
print(torch.triu(x,diagonal=-1))
-----------------------------------------------------------------------------
result:
tensor([[-0.1253,  0.0000,  0.0000,  0.0000,  0.0000],
        [-0.8299,  0.8951,  0.0000,  0.0000,  0.0000]])
tensor([[-0.1253, -0.8371,  0.2722,  0.0000,  0.0000],
        [-0.8299,  0.8951, -0.0954, -1.2229,  0.0000]])
tensor([[ 0.0000,  0.0000,  0.0000,  0.0000,  0.0000],
        [-0.8299,  0.0000,  0.0000,  0.0000,  0.0000]])
tensor([[-0.1253, -0.8371,  0.2722,  1.6253,  0.4773],
        [ 0.0000,  0.8951, -0.0954, -1.2229, -1.5745]])
tensor([[0.0000, 0.0000, 0.0000, 0.0000, 0.4773],
        [0.0000, 0.0000, 0.0000, 0.0000, 0.0000]])
tensor([[-0.1253, -0.8371,  0.2722,  1.6253,  0.4773],
        [-0.8299,  0.8951, -0.0954, -1.2229, -1.5745]])
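A useful property (a small sketch under the definitions above): tril and triu with complementary diagonal arguments partition a matrix exactly, so the two pieces sum back to the original:

```python
import torch

x = torch.arange(1., 10.).reshape(3, 3)
lower = torch.tril(x)                     # main diagonal and below
strict_upper = torch.triu(x, diagonal=1)  # strictly above the main diagonal
# the two pieces cover every entry exactly once
assert torch.equal(lower + strict_upper, x)
```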

4. Matrix products: mm and bmm

# bmm: batch matrix product
# the number of columns of matrix A must equal the number of rows of matrix B
# shape: 1x2x3
x = torch.Tensor([[[1,2,3],[4,5,6]]])
print(x.shape)
y = torch.Tensor([[[9],[8],[7]]])
print(y.shape)
print(torch.bmm(x,y),torch.bmm(x,y).squeeze(0).squeeze(1))
# bmm is batch matrix multiplication; mm is plain (single) matrix multiplication
print(torch.mm(x.squeeze(0),y.squeeze(0)))
# torch.dot computes the dot product of two 1-D tensors: multiply elementwise, then sum
x = torch.Tensor([1,2,3,4])
y = torch.Tensor([4,3,2,1])
print(torch.dot(x,y))
-----------------------------------------------------------------------------
result:
torch.Size([1, 2, 3])
torch.Size([1, 3, 1])
tensor([[[ 46.],
         [118.]]]) tensor([ 46., 118.])
tensor([[ 46.],
        [118.]])
tensor(20.)
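The example above uses a batch of size 1; with a larger batch, bmm multiplies each pair of matrices in the batch independently. A sketch illustrating this:

```python
import torch

torch.manual_seed(0)
a = torch.randn(4, 2, 3)   # batch of 4 matrices, each 2x3
b = torch.randn(4, 3, 5)   # batch of 4 matrices, each 3x5
out = torch.bmm(a, b)      # -> shape (4, 2, 5)
# each batch slice equals the plain mm of the corresponding slices
assert torch.allclose(out[0], torch.mm(a[0], b[0]))
print(out.shape)
```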

5. Matrix multiply-and-add: addmm

# addmm adds the product of two matrices to matrix M; beta scales M, alpha scales the product
# out = (beta * M) + (alpha * mat1 @ mat2)
x = torch.Tensor([[1,2]])
print(x.shape)
batch1 = torch.Tensor([[1,2,3]])
print(batch1.shape)
batch2 = torch.Tensor([[1,2],[3,4],[5,6]])
print(batch2.shape)
print(torch.addmm(x,batch1,batch2,beta=0.1,alpha=5))
-----------------------------------------------------------------------------
result:
torch.Size([1, 2])
torch.Size([1, 3])
torch.Size([3, 2])
tensor([[110.1000, 140.2000]])
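The result can be verified directly against the formula (a small sketch reproducing the numbers above):

```python
import torch

M = torch.tensor([[1., 2.]])
mat1 = torch.tensor([[1., 2., 3.]])
mat2 = torch.tensor([[1., 2.], [3., 4.], [5., 6.]])
out = torch.addmm(M, mat1, mat2, beta=0.1, alpha=5)
# the same result computed by hand from out = beta*M + alpha*(mat1 @ mat2)
manual = 0.1 * M + 5 * (mat1 @ mat2)
assert torch.allclose(out, manual)
print(out)  # tensor([[110.1000, 140.2000]])
```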

6. Batched matrix multiply-and-add: addbmm

# addbmm adds the batched product of two tensors to matrix M; beta scales M, alpha scales the product
# the two multiplied tensors are 3-D, with shape [batch_size, width, length]
# the two tensors' batch_size must be equal
# res = (beta * M) + (alpha * sum(batch1_i @ batch2_i, i = 0..b))
x = torch.Tensor([[1,2]])
print(x.shape)
batch1 = torch.Tensor([[[1,2,3]]])
print(batch1.shape)
batch2 = torch.Tensor([[[1,2],[3,4],[5,6]]])
print(batch2.shape)
print(torch.addbmm(x,batch1,batch2,beta=0.1,alpha=10))
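With a batch larger than 1, addbmm sums the per-batch products before adding them to M; a sketch checking this against bmm:

```python
import torch

torch.manual_seed(0)
M = torch.tensor([[1., 2.]])
batch1 = torch.randn(3, 1, 3)  # batch of 3, each 1x3
batch2 = torch.randn(3, 3, 2)  # batch of 3, each 3x2
out = torch.addbmm(M, batch1, batch2, beta=0.1, alpha=10)
# addbmm reduces over the batch dimension before the add
manual = 0.1 * M + 10 * torch.bmm(batch1, batch2).sum(dim=0)
assert torch.allclose(out, manual, atol=1e-5)
print(out.shape)  # torch.Size([1, 2])
```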

7. Matrix-vector multiply-and-add: addmv

# addmv adds the matrix-vector product to a tensor; beta scales the tensor, alpha scales the product
# the matrix's number of columns must equal the vector's length
# out = (beta * tensor) + (alpha * (mat @ vec))
x = torch.Tensor([1,2,3])
print(x.shape)
mat = torch.Tensor([[1],[2],[3]])
print(mat.shape)
vec = torch.Tensor([3])
print(vec.shape)
print(torch.addmv(x,mat,vec,beta=1,alpha=1))
-----------------------------------------------------------------------------
result:
torch.Size([3])
torch.Size([3, 1])
torch.Size([1])
tensor([ 4.,  8., 12.])
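Again the output can be checked against the formula directly (a sketch reproducing the numbers above with torch.mv):

```python
import torch

t = torch.tensor([1., 2., 3.])
mat = torch.tensor([[1.], [2.], [3.]])  # shape (3, 1)
vec = torch.tensor([3.])                # length 1
out = torch.addmv(t, mat, vec, beta=1, alpha=1)
# same result from the formula: beta*t + alpha*(mat @ vec)
manual = 1 * t + 1 * torch.mv(mat, vec)
assert torch.equal(out, manual)
print(out)  # tensor([ 4.,  8., 12.])
```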

8. Eigenvalues and eigenvectors

# eigenvectors (bool): if True, compute both eigenvalues and eigenvectors; otherwise only eigenvalues
x = torch.Tensor([[9,2,3],[4,5,8],[7,10,9]])
print(torch.eig(x))
print(torch.eig(x,eigenvectors=True))
-----------------------------------------------------------------------------
result:
torch.return_types.eig(
eigenvalues=tensor([[19.0221,  0.0000],
        [ 6.1871,  0.0000],
        [-2.2092,  0.0000]]),
eigenvectors=tensor([]))
torch.return_types.eig(
eigenvalues=tensor([[19.0221,  0.0000],
        [ 6.1871,  0.0000],
        [-2.2092,  0.0000]]),
eigenvectors=tensor([[ 0.3385,  0.7846, -0.0528],
        [ 0.5373, -0.4213, -0.7286],
        [ 0.7725, -0.4548,  0.6829]]))
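Note that torch.eig has since been deprecated and removed from recent PyTorch releases; its replacement is torch.linalg.eig, which returns complex eigenvalues and eigenvectors directly instead of the (real, imaginary) column pairs shown above. A sketch of the equivalent call on the same matrix:

```python
import torch

x = torch.tensor([[9., 2., 3.],
                  [4., 5., 8.],
                  [7., 10., 9.]])
# torch.linalg.eig returns complex tensors even for real input
eigenvalues, eigenvectors = torch.linalg.eig(x)
print(eigenvalues)
# verify the defining property A @ v = lambda * v for the first eigenpair
v0 = eigenvectors[:, 0]
assert torch.allclose(x.to(torch.complex64) @ v0, eigenvalues[0] * v0, atol=1e-4)
```

Since the eigenvalues of this matrix are real, their imaginary parts come back as (numerically) zero, and their sum equals the trace, 9 + 5 + 9 = 23.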
