python-torch Notes

I. torch

1. Creating tensors

(1) Create an all-ones tensor (torch.ones)

import torch as tc
tensor = tc.ones(size=(2, 3, 4, 5))

(2) Create an all-zeros tensor (torch.zeros)

import torch as tc
tensor = tc.zeros(size=(2, 3, 4, 5))

(3) Create an uninitialized tensor (torch.empty)

import torch as tc
tensor = tc.empty(size=(2, 3, 4, 5))

(4) Create a 1-D evenly spaced tensor (torch.linspace)

    Equivalent to numpy.linspace.

import torch as tc
import numpy as np
tensor = tc.linspace(start=0, end=20, steps=11)
array = np.linspace(start=0, stop=20, num=11)
print(tensor)
print(array)

(5) Create an integer tensor with values in a given range (torch.randint)

    low=1 is inclusive; high=20 is exclusive.

import torch as tc
tensor = tc.randint(low=1, high=20, size=(2, 3, 4))
print(tensor)

(6) Convert a list or array into a torch tensor (torch.Tensor)

    By default the conversion produces a float tensor; it can then be forcibly cast to an integer type.

import torch as tc
import numpy as np
array = np.random.randint(low=1, high=20, size=(2, 3, 4))
print(tc.Tensor(array))
print(tc.Tensor(array).type())
print(tc.Tensor(array).long())
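
Note that lowercase torch.tensor behaves differently from torch.Tensor: it infers the dtype from its input instead of defaulting to float. A minimal sketch of the contrast:

import torch as tc
import numpy as np

array = np.random.randint(low=1, high=20, size=(2, 3))
print(tc.Tensor(array).dtype)  # torch.float32: Tensor always yields floats
print(tc.tensor(array).dtype)  # torch.int64: tensor keeps the integer dtype
print(tc.tensor(array, dtype=tc.float32).dtype)  # or set the dtype explicitly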

(7) Return a tensor filled with random numbers drawn from the standard normal distribution, i.e. mean 0 and variance 1 (torch.randn)

import torch as tc
tensor = tc.randn(size=[4, 4])
print(tensor)

2. Common tensor operations

(1) Tensor transpose and dimension permutation (tensor.t(), tensor.permute)

    The former only supports 2-D tensors; the latter works on tensors of any shape.

import torch as tc
tensor = tc.randn(size=[4, 3])
print(tensor.t().shape)

tensor1 = tc.randn(size=[2, 3, 4])
print(tensor1.permute([0, 2, 1]).shape)

(2) Changing the shape (torch.reshape)

import torch as tc

tensor1 = tc.randn(size=[2, 3, 4])
print(tc.reshape(tensor1, shape=(6, 4)).shape)

(3) Tensor concatenation (torch.cat)

    Supports concatenating tensors of any shape; the only requirement is that every dimension other than the one being concatenated must match.

import torch as tc

tensor1 = tc.randn(size=[2, 3, 4])
tensor2 = tc.randn(size=[4, 3, 4])
print(tc.cat((tensor1, tensor2), dim=0).shape)

(4) Tensor copy (tensorA.copy_(tensorB))

import torch as tc

tensor1 = tc.randn(size=[2, 3])
tensor2 = tc.randn(size=[2, 3])
print(tensor1, tensor2)
tensor2.copy_(tensor1)
print(tensor1 == tensor2)

    One small caveat: you cannot call tensor2.copy_(tensor1) out of the blue; tensor2 must already be defined with a compatible shape. detach can be used to sidestep this (see the note after the next snippet).

import torch as tc

tensor1 = tc.randn(size=[2, 3])
tensor2 = tensor1.detach()  # shares storage with tensor1, detached from autograd
print(tensor2 == tensor1)
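
Keep in mind that detach returns a view sharing storage with the original, so in-place changes to one are visible in the other; it is not an independent copy. A minimal sketch of clone(), which does allocate its own storage:

import torch as tc

tensor1 = tc.randn(size=[2, 3])
tensor2 = tensor1.clone()  # independent copy with its own storage
tensor2[0, 0] = 0.0        # does not affect tensor1
print(tensor1[0, 0] == tensor2[0, 0])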

(5) Getting a tensor's shape (tensor.shape, tensor.size())

import torch as tc

tensor1 = tc.randn(size=[2, 3])
print(tensor1.shape, tensor1.size())

3. Tensor computation

(1) Contraction (torch.tensordot)

    Supports contraction over arbitrary dimensions, including several at once. The first list passed to dims contains the dimensions of tensor2 (the first argument) to contract; the second list contains the dimensions of tensor1 (the second argument).

import torch as tc

tensor1 = tc.randint(low=1, high=5, size=[2, 2, 4, 5])
tensor2 = tc.randint(low=1, high=5, size=[3, 2])

result = tc.tensordot(tensor2, tensor1, dims=([1], [0]))
print(result.shape)

    Contraction over multiple dimensions:

import torch as tc

tensor1 = tc.randint(low=1, high=5, size=[2, 2, 4, 5])
tensor2 = tc.randint(low=1, high=5, size=[3, 2, 2])

result = tc.tensordot(tensor2, tensor1, dims=([1, 2], [0, 1]))
print(result.shape)

(2) Hadamard product for element-wise multiplication (*)

import torch as tc

tensor1 = tc.randint(low=1, high=5, size=[2, 2])
tensor2 = tc.randint(low=1, high=5, size=[2, 2])

result = tensor2 * tensor1
print(tensor2, tensor1)
print(result)
print(result.shape)

(3) Matrix product (@)

    For 2-D operands, the last dimension of the tensor to the left of @ must match the first dimension of the tensor on the right. Try to use 2-D tensors (or at most one operand that is not 2-D); otherwise mistakes are easy to make, since for higher-dimensional inputs @ follows torch.matmul's batching rules rather than a plain matrix product. For higher-order tensors, tensordot is safer and more readable.

import torch as tc

tensor1 = tc.randint(low=1, high=5, size=[3, 3])
tensor2 = tc.randint(low=1, high=5, size=[2, 3])

result = tensor2 @ tensor1
print("tensor2:", tensor2, "\ntensor1:", tensor1)
print(result)
print(result.shape)
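
For reference, a minimal sketch of how @ behaves on 3-D inputs: the leading dimension is treated as a batch, and the trailing 2-D matrices are multiplied pairwise.

import torch as tc

batch1 = tc.randn(size=[5, 2, 3])
batch2 = tc.randn(size=[5, 3, 4])
print((batch1 @ batch2).shape)  # torch.Size([5, 2, 4])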

(4) Tensor division (torch.floor_divide, torch.true_divide)

import torch as tc

tensor1 = tc.randint(low=1, high=5, size=[3, 3])
result = tc.floor_divide(tensor1, 2)
r = tensor1 // 2
print(result == r)
result = tc.true_divide(tensor1, 2)
r = tensor1 / 2
print(result == r)

These two functions do not seem to differ from // and /; it has been a while and I no longer remember exactly why I used them, but if // and / ever misbehave, these two are the fallback.
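
In newer PyTorch releases the recommended spelling is torch.div with an explicit rounding_mode (torch.floor_divide went through a deprecation cycle); a minimal sketch:

import torch as tc

tensor1 = tc.randint(low=1, high=5, size=[3, 3])
print(tc.div(tensor1, 2, rounding_mode='floor'))  # floor division, like //
print(tc.div(tensor1, 2))                         # true division, like /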

(5) Einstein summation convention (tc.einsum)

    Much the same as numpy's version; I only had to tweak my original numpy code slightly to get it running.

import torch as tc

tensor1 = tc.randint(3, 6, size=(2, 3))
tensor2 = tc.randint(3, 6, size=(2, 3, 6))
tensor3 = tc.randint(3, 6, size=(4, 4))
tensor4 = tc.randint(3, 6, size=(4, 4, 5, 6, 7))

print("tensor1_shape:", tensor1.shape, "tensor2_shape:", tensor2.shape)
# contraction over arbitrary dimensions
print("\ncontraction over arbitrary dimensions:")
print(tc.einsum("ij, ijl->l", tensor1, tensor2).shape)

# outer product over arbitrary dimensions
print("\nouter product over arbitrary dimensions:")
print(tc.einsum("ij, xyz->ijxyz", tensor1, tensor2).shape)

# Hadamard product
print("\ntensor1:", tensor1)
print("Hadamard product of tensor1 with itself:")
print(tc.einsum('ij,ij->ij', tensor1, tensor1))
# trace of a matrix
print("\ntensor3:", tensor3)
print("trace of tensor3:")
print(tc.einsum("ii", tensor3))

# diagonal elements
print("\ndiagonal elements of tensor3:")
print(tc.einsum('ii->i', tensor3))

# dimension permutation
print("\npermuting the dimensions of tensor2:")
print(tc.einsum("ijk->kij", tensor2).shape)

# ellipsis
print("\ntensor4_shape:", tensor4.shape)
print("tensor4: ellipsis stands for the remaining unwritten indices:")
print(tc.einsum("i...", tensor4).shape)
print(tc.einsum("...i", tensor4).shape)
print(tc.einsum("i...->...", tensor4).shape)
print(tc.einsum("i...k->...", tensor4).shape)

# summation (contraction in disguise)
print("\nsummation (contraction in disguise):")
print(tc.einsum("ijk->i", tensor2))

(6) Some other computations

    matmul multiplies two matrices (it also handles batched higher-dimensional inputs, though only the 2-D case is shown here); multiply performs the Hadamard product (element-wise multiplication) of any two tensors with the same shape. Notice also that torch enforces stricter dtype requirements than numpy, which is why the .float() casts appear below (see the sketch after this snippet).

import torch as tc

tensor1 = tc.randint(low=-6, high=6, size=(3, 2))
print("abs,sqrt:")
print(tc.abs(tensor1))
print(tc.sqrt(abs(tensor1)))

# max, mean, std, and sum can all take a dim argument
# or operate on the whole tensor directly
print("max,mean,std,sum:")
print(tc.max(tensor1))
print(tc.mean(tensor1.float()))
print(tc.std(tensor1.float()))
print(tc.sum(tensor1))
print(tc.max(tensor1, dim=1))
print(tc.mean(tensor1.float(), dim=1))
print(tc.std(tensor1.float(), dim=1))
print(tc.sum(tensor1, dim=1))

# matrix product and Hadamard product
print("matrix product and Hadamard product:")
print("tensor1:", tensor1)
print(tc.matmul(tensor1, tensor1.T))
print(tc.multiply(tensor1, tensor1))

print("共轭:")
tensor1 = tc.randint(-6, 6, size=(3, 2)) + 2j
print(tensor1)
print(tc.conj(tensor1))
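
Here is a minimal sketch of that stricter typing (the exact error text may vary by PyTorch version): torch.mean rejects integer tensors outright, while numpy silently promotes them.

import torch as tc
import numpy as np

int_tensor = tc.randint(low=0, high=10, size=(3,))
print(np.mean(int_tensor.numpy()))   # numpy promotes to float automatically
# print(tc.mean(int_tensor))         # raises: mean expects a floating point or complex dtype
print(tc.mean(int_tensor.float()))   # cast first, then it works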

4. Forced dtype conversion

    Append a cast method to the tensor whose dtype you want to change: t.long() forcibly converts tensor t to long, and double, int, float, etc. work the same way, as does the generic .type(). Compared with passing a dtype argument when creating the tensor, this is more convenient: you can convert whenever you like.
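
A minimal sketch of these casts:

import torch as tc

t = tc.ones(size=(2, 2))
print(t.long().dtype)            # torch.int64
print(t.double().dtype)          # torch.float64
print(t.int().dtype)             # torch.int32
print(t.type(tc.float16).dtype)  # torch.float16, via the generic .type()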

II. Extensions

(1) Converting between tensor and numpy

Convert a numpy array a to a tensor:

b=torch.from_numpy(a)

Convert a tensor to a numpy array:

b = a.numpy()
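
One thing worth knowing: both conversions share memory with the original rather than copying data, so in-place changes propagate both ways. A minimal sketch:

import torch
import numpy as np

a = np.zeros(3)
b = torch.from_numpy(a)  # b shares a's memory
a[0] = 1.0
print(b[0])              # the change made through a is visible in b

c = b.numpy()            # c shares the same memory again
c[1] = 2.0
print(a[1])              # 2.0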

(2) ncon.ncon

    The main difference between this function and einsum is that index letters are replaced by numbers; otherwise the two are very similar.

import numpy as np
from numpy import linalg as LA
from ncon import ncon

# initialize the tensors
d = 2
chi = 10
# 10*2*10
A = np.random.rand(chi, d, chi)
B = np.random.rand(chi, d, chi)

# set up the auxiliary indices; there are two kinds, sAB and sBA
# 10
sAB = np.ones(chi) / np.sqrt(chi)
sBA = np.ones(chi) / np.sqrt(chi)

# 10*10
sigBA = np.random.rand(chi, chi)
tol = 1e-10

# assemble the tensor network
# 10*10 10*10 10*2*10 10*2*10 10*10 10*10 10*2*10 10*2*10
tensors = [np.diag(sBA), np.diag(sBA), A, A.conj(), np.diag(sAB),
           np.diag(sAB), B, B.conj()]

labels = [[1, 2], [1, 3], [2, 4], [3, 5, 6], [4, 5, 7], [6, 8], [7, 9], [8, 10, -1], [9, 10, -2]]
# 10*10 10*10 10*10 10*2*10 10*2*10 10*10 10*10 10*2*10 10*2*10
sigBA_new = ncon([sigBA, *tensors], labels)
print(sigBA_new.shape)
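
To see the correspondence with einsum on a tiny case: in ncon, positive labels are contracted and negative labels become the open indices of the result (ordered -1, -2, ...). A minimal sketch, assuming the same ncon package, computing a matrix product both ways:

import numpy as np
from ncon import ncon

A = np.random.rand(3, 4)
B = np.random.rand(4, 5)

# label 1 is shared, so it is contracted; -1 and -2 are the output indices
r1 = ncon([A, B], [[-1, 1], [1, -2]])
r2 = np.einsum('ij,jk->ik', A, B)
print(np.allclose(r1, r2))  # True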

(3) For deep learning (to be continued...)

III. Other functions will be added above as I encounter them
