- 📢Blog home: 盾山狂热粉's blog on CSDN, covering C/C++ and machine vision
- 📢Keep working hard! ✨
💡Outline
⭕An overview of PyTorch tensor concatenation, splitting, statistics, and arithmetic
import torch
一、Concatenation and Splitting
(一)Concatenation
1、cat
💡Concatenates tensors along the specified dimension
📑Statistics about scores
👉class1~3, students, scores (class × student × subject)
👉class4~9, students, scores
a = torch.rand(3, 32, 8)
b = torch.rand(6, 32, 8)
c = torch.cat([a, b], dim=0) # concatenate along dim 0
print(a.shape)
print(b.shape)
print(c.shape)
'''
torch.Size([3, 32, 8])
torch.Size([6, 32, 8])
torch.Size([9, 32, 8])
'''
a = torch.rand(3, 32, 8)
b = torch.rand(3, 32, 8)
c = torch.cat([a, b], dim=1) # concatenate along dim 1
print(a.shape)
print(b.shape)
print(c.shape)
'''
torch.Size([3, 32, 8])
torch.Size([3, 32, 8])
torch.Size([3, 64, 8])
'''
a = torch.rand(3, 32, 8)
b = torch.rand(2, 32, 8)
c = torch.cat([a, b], dim=1) # concatenate along dim 1, but dim 0 differs
⚠️ When concatenating along one dimension, all other dimensions must match; the call above therefore raises a RuntimeError and nothing is printed.
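A minimal sketch of that rule, assuming a recent PyTorch install: the mismatched cat raises a RuntimeError that can be caught, while concatenating along the differing dimension itself works fine.

```python
import torch

a = torch.rand(3, 32, 8)
b = torch.rand(2, 32, 8)

# cat along dim 1 fails: dim 0 differs (3 vs 2)
try:
    torch.cat([a, b], dim=1)
except RuntimeError as e:
    print("cat failed:", type(e).__name__)

# cat along dim 0 works, because only dim 0 is allowed to differ
c = torch.cat([a, b], dim=0)
print(c.shape)  # torch.Size([5, 32, 8])
```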
2、stack
💡stack creates a brand-new dimension with a new meaning; think of it as building a dataset that holds the two images as separate entries
a = torch.rand(3, 32, 8)
b = torch.rand(3, 32, 8)
c = torch.stack([a, b], dim=0) # stack along a new dim 0
print(a.shape)
print(b.shape)
print(c.shape)
'''
torch.Size([3, 32, 8])
torch.Size([3, 32, 8])
torch.Size([2, 3, 32, 8])
'''
a = torch.rand(3, 32, 8)
b = torch.rand(1, 32, 8)
c = torch.stack([a, b], dim=0) # stack along dim 0, but the shapes differ
⚠️ Because the two tensors have different shapes, no new dimension can be created for them; think of a three-channel image and a single-channel image that cannot be stored in the same dataset. This call raises a RuntimeError.
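The dim argument also controls where the new dimension is inserted. A small sketch, shapes only (the values are random):

```python
import torch

a = torch.rand(3, 32, 8)
b = torch.rand(3, 32, 8)

# stack inserts the new size-2 dimension at the requested position
c0 = torch.stack([a, b], dim=0)
c1 = torch.stack([a, b], dim=1)
c2 = torch.stack([a, b], dim=2)
print(c0.shape)  # torch.Size([2, 3, 32, 8])
print(c1.shape)  # torch.Size([3, 2, 32, 8])
print(c2.shape)  # torch.Size([3, 32, 2, 8])
```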
(二)Splitting
1、Splitting by length: split
💡Splits the chosen dimension into pieces of a given length
a = torch.rand(4, 32, 8)
b = torch.split(a, 2, 0) # pieces of length 2 along dim 0
print(a.shape)
print(b[0].shape)
print(b[1].shape)
'''
torch.Size([4, 32, 8])
torch.Size([2, 32, 8])
torch.Size([2, 32, 8])
'''
👉If the dimension is not evenly divisible by the given length, the full-length pieces are produced first and the remainder forms the last piece
a = torch.rand(5, 32, 8)
b = torch.split(a, 2, 0)
print(a.shape)
print(len(b)) # split returns a tuple of tensors
print(b[0].shape)
print(b[1].shape)
print(b[2].shape)
'''
torch.Size([5, 32, 8])
3
torch.Size([2, 32, 8])
torch.Size([2, 32, 8])
torch.Size([1, 32, 8])
'''
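split also accepts a list of explicit section lengths instead of a single length; the lengths must sum to the size of the chosen dimension. A short sketch:

```python
import torch

a = torch.rand(5, 32, 8)
# explicit section lengths: 3 + 2 must equal dim 0's size of 5
parts = torch.split(a, [3, 2], dim=0)
print(parts[0].shape)  # torch.Size([3, 32, 8])
print(parts[1].shape)  # torch.Size([2, 32, 8])
```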
2、Splitting by count: chunk
💡Splits a dimension into a given number of chunks
a = torch.rand(5, 32, 8)
b = torch.chunk(a, 2, 0) # split dim 0 into 2 chunks
print(a.shape)
print(len(b))
print(b[0].shape)
print(b[1].shape)
'''
torch.Size([5, 32, 8])
2
torch.Size([3, 32, 8])
torch.Size([2, 32, 8])
'''
a = torch.rand(5, 32, 8)
b = torch.chunk(a, 3, 0) # split dim 0 into 3 chunks
print(a.shape)
print(len(b))
print(b[0].shape)
print(b[1].shape)
print(b[2].shape)
'''
torch.Size([5, 32, 8])
3
torch.Size([2, 32, 8])
torch.Size([2, 32, 8])
torch.Size([1, 32, 8])
'''
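chunk may also return fewer chunks than requested: each chunk holds ceil(size / chunks) elements, so asking for more chunks than the dimension has elements simply yields one element per chunk. A quick sketch:

```python
import torch

a = torch.rand(5, 32, 8)
# 7 chunks requested, but dim 0 only has 5 elements:
# chunk size becomes ceil(5 / 7) = 1, so only 5 chunks come back
parts = torch.chunk(a, 7, dim=0)
print(len(parts))      # 5
print(parts[0].shape)  # torch.Size([1, 32, 8])
```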
二、Statistics and Arithmetic
(一)Common addition, subtraction, multiplication, and division
1、Addition, subtraction, division
💡Each operation has a dedicated function, and the operators + - / work just as well
👉add +
a = torch.rand(3, 4)
b = torch.rand(4)
c = a+b # broadcasting applies
d = torch.add(a, b) # identical result
print(a)
print(b)
print(c)
print(d)
print(a+1) # adds 1 to every element
'''
tensor([[0.6835, 0.1389, 0.5397, 0.8704],
[0.4811, 0.7664, 0.5389, 0.3035],
[0.1145, 0.8556, 0.8336, 0.3178]])
tensor([0.2216, 0.6220, 0.1831, 0.5293])
tensor([[0.9050, 0.7609, 0.7228, 1.3996],
[0.7027, 1.3883, 0.7221, 0.8328],
[0.3361, 1.4776, 1.0168, 0.8471]])
tensor([[0.9050, 0.7609, 0.7228, 1.3996],
[0.7027, 1.3883, 0.7221, 0.8328],
[0.3361, 1.4776, 1.0168, 0.8471]])
tensor([[1.6835, 1.1389, 1.5397, 1.8704],
[1.4811, 1.7664, 1.5389, 1.3035],
[1.1145, 1.8556, 1.8336, 1.3178]])
'''
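The broadcast above follows NumPy-style rules: b's shape (4,) is treated as (1, 4) and repeated along dim 0. A sketch that makes the expansion explicit:

```python
import torch

a = torch.rand(3, 4)
b = torch.rand(4)

c = a + b                            # implicit broadcast: (4,) -> (1, 4) -> (3, 4)
d = a + b.unsqueeze(0).expand(3, 4)  # the same expansion done by hand
print(c.shape)            # torch.Size([3, 4])
print(torch.equal(c, d))  # True
```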
👉sub -
👉div /
a = torch.full([2, 2], 2)
b = 2
c = torch.div(a, b)
d = a/b
print(a)
print(b)
print(c)
print(d)
'''
tensor([[2, 2],
[2, 2]])
2
tensor([[1., 1.],
[1., 1.]])
tensor([[1., 1.],
[1., 1.]])
'''
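Note that / always performs true division, even on integer tensors. For integer-style results, newer PyTorch versions (1.8+, an assumption about your install) let torch.div take a rounding_mode argument; a sketch:

```python
import torch

a = torch.tensor([7, -7])
b = torch.tensor([2, 2])

print(a / b)                                   # true division: tensor([ 3.5000, -3.5000])
print(torch.div(a, b, rounding_mode='trunc'))  # round toward zero: tensor([ 3, -3])
print(torch.div(a, b, rounding_mode='floor'))  # round toward -inf: tensor([ 3, -4])
```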
2、Multiplication
💡torch.mul() performs element-wise multiplication: corresponding positions are multiplied
a = torch.rand(3, 3)
b = torch.eye(3, 3) # identity matrix
c = torch.mul(a, b)
print(a)
print(b)
print(c)
'''
tensor([[0.2424, 0.5854, 0.1131],
[0.1886, 0.2874, 0.4422],
[0.4271, 0.9258, 0.2913]])
tensor([[1., 0., 0.],
[0., 1., 0.],
[0., 0., 1.]])
tensor([[0.2424, 0.0000, 0.0000],
[0.0000, 0.2874, 0.0000],
[0.0000, 0.0000, 0.2913]])
'''
💡torch.mm(mat1, mat2, out=None) -> Tensor performs matrix multiplication only; the two inputs must be 2-D tensors of shapes n×m and m×p, producing n×p
👉Both .mm() and the @ operator perform matrix multiplication
a = torch.rand(3, 3)
b = torch.eye(3, 3)
# 矩阵乘法
c = a@b # the @ operator is the more readable choice
d = torch.mm(a, b)
print(a)
print(b)
print(c)
print(d)
'''
tensor([[0.2678, 0.2916, 0.7045],
[0.6727, 0.3934, 0.4550],
[0.7078, 0.2472, 0.3816]])
tensor([[1., 0., 0.],
[0., 1., 0.],
[0., 0., 1.]])
tensor([[0.2678, 0.2916, 0.7045],
[0.6727, 0.3934, 0.4550],
[0.7078, 0.2472, 0.3816]])
tensor([[0.2678, 0.2916, 0.7045],
[0.6727, 0.3934, 0.4550],
[0.7078, 0.2472, 0.3816]])
'''
👉.t() transposes a matrix
a = torch.rand(7, 784)
b = torch.rand(7, 784)
c = a@b.t() # transpose of b
💡Tensor multiplication
👉torch.bmm(batch1, batch2, out=None) -> Tensor multiplies two 3-D tensors of shapes b×n×m and b×m×p, producing b×n×p
👉torch.matmul(tensor1, tensor2, out=None) -> Tensor is general tensor multiplication; inputs may be high-dimensional and the batch dimensions broadcast, e.g. j×1×n×m with k×m×p gives j×k×n×p
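A shape-level sketch of the two batched products described above (random values, only the shapes matter):

```python
import torch

# bmm: both inputs must be exactly 3-D with the same batch size
x = torch.rand(4, 3, 5)
y = torch.rand(4, 5, 2)
print(torch.bmm(x, y).shape)     # torch.Size([4, 3, 2])

# matmul: batch dimensions broadcast, here (2, 1) with (4,) -> (2, 4)
p = torch.rand(2, 1, 3, 5)
q = torch.rand(4, 5, 2)
print(torch.matmul(p, q).shape)  # torch.Size([2, 4, 3, 2])
```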
(二)Powers: pow
a = torch.full([2, 2], 2)
b = a.pow(2)
c = a**2
d = b**(0.5)
print(a)
print(b)
print(c)
print(d)
'''
tensor([[2, 2],
[2, 2]])
tensor([[4, 4],
[4, 4]])
tensor([[4, 4],
[4, 4]])
tensor([[2., 2.],
[2., 2.]])
'''
(三)Square roots: sqrt and rsqrt
1、sqrt
a = torch.full([2, 2], 2)
b = a.sqrt()
c = b**2
print(a)
print(b)
print(c)
'''
tensor([[2, 2],
[2, 2]])
tensor([[1.4142, 1.4142],
[1.4142, 1.4142]])
tensor([[2.0000, 2.0000],
[2.0000, 2.0000]])
'''
2、rsqrt
💡rsqrt is the reciprocal of the result of sqrt
a = torch.full([2, 2], 2)
b = a.rsqrt()
c = b**2
print(a)
print(b)
print(c)
'''
tensor([[2, 2],
[2, 2]])
tensor([[0.7071, 0.7071],
[0.7071, 0.7071]])
tensor([[0.5000, 0.5000],
[0.5000, 0.5000]])
'''
(四)Rounding
1、ceil() floor() round() trunc() frac()
a = torch.tensor(3.1415926)
b = a.ceil() # round up
c = a.floor() # round down
d = a.round() # round to the nearest integer
e = a.trunc() # keep only the integer part
f = a.frac() # keep only the fractional part
print(a)
print(b)
print(c)
print(d)
print(e)
print(f)
'''
tensor(3.1416)
tensor(4.)
tensor(3.)
tensor(3.)
tensor(3.)
tensor(0.1416)
'''
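One detail worth knowing: torch.round uses "round half to even" (banker's rounding), so exact .5 values do not always round up. A tiny sketch:

```python
import torch

# halves go to the nearest even integer, not always upward
x = torch.tensor([0.5, 1.5, 2.5, 3.5])
print(x.round())  # tensor([0., 2., 2., 4.])
```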
2、clamp
a = torch.rand(3, 3)*20 # rand fills [0, 1); *20 scales the range to [0, 20)
b = a.clamp(0, 10) # clip to a maximum of 10
c = a.clamp(7, 14) # clip to minimum 7 and maximum 14
print(a)
print(b)
print(c)
'''
tensor([[ 9.1304, 8.8227, 18.7496],
[ 8.4858, 8.0842, 16.7566],
[ 2.7134, 14.6634, 10.2852]])
tensor([[ 9.1304, 8.8227, 10.0000],
[ 8.4858, 8.0842, 10.0000],
[ 2.7134, 10.0000, 10.0000]])
tensor([[ 9.1304, 8.8227, 14.0000],
[ 8.4858, 8.0842, 14.0000],
[ 7.0000, 14.0000, 10.2852]])
'''
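clamp can also take just one bound via keyword arguments; clamp(min=0) is the common ReLU-style clip. A sketch:

```python
import torch

a = torch.tensor([-5.0, 0.0, 5.0, 15.0])
print(a.clamp(min=0))   # only a lower bound: tensor([ 0.,  0.,  5., 15.])
print(a.clamp(max=10))  # only an upper bound: tensor([-5.,  0.,  5., 10.])
```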
📢Likes 👍, favorites ⭐, and comments 📝 are welcome. Corrections are appreciated!