For more, see the notes in 深度学习.pytorch.
Tensor Operations
1. Concatenating and Splitting Tensors
1.1 torch.cat()
Concatenates tensors along an existing dimension dim: dim=0 concatenates along rows, dim=1 along columns.
# ======================================= example 1 =======================================
# torch.cat
import torch

flag = True
# flag = False
if flag:
    t = torch.ones((2, 3))
    print("t:", t)
    t_0 = torch.cat([t, t], dim=0)      # concatenate along rows -> shape (4, 3)
    t_1 = torch.cat([t, t, t], dim=1)   # concatenate along columns -> shape (2, 9)
    print("t_0:{} shape:{}\nt_1:{} shape:{}".format(t_0, t_0.shape, t_1, t_1.shape))
Output:
t: tensor([[1., 1., 1.],
        [1., 1., 1.]])
t_0:tensor([[1., 1., 1.],
        [1., 1., 1.],
        [1., 1., 1.],
        [1., 1., 1.]]) shape:torch.Size([4, 3])
t_1:tensor([[1., 1., 1., 1., 1., 1., 1., 1., 1.],
        [1., 1., 1., 1., 1., 1., 1., 1., 1.]]) shape:torch.Size([2, 9])
1.2 torch.stack()
Concatenates tensors along a newly created dimension:
# ======================================= example 2 =======================================
# torch.stack
flag = True
# flag = False
if flag:
    t = torch.ones((2, 3))
    t_stack = torch.stack([t, t, t], dim=0)  # three (2, 3) tensors stacked on a new dim 0 -> (3, 2, 3)
    print("\nt_stack:{} shape:{}".format(t_stack, t_stack.shape))
Output:
t_stack:tensor([[[1., 1., 1.],
         [1., 1., 1.]],
        [[1., 1., 1.],
         [1., 1., 1.]],
        [[1., 1., 1.],
         [1., 1., 1.]]]) shape:torch.Size([3, 2, 3])
The difference between cat and stack: cat does not add a new dimension to the input tensors, while stack does. In the example dim=0, so the original shape [2, 3] of t is shifted back and a new leading dimension of size 3 (the number of stacked tensors) is inserted, giving [3, 2, 3]. See the shape check below.
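A minimal sketch contrasting the two (shapes only; not from the original notes):

import torch

t = torch.ones((2, 3))
print(torch.cat([t, t, t], dim=0).shape)    # torch.Size([6, 3])   -- existing dim grows
print(torch.stack([t, t, t], dim=0).shape)  # torch.Size([3, 2, 3]) -- new dim inserted at 0
print(torch.stack([t, t, t], dim=2).shape)  # torch.Size([2, 3, 3]) -- new dim inserted at 2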
1.3 torch.chunk()
Splits a tensor into chunks pieces along dimension dim and returns a list of tensors: dim=0 splits by rows, dim=1 by columns. If the size along dim is not evenly divisible by chunks, the last chunk is smaller.
# ======================================= example 3 =======================================
# torch.chunk
flag = True
# flag = False
if flag:
    a = torch.ones((2, 7))  # 7 columns
    list_of_tensors = torch.chunk(a, dim=1, chunks=3)  # 3 chunks: widths 3, 3, 1
    for idx, t in enumerate(list_of_tensors):
        print("tensor {}: {}, shape is {}".format(idx + 1, t, t.shape))
Output:
tensor 1: tensor([[1., 1., 1.],
        [1., 1., 1.]]), shape is torch.Size([2, 3])
tensor 2: tensor([[1., 1., 1.],
        [1., 1., 1.]]), shape is torch.Size([2, 3])
tensor 3: tensor([[1.],
        [1.]]), shape is torch.Size([2, 1])
1.4 torch.split()
Splits a tensor along dimension dim. If split_size_or_sections is an int, it gives the length of each piece; if it is a list, the tensor is split according to the list elements.
# ======================================= example 4 =======================================
# torch.split
flag = True
# flag = False
if flag:
    a = torch.ones((2, 5))
    list_of_tensors = torch.split(a, [2, 1, 2], dim=1)  # pieces of width 2, 1, 2
    for idx, t in enumerate(list_of_tensors):
        print("tensor {}: {}, shape is {}".format(idx, t, t.shape))
Output:
tensor 0: tensor([[1., 1.],
        [1., 1.]]), shape is torch.Size([2, 2])
tensor 1: tensor([[1.],
        [1.]]), shape is torch.Size([2, 1])
tensor 2: tensor([[1., 1.],
        [1., 1.]]), shape is torch.Size([2, 2])
If split_size_or_sections is a list, its elements must sum to the size of the tensor along the given dim; otherwise PyTorch raises an error, as sketched below.
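A minimal sketch of the failure case (the exact error message may vary across PyTorch versions):

import torch

t = torch.ones((2, 5))
try:
    torch.split(t, [2, 1, 1], dim=1)  # 2 + 1 + 1 = 4 != 5 -> error
except RuntimeError as e:
    print("RuntimeError:", e)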
2. Tensor Indexing
2.1 torch.index_select()
Selects data along dimension dim using the indices in index; returns a tensor built by concatenating the indexed data.
# ======================================= example 5 =======================================
# torch.index_select
flag = True
# flag = False
if flag:
    t = torch.randint(0, 9, size=(3, 3))
    print("t:", t)
    idx = torch.tensor([0, 2], dtype=torch.long)  # must be torch.long, not float
    print("idx:", idx)
    t_select = torch.index_select(t, dim=0, index=idx)  # select rows 0 and 2
    print("t:\n{}\nt_select:\n{}".format(t, t_select))
Output:
t: tensor([[4, 5, 0],
        [5, 7, 1],
        [2, 5, 8]])
idx: tensor([0, 2])
t:
tensor([[4, 5, 0],
        [5, 7, 1],
        [2, 5, 8]])
t_select:
tensor([[4, 5, 0],
        [2, 5, 8]])
Note: index must have dtype torch.long; a float index tensor is rejected, as sketched below.
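A minimal sketch of the dtype constraint (the exact error text may differ across versions):

import torch

t = torch.randint(0, 9, size=(3, 3))
bad_idx = torch.tensor([0.0, 2.0])  # float dtype -> rejected by index_select
try:
    torch.index_select(t, dim=0, index=bad_idx)
except RuntimeError as e:
    print("RuntimeError:", e)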
2.2 torch.masked_select()
Selects the elements at the True positions of mask and returns them as a 1-D tensor.
# ======================================= example 6 =======================================
# torch.masked_select
flag = True
# flag = False
if flag:
    t = torch.randint(0, 9, size=(3, 3))
    mask = t.le(5)  # le: <=, lt: <, ge: >=, gt: >
    t_select = torch.masked_select(t, mask)  # 1-D tensor of the elements <= 5
    print("t:\n{}\nmask:\n{}\nt_select:\n{} ".format(t, mask, t_select))
Output:
t:
tensor([[4, 5, 0],
        [5, 7, 1],
        [2, 5, 8]])
mask:
tensor([[ True,  True,  True],
        [ True, False,  True],
        [ True,  True, False]])
t_select:
tensor([4, 5, 0, 5, 1, 2, 5])
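As a side note (standard PyTorch behavior, not from the original notes): plain boolean-mask indexing gives the same flattened result:

import torch

t = torch.randint(0, 9, size=(3, 3))
# t[mask] with a boolean mask is equivalent to torch.masked_select here
assert torch.equal(t[t.le(5)], torch.masked_select(t, t.le(5)))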
3. Tensor Reshaping
3.1 torch.reshape()
Changes the shape of a tensor.
When the input tensor is contiguous in memory, the new tensor shares the underlying data with input.
# ======================================= example 7 =======================================
# torch.reshape
flag = True
# flag = False
if flag:
    t = torch.randperm(8)
    t_reshape = torch.reshape(t, (-1, 2, 2))  # -1: this dim is inferred (8 / (2*2) = 2)
    print("t:{}\nt_reshape:\n{}".format(t, t_reshape))
    t[0] = 1024  # modifying t also changes t_reshape, since the data is shared
    print("t:{}\nt_reshape:\n{}".format(t, t_reshape))
    print("t.data memory address: {}".format(id(t.data)))
    print("t_reshape.data memory address: {}".format(id(t_reshape.data)))
Output:
t:tensor([5, 4, 2, 6, 7, 3, 1, 0])
t_reshape:
tensor([[[5, 4],
         [2, 6]],
        [[7, 3],
         [1, 0]]])
t:tensor([1024, 4, 2, 6, 7, 3, 1, 0])
t_reshape:
tensor([[[1024, 4],
         [2, 6]],
        [[7, 3],
         [1, 0]]])
t.data memory address: 2394886869144
t_reshape.data memory address: 2394886869144
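Note that id(t.data) is a fragile way to verify sharing, since it compares the identity of temporary Python wrapper objects. A more direct check (a small sketch, assuming a reasonably recent PyTorch) uses Tensor.data_ptr(), which returns the address of the first element of the underlying storage:

import torch

t = torch.randperm(8)
t_reshape = torch.reshape(t, (-1, 2, 2))
# Same storage address -> the two tensors share memory
print(t.data_ptr() == t_reshape.data_ptr())  # True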
3.2 torch.transpose()
Swaps two dimensions of a tensor.
# ======================================= example 8 =======================================
# torch.transpose
flag = True
# flag = False
if flag:
    t = torch.rand((2, 3, 4))
    t_transpose = torch.transpose(t, dim0=1, dim1=2)  # e.g. C*H*W -> C*W*H
    print("t shape:{}\nt_transpose shape: {}".format(t.shape, t_transpose.shape))
Output:
t shape:torch.Size([2, 3, 4])
t_transpose shape: torch.Size([2, 4, 3])
3.3 torch.t()
Transposes a 2-D matrix; for 2-D tensors it is equivalent to torch.transpose(input, 0, 1). A minimal sketch follows.
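A small sketch (not from the original notes):

import torch

m = torch.rand((2, 3))
print(m.shape)           # torch.Size([2, 3])
print(torch.t(m).shape)  # torch.Size([3, 2])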
3.4 torch.squeeze()
Removes dimensions (axes) of length 1.
dim: if None, all length-1 axes are removed; if dim is given, that axis is removed only when its length is 1.
# ======================================= example 9 =======================================
# torch.squeeze
flag = True
# flag = False
if flag:
    t = torch.rand((1, 2, 3, 1))
    t_sq = torch.squeeze(t)        # all length-1 axes removed
    t_0 = torch.squeeze(t, dim=0)  # axis 0 has length 1 -> removed
    t_1 = torch.squeeze(t, dim=1)  # axis 1 has length 2 -> unchanged
    print(t.shape)
    print(t_sq.shape)
    print(t_0.shape)
    print(t_1.shape)
Output:
torch.Size([1, 2, 3, 1])
torch.Size([2, 3])
torch.Size([2, 3, 1])    # dim=0 has length 1, so it is removed
torch.Size([1, 2, 3, 1]) # dim=1 has length 2, so nothing is removed
3.5 torch.unsqueeze()
Inserts a new dimension of length 1 at position dim; a minimal sketch follows.
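A small sketch (not from the original notes):

import torch

t = torch.rand((2, 3))
print(torch.unsqueeze(t, dim=0).shape)  # torch.Size([1, 2, 3])
print(torch.unsqueeze(t, dim=2).shape)  # torch.Size([2, 3, 1])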
Tensor Math Operations
These fall into three main groups:
1. addition, subtraction, multiplication, and division; 2. logarithm, exponential, and power functions; 3. trigonometric functions. A brief sketch touching each group follows.
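A minimal sketch of each group (all standard torch elementwise ops; not from the original notes):

import torch

a = torch.tensor([1.0, 2.0, 3.0])
b = torch.tensor([4.0, 5.0, 6.0])
# 1. elementwise arithmetic
print(torch.add(a, b), torch.sub(a, b), torch.mul(a, b), torch.div(a, b))
# 2. log / exp / pow
print(torch.log(a), torch.exp(a), torch.pow(a, 2))
# 3. trigonometric
print(torch.sin(a), torch.cos(a))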
torch.add()
Multiply first, then add: torch.add(input, other, alpha=1) computes input + alpha × other. (Older PyTorch accepted the positional form torch.add(input, value, other); recent versions require the alpha keyword.)
# ======================================= example 10 =======================================
# torch.add
flag = True
# flag = False
if flag:
    t_0 = torch.randn((3, 3))
    t_1 = torch.ones_like(t_0)
    t_add = torch.add(t_0, t_1, alpha=10)  # t_0 + 10 * t_1
    print("t_0:\n{}\nt_1:\n{}\nt_add_10:\n{}".format(t_0, t_1, t_add))
Output:
t_0:
tensor([[ 0.5636,  1.1431,  0.8590],
        [ 0.7056, -0.3406, -1.2720],
        [-1.1948,  0.0250, -0.7627]])
t_1:
tensor([[1., 1., 1.],
        [1., 1., 1.],
        [1., 1., 1.]])
t_add_10:
tensor([[10.5636, 11.1431, 10.8590],
        [10.7056,  9.6594,  8.7280],
        [ 8.8052, 10.0250,  9.2373]])
Linear Regression
# -*- coding:utf-8 -*-
import torch
import matplotlib.pyplot as plt

torch.manual_seed(10)

lr = 0.05  # learning rate (revised 2019-10-15)

# Create training data
x = torch.rand(20, 1) * 10            # x data (tensor), shape=(20, 1)
y = 2 * x + (5 + torch.randn(20, 1))  # y data (tensor), shape=(20, 1)

# Build linear regression parameters
w = torch.randn((1), requires_grad=True)
b = torch.zeros((1), requires_grad=True)

for iteration in range(1000):
    # Forward pass
    wx = torch.mul(w, x)
    y_pred = torch.add(wx, b)

    # Compute MSE loss
    loss = (0.5 * (y - y_pred) ** 2).mean()

    # Backward pass
    loss.backward()

    # Update parameters
    b.data.sub_(lr * b.grad)
    w.data.sub_(lr * w.grad)

    # Zero the gradients (added 2019-10-15)
    w.grad.zero_()
    b.grad.zero_()

    # Plot
    if iteration % 20 == 0:
        plt.scatter(x.data.numpy(), y.data.numpy())
        plt.plot(x.data.numpy(), y_pred.data.numpy(), 'r-', lw=5)
        plt.text(2, 20, 'Loss=%.4f' % loss.data.numpy(), fontdict={'size': 20, 'color': 'red'})
        plt.xlim(1.5, 10)
        plt.ylim(8, 28)
        plt.title("Iteration: {}\nw: {} b: {}".format(iteration, w.data.numpy(), b.data.numpy()))
        plt.pause(0.5)

        if loss.data.numpy() < 1:
            break