What is PyTorch?
PyTorch is a Python-based scientific computing library with the following features:
- It is similar to NumPy, but it can use GPUs
- It lets you define deep learning models and train and use them flexibly
Tensors
A Tensor is similar to a NumPy ndarray; the only difference is that a Tensor can run on a GPU to accelerate computation.
import torch
Construct an uninitialized 5x3 matrix:
x = torch.empty(5,3)
x
tensor([[ 0.0000e+00, -8.5899e+09, 0.0000e+00],
        [-8.5899e+09, nan, 0.0000e+00],
        [ 2.7002e-06, 1.8119e+02, 1.2141e+01],
        [ 7.8503e+02, 6.7504e-07, 6.5200e-10],
        [ 2.9537e-06, 1.7186e-04, nan]])
Construct a randomly initialized matrix:
x = torch.rand(5,3)
x
tensor([[0.4628, 0.7432, 0.9785],
        [0.2068, 0.4441, 0.9176],
        [0.1027, 0.5275, 0.3884],
        [0.9380, 0.2113, 0.2839],
        [0.0094, 0.4001, 0.6483]])
Construct a matrix filled with zeros, of dtype long:
x = torch.zeros(5,3,dtype=torch.long)
x
tensor([[0, 0, 0],
        [0, 0, 0],
        [0, 0, 0],
        [0, 0, 0],
        [0, 0, 0]])
x = torch.zeros(5,3).long()
x.dtype
torch.int64
Construct a tensor directly from data:
x = torch.tensor([5.5,3])
print(x)
print(x.dtype)
tensor([5.5000, 3.0000])
torch.float32
b = torch.ones(5,3)
b
tensor([[1., 1., 1.],
        [1., 1., 1.],
        [1., 1., 1.],
        [1., 1., 1.],
        [1., 1., 1.]])
You can also create a tensor from an existing tensor. These methods reuse properties of the input tensor, e.g. its dtype, unless new values are provided.
x = x.new_ones(5,3)  # same dtype as the x created by torch.tensor([5.5,3]) above
print(x)
print(x.dtype)
tensor([[1., 1., 1.],
        [1., 1., 1.],
        [1., 1., 1.],
        [1., 1., 1.],
        [1., 1., 1.]])
torch.float32
If we don't want to keep the original tensor's dtype, we can specify a new one:
x = x.new_ones(5,3, dtype=torch.double)
print(x)
print(x.dtype)
tensor([[1., 1., 1.],
        [1., 1., 1.],
        [1., 1., 1.],
        [1., 1., 1.],
        [1., 1., 1.]], dtype=torch.float64)
torch.float64
Create a random tensor with the same shape as the previous one; the _like suffix means the result has the same format (shape) as the input.
x = torch.randn_like(x, dtype=torch.float)
x
tensor([[-0.2134, 0.7527, -0.2120],
        [ 0.0415, -0.1111, 0.4014],
        [ 0.7867, 0.3328, -0.6178],
        [-0.4304, -0.0557, -2.8077],
        [ 0.1898, -0.3103, 0.0750]])
Get a tensor's shape:
x.shape  # equivalent to x.size()
torch.Size([5, 3])
Note
``torch.Size`` is a tuple, so it supports all tuple operations.
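For example (a quick sketch), since the size behaves like a tuple it can be unpacked and indexed:
rows, cols = x.shape   # tuple unpacking: rows=5, cols=3
print(x.shape[0])      # indexing: 5
print(len(x.shape))    # number of dimensions: 2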
Operations
There are many kinds of tensor operations. Let's start with addition.
y = torch.rand(5,3)
y
tensor([[0.0673, 0.6387, 0.6082],
        [0.1486, 0.1579, 0.7911],
        [0.3715, 0.1544, 0.3721],
        [0.6817, 0.1626, 0.7495],
        [0.9223, 0.5138, 0.2845]])
x + y
tensor([[-0.1460, 1.3914, 0.3962],
        [ 0.1901, 0.0468, 1.1925],
        [ 1.1582, 0.4873, -0.2457],
        [ 0.2513, 0.1069, -2.0582],
        [ 1.1121, 0.2036, 0.3596]])
Another way to write addition:
torch.add(x, y)
tensor([[-0.1460, 1.3914, 0.3962],
        [ 0.1901, 0.0468, 1.1925],
        [ 1.1582, 0.4873, -0.2457],
        [ 0.2513, 0.1069, -2.0582],
        [ 1.1121, 0.2036, 0.3596]])
Addition: writing the result into a given output tensor:
result = torch.empty(5,3)
torch.add(x, y, out=result)
# result = x + y
result
tensor([[ -0.7862, 3.6496, -0.2399],
        [ 0.3146, -0.2865, 2.3966],
        [ 3.5181, 1.4858, -2.0991],
        [ -1.0398, -0.0601, -10.4813],
        [ 1.6814, -0.7273, 0.5846]])
y
tensor([[-5.7278e-01, 2.8969e+00, -2.7890e-02],
        [ 2.7310e-01, -1.7537e-01, 1.9952e+00],
        [ 2.7315e+00, 1.1530e+00, -1.4813e+00],
        [-6.0942e-01, -4.4465e-03, -7.6736e+00],
        [ 1.4917e+00, -4.1704e-01, 5.0962e-01]])
In-place addition: y itself is modified:
y.add_(x)
y
tensor([[ -0.7862, 3.6496, -0.2399],
        [ 0.3146, -0.2865, 2.3966],
        [ 3.5181, 1.4858, -2.0991],
        [ -1.0398, -0.0601, -10.4813],
        [ 1.6814, -0.7273, 0.5846]])
Note
Any operation that mutates a tensor in-place is post-fixed with ``_``. For example: ``x.copy_(y)`` and ``x.t_()`` will change ``x``.
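A minimal sketch of the difference between the out-of-place and in-place forms:
a = torch.zeros(2, 3)
b = a.t()  # out-of-place: returns a transposed view, a.shape stays (2, 3)
a.t_()     # in-place: a itself is transposed, a.shape is now (3, 2)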
All kinds of NumPy-style indexing work on PyTorch tensors.
x
tensor([[-0.2134, 0.7527, -0.2120],
        [ 0.0415, -0.1111, 0.4014],
        [ 0.7867, 0.3328, -0.6178],
        [-0.4304, -0.0557, -2.8077],
        [ 0.1898, -0.3103, 0.0750]])
x[1:, 1:]  # everything from the second row and second column onward
tensor([[-0.1111, 0.4014],
        [ 0.3328, -0.6178],
        [-0.0557, -2.8077],
        [-0.3103, 0.0750]])
Resizing: if you want to resize/reshape a tensor, you can use torch.view:
x = torch.randn(4,4)
y = x.view(16)
z = x.view(-1,8)
z
tensor([[ 0.4438, 0.5009, -0.3959, -1.0779, -0.6019, -2.1380, 0.1840, 1.2618],
        [ 0.3149, -0.0254, -0.5463, -1.3400, 0.5634, -0.4591, -0.1635, -1.6674]])
If you have a one-element tensor, use .item() to get its value as a Python number.
x = torch.randn(1)
x
tensor([0.3443])
#dir(x)
x.data
tensor([0.3443])
x.item()
0.34428679943084717
z.transpose(1,0)  # z was 2x8; the transpose is 8x2
tensor([[ 0.4438, 0.3149],
        [ 0.5009, -0.0254],
        [-0.3959, -0.5463],
        [-1.0779, -1.3400],
        [-0.6019, 0.5634],
        [-2.1380, -0.4591],
        [ 0.1840, -0.1635],
        [ 1.2618, -1.6674]])
Read more
All kinds of tensor operations, including transposing, indexing, slicing, mathematical operations, linear algebra, and random numbers, are documented at <https://pytorch.org/docs/torch>.
Converting between NumPy and Tensors
Converting a Torch Tensor to a NumPy array and vice versa is easy.
The Torch Tensor and NumPy array share the same underlying memory, so changing one also changes the other.
Converting a Torch Tensor to a NumPy array
a = torch.ones(5)
a
tensor([1., 1., 1., 1., 1.])
b = a.numpy()
b
array([1., 1., 1., 1., 1.], dtype=float32)
Change a value in the NumPy array; the tensor changes as well:
b[1] = 2
b
array([1., 2., 1., 1., 1.], dtype=float32)
a
tensor([1., 2., 1., 1., 1.])
Converting a NumPy ndarray to a Torch Tensor
import numpy as np
a = np.ones(5)
b = torch.from_numpy(a)
np.add(a, 1, out=a)
print(a)
[2. 2. 2. 2. 2.]
b
tensor([2., 2., 2., 2., 2.], dtype=torch.float64)
All tensors on the CPU support converting to NumPy and back.
CUDA Tensors
Tensors can be moved to other devices using the .to method.
torch.cuda.is_available()
False
if torch.cuda.is_available():
    device = torch.device("cuda")
    y = torch.ones_like(x, device=device)  # create a tensor directly on the GPU
    x = x.to(device)                       # or move an existing tensor over
    z = x + y
    print(z)
    print(z.to("cpu", torch.double))       # .to can change device and dtype at once
    y.to("cpu").data.numpy()               # move back to CPU before converting to numpy
    y.cpu().data.numpy()                   # .cpu() is shorthand for .to("cpu")
    # model = model.cuda()                 # moving a whole model works the same way (no model defined here)
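A common device-agnostic pattern (a sketch, not part of the original notebook) is to pick the device once and reuse it everywhere:
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
x = torch.randn(5, 3).to(device)  # runs on the GPU if available, otherwise on the CPU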
Warm-up: a two-layer network in numpy
A fully-connected ReLU network with one hidden layer and no biases, trained to predict y from x with an L2 loss.
- $h = W_1 X$
- $a = \max(0, h)$
- $\hat{y} = W_2 a$
This implementation uses numpy alone to compute the forward pass, the loss, and the backward pass:
- forward pass
- loss
- backward pass
A numpy ndarray is just a generic n-dimensional array. It knows nothing about deep learning, gradients, or computation graphs; it is simply a data structure for numerical computation.
N, D_in, H, D_out = 64, 1000, 100, 10  # N is the number of training examples
# randomly create some training data
x = np.random.randn(N, D_in)   # 64*1000
y = np.random.randn(N, D_out)  # 64*10
w1 = np.random.randn(D_in, H)  # 1000*100
w2 = np.random.randn(H, D_out) # 100*10
learning_rate = 1e-6
for it in range(500):
    # Forward pass
    h = x.dot(w1)              # N * H
    h_relu = np.maximum(h, 0)  # N * H
    y_pred = h_relu.dot(w2)    # N * D_out
    # compute loss
    loss = np.square(y_pred - y).sum()
    print(it, loss)
    # Backward pass: compute the gradients
    grad_y_pred = 2.0 * (y_pred - y)
    grad_w2 = h_relu.T.dot(grad_y_pred)
    grad_h_relu = grad_y_pred.dot(w2.T)
    grad_h = grad_h_relu.copy()
    grad_h[h < 0] = 0
    grad_w1 = x.T.dot(grad_h)
    # update weights of w1 and w2
    w1 -= learning_rate * grad_w1
    w2 -= learning_rate * grad_w2
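For reference, the backward pass above implements the chain rule for $L = \sum (\hat{y} - y)^2$ with $\hat{y} = \max(0, x W_1)\, W_2$ (note the code stores examples in rows, so it computes $x W_1$):
\begin{aligned}
\frac{\partial L}{\partial \hat{y}} &= 2(\hat{y} - y) \\
\frac{\partial L}{\partial W_2} &= a^{\top}\,\frac{\partial L}{\partial \hat{y}} \\
\frac{\partial L}{\partial a} &= \frac{\partial L}{\partial \hat{y}}\,W_2^{\top} \\
\frac{\partial L}{\partial h} &= \frac{\partial L}{\partial a} \odot \mathbf{1}[h > 0] \\
\frac{\partial L}{\partial W_1} &= x^{\top}\,\frac{\partial L}{\partial h}
\end{aligned}
Each line corresponds to grad_y_pred, grad_w2, grad_h_relu, grad_h, and grad_w1 in the code.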
PyTorch: Tensors
This time we use PyTorch tensors to build the forward pass, compute the loss, and run the backward pass.
A PyTorch Tensor is very much like a numpy ndarray. The biggest difference is that a PyTorch Tensor can run on either CPU or GPU; to run on the GPU, convert the tensor to a cuda type.
N, D_in, H, D_out = 64, 1000, 100, 10
# randomly create some training data
x = torch.randn(N, D_in)
y = torch.randn(N, D_out)
w1 = torch.randn(D_in, H)
w2 = torch.randn(H, D_out)
learning_rate = 1e-6
for it in range(500):
    # Forward pass
    h = x.mm(w1)             # N * H
    h_relu = h.clamp(min=0)  # N * H
    y_pred = h_relu.mm(w2)   # N * D_out
    # compute loss
    loss = (y_pred - y).pow(2).sum().item()
    print(it, loss)
    # Backward pass: compute the gradients
    grad_y_pred = 2.0 * (y_pred - y)
    grad_w2 = h_relu.t().mm(grad_y_pred)
    grad_h_relu = grad_y_pred.mm(w2.t())
    grad_h = grad_h_relu.clone()
    grad_h[h < 0] = 0
    grad_w1 = x.t().mm(grad_h)
    # update weights of w1 and w2
    w1 -= learning_rate * grad_w1
    w2 -= learning_rate * grad_w2
With requires_grad=True on the weights, PyTorch can compute these gradients for us:
x = torch.randn(N, D_in)
y = torch.randn(N, D_out)
learning_rate = 1e-6
w1 = torch.randn(D_in, H, requires_grad=True)
w2 = torch.randn(H, D_out, requires_grad=True)
y_pred = x.mm(w1).clamp(min=0).mm(w2)
loss = (y_pred - y).pow(2).sum()
loss.backward()
# w1.grad and w2.grad now hold the gradient of loss with respect to w1 and w2
A simple autograd example
x = torch.tensor(1., requires_grad=True)
w = torch.tensor(2., requires_grad=True)
b = torch.tensor(3., requires_grad=True)
y = w*x + b # y = 2*1+3
y.backward()  # automatic differentiation
# dy/dw = x, dy/dx = w, dy/db = 1
print(w.grad)
print(x.grad)
print(b.grad)
tensor(1.)
tensor(2.)
tensor(1.)
PyTorch: Tensors and autograd
An important PyTorch feature is autograd: once you define the forward pass and compute the loss, PyTorch can automatically compute the gradients of all model parameters.
A PyTorch Tensor represents a node in a computation graph. If x is a Tensor with x.requires_grad=True, then x.grad is another Tensor holding the gradient of x with respect to some scalar value (often the loss).
N, D_in, H, D_out = 64, 1000, 100, 10
# randomly create some training data
x = torch.randn(N, D_in)
y = torch.randn(N, D_out)
w1 = torch.randn(D_in, H, requires_grad=True)
w2 = torch.randn(H, D_out, requires_grad=True)
learning_rate = 1e-6
for it in range(500):
    # Forward pass
    y_pred = x.mm(w1).clamp(min=0).mm(w2)
    # compute loss
    loss = (y_pred - y).pow(2).sum()  # builds the computation graph
    print(it, loss.item())
    # Backward pass
    loss.backward()
    # update weights of w1 and w2
    with torch.no_grad():  # don't track the update step itself
        w1 -= learning_rate * w1.grad
        w2 -= learning_rate * w2.grad
        w1.grad.zero_()
        w2.grad.zero_()
PyTorch: nn
This time we use PyTorch's nn package to build the network.
We still use autograd to build the computation graph, and PyTorch computes the gradients for us automatically.
import torch.nn as nn
N, D_in, H, D_out = 64, 1000, 100, 10
# randomly create some training data
x = torch.randn(N, D_in)
y = torch.randn(N, D_out)
model = torch.nn.Sequential(
    torch.nn.Linear(D_in, H, bias=False),  # computes x @ w_1; bias=False means no bias term
    torch.nn.ReLU(),
    torch.nn.Linear(H, D_out, bias=False),
)
# the default initialization doesn't train well with this learning rate, so re-initialize from a normal distribution
torch.nn.init.normal_(model[0].weight)
torch.nn.init.normal_(model[2].weight)
# model = model.cuda()
loss_fn = nn.MSELoss(reduction='sum')
learning_rate = 1e-6
for it in range(500):
    # Forward pass
    y_pred = model(x)  # calls model.forward()
    # compute loss
    loss = loss_fn(y_pred, y)  # computation graph; same as (y_pred - y).pow(2).sum()
    print(it, loss.item())
    # Backward pass
    loss.backward()
    # update the weights
    with torch.no_grad():
        for param in model.parameters():  # each param is a tensor carrying its .grad
            param -= learning_rate * param.grad
    model.zero_grad()
model
Sequential(
  (0): Linear(in_features=1000, out_features=100, bias=False)
  (1): ReLU()
  (2): Linear(in_features=100, out_features=10, bias=False)
)
model[0].weight
Parameter containing:
tensor([[-0.3164, 1.4052, -0.4559, ..., -0.4327, 0.4503, 1.3552],
        [ 2.9527, 0.8730, -0.8433, ..., -1.2369, 0.2527, -0.9086],
        [ 1.1706, -0.7496, -0.5939, ..., -0.4606, 0.2525, -1.8640],
        ...,
        [-0.8998, -0.2335, 0.4697, ..., 0.3832, -1.6095, -0.3557],
        [-0.3498, 0.0871, -1.9896, ..., 0.0476, 0.1617, -0.3342],
        [ 1.0591, 0.6349, -0.2374, ..., -0.4415, -1.1165, -0.0973]],
       requires_grad=True)
w = torch.empty(3, 5)
print(w)
tensor([[-1.7689e-28, 4.5695e-41, 1.0203e-27, 3.0770e-41, 0.0000e+00],
        [ 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00],
        [ 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00]])
nn.init.normal_(w)
tensor([[ 0.1995, 0.2254, 1.1564, -0.9050, 1.1599],
        [ 0.9236, 0.5222, 0.3482, 1.2817, -2.9001],
        [ 0.9082, -0.3100, 0.8182, -1.2605, 0.4819]])
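torch.nn.init provides other in-place initializers as well; a quick sketch (both functions exist in torch.nn.init, the constant value is illustrative):
nn.init.xavier_uniform_(w)  # Glorot/Xavier uniform initialization
nn.init.constant_(w, 0.0)   # fill every entry with a constant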
PyTorch: optim
This time, instead of updating the model's weights by hand, we use the optim package to update the parameters for us.
The optim package provides many different optimization algorithms, including SGD+momentum, RMSProp, Adam, and more.
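For example (a sketch; the learning rates are illustrative), switching optimizers only changes one line, given a model like the one defined below:
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2, momentum=0.9)
optimizer = torch.optim.RMSprop(model.parameters(), lr=1e-3)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)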
import torch.nn as nn
N, D_in, H, D_out = 64, 1000, 100, 10
# randomly create some training data
x = torch.randn(N, D_in)
y = torch.randn(N, D_out)
model = torch.nn.Sequential(
    torch.nn.Linear(D_in, H, bias=False),  # w_1 * x (bias=False: no bias term)
    torch.nn.ReLU(),
    torch.nn.Linear(H, D_out, bias=False),
)
torch.nn.init.normal_(model[0].weight)
torch.nn.init.normal_(model[2].weight)
# model = model.cuda()
loss_fn = nn.MSELoss(reduction='sum')
# learning_rate = 1e-4
# optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
learning_rate = 1e-6
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)  # the optimizer updates the model's parameters
for it in range(500):
    # Forward pass
    y_pred = model(x)  # model.forward()
    # compute loss
    loss = loss_fn(y_pred, y)  # computation graph
    print(it, loss.item())
    optimizer.zero_grad()
    # Backward pass
    loss.backward()
    # update model parameters
    optimizer.step()
PyTorch: custom nn Modules
We can define a model as a class that inherits from nn.Module. When you need something more complex than a Sequential model, define it as an nn.Module subclass.
import torch.nn as nn
N, D_in, H, D_out = 64, 1000, 100, 10
# randomly create some training data
x = torch.randn(N, D_in)
y = torch.randn(N, D_out)
# subclass nn.Module to define more complex models
class TwoLayerNet(torch.nn.Module):
    def __init__(self, D_in, H, D_out):
        super(TwoLayerNet, self).__init__()
        # define the model architecture
        self.linear1 = torch.nn.Linear(D_in, H, bias=False)
        self.linear2 = torch.nn.Linear(H, D_out, bias=False)

    def forward(self, x):
        y_pred = self.linear2(self.linear1(x).clamp(min=0))
        return y_pred
model = TwoLayerNet(D_in, H, D_out)
loss_fn = nn.MSELoss(reduction='sum')
learning_rate = 1e-4
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
for it in range(500):
    # Forward pass
    y_pred = model(x)  # model.forward()
    # compute loss
    loss = loss_fn(y_pred, y)  # computation graph
    print(it, loss.item())
    optimizer.zero_grad()
    # Backward pass
    loss.backward()
    # update model parameters
    optimizer.step()
Summary
1. Define the inputs and outputs
2. Define the model
3. Define the loss
4. Define the optimizer
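Putting the four steps together, a minimal training-loop skeleton (a sketch reusing the shapes from above):
import torch
import torch.nn as nn

x = torch.randn(64, 1000)  # 1. define the inputs and outputs
y = torch.randn(64, 10)
model = nn.Sequential(     # 2. define the model
    nn.Linear(1000, 100),
    nn.ReLU(),
    nn.Linear(100, 10),
)
loss_fn = nn.MSELoss(reduction='sum')  # 3. define the loss
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # 4. define the optimizer

for it in range(500):
    loss = loss_fn(model(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()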