Processing the input data:
1. For images, Pillow and OpenCV are useful.
2. For audio, scipy and librosa.
3. For text, either raw Python or Cython based loading, or NLTK and SpaCy.
* Prefer existing packages over hand-written boilerplate wherever possible (a minimal loading sketch follows this list).
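For instance, a minimal sketch of the image case, assuming Pillow and torchvision are installed (the file name example.png is a hypothetical placeholder):

from PIL import Image
import torchvision.transforms as transforms

# Hypothetical example file; substitute any image path you have.
img = Image.open("example.png").convert("L")  # "L" = single-channel grayscale

# Resize to the 32x32 input the network below expects, then convert to a tensor.
transform = transforms.Compose([
    transforms.Resize((32, 32)),
    transforms.ToTensor(),          # HWC uint8 [0, 255] -> CHW float in [0, 1]
])
x = transform(img).unsqueeze(0)     # add a batch dimension: (1, 1, 32, 32)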
Define a neural network, which includes:
1. The definition of each layer, in __init__()
2. The implementation of the forward pass, in forward()
3. Any other helper functions used by the forward pass
import torch
import torch.nn as nn
import torch.nn.functional as F
class Net(nn.Module):

    def __init__(self):
        super(Net, self).__init__()
        # 1 input image channel, 6 output channels, 5x5 square convolution kernel
        self.conv1 = nn.Conv2d(1, 6, 5)
        self.conv2 = nn.Conv2d(6, 16, 5)
        # an affine operation: y = Wx + b
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        # Max pooling over a (2, 2) window
        x = F.max_pool2d(F.relu(self.conv1(x)), (2, 2))
        # If the size is a square you can only specify a single number
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)
        x = x.view(-1, self.num_flat_features(x))
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

    def num_flat_features(self, x):
        size = x.size()[1:]  # all dimensions except the batch dimension
        num_features = 1
        for s in size:
            num_features *= s
        return num_features
net = Net()
print(net)
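As a quick sanity check, a sketch that pushes a random dummy input through the network (the input here is illustrative; this architecture expects single-channel 32x32 images):

input = torch.randn(1, 1, 32, 32)  # batch of one 1-channel 32x32 image
out = net(input)
print(out.size())                  # torch.Size([1, 10]): one score per class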
Feed in the input data and call backward on the loss:
1. Compute the loss function and backpropagate
2. Update the parameters
import torch.optim as optim

criterion = nn.MSELoss()
optimizer = optim.SGD(net.parameters(), lr=0.01)

optimizer.zero_grad()             # clear gradients accumulated by earlier steps
output = net(input)               # forward pass (input/target assumed defined; see the sketch below)
loss = criterion(output, target)  # compute the loss
loss.backward()                   # backpropagate: fill in parameter gradients
optimizer.step()                  # apply one SGD update to the parameters
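To make the step above fully runnable, here is a minimal self-contained sketch with a random input and a random dummy target (both purely illustrative); it also prints one bias gradient to confirm that backward() populated it:

input = torch.randn(1, 1, 32, 32)  # dummy 1-channel 32x32 image
target = torch.randn(1, 10)        # dummy target, same shape as the network output

optimizer.zero_grad()
output = net(input)
loss = criterion(output, target)
loss.backward()
print(net.conv1.bias.grad)         # non-None now: backward() computed the gradients
optimizer.step()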
Putting it all together: iterate over the data
for epoch in range(2):  # loop over the dataset multiple times

    running_loss = 0.0
    for i, data in enumerate(trainloader, 0):
        # get the inputs
        inputs, labels = data

        # zero the parameter gradients
        optimizer.zero_grad()

        # forward + backward + optimize
        outputs = net(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()

        # print statistics
        running_loss += loss.item()
        if i % 2000 == 1999:    # print every 2000 mini-batches
            print('[%d, %5d] loss: %.3f' %
                  (epoch + 1, i + 1, running_loss / 2000))
            running_loss = 0.0

print('Finished Training')
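The loop above assumes that trainloader, criterion, and optimizer already exist. A minimal sketch of that setup, assuming torchvision is available; MNIST is chosen here only because its single-channel digits, padded to 32x32, match the input this Net expects (dataset and hyperparameters are illustrative, not prescribed by the text):

import torch.optim as optim
import torchvision
import torchvision.transforms as transforms

# Pad MNIST's 28x28 digits to 32x32 so they fit the Net defined above.
transform = transforms.Compose([
    transforms.Pad(2),
    transforms.ToTensor(),
])
trainset = torchvision.datasets.MNIST(root='./data', train=True,
                                      download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=4, shuffle=True)

criterion = nn.CrossEntropyLoss()  # suits the integer class labels MNIST provides
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)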
How do we run these neural networks on the GPU?
Training on GPU: just as you transfer a tensor onto the GPU, you transfer the neural net onto the GPU. If CUDA is available, let us first define our device as the first visible cuda device.
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
# Assume that we are on a CUDA machine, then this should print a CUDA device:
print(device)
Output:
cuda:0
The rest of this section assumes that device is a CUDA device.
Then this method recursively goes over all modules and converts their parameters and buffers to CUDA tensors:
net.to(device)
Remember that you also have to send the inputs and targets to the GPU at every step:
inputs, labels = inputs.to(device), labels.to(device)
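Putting the two device transfers together, a sketch of how the earlier training loop body changes (same assumed trainloader, criterion, and optimizer):

net.to(device)  # move all parameters and buffers to the GPU once, up front

for i, data in enumerate(trainloader, 0):
    # move each mini-batch to the same device as the network
    inputs, labels = data[0].to(device), data[1].to(device)

    optimizer.zero_grad()
    outputs = net(inputs)
    loss = criterion(outputs, labels)
    loss.backward()
    optimizer.step()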
Why don't we notice a massive speedup compared to the CPU? Because the network is really small.