Please credit the author and source when reposting: http://blog.csdn.net/john_bh/
1. Preparing the Data
For vision, there is a package called torchvision that provides data loaders for common datasets such as ImageNet, CIFAR10, and MNIST (torchvision.datasets), together with image transforms (torchvision.transforms); the resulting datasets can then be iterated in batches with torch.utils.data.DataLoader.
This tutorial uses the CIFAR10 dataset, which has ten classes: 'airplane', 'automobile', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck'. The images in CIFAR-10 are of size 3×32×32, i.e. 3 RGB color channels, each 32×32 pixels.
Training an image classifier proceeds in the following order:
- Load and normalize the CIFAR10 training and test datasets using torchvision
- Define a convolutional neural network
- Define a loss function
- Train the network on the training data
- Test the network on the test data
2. Loading and Normalizing CIFAR10
Using torchvision, loading CIFAR10 is very simple:
# -*- coding:utf-8 -*-
import torch
import torchvision
import torchvision.transforms as transforms
# The output of torchvision datasets are PILImages in the range [0, 1].
# We transform them to Tensors normalized to the range [-1, 1].
transform=transforms.Compose([transforms.ToTensor(),transforms.Normalize((0.5,0.5,0.5),(0.5,0.5,0.5))])
trainset=torchvision.datasets.CIFAR10(root="./data",train=True,download=True,transform=transform)
trainloader=torch.utils.data.DataLoader(trainset,batch_size=4,shuffle=True,num_workers=2)
testset=torchvision.datasets.CIFAR10(root="./data",train=False,download=True,transform=transform)
testloader=torch.utils.data.DataLoader(testset,batch_size=4,shuffle=False,num_workers=2)
classes=('plane', 'car', 'bird', 'cat','deer', 'dog', 'frog', 'horse', 'ship', 'truck')
Output:
Files already downloaded and verified
Files already downloaded and verified
Because the dataset has already been downloaded, the loader only verifies the existing files.
Let's display some of the training images.
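The Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)) transform above subtracts the per-channel mean 0.5 and divides by the per-channel std 0.5, i.e. it maps each value v to (v − 0.5)/0.5 = 2v − 1. A minimal pure-Python sketch of that mapping (no torch needed, illustrative only):

```python
def normalize(v, mean=0.5, std=0.5):
    """Apply the same per-channel affine map as transforms.Normalize."""
    return (v - mean) / std

# PILImage values in [0, 1] land in [-1, 1] after normalization
print(normalize(0.0))  # -1.0
print(normalize(0.5))  #  0.0
print(normalize(1.0))  #  1.0
```

This is why the plotting helper later undoes the transform with img / 2 + 0.5 before display.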
# -*- coding:utf-8 -*-
import torch
import torchvision
import torchvision.transforms as transforms
import matplotlib.pyplot as plt
import numpy as np
# The output of torchvision datasets are PILImages in the range [0, 1].
# We transform them to Tensors normalized to the range [-1, 1].
transform=transforms.Compose([transforms.ToTensor(),transforms.Normalize((0.5,0.5,0.5),(0.5,0.5,0.5))])
trainset=torchvision.datasets.CIFAR10(root="./data",train=True,download=True,transform=transform)
trainloader=torch.utils.data.DataLoader(trainset,batch_size=4,shuffle=True,num_workers=2)
testset=torchvision.datasets.CIFAR10(root="./data",train=False,download=True,transform=transform)
testloader=torch.utils.data.DataLoader(testset,batch_size=4,shuffle=False,num_workers=2)
classes=('plane', 'car', 'bird', 'cat','deer', 'dog', 'frog', 'horse', 'ship', 'truck')
def imshow(img):
    img = img / 2 + 0.5   # unnormalize: invert (x - 0.5) / 0.5
    npimg = img.numpy()
    plt.imshow(np.transpose(npimg, (1, 2, 0)))   # (C, H, W) -> (H, W, C)
    plt.show()

dataiter = iter(trainloader)
images, labels = next(dataiter)   # .next() is removed in newer PyTorch; use the next() builtin
# show images
imshow(torchvision.utils.make_grid(images))
# print labels
print(' '.join('%5s' % classes[labels[j]] for j in range(4)))
Output:
Files already downloaded and verified
Files already downloaded and verified
frog horse bird bird
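Two details in imshow are worth noting: img / 2 + 0.5 is exactly the inverse of the (x − 0.5)/0.5 normalization, and np.transpose(npimg, (1, 2, 0)) reorders the array from PyTorch's (channels, height, width) layout to matplotlib's (height, width, channels). A pure-Python sketch of both ideas on a toy 3×2×2 "image" (toy values, no numpy needed):

```python
def unnormalize(v):
    # inverse of (x - 0.5) / 0.5: maps [-1, 1] back to [0, 1]
    return v / 2 + 0.5

assert unnormalize(-1.0) == 0.0 and unnormalize(1.0) == 1.0

def chw_to_hwc(t):
    """(C, H, W) -> (H, W, C): each pixel (h, w) gathers its C channel values."""
    C, H, W = len(t), len(t[0]), len(t[0][0])
    return [[[t[c][h][w] for c in range(C)] for w in range(W)] for h in range(H)]

img = [[[1, 2], [3, 4]],       # channel 0, a 2x2 plane
       [[5, 6], [7, 8]],       # channel 1
       [[9, 10], [11, 12]]]    # channel 2
hwc = chw_to_hwc(img)
print(hwc[0][0])  # channel values at pixel (0, 0) -> [1, 5, 9]
```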
3. Defining a Convolutional Neural Network
Note that the network below expects 32×32 inputs, which is exactly the size of CIFAR-10 images. To use this network on the MNIST dataset, you would need to resize the images to 32×32 (and change the first convolution to accept 1 input channel, since MNIST is grayscale).
# -*- coding:utf-8 -*-
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import torchvision
import torchvision.transforms as transforms

# The output of torchvision datasets are PILImages in the range [0, 1].
# We transform them to Tensors normalized to the range [-1, 1].
transform=transforms.Compose([transforms.ToTensor(),transforms.Normalize((0.5,0.5,0.5),(0.5,0.5,0.5))])
trainset=torchvision.datasets.CIFAR10(root="./data",train=True,download=True,transform=transform)
trainloader=torch.utils.data.DataLoader(trainset,batch_size=4,shuffle=True,num_workers=2)
testset=torchvision.datasets.CIFAR10(root="./data",train=False,download=True,transform=transform)
testloader=torch.utils.data.DataLoader(testset,batch_size=4,shuffle=False,num_workers=2)
classes=('plane', 'car', 'bird', 'cat','deer', 'dog', 'frog', 'horse', 'ship', 'truck')

class Net(nn.Module):
    def __init__(self):
        super(Net,self).__init__()
        self.conv1=nn.Conv2d(3,6,5)      # 3 input channels, 6 output channels, 5x5 kernel
        self.pool=nn.MaxPool2d(2,2)      # 2x2 max pooling
        self.conv2=nn.Conv2d(6,16,5)
        self.fc1=nn.Linear(16*5*5,120)
        self.fc2=nn.Linear(120,84)
        self.fc3=nn.Linear(84,10)

    def forward(self,x):
        x=self.pool(F.relu(self.conv1(x)))
        x=self.pool(F.relu(self.conv2(x)))
        x=x.view(-1,16*5*5)              # flatten all dimensions except batch
        x=F.relu(self.fc1(x))
        x=F.relu(self.fc2(x))
        x=self.fc3(x)
        return x

net=Net()
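It is worth checking where the 16 * 5 * 5 in fc1 comes from. With no padding, a 5×5 convolution shrinks each spatial side by 4, and each 2×2 max pool halves it. Tracing a 32×32 input through the layers with the standard output-size formulas (pure arithmetic, no torch needed):

```python
def conv_out(n, kernel=5, stride=1, padding=0):
    # output size of a convolution along one spatial dimension
    return (n + 2 * padding - kernel) // stride + 1

def pool_out(n, kernel=2, stride=2):
    # output size of a max pool along one spatial dimension
    return (n - kernel) // stride + 1

n = 32
n = pool_out(conv_out(n))   # conv1: 32 -> 28, pool: 28 -> 14
n = pool_out(conv_out(n))   # conv2: 14 -> 10, pool: 10 -> 5
print(n)                    # 5, so the flattened size is 16 channels * 5 * 5 = 400
assert 16 * n * n == 400
```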
4. Defining a Loss Function and an Optimizer
Let's use classification cross-entropy as the loss function and SGD with momentum as the optimizer.
# -*- coding:utf-8 -*-
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import torchvision
import torchvision.transforms as transforms

# The output of torchvision datasets are PILImages in the range [0, 1].
# We transform them to Tensors normalized to the range [-1, 1].
transform=transforms.Compose([transforms.ToTensor(),transforms.Normalize((0.5,0.5,0.5),(0.5,0.5,0.5))])
trainset=torchvision.datasets.CIFAR10(root="./data",train=True,download=True,transform=transform)
trainloader=torch.utils.data.DataLoader(trainset,batch_size=4,shuffle=True,num_workers=2)
testset=torchvision.datasets.CIFAR10(root="./data",train=False,download=True,transform=transform)
testloader=torch.utils.data.DataLoader(testset,batch_size=4,shuffle=False,num_workers=2)
classes=('plane', 'car', 'bird', 'cat','deer', 'dog', 'frog', 'horse', 'ship', 'truck')

class Net(nn.Module):
    def __init__(self):
        super(Net,self).__init__()
        self.conv1=nn.Conv2d(3,6,5)
        self.pool=nn.MaxPool2d(2,2)
        self.conv2=nn.Conv2d(6,16,5)
        self.fc1=nn.Linear(16*5*5,120)
        self.fc2=nn.Linear(120,84)
        self.fc3=nn.Linear(84,10)

    def forward(self,x):
        x=self.pool(F.relu(self.conv1(x)))
        x=self.pool(F.relu(self.conv2(x)))
        x=x.view(-1,16*5*5)
        x=F.relu(self.fc1(x))
        x=F.relu(self.fc2(x))
        x=self.fc3(x)
        return x

net=Net()

# Use classification cross-entropy as the loss function and SGD with momentum as the optimizer.
criterion = nn.CrossEntropyLoss()
optimizer=optim.SGD(net.parameters(),lr=0.001,momentum=0.9)
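nn.CrossEntropyLoss combines a softmax over the raw class scores with the negative log-likelihood of the true class: loss = −log(softmax(scores)[target]). A pure-Python sketch of that formula on a toy 3-class score vector (illustrative only, not the torch implementation, which works in log-space for numerical stability):

```python
import math

def cross_entropy(scores, target):
    """-log softmax(scores)[target], the per-sample classification loss."""
    exps = [math.exp(s) for s in scores]
    probs = [e / sum(exps) for e in exps]
    return -math.log(probs[target])

scores = [2.0, 1.0, 0.1]   # toy raw network outputs for 3 classes
# loss is small when the true class already scores highest...
print(round(cross_entropy(scores, 0), 3))
# ...and larger when the true class scores low
print(cross_entropy(scores, 2) > cross_entropy(scores, 0))  # True
```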
5. Training the Network on the Training Data
Simply loop over the data iterator, feeding the inputs to the network and optimizing:
# -*- coding:utf-8 -*-
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import torchvision
import torchvision.transforms as transforms

# The output of torchvision datasets are PILImages in the range [0, 1].
# We transform them to Tensors normalized to the range [-1, 1].
transform=transforms.Compose([transforms.ToTensor(),transforms.Normalize((0.5,0.5,0.5),(0.5,0.5,0.5))])
trainset=torchvision.datasets.CIFAR10(root="./data",train=True,download=True,transform=transform)
trainloader=torch.utils.data.DataLoader(trainset,batch_size=4,shuffle=True,num_workers=2)
testset=torchvision.datasets.CIFAR10(root="./data",train=False,download=True,transform=transform)
testloader=torch.utils.data.DataLoader(testset,batch_size=4,shuffle=False,num_workers=2)
classes=('plane', 'car', 'bird', 'cat','deer', 'dog', 'frog', 'horse', 'ship', 'truck')

class Net(nn.Module):
    def __init__(self):
        super(Net,self).__init__()
        self.conv1=nn.Conv2d(3,6,5)
        self.pool=nn.MaxPool2d(2,2)
        self.conv2=nn.Conv2d(6,16,5)
        self.fc1=nn.Linear(16*5*5,120)
        self.fc2=nn.Linear(120,84)
        self.fc3=nn.Linear(84,10)

    def forward(self,x):
        x=self.pool(F.relu(self.conv1(x)))
        x=self.pool(F.relu(self.conv2(x)))
        x=x.view(-1,16*5*5)
        x=F.relu(self.fc1(x))
        x=F.relu(self.fc2(x))
        x=self.fc3(x)
        return x

net=Net()

# Use classification cross-entropy as the loss function and SGD with momentum as the optimizer.
criterion = nn.CrossEntropyLoss()
optimizer=optim.SGD(net.parameters(),lr=0.001,momentum=0.9)

# Train the network: loop over the data iterator, feeding the inputs to the network and optimizing.
for epoch in range(2):  # loop over the dataset multiple times
    running_loss=0.0
    for i,data in enumerate(trainloader,0):
        inputs,labels=data        # get the inputs
        optimizer.zero_grad()     # zero the parameter gradients
        # forward + backward + optimize
        outputs=net(inputs)
        loss=criterion(outputs,labels)
        loss.backward()
        optimizer.step()
        # print statistics
        running_loss += loss.item()
        if i%2000==1999:          # print every 2000 mini-batches
            print('[%d,%5d] loss: %.3f'%(epoch+1,i+1,running_loss/2000))
            running_loss=0.0
print("Finished Training!")
Output:
Files already downloaded and verified
Files already downloaded and verified
[1, 2000] loss: 2.192
[1, 4000] loss: 1.840
[1, 6000] loss: 1.679
[1, 8000] loss: 1.566
[1,10000] loss: 1.503
[1,12000] loss: 1.451
[2, 2000] loss: 1.386
[2, 4000] loss: 1.358
[2, 6000] loss: 1.324
[2, 8000] loss: 1.305
[2,10000] loss: 1.274
[2,12000] loss: 1.240
Finished Training!
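The loss is printed at mini-batch 2000, 4000, …, 12000 because with batch_size=4 the 50,000 CIFAR-10 training images yield 12,500 mini-batches per epoch. A quick sanity check of that arithmetic:

```python
train_images = 50000        # CIFAR-10 training-set size
batch_size = 4              # as configured in the DataLoader above
batches_per_epoch = train_images // batch_size
print(batches_per_epoch)    # 12500

# the `i % 2000 == 1999` condition fires 6 times per epoch
print(batches_per_epoch // 2000)  # 6
```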
6. Testing the Network on the Test Data
The network has now made 2 passes over the training dataset, but we need to check whether it has actually learned anything. We do this by comparing the class label that the network predicts against the ground-truth label. If the prediction is correct, we count the sample as a correct prediction.
# -*- coding:utf-8 -*-
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import torchvision
import torchvision.transforms as transforms

# The output of torchvision datasets are PILImages in the range [0, 1].
# We transform them to Tensors normalized to the range [-1, 1].
transform=transforms.Compose([transforms.ToTensor(),transforms.Normalize((0.5,0.5,0.5),(0.5,0.5,0.5))])
trainset=torchvision.datasets.CIFAR10(root="./data",train=True,download=True,transform=transform)
trainloader=torch.utils.data.DataLoader(trainset,batch_size=4,shuffle=True,num_workers=2)
testset=torchvision.datasets.CIFAR10(root="./data",train=False,download=True,transform=transform)
testloader=torch.utils.data.DataLoader(testset,batch_size=4,shuffle=False,num_workers=2)
classes=('plane', 'car', 'bird', 'cat','deer', 'dog', 'frog', 'horse', 'ship', 'truck')

class Net(nn.Module):
    def __init__(self):
        super(Net,self).__init__()
        self.conv1=nn.Conv2d(3,6,5)
        self.pool=nn.MaxPool2d(2,2)
        self.conv2=nn.Conv2d(6,16,5)
        self.fc1=nn.Linear(16*5*5,120)
        self.fc2=nn.Linear(120,84)
        self.fc3=nn.Linear(84,10)

    def forward(self,x):
        x=self.pool(F.relu(self.conv1(x)))
        x=self.pool(F.relu(self.conv2(x)))
        x=x.view(-1,16*5*5)
        x=F.relu(self.fc1(x))
        x=F.relu(self.fc2(x))
        x=self.fc3(x)
        return x

net=Net()

# Use classification cross-entropy as the loss function and SGD with momentum as the optimizer.
criterion = nn.CrossEntropyLoss()
optimizer=optim.SGD(net.parameters(),lr=0.001,momentum=0.9)

# Train the network: loop over the data iterator, feeding the inputs to the network and optimizing.
for epoch in range(2):  # loop over the dataset multiple times
    running_loss=0.0
    for i,data in enumerate(trainloader,0):
        inputs,labels=data        # get the inputs
        optimizer.zero_grad()     # zero the parameter gradients
        # forward + backward + optimize
        outputs=net(inputs)
        loss=criterion(outputs,labels)
        loss.backward()
        optimizer.step()
        # print statistics
        running_loss += loss.item()
        if i%2000==1999:          # print every 2000 mini-batches
            print('[%d,%5d] loss: %.3f'%(epoch+1,i+1,running_loss/2000))
            running_loss=0.0
print("Finished Training!")

# Get a batch of images from the test set
dataiter=iter(testloader)
images,labels=next(dataiter)
outputs=net(images)
# The outputs are scores for the ten classes: the higher the score for a class,
# the more the network thinks the image belongs to that class.
# So take the index of the highest score as the prediction:
_, predicted = torch.max(outputs, 1)
print('Predicted: ', ' '.join('%5s' % classes[predicted[j]] for j in range(4)))
Output:
Files already downloaded and verified
Files already downloaded and verified
[1, 2000] loss: 2.237
[1, 4000] loss: 1.861
[1, 6000] loss: 1.687
[1, 8000] loss: 1.581
[1,10000] loss: 1.533
[1,12000] loss: 1.489
[2, 2000] loss: 1.444
[2, 4000] loss: 1.396
[2, 6000] loss: 1.375
[2, 8000] loss: 1.358
[2,10000] loss: 1.336
[2,12000] loss: 1.324
Finished Training!
Predicted: frog dog frog deer
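torch.max(outputs, 1) returns, for each image in the batch, the maximum score and its index along the class dimension; the index is the predicted class. The same argmax idea in pure Python for a single image's scores (toy values, not actual network outputs):

```python
classes = ('plane', 'car', 'bird', 'cat', 'deer',
           'dog', 'frog', 'horse', 'ship', 'truck')

scores = [0.1, -0.3, 0.2, 0.0, 0.4, -0.1, 1.3, 0.2, -0.5, 0.3]
predicted = max(range(len(scores)), key=lambda j: scores[j])  # argmax over classes
print(classes[predicted])  # frog (index 6 holds the highest score)
```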
Let's look at how the network performs on the whole test set:
# -*- coding:utf-8 -*-
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import torchvision
import torchvision.transforms as transforms

# The output of torchvision datasets are PILImages in the range [0, 1].
# We transform them to Tensors normalized to the range [-1, 1].
transform=transforms.Compose([transforms.ToTensor(),transforms.Normalize((0.5,0.5,0.5),(0.5,0.5,0.5))])
trainset=torchvision.datasets.CIFAR10(root="./data",train=True,download=True,transform=transform)
trainloader=torch.utils.data.DataLoader(trainset,batch_size=4,shuffle=True,num_workers=2)
testset=torchvision.datasets.CIFAR10(root="./data",train=False,download=True,transform=transform)
testloader=torch.utils.data.DataLoader(testset,batch_size=4,shuffle=False,num_workers=2)
classes=('plane', 'car', 'bird', 'cat','deer', 'dog', 'frog', 'horse', 'ship', 'truck')

class Net(nn.Module):
    def __init__(self):
        super(Net,self).__init__()
        self.conv1=nn.Conv2d(3,6,5)
        self.pool=nn.MaxPool2d(2,2)
        self.conv2=nn.Conv2d(6,16,5)
        self.fc1=nn.Linear(16*5*5,120)
        self.fc2=nn.Linear(120,84)
        self.fc3=nn.Linear(84,10)

    def forward(self,x):
        x=self.pool(F.relu(self.conv1(x)))
        x=self.pool(F.relu(self.conv2(x)))
        x=x.view(-1,16*5*5)
        x=F.relu(self.fc1(x))
        x=F.relu(self.fc2(x))
        x=self.fc3(x)
        return x

net=Net()

# Use classification cross-entropy as the loss function and SGD with momentum as the optimizer.
criterion = nn.CrossEntropyLoss()
optimizer=optim.SGD(net.parameters(),lr=0.001,momentum=0.9)

# Train the network: loop over the data iterator, feeding the inputs to the network and optimizing.
for epoch in range(2):  # loop over the dataset multiple times
    running_loss=0.0
    for i,data in enumerate(trainloader,0):
        inputs,labels=data        # get the inputs
        optimizer.zero_grad()     # zero the parameter gradients
        # forward + backward + optimize
        outputs=net(inputs)
        loss=criterion(outputs,labels)
        loss.backward()
        optimizer.step()
        # print statistics
        running_loss += loss.item()
        if i%2000==1999:          # print every 2000 mini-batches
            print('[%d,%5d] loss: %.3f'%(epoch+1,i+1,running_loss/2000))
            running_loss=0.0
print("Finished Training!")

# Performance on the whole test set
correct = 0
total = 0
with torch.no_grad():   # no gradients needed for evaluation
    for data in testloader:
        images, labels = data
        outputs = net(images)
        _, predicted = torch.max(outputs.data, 1)
        total += labels.size(0)
        correct += (predicted == labels).sum().item()

print('Accuracy of the network on the 10000 test images: %d %%' % (100 * correct / total))
Output:
Files already downloaded and verified
Files already downloaded and verified
[1, 2000] loss: 2.237
[1, 4000] loss: 1.861
[1, 6000] loss: 1.687
[1, 8000] loss: 1.581
[1,10000] loss: 1.533
[1,12000] loss: 1.489
[2, 2000] loss: 1.444
[2, 4000] loss: 1.396
[2, 6000] loss: 1.375
[2, 8000] loss: 1.358
[2,10000] loss: 1.336
[2,12000] loss: 1.324
Finished Training!
Accuracy of the network on the 10000 test images: 53 %
That is much better than chance: randomly guessing one of the 10 classes would give about 10% accuracy. Next, let's see which classes the network predicts well and which it does not:
# -*- coding:utf-8 -*-
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import torchvision
import torchvision.transforms as transforms

# The output of torchvision datasets are PILImages in the range [0, 1].
# We transform them to Tensors normalized to the range [-1, 1].
transform=transforms.Compose([transforms.ToTensor(),transforms.Normalize((0.5,0.5,0.5),(0.5,0.5,0.5))])
trainset=torchvision.datasets.CIFAR10(root="./data",train=True,download=True,transform=transform)
trainloader=torch.utils.data.DataLoader(trainset,batch_size=4,shuffle=True,num_workers=2)
testset=torchvision.datasets.CIFAR10(root="./data",train=False,download=True,transform=transform)
testloader=torch.utils.data.DataLoader(testset,batch_size=4,shuffle=False,num_workers=2)
classes=('plane', 'car', 'bird', 'cat','deer', 'dog', 'frog', 'horse', 'ship', 'truck')

class Net(nn.Module):
    def __init__(self):
        super(Net,self).__init__()
        self.conv1=nn.Conv2d(3,6,5)
        self.pool=nn.MaxPool2d(2,2)
        self.conv2=nn.Conv2d(6,16,5)
        self.fc1=nn.Linear(16*5*5,120)
        self.fc2=nn.Linear(120,84)
        self.fc3=nn.Linear(84,10)

    def forward(self,x):
        x=self.pool(F.relu(self.conv1(x)))
        x=self.pool(F.relu(self.conv2(x)))
        x=x.view(-1,16*5*5)
        x=F.relu(self.fc1(x))
        x=F.relu(self.fc2(x))
        x=self.fc3(x)
        return x

net=Net()

# Use classification cross-entropy as the loss function and SGD with momentum as the optimizer.
criterion = nn.CrossEntropyLoss()
optimizer=optim.SGD(net.parameters(),lr=0.001,momentum=0.9)

# Train the network: loop over the data iterator, feeding the inputs to the network and optimizing.
for epoch in range(2):  # loop over the dataset multiple times
    running_loss=0.0
    for i,data in enumerate(trainloader,0):
        inputs,labels=data        # get the inputs
        optimizer.zero_grad()     # zero the parameter gradients
        # forward + backward + optimize
        outputs=net(inputs)
        loss=criterion(outputs,labels)
        loss.backward()
        optimizer.step()
        # print statistics
        running_loss += loss.item()
        if i%2000==1999:          # print every 2000 mini-batches
            print('[%d,%5d] loss: %.3f'%(epoch+1,i+1,running_loss/2000))
            running_loss=0.0
print("Finished Training!")

# Per-class accuracy: which classes does the network predict well?
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
with torch.no_grad():
    for data in testloader:
        images, labels = data
        outputs = net(images)
        _, predicted = torch.max(outputs, 1)
        c = (predicted == labels).squeeze()
        for i in range(4):        # batch_size is 4
            label = labels[i]
            class_correct[label] += c[i].item()
            class_total[label] += 1

for i in range(10):
    print('Accuracy of %5s : %2d %%' % (classes[i], 100 * class_correct[i] / class_total[i]))
Output:
Files already downloaded and verified
Files already downloaded and verified
[1, 2000] loss: 2.237
[1, 4000] loss: 1.861
[1, 6000] loss: 1.687
[1, 8000] loss: 1.581
[1,10000] loss: 1.533
[1,12000] loss: 1.489
[2, 2000] loss: 1.444
[2, 4000] loss: 1.396
[2, 6000] loss: 1.375
[2, 8000] loss: 1.358
[2,10000] loss: 1.336
[2,12000] loss: 1.324
Finished Training!
Accuracy of plane : 57 %
Accuracy of car : 51 %
Accuracy of bird : 23 %
Accuracy of cat : 17 %
Accuracy of deer : 46 %
Accuracy of dog : 58 %
Accuracy of frog : 80 %
Accuracy of horse : 58 %
Accuracy of ship : 76 %
Accuracy of truck : 67 %
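The per-class loop above keeps one correct/total counter pair per class and divides at the end. The same bookkeeping in pure Python on a few toy (predicted, label) pairs (hypothetical values, just to show the tally):

```python
num_classes = 3
class_correct = [0] * num_classes
class_total = [0] * num_classes

# toy (predicted, true-label) pairs for a 3-class problem
pairs = [(0, 0), (1, 0), (1, 1), (2, 2), (0, 2), (2, 2)]
for predicted, label in pairs:
    class_correct[label] += int(predicted == label)  # count hits per true class
    class_total[label] += 1                          # count samples per true class

for i in range(num_classes):
    print('class %d: %d %%' % (i, 100 * class_correct[i] / class_total[i]))
# class 0: 1 of 2 correct, class 1: 1 of 1, class 2: 2 of 3
```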