- 🍨 This article is a learning-record blog post for the 🔗365天深度学习训练营 (365-day deep learning training camp)
- 🍖 Original author: K同学啊
Learning objectives:
1. Write the corresponding PyTorch code from the given TensorFlow code
2. Understand the residual structure
Theoretical background
The following notes are taken from the training camp:
1. A brief history of CNN architectures
Network illustration: it compares model performance, with network complexity on the horizontal axis and recognition accuracy on the vertical axis. From the figure we can observe:
(1) AlexNet, proposed by Alex Krizhevsky et al. for the 2012 ImageNet classification competition, is a classic convolutional neural network. It was the first deep CNN and introduced the ReLU activation, local response normalization, data augmentation, and Dropout;
(2) VGG-16 and VGG-19 are built by stacking convolution and pooling layers. Their performance was good at the time, but the computational cost is huge. The network is divided into several groups, each stacking a varying number of Conv-ReLU layers, with a MaxPool at the end of each group to shrink the feature maps;
(3) GoogLeNet (Inception V1) introduced parallel convolution branches, using different kernel sizes in each path; it was later extended into V2, V3, V4, and other variants;
(4) ResNet, with versions V1, V2, etc., introduced the concept of identity mapping. It features shortcut connections and a modular structure, and scales easily from 18 to 1001 layers;
(5) DenseNet is a convolutional network characterized by reuse of earlier features, direct connections between layers, and a recursively extensible structure.
2. Residual networks
The deep residual network ResNet was proposed by Kaiming He et al. in 2015. It is simple and practical, and much subsequent research has been built on top of ResNet-50 or ResNet-101.
ResNet mainly addresses the "degradation problem" that appears as deep convolutional networks get deeper. For an ordinary CNN, the first problem brought by increasing depth is vanishing/exploding gradients, which was largely solved by the BN layer proposed by Szegedy et al. BN normalizes each layer's outputs so that gradient magnitudes stay stable during backpropagation, becoming neither too small nor too large. But the authors found that even with BN, deeper networks are still hard to converge, and they identified a second problem, accuracy degradation: once depth reaches a certain point, accuracy saturates and then drops rapidly. This drop is caused neither by vanishing gradients nor by overfitting, but by the network becoming so complex that unconstrained training alone can hardly reach the ideal error rate.
The accuracy degradation is therefore not a flaw of the network structure itself, but of current training methods: none of the widely used optimizers, whether SGD, RMSProp, or Adam, can reach the theoretically optimal convergence.
The authors argue in the paper that, given a suitable structure, a deeper network should be at least as good as a shallower one. The argument: append several layers to a network A to form a new network B. If the added layers perform only an identity mapping of A's output, i.e. A's output passes through the new layers unchanged to become B's output, then A and B have exactly the same error rate, which shows that the deepened network need not perform worse than the original.
Kaiming He proposed the residual structure to realize this identity mapping (see figure above): besides the normal convolutional output F(x), the module has a branch that connects the input x directly to the output, and the convolutional output F(x) and the branch output x are summed element-wise to give the final output, expressed as H(x) = F(x) + x. If all parameters in the F(x) branch are zero, H(x) is exactly an identity mapping. By deliberately building identity mappings into the network, the whole structure can converge toward identity mappings, ensuring that the final error rate does not get worse as depth grows.
The left unit is a two-layer residual unit containing two 3x3 convolutions with the same number of output channels, used only in shallower ResNets; deeper networks use the three-layer unit on the right (also called the Bottleneck structure): a 1x1 convolution first reduces the dimensionality, a 3x3 convolution follows, and a final 1x1 convolution restores the original dimensionality. If the input and output dimensions differ, a linear projection can be applied to the input to match dimensions before connecting to the following layers. For the same number of layers, the three-layer unit has fewer parameters, so it allows deeper models.
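The identity-mapping property above can be verified with a tiny numeric sketch (plain Python, no framework; `residual_block` is a hypothetical stand-in for the convolutional branch F):

```python
def residual_block(x, F):
    """H(x) = F(x) + x, computed element-wise on a plain list."""
    fx = F(x)
    return [f + xi for f, xi in zip(fx, x)]

# If the branch F outputs all zeros (i.e. all its weights are zero),
# the block reduces to an identity mapping: H(x) == x.
zero_branch = lambda x: [0.0] * len(x)
x = [0.5, -1.2, 3.0]
print(residual_block(x, zero_branch))  # [0.5, -1.2, 3.0]
```

This is exactly why the residual form makes "do nothing" easy to learn: the network only has to drive F(x) toward zero, rather than learn an identity through stacked nonlinear layers.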
import torch
import torchvision
from torchvision import transforms, datasets
import os, pathlib, PIL
device = 'cuda' if torch.cuda.is_available() else 'cpu'
device
data_dir = '/home/aiusers/space_yjl/深度学习训练营/进阶/第J1周:ResNet-50算法实战与解析/第8天/bird_photos'
data_dir = pathlib.Path(data_dir)
data_paths = list(data_dir.glob('*'))
classNames = [path.name for path in data_paths]  # each subfolder name is a class name
print(data_dir)
print(classNames)
data_transforms = transforms.Compose([  # renamed so it no longer shadows the transforms module
    transforms.Resize([224, 224]),
    transforms.ToTensor(),
    transforms.Normalize(
        mean=[0.485, 0.456, 0.406],  # ImageNet statistics
        std=[0.229, 0.224, 0.225])
])
total_data = datasets.ImageFolder(str(data_dir), transform=data_transforms)
train_size = int(0.8*len(total_data))
test_size = len(total_data)-train_size
train_dataset,test_dataset = torch.utils.data.random_split(total_data,[train_size,test_size])
batch_size = 8
train_dl = torch.utils.data.DataLoader(train_dataset,batch_size=batch_size,shuffle=True)
test_dl = torch.utils.data.DataLoader(test_dataset,batch_size=batch_size,shuffle=True)
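Note how the split sizes are derived: `int()` truncates, and the test split takes the remainder, so the two always sum back to the dataset size. A minimal sketch (the count 565 is a hypothetical dataset size; your copy of bird_photos may differ):

```python
def split_sizes(n, train_frac=0.8):
    """Mirror the sizing used with torch.utils.data.random_split:
    int() truncates the train size; the test split takes the rest."""
    train = int(train_frac * n)
    return train, n - train

print(split_sizes(565))  # (452, 113)
```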
import torch
import torch.nn as nn
import torch.nn.functional as F
class Identity_block(nn.Module):
    def __init__(self, in_channels, filters, kernel_size, stage, block):
        super(Identity_block, self).__init__()
        filters1, filters2, filters3 = filters
        # kept only for parity with the TF layer naming; unused in PyTorch
        self.name_base = str(stage) + block + '_identity_block'
        self.conv1 = nn.Conv2d(in_channels, filters1, kernel_size=1, stride=1, padding=0, bias=False)
        self.bn1 = nn.BatchNorm2d(filters1)
        self.conv2 = nn.Conv2d(filters1, filters2, kernel_size=kernel_size, stride=1, padding=kernel_size // 2, bias=False)
        self.bn2 = nn.BatchNorm2d(filters2)
        self.conv3 = nn.Conv2d(filters2, filters3, kernel_size=1, stride=1, padding=0, bias=False)
        self.bn3 = nn.BatchNorm2d(filters3)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        identity = x
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.relu(self.bn2(self.conv2(out)))
        out = self.bn3(self.conv3(out))
        out += identity  # shortcut: H(x) = F(x) + x
        out = self.relu(out)
        return out
class Conv_block(nn.Module):
    def __init__(self, in_channels, filters, kernel_size, stage, block, strides=2):
        super(Conv_block, self).__init__()
        filters1, filters2, filters3 = filters
        # kept only for parity with the TF layer naming; unused in PyTorch
        self.name_base = str(stage) + block + '_conv_block_'
        self.conv1 = nn.Conv2d(in_channels, filters1, kernel_size=1, stride=strides, bias=False)
        self.bn1 = nn.BatchNorm2d(filters1)
        self.conv2 = nn.Conv2d(filters1, filters2, kernel_size=kernel_size, stride=1, padding=kernel_size // 2, bias=False)
        self.bn2 = nn.BatchNorm2d(filters2)
        self.conv3 = nn.Conv2d(filters2, filters3, kernel_size=1, stride=1, bias=False)
        self.bn3 = nn.BatchNorm2d(filters3)
        # projection shortcut: matches both the channel count and the stride of the main path
        self.shortcut = nn.Sequential(
            nn.Conv2d(in_channels, filters3, kernel_size=1, stride=strides, bias=False),
            nn.BatchNorm2d(filters3)
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        identity = self.shortcut(x)
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.relu(self.bn2(self.conv2(out)))
        out = self.bn3(self.conv3(out))
        out += identity
        out = self.relu(out)
        return out
class Resnet50(nn.Module):
    def __init__(self, input_shape=(3, 224, 224), classes=1000):
        super(Resnet50, self).__init__()
        self.in_channels = 64
        self.conv1 = nn.Conv2d(input_shape[0], self.in_channels, kernel_size=7, stride=2, padding=3, bias=False)
        self.bn1 = nn.BatchNorm2d(self.in_channels)
        self.relu = nn.ReLU(inplace=True)
        self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
        self.layer1 = self._make_layer([64, 64, 256], blocks=3, stage=2, stride=1)
        self.layer2 = self._make_layer([128, 128, 512], blocks=4, stage=3, stride=2)
        self.layer3 = self._make_layer([256, 256, 1024], blocks=6, stage=4, stride=2)
        self.layer4 = self._make_layer([512, 512, 2048], blocks=3, stage=5, stride=2)
        self.avgpool = nn.AdaptiveAvgPool2d((1, 1))  # 7x7 -> 1x1 for 224x224 inputs; robust to other sizes
        self.fc = nn.Linear(2048, classes)

    def _make_layer(self, filters, blocks, stage, stride):
        layers = []
        # the first block is a Conv_block that may downsample via its stride
        layers.append(Conv_block(self.in_channels, filters, kernel_size=3, stage=stage, block='a', strides=stride))
        self.in_channels = filters[2]
        # the remaining blocks are Identity_blocks, named 'b', 'c', ...
        for b in range(1, blocks):
            layers.append(Identity_block(self.in_channels, filters, kernel_size=3, stage=stage, block=chr(97 + b)))
        return nn.Sequential(*layers)

    def forward(self, x):
        x = self.conv1(x)
        x = self.bn1(x)
        x = self.relu(x)
        x = self.maxpool(x)
        x = self.layer1(x)
        x = self.layer2(x)
        x = self.layer3(x)
        x = self.layer4(x)
        x = self.avgpool(x)
        x = torch.flatten(x, 1)
        x = self.fc(x)
        return x
model = Resnet50()
print(model)
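Why a 7x7 average pool (or an adaptive pool to 1x1) is the right final pooling for 224x224 inputs can be checked with the standard output-size formula, out = floor((n + 2p - k) / s) + 1. The sketch below (pure Python, no torch needed) traces the spatial size through the stem and the four stages:

```python
def out_size(n, k, s, p):
    """floor((n + 2p - k) / s) + 1 -- spatial size after a conv/pool layer."""
    return (n + 2 * p - k) // s + 1

n = 224
n = out_size(n, k=7, s=2, p=3)   # conv1   -> 112
n = out_size(n, k=3, s=2, p=1)   # maxpool -> 56
# layer1 keeps 56; layers 2-4 each halve it via their stride-2 Conv_block
for _ in range(3):
    n = out_size(n, k=1, s=2, p=0)
print(n)  # 7: a 7x7 average pool then leaves a 1x1 map of 2048 channels
```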
# learning rate
learn_rate = 1e-3
# the actual model: this dataset has 4 bird classes
model = Resnet50(classes=4)
model.to(device)
# optimizer
optimizer = torch.optim.Adam(model.parameters(), lr=learn_rate)
# loss function
loss_fun = nn.CrossEntropyLoss()
# number of epochs
epochs = 10
# lists to record training/validation accuracy and loss
train_acc = []
val_acc = []
train_loss = []
val_loss = []
for epoch in range(epochs):
    # training phase
    model.train()
    running_loss = 0.0
    acc = 0
    for data, label in train_dl:
        data, label = data.to(device), label.to(device)
        optimizer.zero_grad()
        pred_out = model(data)
        loss = loss_fun(pred_out, label)
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
        _, predicted = torch.max(pred_out.data, 1)
        acc += (predicted == label).sum().item()
    epoch_loss = running_loss / len(train_dl)
    epoch_acc = acc / len(train_dl.dataset)
    train_loss.append(epoch_loss)
    train_acc.append(epoch_acc)
    # validation phase
    model.eval()
    running_loss = 0.0
    acc = 0
    with torch.no_grad():
        for data, label in test_dl:
            data, label = data.to(device), label.to(device)
            output = model(data)
            loss = loss_fun(output, label)
            running_loss += loss.item()
            _, predicted = torch.max(output.data, 1)
            acc += (predicted == label).sum().item()
    epoch_loss = running_loss / len(test_dl)
    epoch_acc = acc / len(test_dl.dataset)
    val_acc.append(epoch_acc)
    val_loss.append(epoch_loss)
    print('Epoch {}/{}, Train_loss: {:.4f}, Train_acc: {:.2f}%, Val_loss: {:.4f}, Val_acc: {:.2f}%'
          .format(epoch + 1, epochs, train_loss[-1], train_acc[-1] * 100, val_loss[-1], val_acc[-1] * 100))
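`torch.max(pred_out.data, 1)` returns a (values, indices) pair along the class dimension, so the accuracy counting above boils down to an argmax per sample. The pure-Python sketch below mimics that logic on hypothetical logits:

```python
def batch_accuracy(logits, labels):
    """Fraction of rows whose argmax matches the label."""
    preds = [row.index(max(row)) for row in logits]
    correct = sum(p == y for p, y in zip(preds, labels))
    return correct / len(labels)

logits = [[0.1, 2.3, -0.5, 0.0],   # argmax 1
          [1.8, 0.2, 0.1, 0.3],    # argmax 0
          [0.0, 0.1, 0.2, 3.0]]    # argmax 3
print(batch_accuracy(logits, [1, 0, 2]))  # 2 of 3 correct
```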
import matplotlib.pyplot as plt
plt.rcParams['font.sans-serif'] = ['SimHei']
plt.rcParams['axes.unicode_minus'] =False
plt.rcParams['figure.dpi'] = 100
plt.figure(figsize = (12,3))
plt.subplot(1,2,1)
plt.plot(range(epochs), train_acc, label='Training Accuracy')
plt.plot(range(epochs), val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')
plt.subplot(1,2,2)
plt.plot(range(epochs), train_loss, label='Training Loss')
plt.plot(range(epochs), val_loss, label='Validation loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()
from PIL import Image

classes = list(total_data.class_to_idx)  # class names, ordered by index

def predict_one_image(image_path, model, transform, classes):
    test_img = Image.open(image_path).convert('RGB')
    # plt.imshow(test_img)  # show the image being predicted
    test_img = transform(test_img)
    img = test_img.to(device).unsqueeze(0)
    model.eval()
    output = model(img)
    _, pred = torch.max(output, 1)
    pred_class = classes[pred.item()]  # .item() turns the index tensor into a plain int
    print(f'Predicted class: {pred_class}')
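`list(total_data.class_to_idx)` yields class names ordered by their label index, because ImageFolder sorts subfolder names alphabetically and numbers them in that order (so the dict's insertion order matches the index order). A minimal sketch, with hypothetical class names except 'Black Skimmer', which appears in the dataset path used below:

```python
# ImageFolder builds class_to_idx by sorting subfolder names and
# numbering them in order (class names here are illustrative).
class_to_idx = {'Bananaquit': 0, 'Black Skimmer': 1,
                'Black Throated Bushtiti': 2, 'Cockatoo': 3}
classes = list(class_to_idx)  # iterating a dict yields its keys, in insertion order
print(classes[1])  # Black Skimmer
```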
from torchvision import transforms

train_transforms = transforms.Compose([
    transforms.Resize([224, 224]),    # resize every input image to a uniform size
    transforms.ToTensor(),            # convert a PIL Image / numpy.ndarray to a tensor scaled to [0, 1]
    transforms.Normalize(             # standardize toward a normal (Gaussian) distribution so the model converges more easily
        mean=[0.485, 0.456, 0.406],   # ImageNet statistics, estimated by random sampling from the dataset
        std=[0.229, 0.224, 0.225])
])
# predict one image from the training set
predict_one_image(image_path=r'/home/aiusers/space_yjl/深度学习训练营/进阶/第J1周:ResNet-50算法实战与解析/第8天/bird_photos/Black Skimmer/001.jpg',
                  model=model,
                  transform=train_transforms,
                  classes=classes)
2. Summary
1. Because no pretrained weights were used, the model's accuracy did not improve much.