- 🍨 This post is a learning-record blog for the 🔗365-day deep learning training camp
- 🍖 Original author: K同学啊 | tutoring and custom projects available
- 🚀 Source: K同学's learning circle
My environment:
- Language: Python 3.8.18
- Editor: Jupyter Notebook
- Deep learning stack: torch==2.0.1+cu118, torchvision==0.15.2+cu118
I. Code Implementation
1. Configure the GPU
import torch
import torch.nn as nn
import torchvision
from torchvision import transforms, datasets
import os, PIL, pathlib, warnings

warnings.filterwarnings("ignore")  # suppress warnings

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(device)
cuda
2. Import the Data
data_dir = "E:/pytorch练习/训练营/J1/数据集/第8天/bird_photos"
data_dir = pathlib.Path(data_dir)
data_paths = list(data_dir.glob('*'))
classeNames = [str(path).split("\\")[7] for path in data_paths]
print(classeNames)
['Bananaquit', 'Black Skimmer', 'Black Throated Bushtiti', 'Cockatoo']
3. Load the Data
train_transforms = transforms.Compose([
    transforms.Resize([224, 224]),
    transforms.ToTensor(),
    transforms.Normalize(            # ImageNet channel statistics
        mean=[0.485, 0.456, 0.406],
        std=[0.229, 0.224, 0.225]
    )
])

# identical to train_transforms here, since no augmentation is used
test_transforms = transforms.Compose([
    transforms.Resize([224, 224]),
    transforms.ToTensor(),
    transforms.Normalize(
        mean=[0.485, 0.456, 0.406],
        std=[0.229, 0.224, 0.225]
    )
])

total_data = datasets.ImageFolder(data_dir, transform=train_transforms)
print(total_data)
4. Split the Dataset
train_size = int(0.8 * len(total_data))
test_size = len(total_data) - train_size
train_dataset, test_dataset = torch.utils.data.random_split(total_data, [train_size, test_size])
print(train_dataset)
print(test_dataset)

batch_size = 8

train_dl = torch.utils.data.DataLoader(train_dataset,
                                       batch_size=batch_size,
                                       shuffle=True,
                                       # num_workers=1
                                       )
test_dl = torch.utils.data.DataLoader(test_dataset,
                                      batch_size=batch_size,
                                      shuffle=True,
                                      # num_workers=1
                                      )

for X, y in test_dl:
    print("Shape of X [N, C, H, W]: ", X.shape)
    print("Shape of y: ", y.shape, y.dtype)
    break
<torch.utils.data.dataset.Subset object at 0x000001974B7AD610>
<torch.utils.data.dataset.Subset object at 0x000001974B7AD640>
Shape of X [N, C, H, W]: torch.Size([8, 3, 224, 224])
Shape of y: torch.Size([8]) torch.int64
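One optional tweak, not in the original post: random_split draws a fresh permutation on every run, so the train/test partition (and hence the reported accuracy) varies between runs. Passing a seeded generator makes the split reproducible:

# optional: seed the split for reproducible results
generator = torch.Generator().manual_seed(42)
train_dataset, test_dataset = torch.utils.data.random_split(
    total_data, [train_size, test_size], generator=generator)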
5. Build the Model
import torch
import torch.nn as nn
import torch.nn.functional as F

class DenseLayer(nn.Sequential):
    """Basic unit of a DenseBlock (bottleneck: BN-ReLU-1x1 conv, then BN-ReLU-3x3 conv)."""
    def __init__(self, num_input_features, growth_rate, bn_size, drop_rate):
        super(DenseLayer, self).__init__()
        self.bn1 = nn.BatchNorm2d(num_input_features)
        self.relu1 = nn.ReLU()
        self.conv1 = nn.Conv2d(num_input_features, bn_size * growth_rate,
                               kernel_size=1, stride=1, bias=False)
        self.bn2 = nn.BatchNorm2d(bn_size * growth_rate)
        self.relu2 = nn.ReLU()
        self.conv2 = nn.Conv2d(bn_size * growth_rate, growth_rate,
                               kernel_size=3, stride=1, padding=1, bias=False)
        self.drop_rate = drop_rate

    def forward(self, x):
        output = self.bn1(x)
        output = self.relu1(output)
        output = self.conv1(output)
        output = self.bn2(output)
        output = self.relu2(output)
        output = self.conv2(output)
        if self.drop_rate > 0:
            # training=self.training ensures dropout is disabled in eval mode
            output = F.dropout(output, p=self.drop_rate, training=self.training)
        # dense connectivity: concatenate the input with the new feature maps
        return torch.cat([x, output], 1)
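As a quick check of the bottleneck bookkeeping (a standalone example, shapes chosen arbitrarily): with C input channels, conv1 widens to bn_size * growth_rate channels, conv2 narrows back to growth_rate, and the final concatenation returns C + growth_rate channels:

layer = DenseLayer(num_input_features=64, growth_rate=32, bn_size=4, drop_rate=0.0)
x = torch.randn(2, 64, 56, 56)
print(layer(x).shape)  # torch.Size([2, 96, 56, 56]) -- 64 input + 32 new channels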
class DenseBlock(nn.Sequential):
    """A stack of DenseLayers; layer i receives num_input_features + i * growth_rate channels."""
    def __init__(self, num_layers, num_input_features, bn_size, growth_rate, drop_rate):
        super(DenseBlock, self).__init__()
        for i in range(num_layers):
            layer = DenseLayer(num_input_features + i * growth_rate,
                               growth_rate, bn_size, drop_rate)
            self.add_module("denselayer%d" % (i + 1), layer)
class Transition(nn.Sequential):
    """Compression between DenseBlocks: BN-ReLU-1x1 conv followed by 2x2 average pooling."""
    def __init__(self, num_input_features, num_output_features):
        super(Transition, self).__init__()
        self.bn = nn.BatchNorm2d(num_input_features)
        self.relu = nn.ReLU()
        self.conv = nn.Conv2d(num_input_features, num_output_features,
                              kernel_size=1, stride=1, bias=False)
        self.pool = nn.AvgPool2d(2, stride=2)

    def forward(self, input):
        output = self.bn(input)
        output = self.relu(output)
        output = self.conv(output)
        output = self.pool(output)
        return output
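Note: the instantiation below references a DenseNet class that never appears in this post. The following is a minimal sketch that assembles the DenseLayer, DenseBlock, and Transition modules defined above into a full network; the constructor signature (growth_rate, block_config, num_init_features, bn_size, compression_rate, drop_rate, num_classes) is an assumption inferred from the call DenseNet(32, (2, 2, 4, 4), 64, 4, 0.5, 0, 10), not the author's original code:

from collections import OrderedDict

class DenseNet(nn.Module):
    def __init__(self, growth_rate, block_config, num_init_features,
                 bn_size, compression_rate, drop_rate, num_classes):
        super(DenseNet, self).__init__()
        # stem: 7x7 conv + 3x3 max pooling
        self.features = nn.Sequential(OrderedDict([
            ("conv0", nn.Conv2d(3, num_init_features, kernel_size=7,
                                stride=2, padding=3, bias=False)),
            ("norm0", nn.BatchNorm2d(num_init_features)),
            ("relu0", nn.ReLU()),
            ("pool0", nn.MaxPool2d(3, stride=2, padding=1)),
        ]))
        # alternate DenseBlocks and Transition layers
        num_features = num_init_features
        for i, num_layers in enumerate(block_config):
            block = DenseBlock(num_layers, num_features, bn_size, growth_rate, drop_rate)
            self.features.add_module("denseblock%d" % (i + 1), block)
            num_features += num_layers * growth_rate
            if i != len(block_config) - 1:
                transition = Transition(num_features, int(num_features * compression_rate))
                self.features.add_module("transition%d" % (i + 1), transition)
                num_features = int(num_features * compression_rate)
        # final BN + ReLU before the classifier
        self.features.add_module("norm5", nn.BatchNorm2d(num_features))
        self.features.add_module("relu5", nn.ReLU())
        self.classifier = nn.Linear(num_features, num_classes)

    def forward(self, x):
        features = self.features(x)
        # global average pooling over the final spatial map, then classify
        out = F.adaptive_avg_pool2d(features, (1, 1)).view(features.size(0), -1)
        return self.classifier(out)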
# growth_rate=32, block_config=(2, 2, 4, 4), num_init_features=64,
# bn_size=4, compression_rate=0.5, drop_rate=0, num_classes=10
model = DenseNet(32, (2, 2, 4, 4), 64, 4, 0.5, 0, 10)
model.to(device)
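As an optional sanity check, a single dummy forward pass confirms that the output dimension matches the 10 configured classes:

model.eval()  # eval mode: BatchNorm uses running stats, dropout is off
with torch.no_grad():
    dummy = torch.randn(1, 3, 224, 224).to(device)
    print(model(dummy).shape)  # expected: torch.Size([1, 10])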
6. Define the Training and Test Functions
def train(dataloader, model, optimizer, loss_fn):
    size = len(dataloader.dataset)
    num_batches = len(dataloader)
    train_acc, train_loss = 0, 0
    for X, y in dataloader:
        X, y = X.to(device), y.to(device)
        pred = model(X)
        loss = loss_fn(pred, y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        train_loss += loss.item()
        train_acc += (pred.argmax(1) == y).type(torch.float).sum().item()
    train_loss /= num_batches
    train_acc /= size
    return train_acc, train_loss
def test(dataloader, model, loss_fn):
    size = len(dataloader.dataset)  # size of the test set
    num_batches = len(dataloader)   # number of batches (ceil(size / batch_size))
    test_loss, test_acc = 0, 0
    # disable gradient tracking during evaluation to save memory and compute
    with torch.no_grad():
        for imgs, target in dataloader:
            imgs, target = imgs.to(device), target.to(device)
            # compute the loss
            target_pred = model(imgs)
            loss = loss_fn(target_pred, target)
            test_loss += loss.item()
            test_acc += (target_pred.argmax(1) == target).type(torch.float).sum().item()
    test_acc /= size
    test_loss /= num_batches
    return test_acc, test_loss
7. Define Hyperparameters
loss_fn = nn.CrossEntropyLoss()
learn_rate = 1e-2
opt = torch.optim.SGD(model.parameters(), lr=learn_rate)

import copy

epochs = 10
train_loss = []
train_acc = []
test_loss = []
test_acc = []
best_acc = 0
8. Train the Model
for epoch in range(epochs):
    model.train()
    epoch_train_acc, epoch_train_loss = train(train_dl, model, opt, loss_fn)

    model.eval()
    epoch_test_acc, epoch_test_loss = test(test_dl, model, loss_fn)

    if epoch_test_acc > best_acc:
        best_acc = epoch_test_acc
        best_model = copy.deepcopy(model)

    train_acc.append(epoch_train_acc)
    train_loss.append(epoch_train_loss)
    test_acc.append(epoch_test_acc)
    test_loss.append(epoch_test_loss)

    lr = opt.state_dict()['param_groups'][0]['lr']
    template = ('Epoch:{:2d}, Train_acc:{:.1f}%, Train_loss:{:.3f}, Test_acc:{:.1f}%, Test_loss:{:.3f}, Lr:{:.2E}')
    print(template.format(epoch+1, epoch_train_acc*100, epoch_train_loss,
                          epoch_test_acc*100, epoch_test_loss, lr))

print('Done')
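The loop keeps best_model only in memory; a one-line follow-up (the file name here is just an example, not from the original post) would persist its weights:

torch.save(best_model.state_dict(), 'best_densenet.pth')  # example path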
9. Visualization
import matplotlib.pyplot as plt
import warnings

warnings.filterwarnings("ignore")             # suppress warnings
plt.rcParams['font.sans-serif'] = ['SimHei']  # render Chinese labels correctly
plt.rcParams['axes.unicode_minus'] = False    # render minus signs correctly
plt.rcParams['figure.dpi'] = 100              # figure resolution
epochs_range = range(epochs)
plt.figure(figsize=(12, 3))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, train_acc[-10:], label='Training Accuracy')
plt.plot(epochs_range, test_acc[-10:], label='Test Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')
plt.subplot(1, 2, 2)
plt.plot(epochs_range, train_loss[-10:], label='Training Loss')
plt.plot(epochs_range, test_loss[-10:], label='Test Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()
II. Personal Summary
This week I studied the DenseNet architecture; compared with last week's ResNet50V2, its performance improved noticeably.
DenseNet (Densely Connected Convolutional Network) is a deep learning model whose defining feature is that every layer is directly connected to all preceding layers, forming a dense connectivity pattern. This design promotes feature reuse and encourages gradient flow, which helps mitigate the vanishing-gradient problem in deep networks. The key components of the DenseNet architecture are:
- Initial convolution layer: the network usually starts with a standard convolution layer that extracts low-level features from the input image, often followed by a pooling layer that reduces the spatial resolution.
- Dense Blocks: the main building blocks of DenseNet. Within a dense block, each new layer receives the concatenation of the feature maps of all preceding layers as its input, which guarantees efficient information flow and feature reuse. To keep model complexity in check, each layer adds only a small number of new feature maps, controlled by the growth rate (the number of feature maps each layer produces); the channel bookkeeping is illustrated in the sketch after this list.
- Bottleneck Layers: to reduce computational cost, practical DenseNets use a bottleneck design. As in the DenseLayer above, each bottleneck applies BN (Batch Normalization), ReLU, and a 1x1 convolution to shrink the number of feature maps, followed by BN, ReLU, and a 3x3 convolution to extract features. This keeps the model efficient while preserving representational richness.
- Transition Layers: placed between Dense Blocks to control model complexity. A transition layer typically uses a 1x1 convolution to compress the number of channels (by a compression factor θ), optionally followed by average pooling to further reduce the spatial size, lowering both the computational burden and the risk of overfitting.
- Classification layer: the tail of the network usually applies global average pooling to collapse the spatial dimensions of each feature map into a single value, followed by one or more fully connected layers for the final classification or regression task.
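To make the growth-rate and compression arithmetic concrete, here is a small standalone trace of the channel counts through the exact configuration used above (k = 32, blocks (2, 2, 4, 4), 64 stem channels, θ = 0.5):

# trace channel counts through DenseBlocks and Transition layers
growth_rate, block_config, compression = 32, (2, 2, 4, 4), 0.5
channels = 64  # channels after the stem convolution
for i, num_layers in enumerate(block_config):
    channels += num_layers * growth_rate        # each layer adds k feature maps
    print(f"after denseblock{i+1}: {channels} channels")
    if i != len(block_config) - 1:
        channels = int(channels * compression)  # transition halves the channels
        print(f"after transition{i+1}: {channels} channels")
# final count: 224 channels feed the global pooling and the classifier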
Through its distinctive dense connectivity, DenseNet not only strengthens feature propagation but also enables multi-scale feature fusion, improving both model performance and training efficiency.