Week P6: Face Recognition with VGG-16 in PyTorch

I. Preparation

1. Set up the GPU

import torch
import torch.nn as nn
import torchvision
from torchvision import transforms, datasets
import os, PIL, pathlib, warnings

warnings.filterwarnings("ignore")         ## ignore warning messages

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
device
device(type='cpu')

2. Load the data

import os, PIL, random, pathlib

data_dir = './6-data/'
data_dir = pathlib.Path(data_dir)

data_paths = list(data_dir.glob('*'))
classeNames = [str(path).split("/")[1] for path in data_paths]    ## folder names serve as class names (assumes "/" as the path separator)
classeNames
['Robert Downey Jr',
 'Brad Pitt',
 'Leonardo DiCaprio',
 'Jennifer Lawrence',
 'Tom Cruise',
 'Hugh Jackman',
 'Angelina Jolie',
 'Johnny Depp',
 'Tom Hanks',
 'Denzel Washington',
 'Kate Winslet',
 'Scarlett Johansson',
 'Will Smith',
 'Natalie Portman',
 'Nicole Kidman',
 'Sandra Bullock',
 'Megan Fox']

train_transforms = transforms.Compose([
    transforms.Resize([224, 224]),        ## resize images to a uniform size
    ## transforms.RandomHorizontalFlip(),    ## random horizontal flip
    transforms.ToTensor(),                   ## convert a PIL Image or numpy.ndarray to a tensor and scale values to [0, 1]
    transforms.Normalize(                    ## normalize each channel, which helps the model converge
        mean = [0.485, 0.456, 0.406],
        std = [0.229, 0.224, 0.225])         ## these mean/std values are the standard ImageNet statistics expected by torchvision's pretrained models
])
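
For reference, if you wanted dataset-specific normalization statistics instead of the ImageNet ones above, a rough way to estimate them (my own sketch, not part of the original pipeline) is:

## Sketch: estimate per-channel mean/std of the dataset (illustrative only)
stat_data = datasets.ImageFolder("./6-data/", transform=transforms.Compose([
    transforms.Resize([224, 224]),
    transforms.ToTensor()]))
loader = torch.utils.data.DataLoader(stat_data, batch_size=64)

n = 0
total = torch.zeros(3)
total_sq = torch.zeros(3)
for X, _ in loader:                        ## X has shape [N, 3, H, W]
    n += X.numel() / 3                     ## pixel count per channel
    total += X.sum(dim=[0, 2, 3])
    total_sq += (X ** 2).sum(dim=[0, 2, 3])

mean = total / n
std = (total_sq / n - mean ** 2).sqrt()    ## std = sqrt(E[x^2] - E[x]^2)
print(mean, std)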

total_data = datasets.ImageFolder("./6-data/", transform = train_transforms)
total_data
Dataset ImageFolder
    Number of datapoints: 1800
    Root location: ./6-data/
    StandardTransform
Transform: Compose(
               Resize(size=[224, 224], interpolation=bilinear, max_size=None, antialias=True)
               ToTensor()
               Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
           )
total_data.class_to_idx
{'Angelina Jolie': 0,
 'Brad Pitt': 1,
 'Denzel Washington': 2,
 'Hugh Jackman': 3,
 'Jennifer Lawrence': 4,
 'Johnny Depp': 5,
 'Kate Winslet': 6,
 'Leonardo DiCaprio': 7,
 'Megan Fox': 8,
 'Natalie Portman': 9,
 'Nicole Kidman': 10,
 'Robert Downey Jr': 11,
 'Sandra Bullock': 12,
 'Scarlett Johansson': 13,
 'Tom Cruise': 14,
 'Tom Hanks': 15,
 'Will Smith': 16}

3. Split the dataset

train_size = int(0.8 * len(total_data))
test_size = len(total_data) - train_size
train_dataset, test_dataset = torch.utils.data.random_split(total_data, [train_size, test_size])
train_dataset, test_dataset
(<torch.utils.data.dataset.Subset at 0x11026bc10>,
 <torch.utils.data.dataset.Subset at 0x110269660>)
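
Note that random_split draws a fresh random split on every run; to make the split reproducible you can pass a seeded generator (an optional tweak, not in the original):

## Optional: seeded generator for a reproducible train/test split
g = torch.Generator().manual_seed(42)
train_dataset, test_dataset = torch.utils.data.random_split(
    total_data, [train_size, test_size], generator=g)
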
batch_size = 32

train_dl = torch.utils.data.DataLoader(train_dataset,
                                       batch_size = batch_size,
                                       shuffle = True,
                                       num_workers = 1)

test_dl = torch.utils.data.DataLoader(test_dataset,
                                      batch_size = batch_size,
                                      shuffle = True,
                                      num_workers = 1)
for X, y in test_dl:
    print("Shape of X [N, C, H, W]:", X.shape)
    print("Shape of y;", y.shape, y.dtype)
    break
Shape of X [N, C, H, W]: torch.Size([32, 3, 224, 224])
Shape of y: torch.Size([32]) torch.int64

II. Using the Official VGG-16 Model

VGG-16 (Visual Geometry Group-16) is a deep convolutional neural network architecture proposed by the Visual Geometry Group at the University of Oxford for image classification and object recognition. Introduced in 2014 as one member of the VGG family, VGG-16 drew wide attention for its strong results in the ImageNet image recognition competition, which demonstrated its effectiveness on large-scale image recognition tasks.

Key characteristics of VGG-16:

  1. Depth: VGG-16 has 16 weight layers (13 convolutional layers and 3 fully connected layers), giving it a relatively deep structure. This depth helps the network learn more abstract and complex features.
  2. Convolutional layer design: every convolutional layer in VGG-16 uses a 3x3 kernel with stride 1, each followed by a ReLU activation. Stacking several small kernels increases the network's nonlinear modeling capacity while using fewer parameters than one large kernel, lowering the risk of overfitting (see the sketch after this list).
  3. Pooling layers: after groups of convolutional layers, VGG-16 applies max pooling to shrink the spatial size of the feature maps, which highlights salient features and reduces computation.
  4. Fully connected layers: the convolutional stages are followed by 3 fully connected layers; the last one outputs a vector whose length equals the number of classes, used for classification.
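
To make point 2 concrete, here is a quick parameter count (my own illustration): two stacked 3x3 convolutions cover the same 5x5 receptive field as a single 5x5 convolution, but with fewer weights.

## Weights (ignoring biases) for C input and C output channels, C = 64:
C = 64
two_3x3 = 2 * (3 * 3 * C * C)   ## 73,728 weights, plus an extra ReLU in between
one_5x5 = 5 * 5 * C * C         ## 102,400 weights
print(two_3x3, one_5x5)         ## stacking saves about 28% of the weights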

VGG-16 structure:

● 13 convolutional layers (Convolutional Layer), denoted blockX_convX;
● 3 fully connected layers (Fully Connected Layer), denoted classifier;
● 5 pooling layers (Pool Layer).

VGG-16 contains 16 weight layers (the 13 convolutional plus 3 fully connected layers), hence the name VGG-16.

from torchvision.models import vgg16

device = "cuda" if torch.cuda.is_available() else "cpu"
print("Using {} device".format(device))

## Load the pretrained model, then fine-tune it
model = vgg16(pretrained = True).to(device)    ## load the pretrained vgg16 model (newer torchvision versions prefer vgg16(weights='DEFAULT'))

for param in model.parameters():
    param.requires_grad = False                ## freeze all parameters so that only the replaced final layer is trained
    
## Replace layer 6 of the classifier module, i.e. (6): Linear(in_features=4096, out_features=num_classes)
model.classifier._modules['6'] = nn.Linear(4096, len(classeNames))  ## change the last fully connected layer to output the target number of classes
model.to(device)
model
Using cpu device

VGG(
  (features): Sequential(
    (0): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (1): ReLU(inplace=True)
    (2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (3): ReLU(inplace=True)
    (4): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (5): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (6): ReLU(inplace=True)
    (7): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (8): ReLU(inplace=True)
    (9): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (10): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (11): ReLU(inplace=True)
    (12): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (13): ReLU(inplace=True)
    (14): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (15): ReLU(inplace=True)
    (16): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (17): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (18): ReLU(inplace=True)
    (19): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (20): ReLU(inplace=True)
    (21): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (22): ReLU(inplace=True)
    (23): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (24): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (25): ReLU(inplace=True)
    (26): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (27): ReLU(inplace=True)
    (28): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (29): ReLU(inplace=True)
    (30): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
  )
  (avgpool): AdaptiveAvgPool2d(output_size=(7, 7))
  (classifier): Sequential(
    (0): Linear(in_features=25088, out_features=4096, bias=True)
    (1): ReLU(inplace=True)
    (2): Dropout(p=0.5, inplace=False)
    (3): Linear(in_features=4096, out_features=4096, bias=True)
    (4): ReLU(inplace=True)
    (5): Dropout(p=0.5, inplace=False)
    (6): Linear(in_features=4096, out_features=17, bias=True)
  )
)
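
A quick way to confirm that only the replaced layer will be trained is to count trainable versus total parameters (a small check I added; the exact numbers depend on the model):

## Count trainable vs. total parameters
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable: {trainable:,} / total: {total:,}")   ## only the new 4096 x 17 layer (plus bias) should be trainable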

III. Training the Model

1. The training function

## Training loop
def train(dataloader, model, loss_fn, optimizer):
    size = len(dataloader.dataset)              ## size of the training set
    num_batches = len(dataloader)               ## number of batches (size / batch_size, rounded up)
    
    train_loss, train_acc = 0, 0    ## initialize training loss and accuracy
    
    for X, y in dataloader:  ## fetch images and their labels
        X, y = X.to(device), y.to(device)
        
        ## Compute the prediction error
        pred = model(X)           ## forward pass
        loss = loss_fn(pred, y)   ## compute the loss
        
        ## Backpropagation
        optimizer.zero_grad()     ## reset gradients to zero
        loss.backward()           ## backpropagate
        optimizer.step()          ## update the parameters
        
        ## Accumulate accuracy and loss
        train_acc += (pred.argmax(1) == y).type(torch.float).sum().item()
        train_loss += loss.item()
        
    train_acc /= size
    train_loss /= num_batches
    
    return train_acc, train_loss

2. The test function

The test function is largely the same as the training function, but since no gradient updates are applied to the network weights, no optimizer needs to be passed in.

def test(dataloader, model, loss_fn):
    size = len(dataloader.dataset)   ## size of the test set
    num_batches = len(dataloader)    ## number of batches
    test_acc, test_loss = 0, 0
    
    ## When not training, disable gradient tracking to save memory and computation
    with torch.no_grad():
        for imgs, target in dataloader:
            imgs, target = imgs.to(device), target.to(device)
        
            ## Compute the loss
            target_pred = model(imgs)
            loss = loss_fn(target_pred, target)
        
            test_loss += loss.item()
            test_acc += (target_pred.argmax(1) == target).type(torch.float).sum().item()
        
    test_acc /= size
    test_loss /= num_batches
    
    return test_acc, test_loss

3. Dynamic learning rate

learn_rate = 1e-4 ## initial learning rate
lambda1 = lambda epoch: 0.92 ** (epoch // 4)
optimizer = torch.optim.SGD(model.parameters(), lr=learn_rate)
scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=lambda1) ## choose the scheduling method
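
LambdaLR multiplies the base learning rate by lambda1(epoch), so here the rate drops by a factor of 0.92 every 4 epochs. A quick preview (my own sanity check):

## Preview the scheduled learning rates for the first few epochs
for epoch in range(9):
    print(epoch, learn_rate * 0.92 ** (epoch // 4))
## epochs 0-3: 1.0e-04, epochs 4-7: 9.2e-05, epoch 8: ~8.46e-05 (matching the training log below)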

4. Training

import copy

loss_fn = nn.CrossEntropyLoss()  ## create the loss function
epochs = 40

train_loss = []
train_acc = []
test_loss = []
test_acc = []

best_acc = 0    ## track the best test accuracy seen so far, used to select the best model

for epoch in range(epochs):
    ## Update the learning rate (when using a hand-written schedule)
    ## adjust_learning_rate(optimizer, epoch, learn_rate)
    
    model.train()
    epoch_train_acc, epoch_train_loss = train(train_dl, model, loss_fn, optimizer)
    scheduler.step()      ## update the learning rate (when using the official scheduler API)
    
    model.eval()
    epoch_test_acc, epoch_test_loss = test(test_dl, model, loss_fn)
    
    ## Keep a copy of the best model in best_model
    if epoch_test_acc > best_acc:
        best_acc = epoch_test_acc
        best_model = copy.deepcopy(model)
        
    train_acc.append(epoch_train_acc)
    train_loss.append(epoch_train_loss)
    test_acc.append(epoch_test_acc)
    test_loss.append(epoch_test_loss)
    
    ## Get the current learning rate
    lr = optimizer.state_dict()['param_groups'][0]['lr']
    
    template = ('Epoch:{:2d}, Train_acc:{:.1f}%, Train_loss:{:.3f}, Test_acc:{:.1f}%, Test_loss:{:.3f}, Lr:{:.2E}')
    print(template.format(epoch + 1, epoch_train_acc * 100, epoch_train_loss, epoch_test_acc * 100, epoch_test_loss, lr))

## Save the best model to a file
PATH = './best_model.pth'     ## file name for the saved weights
torch.save(best_model.state_dict(), PATH)   ## save the weights of the best model rather than the final-epoch model

print('Done')
Epoch: 1, Train_acc:6.9%, Train_loss:2.886, Test_acc:4.7%, Test_loss:2.872, Lr:1.00E-04
Epoch: 2, Train_acc:7.8%, Train_loss:2.863, Test_acc:8.6%, Test_loss:2.835, Lr:1.00E-04
Epoch: 3, Train_acc:8.1%, Train_loss:2.831, Test_acc:8.9%, Test_loss:2.822, Lr:1.00E-04
Epoch: 4, Train_acc:8.8%, Train_loss:2.822, Test_acc:10.6%, Test_loss:2.795, Lr:9.20E-05
Epoch: 5, Train_acc:11.8%, Train_loss:2.783, Test_acc:12.2%, Test_loss:2.762, Lr:9.20E-05
Epoch: 6, Train_acc:13.3%, Train_loss:2.730, Test_acc:12.2%, Test_loss:2.751, Lr:9.20E-05
Epoch: 7, Train_acc:12.5%, Train_loss:2.736, Test_acc:12.2%, Test_loss:2.738, Lr:9.20E-05
Epoch: 8, Train_acc:13.4%, Train_loss:2.718, Test_acc:12.5%, Test_loss:2.720, Lr:8.46E-05
Epoch: 9, Train_acc:14.7%, Train_loss:2.689, Test_acc:13.9%, Test_loss:2.705, Lr:8.46E-05
Epoch:10, Train_acc:15.8%, Train_loss:2.669, Test_acc:14.2%, Test_loss:2.698, Lr:8.46E-05
Epoch:11, Train_acc:15.8%, Train_loss:2.659, Test_acc:14.7%, Test_loss:2.664, Lr:8.46E-05
Epoch:12, Train_acc:16.8%, Train_loss:2.632, Test_acc:14.7%, Test_loss:2.658, Lr:7.79E-05
Epoch:13, Train_acc:14.7%, Train_loss:2.631, Test_acc:14.7%, Test_loss:2.658, Lr:7.79E-05
Epoch:14, Train_acc:17.3%, Train_loss:2.608, Test_acc:15.0%, Test_loss:2.646, Lr:7.79E-05
Epoch:15, Train_acc:15.6%, Train_loss:2.610, Test_acc:15.0%, Test_loss:2.637, Lr:7.79E-05
Epoch:16, Train_acc:17.4%, Train_loss:2.595, Test_acc:15.0%, Test_loss:2.628, Lr:7.16E-05
Epoch:17, Train_acc:17.5%, Train_loss:2.583, Test_acc:15.0%, Test_loss:2.613, Lr:7.16E-05
Epoch:18, Train_acc:17.7%, Train_loss:2.582, Test_acc:15.3%, Test_loss:2.618, Lr:7.16E-05
Epoch:19, Train_acc:16.8%, Train_loss:2.563, Test_acc:15.6%, Test_loss:2.590, Lr:7.16E-05
Epoch:20, Train_acc:18.6%, Train_loss:2.562, Test_acc:15.6%, Test_loss:2.601, Lr:6.59E-05
Epoch:21, Train_acc:17.4%, Train_loss:2.533, Test_acc:15.6%, Test_loss:2.586, Lr:6.59E-05
Epoch:22, Train_acc:17.7%, Train_loss:2.537, Test_acc:15.3%, Test_loss:2.578, Lr:6.59E-05
Epoch:23, Train_acc:19.4%, Train_loss:2.518, Test_acc:15.3%, Test_loss:2.583, Lr:6.59E-05
Epoch:24, Train_acc:18.5%, Train_loss:2.512, Test_acc:15.3%, Test_loss:2.563, Lr:6.06E-05
Epoch:25, Train_acc:19.2%, Train_loss:2.514, Test_acc:15.3%, Test_loss:2.554, Lr:6.06E-05
Epoch:26, Train_acc:17.4%, Train_loss:2.507, Test_acc:15.8%, Test_loss:2.564, Lr:6.06E-05
Epoch:27, Train_acc:18.5%, Train_loss:2.507, Test_acc:16.1%, Test_loss:2.533, Lr:6.06E-05
Epoch:28, Train_acc:18.1%, Train_loss:2.499, Test_acc:16.1%, Test_loss:2.534, Lr:5.58E-05
Epoch:29, Train_acc:19.4%, Train_loss:2.474, Test_acc:16.4%, Test_loss:2.524, Lr:5.58E-05
Epoch:30, Train_acc:20.2%, Train_loss:2.466, Test_acc:16.7%, Test_loss:2.533, Lr:5.58E-05
Epoch:31, Train_acc:20.9%, Train_loss:2.459, Test_acc:16.7%, Test_loss:2.513, Lr:5.58E-05
Epoch:32, Train_acc:19.3%, Train_loss:2.464, Test_acc:16.7%, Test_loss:2.530, Lr:5.13E-05
Epoch:33, Train_acc:19.9%, Train_loss:2.457, Test_acc:16.7%, Test_loss:2.493, Lr:5.13E-05
Epoch:34, Train_acc:20.0%, Train_loss:2.458, Test_acc:16.7%, Test_loss:2.509, Lr:5.13E-05
Epoch:35, Train_acc:20.6%, Train_loss:2.439, Test_acc:16.7%, Test_loss:2.497, Lr:5.13E-05
Epoch:36, Train_acc:20.6%, Train_loss:2.458, Test_acc:16.7%, Test_loss:2.505, Lr:4.72E-05
Epoch:37, Train_acc:21.8%, Train_loss:2.428, Test_acc:16.7%, Test_loss:2.482, Lr:4.72E-05
Epoch:38, Train_acc:21.2%, Train_loss:2.419, Test_acc:16.7%, Test_loss:2.478, Lr:4.72E-05
Epoch:39, Train_acc:20.3%, Train_loss:2.434, Test_acc:16.7%, Test_loss:2.476, Lr:4.72E-05
Epoch:40, Train_acc:22.2%, Train_loss:2.424, Test_acc:16.7%, Test_loss:2.491, Lr:4.34E-05
Done
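
To reuse the saved weights later, they can be loaded back into a model built with the same architecture (the standard PyTorch pattern, sketched under the assumption that model is defined as above):

## Restore the saved weights before inference (sketch)
model.load_state_dict(torch.load(PATH, map_location=device))
model.eval()   ## switch to evaluation mode before inference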

IV. Visualizing the Results

1. Loss and Accuracy curves

import matplotlib.pyplot as plt
## Suppress warnings
import warnings
warnings.filterwarnings("ignore")                ## ignore warning messages
plt.rcParams['font.sans-serif'] = ['SimHei']     ## render CJK labels correctly
plt.rcParams['axes.unicode_minus'] = False       ## render minus signs correctly
plt.rcParams['figure.dpi'] = 100                 ## figure resolution

epochs_range = range(epochs)

plt.figure(figsize=(12, 3))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, train_acc, label='Training Accuracy')
plt.plot(epochs_range, test_acc, label='Test Accuracy')
plt.legend(loc = 'lower right')
plt.title('Training and Validation Accuracy')

plt.subplot(1, 2, 2)
plt.plot(epochs_range, train_loss, label='Training Loss')
plt.plot(epochs_range, test_loss, label='Test Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')

plt.show()

[Figure: training and validation accuracy and loss curves]

2. Predicting a specified image

from PIL import Image

classes = list(total_data.class_to_idx)

def predict_one_image(image_path, model, transform, classes):
    
    test_img = Image.open(image_path).convert('RGB')
    plt.imshow(test_img)          ## show the image being predicted
    
    test_img = transform(test_img)
    img = test_img.to(device).unsqueeze(0)
    
    model.eval()
    output = model(img)
    
    _, pred = torch.max(output, 1)
    pred_class = classes[pred]
    print(f'Predicted class: {pred_class}')

## Predict one image from the training set
predict_one_image(image_path = './6-data/Angelina Jolie/001_fe3347c0.jpg',
                  model = model,
                  transform = train_transforms,
                  classes = classes)
Predicted class: Scarlett Johansson

[Figure: the image used for prediction]

3. Model evaluation

best_model.eval()
epoch_test_acc, epoch_test_loss = test(test_dl, best_model, loss_fn)
epoch_test_acc, epoch_test_loss
(0.16666666666666666, 2.541086415449778)
## Check that this matches the best accuracy we recorded
epoch_test_acc
0.16666666666666666

V. Improvements

1. Improvement 1

1. Unfreeze more layers of the VGG16 model: here the last 7 layers of the VGG16 features block are unfrozen, and Dropout layers are included in the classifier to guard against overfitting.

from torchvision.models import vgg16

device = "cuda" if torch.cuda.is_available() else "cpu"
print("Using {} device".format(device))

## Load the pretrained model, then fine-tune it
model = vgg16(pretrained = True).to(device)    ## load the pretrained vgg16 model

for param in model.parameters():
    param.requires_grad = False                ## freeze all parameters first; selected layers are unfrozen below
    
## Rebuild the classifier module, sizing the final layer to the number of classes
model.classifier = nn.Sequential(
    nn.Linear(7*7*512, 4096),
    nn.ReLU(True),
    nn.Dropout(0.5),                    ### Dropout layer in the classifier
    nn.Linear(4096, 4096),
    nn.ReLU(True),
    nn.Dropout(0.5),                    ### Dropout layer in the classifier
    nn.Linear(4096, len(classeNames))   ### output layer sized to the target number of classes
    )
## model.classifier._modules['6'] = nn.Linear(4096, len(classeNames))  ## alternative: replace only the last fully connected layer

### Unfreeze the last 7 layers of the VGG16 features block
for param in model.features[-7:].parameters():
    param.requires_grad = True

### Unfreeze all layers of the classifier
for param in model.classifier.parameters():
    param.requires_grad = True

model.to(device)
model

2. Check the unfrozen state:

### Check which parameters are trainable
for name, param in model.named_parameters():
    print(f"Layer:{name}, Requires Grad: {param.requires_grad}")

Output:
Layer:features.0.weight, Requires Grad: False
Layer:features.0.bias, Requires Grad: False
Layer:features.2.weight, Requires Grad: False
Layer:features.2.bias, Requires Grad: False
Layer:features.5.weight, Requires Grad: False
Layer:features.5.bias, Requires Grad: False
Layer:features.7.weight, Requires Grad: False
Layer:features.7.bias, Requires Grad: False
Layer:features.10.weight, Requires Grad: False
Layer:features.10.bias, Requires Grad: False
Layer:features.12.weight, Requires Grad: False
Layer:features.12.bias, Requires Grad: False
Layer:features.14.weight, Requires Grad: False
Layer:features.14.bias, Requires Grad: False
Layer:features.17.weight, Requires Grad: False
Layer:features.17.bias, Requires Grad: False
Layer:features.19.weight, Requires Grad: False
Layer:features.19.bias, Requires Grad: False
Layer:features.21.weight, Requires Grad: False
Layer:features.21.bias, Requires Grad: False
Layer:features.24.weight, Requires Grad: True
Layer:features.24.bias, Requires Grad: True
Layer:features.26.weight, Requires Grad: True
Layer:features.26.bias, Requires Grad: True
Layer:features.28.weight, Requires Grad: True
Layer:features.28.bias, Requires Grad: True
Layer:classifier.0.weight, Requires Grad: True
Layer:classifier.0.bias, Requires Grad: True
Layer:classifier.3.weight, Requires Grad: True
Layer:classifier.3.bias, Requires Grad: True
Layer:classifier.6.weight, Requires Grad: True
Layer:classifier.6.bias, Requires Grad: True

3. Switch the optimizer to Adam

learn_rate = 1e-4 ## initial learning rate
lambda1 = lambda epoch: 0.92 ** (epoch // 4)
optimizer = torch.optim.Adam(model.parameters(), lr=learn_rate, weight_decay=1e-4)    ### switch to the Adam optimizer (with L2 weight decay)
scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=lambda1) ## choose the scheduling method
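
One optional refinement (my addition, not from the original post): pass only the unfrozen parameters to Adam, so that frozen weights carry no optimizer state; training behaves the same since frozen parameters receive no gradients anyway.

### Sketch: optimize only the parameters that require gradients
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad),
    lr=learn_rate, weight_decay=1e-4)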

4. Output
[Figure: training curves for Improvement 1]
Here Test_acc reaches 70%, but there is still a large gap between Train_acc and Test_acc, so the model is overfitting.

2. Improvement 2

1. Building on Improvement 1, add data augmentation to see whether it resolves the overfitting (see also the caveat sketched after the code below).

train_transforms = transforms.Compose([
    transforms.Resize([224, 224]),        ## resize images to a uniform size
    transforms.RandomHorizontalFlip(),    ### random horizontal flip
    transforms.RandomRotation(10),        ### random rotation
    transforms.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.2, hue=0.2), ### random color jitter
    transforms.ToTensor(),                   ## convert a PIL Image or numpy.ndarray to a tensor and scale values to [0, 1]
    transforms.Normalize(                    ## normalize each channel, which helps the model converge
        mean = [0.485, 0.456, 0.406],
        std = [0.229, 0.224, 0.225])         ## standard ImageNet statistics, as above
])

total_data = datasets.ImageFolder("./6-data/", transform = train_transforms)
total_data
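
One caveat worth checking (my observation, not from the original post): because total_data is rebuilt with the augmented transform and then split, the random augmentations are applied to the test subset as well, which can distort the test metrics. A common fix is to build two ImageFolder datasets, one augmented and one plain, and share the split indices:

## Sketch: apply augmentation to the training subset only
eval_transforms = transforms.Compose([
    transforms.Resize([224, 224]),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225])])

train_base = datasets.ImageFolder("./6-data/", transform=train_transforms)
test_base = datasets.ImageFolder("./6-data/", transform=eval_transforms)

indices = torch.randperm(len(train_base)).tolist()   ## one shared shuffle
split = int(0.8 * len(train_base))
train_dataset = torch.utils.data.Subset(train_base, indices[:split])
test_dataset = torch.utils.data.Subset(test_base, indices[split:])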

2. Output
[Figure: training curves for Improvement 2]
The overfitting was not resolved.
3. The single-image prediction now succeeds
[Figure: successful prediction result]

VI. Summary

1. This week I studied the VGG16 model and learned how to call the official VGG16 implementation.
2. I learned about transfer learning and how to train part of a model that someone else has pretrained.
3. I learned how to save the best model and how to evaluate a model.
4. After switching to the Adam optimizer, this week's project kept overfitting. Adding dropout layers to the VGG16 classifier, applying data augmentation, adjusting the dynamic learning rate, and adding L2 weight decay (L2 regularization) to the optimizer all failed to resolve it; further investigation is needed.
