PyTorch Learning Notes (10) ---- Transfer Learning

Please credit the author and source when reprinting: http://blog.csdn.net/john_bh/


In practice, hardly anyone trains an entire convolutional network from scratch (with random initialization), because it is rare to have a dataset that is large enough for a deep network (a deep network needs a correspondingly large dataset). The usual approach is to pretrain a ConvNet on a very large dataset and then either use its parameters to initialize the network for the target task or keep those parameters fixed.

The two main transfer learning scenarios are:

  • Fine-tuning the ConvNet: instead of random initialization, initialize the network with a pretrained one (e.g. a network trained on ImageNet 1000); the rest of the training proceeds as usual.
  • ConvNet as a fixed feature extractor: freeze the weights of every layer except the final fully connected one. That final layer is replaced with a new, randomly initialized layer, and only this new layer is trained (only its parameters are updated during backpropagation). A minimal sketch contrasting the two scenarios follows below.
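
In PyTorch the difference between the two scenarios comes down to which parameters keep requires_grad=True and which parameters are handed to the optimizer. A minimal sketch of the contrast (using resnet18, the model used throughout this post; the rest of the training loop is identical in both cases):

import torch.nn as nn
import torch.optim as optim
from torchvision import models

# Scenario 1: fine-tuning -- every layer stays trainable,
# only the final fully connected layer is replaced for the 2-class problem.
model_ft = models.resnet18(pretrained=True)
model_ft.fc = nn.Linear(model_ft.fc.in_features, 2)
optimizer_ft = optim.SGD(model_ft.parameters(), lr=0.001, momentum=0.9)

# Scenario 2: fixed feature extractor -- freeze everything first;
# the newly created fc layer is then the only part with requires_grad=True,
# and its parameters are the only ones given to the optimizer.
model_conv = models.resnet18(pretrained=True)
for param in model_conv.parameters():
    param.requires_grad = False
model_conv.fc = nn.Linear(model_conv.fc.in_features, 2)
optimizer_conv = optim.SGD(model_conv.fc.parameters(), lr=0.001, momentum=0.9)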

The rest of this post walks through the transfer learning workflow in PyTorch, step by step; the problem to solve is training a model that classifies ants and bees.

1. Import the required packages

# -*- coding:utf-8 -*-

import torch
import torch.nn as nn
import torch.optim as optim
from torch.optim import lr_scheduler
import numpy as np
import torchvision
from torchvision import datasets,models,transforms
import matplotlib.pyplot as plt
import time,os,copy

plt.ion()   # interactive mode

2. Load the data

The task is to train a model that classifies ants and bees. There are roughly 120 training images each for ants and bees, and 75 validation images per class. Training from scratch on such a small dataset usually generalizes poorly, but because we use transfer learning the model should generalize quite well. The dataset is a very small subset of ImageNet. Download the dataset and extract it into the current directory.
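
datasets.ImageFolder (used below) infers the class labels from sub-directory names, so the extracted archive is expected to contain a train and a val folder, each with an ants and a bees sub-folder. A quick sanity check of the layout (a small sketch, assuming the data was extracted to data/hymenoptera_data):

import os

data_dir = 'data/hymenoptera_data'   # expected layout: <split>/<class>/<image>.jpg
for split in ['train', 'val']:
    for cls in sorted(os.listdir(os.path.join(data_dir, split))):
        n_images = len(os.listdir(os.path.join(data_dir, split, cls)))
        print(split, cls, n_images)  # prints the number of images per class in each split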

# 2. Load the dataset
# Data augmentation and normalization for training
# Just normalization for validation
data_transforms={
    'train':transforms.Compose([
        transforms.RandomResizedCrop(224),
        transforms.RandomHorizontalFlip(),
        transforms.ToTensor(),
        transforms.Normalize([0.485,0.456,0.406],[0.229,0.224,0.225])
    ]), 
    'val':transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize([0.485,0.456,0.406],[0.229,0.224,0.225])
    ]),
}

data_dir='data/hymenoptera_data'
image_datasets={x:datasets.ImageFolder(os.path.join(data_dir,x),data_transforms[x]) for x in ['train','val']}
dataloaders={x:torch.utils.data.DataLoader(image_datasets[x],batch_size=4,shuffle=True,num_workers=4) for x in ['train','val']}
dataset_sizes={x:len(image_datasets[x]) for x in ['train','val']}
class_names=image_datasets['train'].classes

device=torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
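
Before moving on, it is worth printing what was just built, to confirm that the two classes were picked up and the splits have the expected sizes:

print(class_names)     # ['ants', 'bees'], taken from the sub-directory names
print(dataset_sizes)   # number of images in the 'train' and 'val' splits
print(device)          # cuda:0 if a GPU is available, otherwise cpu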

3. Visualize a few images

Visualize a few training images to get a feel for the data augmentation.

def imshow(inp,title=None):
    """Imshow for Tensor"""
    inp = inp.numpy().transpose((1,2,0))
    mean = np.array([0.485,0.456,0.406])
    std = np.array([0.229,0.224,0.225])
    inp = std * inp + mean
    inp = np.clip(inp,0,1) # clamp values to [0,1]; np.clip(a, a_min, a_max, out=None) returns a new array and leaves a unchanged unless out=a is passed
    plt.imshow(inp)
    
    if title is not None:
        plt.title(title)
        
    plt.pause(0.001)# pause a bit so that plots are updated
    
inputs,classes=next(iter(dataloaders['train']))
# torchvision.utils.make_grid(tensor, nrow=8, padding=2, normalize=False, range=None, scale_each=False) arranges a batch of images into a single grid image
# nrow is the number of images per row; padding is the spacing between neighbouring images.
# normalize=True rescales the pixel values of the grid
# if range=(min, max) is given, those values are used for the normalization
# scale_each=True normalizes each image individually instead of using the global min/max
out=torchvision.utils.make_grid(inputs) 

imshow(out,title=[class_names[x] for x in classes])

Output:
[Figure: a grid of four augmented training images, titled with their class names]

4. Train the model

Now write a generic function to train a model. The scheduler argument below is a learning-rate scheduler object from torch.optim.lr_scheduler, which adjusts the learning rate during training.
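
For example, the StepLR scheduler used later in this post multiplies the learning rate by gamma every step_size epochs. Its effect can be checked in isolation (a small illustration with the settings used below, step_size=7 and gamma=0.1; the single dummy parameter exists only so that an optimizer can be constructed, and the scheduler is stepped after the optimizer, which is the ordering newer PyTorch versions expect):

import torch
import torch.optim as optim
from torch.optim import lr_scheduler

w = torch.zeros(1, requires_grad=True)                 # dummy parameter, just to build an optimizer
optimizer = optim.SGD([w], lr=0.001, momentum=0.9)
scheduler = lr_scheduler.StepLR(optimizer, step_size=7, gamma=0.1)

for epoch in range(25):
    optimizer.step()                                   # the real training steps would happen here
    print(epoch, optimizer.param_groups[0]['lr'])      # 0.001 for epochs 0-6, 1e-4 for 7-13, 1e-5 for 14-20, ...
    scheduler.step()                                   # decay the learning rate once per epoch

The training function itself: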

# 4. Train the model
def train_model(model,criterion,optimizer,scheduler,num_epochs=25):
    since=time.time()
    
    best_model_wts=copy.deepcopy(model.state_dict())
    best_acc=0.0
    
    for epoch in range(num_epochs):
        print("Epoch {}/{}".format(epoch,num_epochs-1))
        print('-'*10)
        
        # Each epoch has a training and a validation phase
        for phase in ['train','val']:
            if phase == 'train':
                scheduler.step()
                model.train() # Set model to training mode
            else:
                model.eval() # Set model to evaluate mode
            
            running_loss=0.0
            running_corrects=0
            
            # Iterate over the data
            for inputs,labels in dataloaders[phase]:
                inputs=inputs.to(device)
                labels=labels.to(device)
                
                optimizer.zero_grad() # zero the parameter gradients
                
                # forward pass; track gradient history only in the training phase
                with torch.set_grad_enabled(phase == 'train'):
                    outputs = model(inputs)
                    _,preds=torch.max(outputs,1)
                    loss=criterion(outputs,labels)
                    
                    if phase == 'train': # backward + optimize only in the training phase
                        loss.backward()
                        optimizer.step()
                # statistics
                running_loss+=loss.item()*inputs.size(0)
                running_corrects+=torch.sum(preds==labels.data)
            
            epoch_loss=running_loss/dataset_sizes[phase]
            epoch_acc=running_corrects.double()/dataset_sizes[phase]
            
            print('{} Loss: {:.4f} Acc:{:.4f}'.format(phase,epoch_loss,epoch_acc))
            
            # deep copy the model if it is the best so far
            if phase=='val' and epoch_acc>best_acc:
                best_acc=epoch_acc
                best_model_wts=copy.deepcopy(model.state_dict())
                
        print()
        
    time_elapsed = time.time() - since
    print('Training complete in {:.0f}m {:.0f}s'.format(time_elapsed // 60, time_elapsed % 60))
    print('Best val Acc: {:4f}'.format(best_acc))
    
    # load the best model weights
    model.load_state_dict(best_model_wts)
    return model 

5. Visualize the model's predictions

A generic helper that runs the model on a few validation images and displays them with the predicted class:

# 5. Visualize the model's predictions
def visualize_model(model,num_images=6):
    was_training=model.training
    model.eval()
    images_so_far = 0
    fig=plt.figure()
    
    with torch.no_grad():
        for i ,(inputs,labels) in enumerate(dataloaders['val']):
            inputs = inputs.to(device)
            labels = labels.to(device)
            
            outputs = model(inputs)
            
            _,preds=torch.max(outputs,1)
            
            for j in range(inputs.size()[0]):
                images_so_far+=1
                ax=plt.subplot(num_images//2,2,images_so_far)
                ax.axis('off')
                ax.set_title('predicted:{}'.format(class_names[preds[j]]))
                imshow(inputs.cpu().data[j])
                
                if images_so_far == num_images:
                    model.train(mode=was_training)
                    return
        model.train(mode=was_training)

6. Scenario 1: fine-tuning the ConvNet

Load a pretrained model and reset the final fully connected layer. The code is as follows:

# 6. Load a pretrained model and reset the final fully connected layer.
model_ft=models.resnet18(pretrained=True)
num_ftrs=model_ft.fc.in_features
model_ft.fc=nn.Linear(num_ftrs,2)

model_ft=model_ft.to(device)

criterion = nn.CrossEntropyLoss()

optimizer_ft=optim.SGD(model_ft.parameters(),lr=0.001,momentum=0.9) # all parameters are being optimized

exp_lr_scheduler=lr_scheduler.StepLR(optimizer_ft,step_size=7,gamma=0.1) # decay LR by a factor of 0.1 every 7 epochs

# Train and evaluate the model
model_ft = train_model(model_ft, criterion, optimizer_ft, exp_lr_scheduler, num_epochs=25)

# Visualize the model's predictions
visualize_model(model_ft)

Output:

Epoch 0/24
----------
train Loss: 0.5354 Acc:0.7131
val Loss: 0.2036 Acc:0.9020

Epoch 1/24
----------
train Loss: 0.4063 Acc:0.8156
val Loss: 0.2378 Acc:0.9020

Epoch 2/24
----------
train Loss: 0.5310 Acc:0.7623
val Loss: 0.2838 Acc:0.8824
...
Epoch 23/24
----------
train Loss: 0.2859 Acc:0.8893
val Loss: 0.1818 Acc:0.9281

Epoch 24/24
----------
train Loss: 0.2462 Acc:0.9098
val Loss: 0.2053 Acc:0.9281

Training complete in 19m 15s
Best val Acc: 0.954248

[Figure: validation images shown with the labels predicted by the fine-tuned model]
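
Since train_model already reloads the best validation weights before returning, the fine-tuned model can be saved straight away and restored later (a minimal sketch; the file name model_ft.pth is just an example):

torch.save(model_ft.state_dict(), 'model_ft.pth')         # save only the weights

# restoring later (e.g. in another script): rebuild the same architecture, then load the weights
model_restored = models.resnet18(pretrained=False)
model_restored.fc = nn.Linear(model_restored.fc.in_features, 2)   # same head as during fine-tuning
model_restored.load_state_dict(torch.load('model_ft.pth'))
model_restored = model_restored.to(device).eval()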

7. Scenario 2: ConvNet as a fixed feature extractor

Here we need to freeze the whole network except the final layer. Setting requires_grad = False freezes the parameters, so their gradients are not computed during backward(). Compared to the previous scenario this takes roughly half the time on CPU, because gradients do not need to be computed for most of the network.

# 7. ConvNet as a fixed feature extractor
# Freeze every layer except the final one: with requires_grad = False
# the gradients of these parameters are not computed in backward()
model_conv=torchvision.models.resnet18(pretrained=True)
for param in model_conv.parameters():
    param.requires_grad=False

# Parameters of newly constructed modules have requires_grad=True by default
num_ftrs=model_conv.fc.in_features
model_conv.fc=nn.Linear(num_ftrs,2)

model_conv = model_conv.to(device)

criterion = nn.CrossEntropyLoss()

optimizer_conv = optim.SGD(model_conv.fc.parameters(), lr=0.001, momentum=0.9) # only the parameters of the final layer are being optimized, unlike before

exp_lr_scheduler = lr_scheduler.StepLR(optimizer_conv, step_size=7, gamma=0.1) # decay LR by a factor of 0.1 every 7 epochs

# Train and evaluate the model
model_conv = train_model(model_conv, criterion, optimizer_conv,exp_lr_scheduler, num_epochs=25)

visualize_model(model_conv)

plt.ioff()
plt.show()

Output:

Epoch 0/24
----------
train Loss: 0.6637 Acc:0.6393
val Loss: 0.2943 Acc:0.8824

Epoch 1/24
----------
train Loss: 0.5378 Acc:0.7582
val Loss: 0.1554 Acc:0.9412

Epoch 2/24
----------
train Loss: 0.5152 Acc:0.7910
val Loss: 0.1292 Acc:0.9608
...
Epoch 23/24
----------
train Loss: 0.3033 Acc:0.8648
val Loss: 0.1559 Acc:0.9542

Epoch 24/24
----------
train Loss: 0.4079 Acc:0.8525
val Loss: 0.1707 Acc:0.9542

Training complete in 8m 50s
Best val Acc: 0.960784

[Figure: validation images shown with the labels predicted by the feature-extractor model]
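
To classify a new image with the trained feature extractor, apply the same validation transform before the forward pass (a sketch; my_image.jpg is a hypothetical input file):

from PIL import Image

img = Image.open('my_image.jpg').convert('RGB')            # hypothetical input image
x = data_transforms['val'](img).unsqueeze(0).to(device)    # preprocess and add a batch dimension

model_conv.eval()
with torch.no_grad():
    outputs = model_conv(x)
    _, pred = torch.max(outputs, 1)
print(class_names[pred.item()])                            # 'ants' or 'bees'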
