LeNet-5

*(Figure: LeNet-5 architecture)*

LeNet-5 has 7 layers (not counting the input), and its input images are 32x32. MNIST images are only 28x28; the larger input is chosen so that potentially distinctive features such as stroke endpoints or corners can appear at the center of the **receptive field** of the highest-level feature detectors. The 28x28 images therefore need to be padded before training.
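
To see the padding concretely, here is a minimal sketch (using `nn.ZeroPad2d`, the same module the training code below relies on) confirming that 2 pixels of zero padding on each side turns a 28x28 image into 32x32:

```python
import torch
from torch import nn

x = torch.randn(1, 1, 28, 28)  # dummy batch: one 28x28 grayscale image
pad = nn.ZeroPad2d(2)          # 2 pixels of zero padding on every side
print(pad(x).shape)            # torch.Size([1, 1, 32, 32])
```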

Kernel shapes below are written as [out_channels x in_channels x kernel_height x kernel_width]; feature maps as [channels x height x width].

C1: convolution [6x1x5x5], output feature map: [6x28x28]

S2: pooling layer, 2x2 max pooling (the original LeNet-5 uses average-style subsampling here; this implementation uses max pooling), output feature map: [6x14x14]

C3: convolution [16x6x5x5], output feature map: [16x10x10]

S4: pooling layer, 2x2 max pooling, output feature map: [16x5x5]

C5: convolution [120x16x5x5], output feature map: [120x1x1]

F6: fully connected layer, output: [84x1x1]

Output layer: a fully connected layer producing a tensor of length 10, one score per class. The sketch below traces these shapes.
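
To verify the sizes above, a quick sketch pushes a dummy 32x32 input through the convolutional stack and prints each layer's output shape (sigmoid activations are omitted since they do not change shapes):

```python
import torch
from torch import nn

stack = nn.Sequential(
    nn.Conv2d(1, 6, 5),     # C1
    nn.MaxPool2d(2, 2),     # S2
    nn.Conv2d(6, 16, 5),    # C3
    nn.MaxPool2d(2, 2),     # S4
    nn.Conv2d(16, 120, 5),  # C5
)
x = torch.randn(1, 1, 32, 32)
for layer in stack:
    x = layer(x)
    print(type(layer).__name__, list(x.shape[1:]))
# Conv2d [6, 28, 28], MaxPool2d [6, 14, 14], Conv2d [16, 10, 10],
# MaxPool2d [16, 5, 5], Conv2d [120, 1, 1]
```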

The complete training script, using LeNet-5 on FashionMNIST:

```python
# Train LeNet-5 on FashionMNIST
import torch
import torchvision
from torch import nn
import torchvision.transforms as transforms
import torch.utils.data
import time


class LeNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.ZeroPad2d(2),      # pad the 28x28 input to 32x32
            nn.Conv2d(1, 6, 5),   # C1 -> [6, 28, 28]
            nn.Sigmoid(),
            nn.MaxPool2d(2, 2),   # S2 -> [6, 14, 14]
            nn.Conv2d(6, 16, 5),  # C3 -> [16, 10, 10]
            nn.Sigmoid(),
            nn.MaxPool2d(2, 2)    # S4 -> [16, 5, 5]
        )
        self.fc = nn.Sequential(
            nn.Linear(16 * 5 * 5, 120),  # C5, implemented as a fully connected layer
            nn.Sigmoid(),
            nn.Linear(120, 84),          # F6
            nn.Sigmoid(),
            nn.Linear(84, 10)            # output layer
        )
    def forward(self, img):
        feature = self.conv(img)
        # flatten [batch, 16, 5, 5] into [batch, 400] for the fully connected layers
        output = self.fc(feature.view(img.shape[0], -1))
        return output
```
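
Before training, it is worth a quick sanity check that the network produces the expected output shape; a minimal sketch with random inputs:

```python
net = LeNet()
x = torch.randn(4, 1, 28, 28)  # a dummy batch of four FashionMNIST-sized images
print(net(x).shape)            # torch.Size([4, 10]): one logit per class
```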

Next, the data-loading and evaluation helpers:

```python
def load_data(batch_size):
    mnist_train = torchvision.datasets.FashionMNIST(root="../DataSets", train=True, download=True,
                                                    transform=transforms.ToTensor())
    mnist_test = torchvision.datasets.FashionMNIST(root="../DataSets", train=False, download=True,
                                                   transform=transforms.ToTensor())
    train_iter = torch.utils.data.DataLoader(mnist_train, batch_size=batch_size, shuffle=True, num_workers=0)
    test_iter = torch.utils.data.DataLoader(mnist_test, batch_size=batch_size, shuffle=False, num_workers=0)
    return train_iter, test_iter

def evaluate_accuracy(data_iter, net, device=None):
    if device is None and isinstance(net, torch.nn.Module):
        # default to the device the model's parameters live on
        device = list(net.parameters())[0].device
    acc_sum, n = 0.0, 0
    with torch.no_grad():
        for X, y in data_iter:
            net.eval()  # switch to evaluation mode
            acc_sum += (net(X.to(device)).argmax(dim=1) == y.to(device)).sum().cpu().item()
            net.train()  # switch back to training mode
            n += y.shape[0]
    return acc_sum / n
```
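
Used on its own, this helper also gives a pre-training baseline; a hedged sketch (an untrained 10-class classifier should land near chance, about 10% accuracy):

```python
net = LeNet()
train_iter, test_iter = load_data(256)
print(evaluate_accuracy(test_iter, net))  # roughly 0.1 for an untrained model
```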

The training loop:

```python
def train(net, train_iter, test_iter, batch_size, optimizer, device, num_epochs):
    net = net.to(device)
    print("training on ",device)
    loss = torch.nn.CrossEntropyLoss()
    for epoch in range(num_epochs):
        train_l_sum, train_acc_sum, n, batch_count, start = 0.0, 0.0, 0, 0, time.time()
        for X, y in train_iter:
            X = X.to(device)
            y = y.to(device)
            y_hat = net(X)
            l = loss(y_hat, y)
            optimizer.zero_grad()
            l.backward()
            optimizer.step()
            # accumulate this batch's loss
            train_l_sum += l.cpu().item()
            # accumulate the number of correct predictions in this batch
            train_acc_sum += (y_hat.argmax(dim=1) == y).sum().cpu().item()
            n += y.shape[0]
            batch_count += 1
        # evaluate on the test set at the end of each epoch
        test_acc = evaluate_accuracy(test_iter, net)
        print('epoch %d, loss %.4f, train acc %.3f, test acc %.3f, time %.1f sec'
              % (epoch + 1, train_l_sum / batch_count, train_acc_sum / n, test_acc, time.time() - start))
```

Finally, the driver:

```python
if __name__ == '__main__':
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    batch_size = 256
    train_iter, test_iter = load_data(batch_size)
    net = LeNet()
    lr, num_epochs = 0.001, 8
    optimizer = torch.optim.Adam(net.parameters(), lr=lr)
    train(net, train_iter, test_iter, batch_size, optimizer, device, num_epochs)
```
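
Once trained, the network can make individual predictions. A minimal sketch, reusing `net` and `test_iter` from the script above (the `classes` list here is an illustration that follows FashionMNIST's documented label order):

```python
classes = ['t-shirt/top', 'trouser', 'pullover', 'dress', 'coat',
           'sandal', 'shirt', 'sneaker', 'bag', 'ankle boot']
net = net.to('cpu')
net.eval()
with torch.no_grad():
    X, y = next(iter(test_iter))            # one batch from the test set
    pred = net(X[:1]).argmax(dim=1).item()  # predicted class of the first image
    print('predicted:', classes[pred], '| actual:', classes[y[0].item()])
```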