[Image Recognition] Achieving 90% test accuracy on the CIFAR-10 dataset with ResNet-18

This post presents the first model the author has trained to a good result after roughly half a year of deep learning practice: ResNet-18 applied to the CIFAR-10 dataset, reaching 89.69% test accuracy. Data preprocessing uses Pad(4) together with augmentation such as random horizontal flips and random crops. The model is trained for 100 epochs; training longer might raise the accuracy further, but it was stopped there for efficiency. The author plans to look at object detection and adversarial attacks next.

After studying deep learning for about half a year, this is the first model I have trained that gives reasonably good results.

The simpler models I ran before topped out at around 82% test accuracy; with ResNet-18 the accuracy reaches 89.69%.

In the data preprocessing, Pad(4) helps: it pads each 32x32 image to 40x40, so the following RandomCrop(32) picks a randomly shifted 32x32 window, which acts as translation augmentation.
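
A quick way to see this (an illustrative snippet, separate from the training script below): padding a raw 32x32 CIFAR-10 image by 4 pixels gives 40x40, and the random 32x32 crop then selects a shifted window.

import torchvision

# Load one raw CIFAR-10 training image (a PIL image, no transform applied)
img, _ = torchvision.datasets.CIFAR10('./data', download=True, train=True)[0]
padded = torchvision.transforms.Pad(4)(img)
cropped = torchvision.transforms.RandomCrop(32)(padded)
print(img.size, padded.size, cropped.size)  # (32, 32) (40, 40) (32, 32)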

The model is trained for 100 epochs; training longer might push the accuracy a little higher, but it is not worth the extra time here. That wraps up the image classification task. Next up: some object detection and adversarial attacks.

# coding: UTF-8
# import packages
import torchvision
import torch
import torch.nn as nn

# Device 
device=torch.device("cuda" if torch.cuda.is_available() else "cpu")
# Transform configuration and data augmentation
transform_train=torchvision.transforms.Compose([
        torchvision.transforms.Pad(4),
        torchvision.transforms.RandomHorizontalFlip(),
        torchvision.transforms.RandomCrop(32),
        # torchvision.transforms.RandomVerticalFlip(),
        # torchvision.transforms.RandomRotation(15),
        torchvision.transforms.ToTensor(),
        torchvision.transforms.Normalize([0.5,0.5,0.5], [0.5,0.5,0.5])
        ])

transform_test=torchvision.transforms.Compose([
        torchvision.transforms.ToTensor(),
        torchvision.transforms.Normalize([0.5,0.5,0.5], [0.5,0.5, 0.5])
        ])
# hyper-parameters
num_classes=10
batch_size=128
learning_rate=0.001
num_epochs=100

# load downloaded dataset
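# (CIFAR-10 has 50,000 training and 10,000 test images, 32x32 RGB, across 10 classes)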
train_dataset = torchvision.datasets.CIFAR10('./data', download=True, train=True, transform=transform_train)
test_dataset = torchvision.datasets.CIFAR10('./data', download=True, train=False, transform=transform_test)

# Data loader 
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size, shuffle=True)
test_loader = torch.utils.data.DataLoader(test_dataset, batch_size=batch_size, shuffle=False)

# Define a 3x3 convolution helper
# bias=False: every conv below is followed by BatchNorm2d, whose learnable shift makes a conv bias redundant
def conv3x3(in_channels, out_channels, stride=1):
    return nn.Conv2d(in_channels, out_channels, kernel_size=3, stride=stride, padding=1, bias=False)


class ResidualBlock(nn.Module):
    def __init__(self, in_channels, out_channels, stride=1, downsample=None):
        super(ResidualBlock, self).__init__()
        self.conv1 = conv3x3(in_channels, out_channels, stride)
        self.bn1 = nn.BatchNorm2d(out_channels)
        self.relu = nn.ReLU(inplace=True)
        self.conv2 = conv3x3(out_channels, out_channels)
        self.bn2 = nn.BatchNorm2d(out_channels)
        self.downsample = downsample
    def forward(self, x):
        residual=x
        out = self.conv1(x)
        out = self.bn1(out)
        out = self.relu(out)
        out = self.conv2(out)
        out = self.bn2(out)
        if self.downsample is not None:
            residual = self.downsample(x)
        out += residual
        out = self.relu(out)
        return out


# Define ResNet-18
class ResNet(torch.nn.Module):
    def __init__(self, block, layers, num_classes):
        super(ResNet, self).__init__()
        self.in_channels = 16
        self.conv = conv3x3(3, 16)
        self.bn = torch.nn.BatchNorm2d(16)
        self.relu = torch.nn.ReLU(inplace=True)
        self.layer1 = self._make_layers(block, 16, layers[0])
        self.layer2 = self._make_layers(block, 32, layers[1], 2)
        self.layer3 = self._make_layers(block, 64, layers[2], 2)
        self.layer4 = self._make_layers(block, 128, layers[3], 2)
        self.avg_pool = torch.nn.AdaptiveAvgPool2d((1, 1))
        self.fc = torch.nn.Linear(128, num_classes)
        
    def _make_layers(self, block, out_channels, blocks, stride=1):
        downsample = None
        # If the stride or channel count changes, project the shortcut with a strided 3x3 conv + BN
        # so it matches the residual branch's shape before the element-wise addition.
        if (stride != 1) or (self.in_channels != out_channels):
            downsample = torch.nn.Sequential(
                conv3x3(self.in_channels, out_channels, stride=stride),
                torch.nn.BatchNorm2d(out_channels))
        layers = []
        layers.append(block(self.in_channels, out_channels, stride, downsample))
        self.in_channels = out_channels
        for i in range(1, blocks):
            layers.append(block(out_channels, out_channels))
        return torch.nn.Sequential(*layers)
    
    def forward(self, x):
        out = self.conv(x)
        out = self.bn(out)
        out = self.relu(out)
        out = self.layer1(out)
        out = self.layer2(out)
        out = self.layer3(out)
        out = self.layer4(out)
        out = self.avg_pool(out)
        out = out.view(out.size(0), -1)
        out = self.fc(out)
        return out

# Make model.
model=ResNet(ResidualBlock, [2,2,2,2], num_classes).to(device=device)
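# Depth check: 1 stem conv + (4 stages x 2 blocks x 2 convs) + 1 fc = 18 layers on the main path, hence ResNet-18.
# The channel widths here (16/32/64/128) are narrower than the ImageNet ResNet-18 (64/128/256/512),
# which keeps the model small for 32x32 CIFAR-10 inputs.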

# Loss and optimizer
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
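# Adam with a constant learning rate is used throughout; no learning-rate schedule or weight decay in this run.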

# Train the model
total_step = len(train_loader)

for epoch in range(num_epochs):
        
    for i, (images, labels) in enumerate(train_loader):
        
        images = images.to(device=device)
        labels = labels.to(device=device)

        # Forward pass
        outputs = model(images)
        loss = criterion(outputs, labels)
        
        # Backward and optimize
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        if (i+1) % 100 == 0:

            print('Epoch [{}/{}], Step [{}/{}], Loss: {:.4f}'
                  .format(epoch+1, num_epochs, i+1, total_step, loss.item()))

# -------------- Test the model ---------------
print('\nEvaluating on the test set')
model.eval()  # eval mode (batchnorm uses moving mean/variance instead of mini-batch mean/variance)
with torch.no_grad():

    correct = 0
    total = 0

    for images, labels in test_loader:

        images = images.to(device=device)
        labels = labels.to(device=device)
        outputs = model(images)

        _, predicted = torch.max(outputs, 1)

        total += labels.size(0)
        correct += (predicted == labels).sum().item()
    
    print('Accuracy of the network on the 10000 test images: {} %'.format(100 * correct / total))
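
Not part of the original run, but if you want to reuse the trained network later (for example, for the adversarial-attack experiments mentioned above), a minimal sketch for saving and restoring the weights could look like this; the filename resnet18_cifar10.pth is just a placeholder.

# Save the trained weights (state_dict only, not the full module)
torch.save(model.state_dict(), 'resnet18_cifar10.pth')

# Later: rebuild the same architecture and load the weights back
restored = ResNet(ResidualBlock, [2, 2, 2, 2], num_classes).to(device)
restored.load_state_dict(torch.load('resnet18_cifar10.pth', map_location=device))
restored.eval()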
