Training VGG on the CIFAR Dataset

!! Heads up, folks, I got something wrong: only while debugging did I notice the number of classes was set incorrectly... change class_num before you reuse this code!!!
I typed out VGG following Bilibili uploader 霹雳啪啦zw's video and verified it on CIFAR. For AlexNet I had used MSTAR, whose resolution is only 128, quite low; following VGG's pooling schedule that leaves just 4 (128 halved five times is 4...), though CIFAR doesn't leave much after pooling either... While chasing a bug I dug through blogs and found one written by a second-year undergrad that completely outclassed me... I did switch into this field halfway, but I really am a bit green. First, an introduction to VGG:
[Figure: VGG configuration table from the paper]
The paper defines six VGG configurations (A through E, including an A-LRN variant), with total depths of 11, 13, 16, and 19 weight layers. VGG was proposed by the Visual Geometry Group at Oxford, hence the name; the paper is "Very Deep Convolutional Networks for Large-Scale Image Recognition", and it too made its name by placing in a competition (it seems every famous backbone got its fame from a contest before the paper?). It was the most commonly used backbone after AlexNet (until ResNet arrived), and it was also the first to offer a backbone in several sizes, so it can be fitted to devices with different performance budgets; I think that's its first real highlight.
Second, it stacks several small convolutions to match the effect of one big one:
For example, a 3×3 kernel with padding 1 and stride 1 gives
(W − 3 + 2×1)/1 + 1 = W, so convolution leaves the feature map size unchanged while the receptive field grows:
(Figure borrowed from Bilibili uploader Amusi; all the figures here are lifted from the web.)
Two 3×3 convolutions are equivalent to one 5×5, and three to one 7×7. Stacked this way, the conv layers never shrink the feature map yet enlarge the receptive field; only the pooling layers reduce spatial size. As for why this is cheaper (I always felt the calculation floating around online was off, though maybe I'm the one who's wrong): with C input and C output channels, two 3×3 kernels cost 2 × 3 × 3 × C × C = 18C² parameters, while one 5×5 kernel costs 5 × 5 × C × C = 25C². A quick sanity check is below; the full code follows after that.
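Verifying those counts is easy in PyTorch (my own sketch; it assumes C input and C output channels and drops the bias):

import torch.nn as nn

C = 64  # example channel count; any C gives the same 18:25 ratio

# two stacked 3x3 convolutions, together covering a 5x5 receptive field
two_3x3 = nn.Sequential(
    nn.Conv2d(C, C, kernel_size=3, padding=1, bias=False),
    nn.Conv2d(C, C, kernel_size=3, padding=1, bias=False),
)
# one 5x5 convolution with the same receptive field
one_5x5 = nn.Conv2d(C, C, kernel_size=5, padding=2, bias=False)

count = lambda m: sum(p.numel() for p in m.parameters())
print(count(two_3x3))  # 2*3*3*C*C = 18*C^2 = 73728
print(count(one_5x5))  # 5*5*C*C   = 25*C^2 = 102400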
VGG backbone

import torch.nn as nn
import torch

# 'M' marks a 2x2 max-pool; the numbers are conv output channel counts
cfgs = {
    'vgg11':[64,'M',128,'M',256,256,'M',512,512,'M',512,512,'M'],
    'vgg13':[64,64,'M',128,128,'M',256,256,'M',512,512,'M',512,512,'M'],
    'vgg16':[64,64,'M',128,128,'M',256,256,256,'M',512,512,512,'M',512,512,512,'M'],
    'vgg19':[64,64,'M',128,128,'M',256,256,256,256,'M',512,512,512,512,'M',512,512,512,512,'M'],
}
class VGG(nn.Module):
    def __init__(self,features,class_num,init_weights=False):
        super(VGG,self).__init__()
        self.features=features
        # on a 32x32 CIFAR input the features end at 1x1x512, so the
        # flattened vector entering the classifier has length 512
        self.classifier=nn.Sequential(
            nn.Dropout(p=0.5),
            nn.Linear(512,2048),
            nn.LeakyReLU(inplace=True),
            nn.Dropout(p=0.5),
            nn.Linear(2048,2048),
            nn.LeakyReLU(inplace=True),  # LeakyReLU(True) would set negative_slope=1 and kill the nonlinearity
            nn.Linear(2048,class_num)
        )
        if init_weights:
            self._initialize_weights()
    def _initialize_weights(self):
        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                nn.init.xavier_uniform_(m.weight)
                if m.bias is not None:
                    nn.init.constant_(m.bias, 0)
            elif isinstance(m, nn.Linear):
                nn.init.xavier_uniform_(m.weight)
                nn.init.constant_(m.bias, 0)
    def forward(self, x):
        x=self.features(x)
        x=torch.flatten(x,start_dim=1)
        x=self.classifier(x)
        return x
def get_models(cfg:list):
    # build the conv feature extractor from a config list:
    # 'M' is a 2x2 max-pool, a number is the conv's output channel count
    layers=[]
    in_channels=3
    for v in cfg:
        if v=="M":
            layers+=[nn.MaxPool2d(kernel_size=2,stride=2)]
        else:
            conv2d = nn.Conv2d(in_channels,v,kernel_size=3,padding=1)
            BN = nn.BatchNorm2d(v)  # eps/momentum/affine left at their defaults
            layers += [conv2d,nn.LeakyReLU(inplace=True),BN]
            in_channels=v
    return nn.Sequential(*layers)
def vgg(model_name='vgg16',class_num=10,**kwargs):
    # class_num defaults to CIFAR-10's 10 classes (see the warning at the top)
    try:
        cfg=cfgs[model_name]
    except KeyError:
        print('Error: no such model')
        exit(-1)
    model=VGG(get_models(cfg),class_num,**kwargs)  # **kwargs forwards optional keyword args (e.g. init_weights)
    return model
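Before the training script, a quick smoke test of the backbone (my own sketch with a dummy input) confirms the 32×32 → 1×1×512 shape math:

import torch
net = vgg('vgg16', class_num=10, init_weights=True)
net.eval()                         # BN uses running stats, safe for a single image
with torch.no_grad():
    x = torch.randn(1, 3, 32, 32)  # one fake CIFAR image, NCHW
    print(net.features(x).shape)   # torch.Size([1, 512, 1, 1])
    print(net(x).shape)            # torch.Size([1, 10])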

Main script

import torch,torchvision
import torch.nn as nn
import torchvision.datasets as datasets
from Test.VGG.backbone import vgg
import torch.optim as optim
import torchvision.transforms as transforms
import os
import time
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
data_transform={
    "train":transforms.Compose(
        [
            transforms.RandomHorizontalFlip(),
            transforms.ToTensor(),
            transforms.Normalize((0.5,0.5,0.5),(0.5,0.5,0.5))
         ]),
    "val":  transforms.Compose(
        [
            transforms.RandomHorizontalFlip(),
            transforms.ToTensor(),
            transforms.Normalize((0.5,0.5,0.5),(0.5,0.5,0.5))
         ]
    )
}
data_root=os.path.abspath(os.path.join(os.getcwd(),".."))  # parent directory of the current working directory
train_dataset = torchvision.datasets.CIFAR10(root=data_root+'/Test/VGG/cifar-10-python', train=True,download=False, transform=data_transform['train'])
train_num = len(train_dataset)  # CIFAR-10 has 50000 training images, not 60000
batchsize=128
trainloader = torch.utils.data.DataLoader(train_dataset, batch_size=batchsize,shuffle=True, num_workers=4)
test_dataset = torchvision.datasets.CIFAR10(root=data_root+'/Test/VGG/cifar-10-python', train=False,download=False, transform=data_transform['val'])
testloader = torch.utils.data.DataLoader(test_dataset, batch_size=batchsize,shuffle=True, num_workers=4)
test_data_iter = iter(testloader)
test_image, test_label = next(test_data_iter)  # the .next() method was removed in newer PyTorch
net = vgg('vgg16', class_num=10, init_weights=True)  # CIFAR-10: 10 classes, not 100 (the bug from the note at the top)
net.to(device)
optimizer = optim.Adam(net.parameters(), lr=0.0005)
best_acc=0
val_num=len(test_dataset)
for epoch in range(30):
    # train
    net.train()
    running_loss = 0.0
    for step, data in enumerate(trainloader, start=0):
        images, labels = data
        optimizer.zero_grad()
        outputs = net(images.to(device))
        loss = nn.functional.cross_entropy(outputs, labels.to(device))
        loss.backward()
        optimizer.step()
        # print statistics
        running_loss += loss.item()
        # print train process
        rate = (step + 1) / len(trainloader)
        a = "*" * int(rate * 50)
        b = "." * int((1 - rate) * 50)
        print("\rtrain loss: {:^3.0f}%[{}->{}]{:.3f}".format(int(rate * 100), a, b, loss), end="")
    print()

    # validate
    net.eval()
    acc = 0.0  # accumulate accurate number / epoch
    with torch.no_grad():
        for data_test in testloader:
            test_images, test_labels = data_test
            outputs = net(test_images.to(device))
            predict_y = torch.max(outputs, dim=1)[1]
            acc += (predict_y == test_labels.to(device)).sum().item()
        accurate_test = acc / val_num
        if accurate_test > best_acc:
            best_acc = accurate_test
            #torch.save(net.state_dict(), save_path)
        print('[epoch %d] train_loss: %.3f  test_accuracy: %.3f' %
              (epoch + 1, running_loss / len(trainloader), accurate_test))

print('Finished Training')
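The torch.save line above is commented out because save_path was never defined; a minimal version (the filename is just my placeholder) could be:

save_path = './vgg16_cifar10.pth'        # placeholder path
torch.save(net.state_dict(), save_path)  # goes inside the if accurate_test > best_acc branch

# later, to evaluate or resume training:
net = vgg('vgg16', class_num=10)
net.load_state_dict(torch.load(save_path, map_location=device))
net.to(device).eval()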

The loss function in the middle nearly killed me... I used nn.CrossEntropyLoss and passed its arguments wrong... in the end nn.functional.cross_entropy got it working... Also, by the fully connected layers the flattened tensor has length 512, and the last conv stage has 512 channels... so the final feature map really is 1×1??? (32 halved by five poolings is exactly 1.) At that point, why not just go fully convolutional... Honestly I didn't write most of the main script myself and couldn't be bothered to read it; whenever a bug appeared I went to scroll Douyin instead, not at all how a student should act... Once I settle down I'll step through it line by line, especially the loss part: how it's computed and how it backpropagates... (didn't I say the same in the AlexNet post?) Lots of new papers came out lately and I bookmarked plenty without reading any... and next week I have to present a paper for the speech course, which I don't understand at all... Anyway, that's it; it ran, accuracy came out at 80-something percent, and with some tuning it should improve...
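For the record, my best guess at the nn.CrossEntropyLoss mix-up (a sketch with made-up tensors, not my exact code): it is a module that must be instantiated before it is called, while the functional form takes the tensors directly; both give the same value:

import torch
import torch.nn as nn
import torch.nn.functional as F

outputs = torch.randn(4, 10)         # fake logits: batch of 4, 10 classes
labels = torch.tensor([1, 0, 3, 9])  # integer class indices

# wrong: tensors passed to the constructor are swallowed as
# (weight, size_average), so this returns a module, not a loss value
# loss = nn.CrossEntropyLoss(outputs, labels)

# right: instantiate first, then call
criterion = nn.CrossEntropyLoss()
loss1 = criterion(outputs, labels)

# the equivalent functional one-liner the script above settled on
loss2 = F.cross_entropy(outputs, labels)
assert torch.allclose(loss1, loss2)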
