Learning VGGNet (Network Architecture and Code Implementation)

Translation of the original paper: VERY DEEP CONVOLUTIONAL NETWORKS FOR LARGE-SCALE IMAGE RECOGNITION (VGGNet论文翻译(附原文))_机器学习我不学习的博客-CSDN博客

 Network Architecture

        VGGNet achieved better classification accuracy by using smaller convolution kernels and a deeper convolutional network. Its input is a fixed-size 224×224 RGB image; the only preprocessing is subtracting the mean RGB value, computed on the training set, from each pixel. Unlike earlier networks, VGGNet does not use large convolution kernels: all convolutions use very small 3×3 kernels with stride 1 and padding 1. Some configurations also use 1×1 convolutions, which can be viewed as a linear transformation of the input channels. There are five max-pooling layers, each with a 2×2 window and stride 2. The convolutional stack is followed by three fully connected layers: the first two have 4096 channels each, and the third performs the 1000-way ILSVRC classification and therefore has 1000 channels (one per class). The final layer is a softmax layer. All hidden layers use ReLU as the activation function.
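        As a concrete sketch of this preprocessing, the snippet below performs a per-channel mean subtraction in the style of the paper. The mean values are the commonly used torchvision ImageNet statistics, assumed here for illustration (the paper computes the mean on its own training set); std=1 keeps this a pure mean subtraction.

from torchvision import transforms

# Per-channel ImageNet mean (commonly used torchvision values; assumed here
# for illustration -- the paper computes the mean on its own training set).
imagenet_mean = [0.485, 0.456, 0.406]

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),              # fixed-size 224x224 input
    transforms.ToTensor(),                      # HWC [0,255] -> CHW [0,1]
    transforms.Normalize(mean=imagenet_mean,
                         std=[1.0, 1.0, 1.0]),  # pure mean subtraction
])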

        A 3×3 kernel is the smallest size that still captures a pixel together with its neighbors above, below, left, and right.

        The key idea of the network: stacking multiple 3×3 kernels in place of one large kernel reduces the number of trainable parameters. A stack of two 3×3 convolutional layers has an effective receptive field of 5×5, and a stack of three has an effective receptive field of 7×7, so they can replace 5×5 and 7×7 kernels respectively. In general, n stacked 3×3 convolutions with stride 1 have an effective receptive field of (2n+1)×(2n+1).

[Figure: two stacked 3×3 convolutions covering a 5×5 receptive field (left); three stacked 3×3 convolutions covering a 7×7 receptive field (right)]

        As the figure shows, two stacked 3×3 convolutions cover the same receptive field as one 5×5 kernel (left), and three stacked 3×3 convolutions cover the same field as one 7×7 kernel (right).

Parameters needed for one 7×7 kernel: 7×7×C×C = 49C² (assuming both input and output have C channels)

Parameters needed for three stacked 3×3 kernels replacing a 7×7 kernel: 3×3×C×C + 3×3×C×C + 3×3×C×C = 27C²
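        These two counts are easy to verify in PyTorch (a minimal sketch; bias terms are disabled so the totals match the 49C² and 27C² figures, and C = 64 is an arbitrary example):

import torch.nn as nn

C = 64  # arbitrary example channel count

# one 7x7 convolution: 7*7*C*C = 49*C^2 parameters (bias disabled)
conv7 = nn.Conv2d(C, C, kernel_size=7, bias=False)

# three stacked 3x3 convolutions: 3 * (3*3*C*C) = 27*C^2 parameters
stack3 = nn.Sequential(*[nn.Conv2d(C, C, kernel_size=3, bias=False)
                         for _ in range(3)])

count = lambda m: sum(p.numel() for p in m.parameters())
print(count(conv7), 49 * C ** 2)   # 200704 200704
print(count(stack3), 27 * C ** 2)  # 110592 110592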

         VGGNet was trained at several different depths; the paper's configuration table lays out networks A through E (the cfgs dict in model.py below encodes four of them as vgg11, vgg13, vgg16, and vgg19).

        The VGG-16 configuration is walked through stage by stage below.

         The input is a 224×224×3 RGB image. All convolution kernels use stride 1 and padding 1; all max-pooling layers use a 2×2 window with stride 2.
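        Every spatial size in steps ① through ⑦ below follows from the standard output-size formula, out = ⌊(W − K + 2P) / S⌋ + 1. A tiny illustrative helper makes the arithmetic explicit:

def conv_out(w, k, s, p):
    # standard convolution/pooling output-size formula
    return (w - k + 2 * p) // s + 1

print(conv_out(224, k=3, s=1, p=1))  # 224 -> a 3x3/s1/p1 conv preserves size
print(conv_out(224, k=2, s=2, p=0))  # 112 -> a 2x2/s2 max pool halves size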

        ①: Two 3×3 convolutional layers, each with 64 kernels, produce an output of 224×224×64; a max-pooling layer then reduces it to 112×112×64.

        ②: Two 3×3 convolutional layers, each with 128 kernels, produce an output of 112×112×128; a max-pooling layer then reduces it to 56×56×128.

        ③: Three 3×3 convolutional layers, each with 256 kernels, produce an output of 56×56×256; a max-pooling layer then reduces it to 28×28×256.

        ④: Three 3×3 convolutional layers, each with 512 kernels, produce an output of 28×28×512; a max-pooling layer then reduces it to 14×14×512.

        ⑤: Three 3×3 convolutional layers, each with 512 kernels, produce an output of 14×14×512; a max-pooling layer then reduces it to 7×7×512.

        ⑥: Three fully connected layers. The first two have 4096 nodes each; the third has 1000 nodes, one for each of the 1000 ImageNet classes. The first two fully connected layers are followed by ReLU activations, but the third is not, because a softmax layer comes last.

        ⑦: A softmax layer converts the raw predictions into a probability distribution.
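        Putting the stages together, these shapes can be checked directly against the implementation in model.py below (a minimal sketch; it assumes the model.py from this post is importable). Note that the softmax is not part of the model itself: during training it is folded into nn.CrossEntropyLoss, and predict.py applies torch.softmax explicitly.

import torch
from model import vgg  # the model defined in model.py below

net = vgg(model_name="vgg16", num_classes=1000)
x = torch.randn(1, 3, 224, 224)   # N x 3 x 224 x 224 input (step 1)
print(net.features(x).shape)      # torch.Size([1, 512, 7, 7]), matching step 5
print(net(x).shape)               # torch.Size([1, 1000]), the step-6 logits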

Building and Training the VGG-16 Network

train.py

import os
import sys
import json

import torch
import torch.nn as nn
from torchvision import transforms, datasets
import torch.optim as optim
from tqdm import tqdm

from model import vgg


def main():
    device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
    print("using {} device.".format(device))

    data_transform = {
        "train": transforms.Compose([transforms.RandomResizedCrop(224),
                                     transforms.RandomHorizontalFlip(),
                                     transforms.ToTensor(),
                                     transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))]),
        "val": transforms.Compose([transforms.Resize((224, 224)),
                                   transforms.ToTensor(),
                                   transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])}

    data_root = os.path.abspath(os.path.join(os.getcwd(), "../.."))  # get data root path
    image_path = os.path.join(data_root, "data_set", "flower_data")  # flower data set path
    assert os.path.exists(image_path), "{} path does not exist.".format(image_path)
    train_dataset = datasets.ImageFolder(root=os.path.join(image_path, "train"),
                                         transform=data_transform["train"])
    train_num = len(train_dataset)

    # {'daisy':0, 'dandelion':1, 'roses':2, 'sunflower':3, 'tulips':4}
    flower_list = train_dataset.class_to_idx
    cla_dict = dict((val, key) for key, val in flower_list.items())
    # write dict into json file
    json_str = json.dumps(cla_dict, indent=4)
    with open('class_indices.json', 'w') as json_file:
        json_file.write(json_str)

    batch_size = 32
    nw = min([os.cpu_count(), batch_size if batch_size > 1 else 0, 8])  # number of workers
    print('Using {} dataloader workers per process'.format(nw))

    train_loader = torch.utils.data.DataLoader(train_dataset,
                                               batch_size=batch_size, shuffle=True,
                                               num_workers=nw)

    validate_dataset = datasets.ImageFolder(root=os.path.join(image_path, "val"),
                                            transform=data_transform["val"])
    val_num = len(validate_dataset)
    validate_loader = torch.utils.data.DataLoader(validate_dataset,
                                                  batch_size=batch_size, shuffle=False,
                                                  num_workers=nw)
    print("using {} images for training, {} images for validation.".format(train_num,
                                                                           val_num))

    # test_data_iter = iter(validate_loader)
    # test_image, test_label = next(test_data_iter)

    model_name = "vgg16"
    net = vgg(model_name=model_name, num_classes=5, init_weights=True)
    net.to(device)
    loss_function = nn.CrossEntropyLoss()
    optimizer = optim.Adam(net.parameters(), lr=0.0001)

    epochs = 30
    best_acc = 0.0
    save_path = './{}Net.pth'.format(model_name)
    train_steps = len(train_loader)
    for epoch in range(epochs):
        # train
        net.train()
        running_loss = 0.0
        train_bar = tqdm(train_loader, file=sys.stdout)
        for step, data in enumerate(train_bar):
            images, labels = data
            optimizer.zero_grad()
            outputs = net(images.to(device))
            loss = loss_function(outputs, labels.to(device))
            loss.backward()
            optimizer.step()

            # print statistics
            running_loss += loss.item()

            train_bar.desc = "train epoch[{}/{}] loss:{:.3f}".format(epoch + 1,
                                                                     epochs,
                                                                     loss)

        # validate
        net.eval()
        acc = 0.0  # accumulate accurate number / epoch
        with torch.no_grad():
            val_bar = tqdm(validate_loader, file=sys.stdout)
            for val_data in val_bar:
                val_images, val_labels = val_data
                outputs = net(val_images.to(device))
                predict_y = torch.max(outputs, dim=1)[1]
                acc += torch.eq(predict_y, val_labels.to(device)).sum().item()

        val_accurate = acc / val_num
        print('[epoch %d] train_loss: %.3f  val_accuracy: %.3f' %
              (epoch + 1, running_loss / train_steps, val_accurate))

        if val_accurate > best_acc:
            best_acc = val_accurate
            torch.save(net.state_dict(), save_path)

    print('Finished Training')


if __name__ == '__main__':
    main()

model.py

import torch.nn as nn
import torch

# official pretrain weights
model_urls = {
    'vgg11': 'https://download.pytorch.org/models/vgg11-bbd30ac9.pth',
    'vgg13': 'https://download.pytorch.org/models/vgg13-c768596a.pth',
    'vgg16': 'https://download.pytorch.org/models/vgg16-397923af.pth',
    'vgg19': 'https://download.pytorch.org/models/vgg19-dcbb9e9d.pth'
}


class VGG(nn.Module):
    def __init__(self, features, num_classes=1000, init_weights=False):
        super(VGG, self).__init__()
        self.features = features
        self.classifier = nn.Sequential(
            nn.Linear(512*7*7, 4096),
            nn.ReLU(True),
            nn.Dropout(p=0.5),
            nn.Linear(4096, 4096),
            nn.ReLU(True),
            nn.Dropout(p=0.5),
            nn.Linear(4096, num_classes)
        )
        if init_weights:
            self._initialize_weights()

    def forward(self, x):
        # N x 3 x 224 x 224
        x = self.features(x)
        # N x 512 x 7 x 7
        x = torch.flatten(x, start_dim=1)
        # N x 512*7*7
        x = self.classifier(x)
        return x

    def _initialize_weights(self):
        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                # nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
                nn.init.xavier_uniform_(m.weight)
                if m.bias is not None:
                    nn.init.constant_(m.bias, 0)
            elif isinstance(m, nn.Linear):
                nn.init.xavier_uniform_(m.weight)
                # nn.init.normal_(m.weight, 0, 0.01)
                nn.init.constant_(m.bias, 0)


def make_features(cfg: list):
    layers = []
    in_channels = 3
    for v in cfg:
        if v == "M":
            layers += [nn.MaxPool2d(kernel_size=2, stride=2)]
        else:
            conv2d = nn.Conv2d(in_channels, v, kernel_size=3, padding=1)
            layers += [conv2d, nn.ReLU(True)]
            in_channels = v
    return nn.Sequential(*layers)


cfgs = {
    'vgg11': [64, 'M', 128, 'M', 256, 256, 'M', 512, 512, 'M', 512, 512, 'M'],
    'vgg13': [64, 64, 'M', 128, 128, 'M', 256, 256, 'M', 512, 512, 'M', 512, 512, 'M'],
    'vgg16': [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 'M', 512, 512, 512, 'M', 512, 512, 512, 'M'],
    'vgg19': [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 256, 'M', 512, 512, 512, 512, 'M', 512, 512, 512, 512, 'M'],
}


def vgg(model_name="vgg16", **kwargs):
    assert model_name in cfgs, "Warning: model {} not in cfgs dict!".format(model_name)
    cfg = cfgs[model_name]

    model = VGG(make_features(cfg), **kwargs)
    return model
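
The model_urls dict above is included for reference but is not used by train.py. Below is a hedged sketch of how the official torchvision weights could be loaded into this implementation (an assumption about usage, not part of the original scripts; it relies on this layer layout mirroring torchvision's VGG, which the features/classifier structure above does):

from torch.hub import load_state_dict_from_url

from model import vgg, model_urls

# With num_classes=1000 the layer names line up with the official weights.
net = vgg(model_name="vgg16", num_classes=1000)
state_dict = load_state_dict_from_url(model_urls["vgg16"], progress=True)
net.load_state_dict(state_dict)

# For a different head (e.g. the 5 flower classes), drop the final layer's
# weights and load the rest non-strictly.
net5 = vgg(model_name="vgg16", num_classes=5)
state_dict.pop("classifier.6.weight")
state_dict.pop("classifier.6.bias")
net5.load_state_dict(state_dict, strict=False)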

predict.py

import os
import json

import torch
from PIL import Image
from torchvision import transforms
import matplotlib.pyplot as plt

from model import vgg


def main():
    device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

    data_transform = transforms.Compose(
        [transforms.Resize((224, 224)),
         transforms.ToTensor(),
         transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])

    # load image
    img_path = "../tulip.jpg"
    assert os.path.exists(img_path), "file: '{}' does not exist.".format(img_path)
    img = Image.open(img_path).convert('RGB')  # ensure a 3-channel image
    plt.imshow(img)
    # [C, H, W]
    img = data_transform(img)
    # expand batch dimension -> [N, C, H, W]
    img = torch.unsqueeze(img, dim=0)

    # read class_indict
    json_path = './class_indices.json'
    assert os.path.exists(json_path), "file: '{}' does not exist.".format(json_path)

    with open(json_path, "r") as f:
        class_indict = json.load(f)
    
    # create model
    model = vgg(model_name="vgg16", num_classes=5).to(device)
    # load model weights
    weights_path = "./vgg16Net.pth"
    assert os.path.exists(weights_path), "file: '{}' does not exist.".format(weights_path)
    model.load_state_dict(torch.load(weights_path, map_location=device))

    model.eval()
    with torch.no_grad():
        # predict class
        output = torch.squeeze(model(img.to(device))).cpu()
        predict = torch.softmax(output, dim=0)
        predict_cla = torch.argmax(predict).numpy()

    print_res = "class: {}   prob: {:.3}".format(class_indict[str(predict_cla)],
                                                 predict[predict_cla].numpy())
    plt.title(print_res)
    for i in range(len(predict)):
        print("class: {:10}   prob: {:.3}".format(class_indict[str(i)],
                                                  predict[i].numpy()))
    plt.show()


if __name__ == '__main__':
    main()

