PyTorch SSD network structure

Code source: amdegroot/ssd.pytorch

1. The VGG base network

The backbone of the network is VGG. The code that builds the VGG part is below; its inputs are cfg, which lists the output channels of each convolutional layer and marks where the pooling layers go, and i, the number of channels of the input image.

The conv6 and conv7 built at the end correspond to Conv6 and Conv7 in the figure above, two 19*19*1024 feature maps; Conv7 is one of the layers used for prediction.

I have to say, this code is written really nicely.

cfg = [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 'C', 512, 512, 512, 'M',
            512, 512, 512]
 
import torch.nn as nn

def vgg(cfg, i, batch_norm=False):
    layers = []
    in_channels = i
    for v in cfg:
        if v == 'M':
            layers += [nn.MaxPool2d(kernel_size=2, stride=2)]
        elif v == 'C':
            layers += [nn.MaxPool2d(kernel_size=2, stride=2, ceil_mode=True)]
        else:
            conv2d = nn.Conv2d(in_channels, v, kernel_size=3, padding=1)
            if batch_norm:
                layers += [conv2d, nn.BatchNorm2d(v), nn.ReLU(inplace=True)]
            else:
                layers += [conv2d, nn.ReLU(inplace=True)]
            in_channels = v
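    # pool5 is changed from VGG's 2x2-s2 to 3x3-s1 (keeps the 19x19 resolution);
    # conv6 is a dilated (atrous) 3x3 conv and conv7 a 1x1 conv, replacing fc6/fc7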
    pool5 = nn.MaxPool2d(kernel_size=3, stride=1, padding=1)
    conv6 = nn.Conv2d(512, 1024, kernel_size=3, padding=6, dilation=6)
    conv7 = nn.Conv2d(1024, 1024, kernel_size=1)
    layers += [pool5, conv6,
               nn.ReLU(inplace=True), conv7, nn.ReLU(inplace=True)]
    return layers
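
A quick sanity check (not in the repo) of what the function above returns, assuming the cfg defined above and a 3-channel RGB input:

layers = vgg(cfg, 3)
print(len(layers))              # 35 layers: 13 convs with ReLUs, 5 pooling layers, conv6/conv7 with ReLUs
print(layers[21].out_channels)  # 512  -> conv4_3, the 38*38 prediction source used later
print(layers[-2].out_channels)  # 1024 -> conv7, the 19*19 prediction source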

2. Extra convolutional layers

SSD adds extra convolutional layers after the VGG base network. The feature maps produced by these layers shrink level by level, continuing from the 19*19 map in Figure 1 down to 10*10, 5*5, 3*3 and 1*1. Making predictions on these layers is what gives the multi-scale effect.

The construction code is below. Since Conv6 and Conv7 were already built in the vgg function above, the input here is the 19*19*1024 feature map produced by Conv7. cfg lists the output channels of the convolutional layers to build, and 'S' marks a step whose 3*3 convolution uses stride=2 (its output channels are given by the next entry).

cfg = [256, 'S', 512, 128, 'S', 256, 128, 256, 128, 256]
 
def add_extras(cfg, i, batch_norm=False):
    # Extra layers added to VGG for feature scaling
    # 1x1 and 3x3 convolutions alternate
    layers = []
    in_channels = i
    flag = False
    for k, v in enumerate(cfg):
        if in_channels != 'S':
            if v == 'S':
                layers += [nn.Conv2d(in_channels, cfg[k + 1],
                           kernel_size=(1, 3)[flag], stride=2, padding=1)]
            else:
                layers += [nn.Conv2d(in_channels, v, kernel_size=(1, 3)[flag])]
            flag = not flag
        in_channels = v
    return layers
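
A small check (again not in the repo) of what add_extras builds from the cfg above, assuming the 1024-channel Conv7 output as input:

extras = add_extras(cfg, 1024)
print(len(extras))  # 8 convolutions, alternating 1x1 / 3x3 (ReLUs are applied later in SSD.forward)
print([l.out_channels for l in extras[1::2]])  # [512, 256, 256, 256] -- the 3x3 prediction sources

The two 'S' entries become stride-2 3*3 convolutions that shrink the map from 19*19 to 10*10 and then to 5*5; the last two 3*3 convolutions use no padding, shrinking it further to 3*3 and 1*1.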

3. The multibox layers

These are the classification and localization (bounding-box regression) layers, corresponding to the classifier in Figure 1. loc_layers consists of six 3*3 convolutional layers with default_box_num * 4 output channels, and conf_layers consists of six 3*3 convolutional layers with default_box_num * num_classes output channels.

cfg = [4, 6, 6, 6, 4, 4]  # number of default boxes on each prediction layer
 
def multibox(vgg, extra_layers, cfg, num_classes):
    loc_layers = []  # two separate lists for the loc regression layers and the conf prediction layers
    conf_layers = []
    # the convs at indices 21 and -2 of the vgg layer list feed the 38*38*512 and 19*19*1024 prediction feature maps
    vgg_source = [21, -2]
    for k, v in enumerate(vgg_source):
        loc_layers += [nn.Conv2d(vgg[v].out_channels,
                                 cfg[k] * 4, kernel_size=3, padding=1)]
        conf_layers += [nn.Conv2d(vgg[v].out_channels,
                        cfg[k] * num_classes, kernel_size=3, padding=1)]
    for k, v in enumerate(extra_layers[1::2], 2):
        # k starts at 2, since the first two prediction sources come from vgg
        # take every second layer of extra_layers, i.e. the 3*3 conv layers
        loc_layers += [nn.Conv2d(v.out_channels, cfg[k]
                                 * 4, kernel_size=3, padding=1)]
        conf_layers += [nn.Conv2d(v.out_channels, cfg[k]
                                  * num_classes, kernel_size=3, padding=1)]
    return vgg, extra_layers, (loc_layers, conf_layers)
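
A minimal sketch of how the three builders fit together (roughly what build_ssd in ssd.py does); the variable names here are just for illustration:

base_cfg   = [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 'C', 512, 512, 512, 'M',
              512, 512, 512]
extras_cfg = [256, 'S', 512, 128, 'S', 256, 128, 256, 128, 256]
mbox_cfg   = [4, 6, 6, 6, 4, 4]
num_classes = 21  # VOC: 20 object classes + background

base, extras, (loc, conf) = multibox(vgg(base_cfg, 3),
                                     add_extras(extras_cfg, 1024),
                                     mbox_cfg, num_classes)
# inside the SSD module these lists are wrapped in nn.ModuleList so their
# parameters are registered; each of the 6 loc heads outputs mbox * 4 channels
# and each conf head outputs mbox * num_classes channels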

4. Generating the default boxes

This part of the code is in the PriorBox class in prior_box.py.

For each feature map, k default boxes are generated at every location according to the different scales and aspect ratios. The forward method below relies on product from itertools, sqrt from math and torch, all imported at the top of prior_box.py.

    def forward(self):
        """
        Parameter values used in testing:
        feature_maps:  [38, 19, 10, 5, 3, 1]
        steps:         [8, 16, 32, 64, 100, 300]
        min_sizes:     [30, 60, 111, 162, 213, 264]
        max_sizes:     [60, 111, 162, 213, 264, 315]
        aspect_ratios: [[2], [2, 3], [2, 3], [2, 3], [2], [2]]
        """
        mean = []
        # loop over the 6 feature maps
        for k, f in enumerate(self.feature_maps):
            # loop over every location of the feature map
            for i, j in product(range(f), repeat=2):
                f_k = self.image_size / self.steps[k]  # (fractional) feature-map size used to normalize the centers
                # unit center x,y
                # center of the default boxes at this location, normalized to [0, 1]
                # (the default-box center formula given above)
                cx = (j + 0.5) / f_k
                cy = (i + 0.5) / f_k

                # aspect_ratio: 1, a square default box with side length min_size
                # rel size: min_size
                s_k = self.min_sizes[k] / self.image_size
                mean += [cx, cy, s_k, s_k]

                # aspect_ratio: 1, one extra scale for ratio 1
                # rel size: sqrt(s_k * s_(k+1)), where s_(k+1) comes from this layer's max_size
                s_k_prime = sqrt(s_k * (self.max_sizes[k] / self.image_size))
                mean += [cx, cy, s_k_prime, s_k_prime]

                # rest of aspect ratios: w and h for the other ratios follow the paper's formula
                for ar in self.aspect_ratios[k]:
                    mean += [cx, cy, s_k * sqrt(ar), s_k / sqrt(ar)]
                    mean += [cx, cy, s_k / sqrt(ar), s_k * sqrt(ar)]
        # back to torch land: output is an (8732, 4) Tensor
        output = torch.Tensor(mean).view(-1, 4)
        if self.clip:
            output.clamp_(max=1, min=0)  # clamp output to the range [0, 1]
        return output
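
As a quick sanity check (not part of prior_box.py), the total of 8732 default boxes follows directly from the settings in the docstring: each location gets 2 square boxes plus 2 boxes per extra aspect ratio.

feature_maps  = [38, 19, 10, 5, 3, 1]
aspect_ratios = [[2], [2, 3], [2, 3], [2, 3], [2], [2]]
num_priors = sum(f * f * (2 + 2 * len(ar))
                 for f, ar in zip(feature_maps, aspect_ratios))
print(num_priors)  # 5776 + 2166 + 600 + 150 + 36 + 4 = 8732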