SRM Filters and Bilinear Pooling

【Date】2020.01.14

【Topic】SRM Filters and Bilinear Pooling

【CVPR 2018】Learning Rich Features for Image Manipulation Detection proposes obtaining a noise image via SRM filtering, and fusing the two branches (RGB stream and noise stream) at the end with bilinear pooling.

 

1、SRM Filter

SRM comes from《Rich models for steganalysis of digital images》, so it should be short for Steganalysis Rich Model. That paper obtains the noise image with three 5×5 high-pass filter kernels (written out explicitly in the code below):

Given an RGB input, the three filters produce a feature map that still has 3 channels. In Keras this can be implemented with a Conv layer:

import numpy as np
import keras
from keras.layers import Conv2D

def SRMLayer(x):
    # Quantization factors for the three SRM kernels
    q = [4.0, 12.0, 2.0]
    filter1 = [[0, 0, 0, 0, 0],
               [0, -1, 2, -1, 0],
               [0, 2, -4, 2, 0],
               [0, -1, 2, -1, 0],
               [0, 0, 0, 0, 0]]
    filter2 = [[-1, 2, -2, 2, -1],
               [2, -6, 8, -6, 2],
               [-2, 8, -12, 8, -2],
               [2, -6, 8, -6, 2],
               [-1, 2, -2, 2, -1]]
    filter3 = [[0, 0, 0, 0, 0],
               [0, 0, 0, 0, 0],
               [0, 1, -2, 1, 0],
               [0, 0, 0, 0, 0],
               [0, 0, 0, 0, 0]]
    filter1 = np.asarray(filter1, dtype=float) / q[0]
    filter2 = np.asarray(filter2, dtype=float) / q[1]
    filter3 = np.asarray(filter3, dtype=float) / q[2]
    # Stack as (out_channels, in_channels, h, w): each output channel applies
    # one SRM kernel to all three input channels
    filters = np.asarray([[filter1, filter1, filter1],
                          [filter2, filter2, filter2],
                          [filter3, filter3, filter3]])  # shape (3, 3, 5, 5)
    # Keras expects kernels as (h, w, in_channels, out_channels)
    filters = np.transpose(filters, (2, 3, 1, 0))  # shape (5, 5, 3, 3)

    # Fixed, non-trainable convolution initialized with the SRM kernels
    initializer_srm = keras.initializers.Constant(filters)
    output = Conv2D(3, (5, 5), padding='same',
                    kernel_initializer=initializer_srm,
                    use_bias=False, trainable=False)(x)
    return output
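As a sanity check, the same fixed filtering can be sketched in plain NumPy. This is a minimal, hypothetical `srm_filter` helper (not from the paper), using zero padding to mimic the `padding='same'` behavior of the Conv2D layer above:

```python
import numpy as np

def srm_kernels():
    # The three SRM high-pass kernels, scaled by q = [4, 12, 2]
    f1 = np.array([[0, 0, 0, 0, 0],
                   [0, -1, 2, -1, 0],
                   [0, 2, -4, 2, 0],
                   [0, -1, 2, -1, 0],
                   [0, 0, 0, 0, 0]], float) / 4.0
    f2 = np.array([[-1, 2, -2, 2, -1],
                   [2, -6, 8, -6, 2],
                   [-2, 8, -12, 8, -2],
                   [2, -6, 8, -6, 2],
                   [-1, 2, -2, 2, -1]], float) / 12.0
    f3 = np.array([[0, 0, 0, 0, 0],
                   [0, 0, 0, 0, 0],
                   [0, 1, -2, 1, 0],
                   [0, 0, 0, 0, 0],
                   [0, 0, 0, 0, 0]], float) / 2.0
    return [f1, f2, f3]

def srm_filter(img):
    # img: (H, W, 3) float array; returns (H, W, 3).
    # Output channel k applies kernel k to every input channel and sums,
    # matching the (5, 5, 3, 3) Keras kernel built above.
    H, W, C = img.shape
    pad = np.pad(img, ((2, 2), (2, 2), (0, 0)))  # 'same' zero padding
    out = np.zeros((H, W, 3))
    for k, f in enumerate(srm_kernels()):
        for i in range(H):
            for j in range(W):
                out[i, j, k] = np.sum(pad[i:i + 5, j:j + 5, :] * f[:, :, None])
    return out
```

Each kernel sums to zero, so on a constant image the interior of the noise map is zero (only border pixels feel the zero padding) — the filters respond to local residuals, not absolute intensity.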

2、Bilinear Pooling

Bilinear pooling takes the outer product of two CNN feature maps; in the manipulation-detection paper it fuses the RGB stream and the noise stream while preserving spatial information.

It was proposed in《Bilinear CNN Models for Fine-Grained Visual Recognition》for fine-grained image classification, i.e., distinguishing subcategories of the same class, such as gull species X versus gull species Y.

For details see (in Chinese): Jianshu: Bilinear CNNs for Fine-grained Visual Recognition

and the fine-grained paper notes on bilinear models,《Bilinear CNN Models for Fine-Grained Visual Recognition》.

The computation is as follows. Let f denote a feature map, with f(c, x, y) the value at channel c and spatial location (x, y), and write f(l) for the feature vector across channels at location l = (x, y).

The bilinear combination of two features fA and fB at a single location l is the outer product of their feature vectors:

    bilinear(l, fA, fB) = fA(l) · fB(l)^T

Summing the bilinear values over all locations gives the pooled descriptor:

    φ(fA, fB) = Σ_l fA(l) · fB(l)^T

Note that fA and fB must have the same spatial size (S = H×W), while their channel counts M and N may differ. The pooling amounts to taking, for every pair of channels (one from fA, one from fB), the element-wise product of the two feature maps and summing it over all positions, yielding an M×N matrix that serves as the feature, flattened into an M·N-dimensional vector. Concretely: reshape fA and fB to (S, M) and (S, N), transpose the former to (M, S), matrix-multiply the two to get an (M, N) matrix, and flatten it.
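The reshape-and-multiply recipe above can be sketched in a few lines of NumPy (function and variable names here are illustrative):

```python
import numpy as np

def bilinear_pool(fA, fB):
    # fA: (M, H, W), fB: (N, H, W) -- same spatial size, channel counts may differ
    M, H, W = fA.shape
    N = fB.shape[0]
    S = H * W
    A = fA.reshape(M, S)    # (M, S)
    B = fB.reshape(N, S)    # (N, S)
    phi = A @ B.T           # (M, N): sums the per-location outer products over all S positions
    return phi.reshape(-1)  # flatten into an M*N feature vector
```

The single matrix multiplication A·Bᵀ is exactly Σ_l fA(l)·fB(l)ᵀ, since each column of A (resp. B) is the feature vector at one spatial location.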

A PyTorch implementation (from GitHub) is given below:

import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        # VGG16-style convolutional backbone with the final max-pool removed,
        # so a 448x448 input yields a (512, 28, 28) feature map
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=3, stride=1, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, kernel_size=3, stride=1, padding=1),
            # nn.BatchNorm2d(64),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=2, stride=2, padding=0),


            nn.Conv2d(64, 128, kernel_size=3, stride=1, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(128, 128, kernel_size=3, stride=1, padding=1),
            # nn.BatchNorm2d(128),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=2, stride=2, padding=0),



            nn.Conv2d(128, 256, kernel_size=3, stride=1, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(256, 256, kernel_size=3, stride=1, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(256, 256, kernel_size=3, stride=1, padding=1),
            # nn.BatchNorm2d(256),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=2, stride=2, padding=0),


            nn.Conv2d(256, 512, kernel_size=3, stride=1, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(512, 512, kernel_size=3, stride=1, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(512, 512, kernel_size=3, stride=1, padding=1),
            # nn.BatchNorm2d(512),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=2, stride=2, padding=0),

            nn.Conv2d(512, 512, kernel_size=3, stride=1, padding=1),
            nn.ReLU(inplace=True),


            nn.Conv2d(512, 512, kernel_size=3, stride=1, padding=1),
            nn.ReLU(inplace=True),

            nn.Conv2d(512, 512, kernel_size=3, stride=1, padding=1),
            nn.ReLU(inplace=True),
            # nn.MaxPool2d(kernel_size=2, stride=2, padding=0),

        )

        self.classifiers = nn.Sequential(
            nn.Linear(512 ** 2, 200),  # 512*512 bilinear features -> 200 classes
        )

    def forward(self, x):
        # x: (batch, 3, 448, 448) -> conv features: (batch, 512, 28, 28)
        x = self.features(x)
        batch_size = x.size(0)
        x = x.view(batch_size, 512, 28 ** 2)

        # Bilinear pooling of the feature map with itself:
        # (batch, 512, S) @ (batch, S, 512) -> (batch, 512, 512),
        # averaged over the S = 28*28 positions, then flattened
        x = (torch.bmm(x, torch.transpose(x, 1, 2)) / 28 ** 2).view(batch_size, -1)

        # Signed square root followed by L2 normalization
        x = torch.nn.functional.normalize(torch.sign(x) * torch.sqrt(torch.abs(x) + 1e-10))

        x = self.classifiers(x)
        return x

 

Note that after summing the bilinear values over all positions, the code also averages (dividing by the number of positions, 28²), and then applies a signed square root followed by L2 normalization.
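That post-processing step in isolation, sketched in NumPy (a minimal illustration of the same signed-square-root plus L2 normalization used in the forward pass above):

```python
import numpy as np

def normalize_bilinear(x, eps=1e-10):
    # Signed square root: dampens large activations while keeping each sign
    y = np.sign(x) * np.sqrt(np.abs(x) + eps)
    # L2 normalization: puts the descriptor on the unit sphere
    return y / np.linalg.norm(y)
```

The square root counteracts the "burstiness" of the outer-product features (a few very large entries dominating the descriptor), and the L2 step makes descriptors comparable across images.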

 
