Deep Learning Series: Infrared and Visible Image Fusion Based on Adaptive Feature Enhancement and Generator Path Interaction (Part 1)

Preface

Recently I wanted to use some spare time to revisit a deep learning paper from last year on infrared and visible image fusion, whose goal is to generate a single "all-in-one" fused image that contains both the bright thermal targets and the visible texture details. This post comes with a walkthrough of the relevant source code, which should help you understand and use generative adversarial networks (GANs) more quickly. I will try to explain the paper as plainly as possible, so that anyone with a little background in the basic concepts can get started fast.

Overall Network Framework

Let's first look at the overall framework of the paper. Since we are fusing two images, the input end of the generator in this GAN has two image paths (a visible path and an infrared path). Notice that each path takes three stacked images: the visible path gets (IR + 2 VIS) and the infrared path gets (VIS + 2 IR). I call this arrangement a difference-ratio connection: the visible image carries primarily gradient information and secondarily contrast information (and the infrared image the analogous split), so stacking the inputs this way forces the generator to fully extract the same target information from both source images (a minimal construction sketch follows below).

What am I after? I want the generator's output to fool the discriminator until it can no longer tell real from fake. But there is a catch: look at GANMcC or FusionGAN, and you will see that the object edges in their fused images are blurry. So here is my trick: to sharpen the edges of all kinds of objects in the fused image (i.e. make it crisper), I add a "new gadget" to the generator, the adaptive enhancement block. What does it do? It filters gradient information with Gaussian convolution kernels so that object edges in the fused image get sharpened. We will discuss the details below.
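
To make the input arrangement concrete, here is a minimal sketch of stacking the two three-channel inputs along the channel axis (the helper name is my own, not from the original repo); it matches the 3-channel first-layer kernels in the generator code further below:

import tensorflow as tf

def make_dual_path_inputs(ir, vis):
    # ir, vis: single-channel source images, shape [batch, H, W, 1].
    # Infrared path (VIS + 2*IR): one visible channel, two infrared channels.
    input_ir_path = tf.concat([vis, ir, ir], axis=-1)   # [batch, H, W, 3]
    # Visible path (IR + 2*VIS): one infrared channel, two visible channels.
    input_vi_path = tf.concat([ir, vis, vis], axis=-1)  # [batch, H, W, 3]
    return input_ir_path, input_vi_path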

Adaptive Enhancement Block

Now let me explain this adaptive enhancement block, which sharpens and enhances all the object edges in the fused image. No more small talk: the key is something I call a weight map w(x,y), obtained by filtering the source image with Gaussian convolution kernels of different radii:

w(x,y)=G_{r_{1}\times r_{1}}[(\nabla I)^{2}]+G_{r_{2}\times r_{2}}[(\nabla I)^{2}]+G_{r_{3}\times r_{3}}[(\nabla I)^{2}]

That is, a combination of three Gaussian kernels, where \nabla I is the gradient obtained by applying the Laplacian operator to the image, and the three kernel sizes are 3, 5 and 7. Don't ask why this particular combination: because experiments showed it gives the best sharpening effect!

Now let's look at part of the loss functions of the overall framework. The intensity loss and the gradient loss need no elaboration; they are the usual fare. What I want to show you is the "feature enhancement loss" I defined. Since the feature enhancement loss is built on top of the intensity loss, allow me one sentence of "filler" for ease of explanation. The intensity loss is

L_{int}=\tfrac{1}{HW}\sum_{x=1}^{H}\sum_{y=1}^{W}\left[(I_{f_{x,y}}-I_{i_{x,y}})^{2}+(I_{f_{x,y}}-I_{v_{x,y}})^{2}\right]

and now the main event, the feature enhancement loss:

L_{enh}=\tfrac{1}{HW}\sum_{x=1}^{H}\sum_{y=1}^{W}\left[(I_{f_{x,y}}-I_{i_{x,y}})^{2}\cdot w_{i}(x,y)+(I_{f_{x,y}}-I_{v_{x,y}})^{2}\cdot w_{v}(x,y)\right]

Look at these two functions: they are very much alike. The weight map behaves like this: close to the gradient information of the two source images, the weights grow larger; far from it, the weights shrink until they reach 0. This brings a bonus: while sharpening the gradient information, I also suppress noise. Two birds with one stone, more or less.

Since this innovation is relatively important, let me explain the generator loss code below (TensorFlow 1.x). It consists of these parts: an adversarial loss (paired with the discriminator) plus a content loss (intensity + gradient + feature enhancement). The key part of the code below is the feature enhancement loss, which is the main point I want to make:

    with tf.name_scope('g_loss'):
        # Adversarial loss: push both discriminators' scores on the fused image
        # toward soft "real" labels drawn uniformly from [0.7, 1.2].
        self.g_loss_1 = tf.reduce_mean(tf.square(neg_i - tf.random_uniform(shape=[self.batch_size, 1], minval=0.7, maxval=1.2, dtype=tf.float32))) \
                        + tf.reduce_mean(tf.square(neg_v - tf.random_uniform(shape=[self.batch_size, 1], minval=0.7, maxval=1.2, dtype=tf.float32)))
        tf.summary.scalar('g_loss_1', self.g_loss_1)
        # Content loss: intensity terms toward both sources, plus a gradient term
        # toward the element-wise maximum of the two sources' absolute gradients.
        self.g_loss_2 = tf.reduce_mean(tf.square(self.fusion_image - self.labels_ir)) \
                        + 0.4 * tf.reduce_mean(tf.square(self.fusion_image - self.labels_vi)) \
                        + 8 * tf.reduce_mean(tf.square(tf.abs(gradient(self.fusion_image)) - tf.maximum(tf.abs(gradient(self.labels_ir)), tf.abs(gradient(self.labels_vi)))))
        tf.summary.scalar('g_loss_2', self.g_loss_2)
        # Feature enhancement loss: intensity differences re-weighted by the weight
        # map w(x,y), i.e. the sum of 3x3, 5x5 and 7x7 Gaussian filters applied to
        # the squared Laplacian gradient of each source image.
        self.edge_loss = 0.4 * tf.reduce_mean(tf.square(self.fusion_image - self.labels_vi) * (get_gaussian_kernel(gradient_square(self.labels_vi), kernel_size=3) + get_gaussian_kernel(gradient_square(self.labels_vi), kernel_size=5) + get_gaussian_kernel(gradient_square(self.labels_vi), kernel_size=7))) \
                         + tf.reduce_mean(tf.square(self.fusion_image - self.labels_ir) * (get_gaussian_kernel(gradient_square(self.labels_ir), kernel_size=3) + get_gaussian_kernel(gradient_square(self.labels_ir), kernel_size=5) + get_gaussian_kernel(gradient_square(self.labels_ir), kernel_size=7)))
        tf.summary.scalar('edge_loss', self.edge_loss)
        self.g_loss_total = self.g_loss_1 + 100 * self.g_loss_2 + 600 * self.edge_loss
        tf.summary.scalar('loss_g', self.g_loss_total)
    self.saver = tf.train.Saver(max_to_keep=50)
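
Two details worth pointing out: g_loss_1 compares the discriminator outputs (neg_i, neg_v) against soft labels drawn uniformly from [0.7, 1.2] rather than a hard 1, a common label-smoothing trick for stabilizing GAN training; and in the total loss, the content term (x100) and especially the enhancement term (x600) dominate the adversarial term.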

Our focus is the feature enhancement loss (edge_loss in the code); the weight map is produced simply by varying the size of the Gaussian kernel, and that is all there is to its behavior. Let's dig into the weight map code. Friend, pay attention to one point: the weight map acts on top of the gradient, per the formula w(x,y)=G_{r_{1}\times r_{1}}[(\nabla I)^{2}]+G_{r_{2}\times r_{2}}[(\nabla I)^{2}]+G_{r_{3}\times r_{3}}[(\nabla I)^{2}]. You may wonder why the gradient is squared: because the sharpening effect is more pronounced that way!

import math
import tensorflow as tf

def get_gaussian_kernel(input, kernel_size=5, sigma=5):
    # Build a kernel_size x kernel_size grid of (x, y) coordinates.
    x_coord = tf.range(kernel_size)  # 1-D integer tensor [0, 1, ..., kernel_size-1]
    x_grid_before = tf.tile(x_coord, [kernel_size])
    x_grid = tf.reshape(x_grid_before, [kernel_size, kernel_size])
    y_grid = tf.transpose(x_grid)  # transpose to get the y coordinates
    xy_grid_before = tf.stack([x_grid, y_grid], axis=-1)
    xy_grid = tf.cast(xy_grid_before, dtype=tf.float32)

    mean = (kernel_size - 1) / 2.
    variance = sigma ** 2.

    # Calculate the 2-dimensional Gaussian kernel, which is the product of two
    # Gaussian distributions over the two variables x and y.
    gaussian_kernel = (1. / (2. * math.pi * variance)) * tf.exp(-tf.reduce_sum((xy_grid - mean) ** 2., axis=-1) / (2 * variance))

    # Make sure the values in the Gaussian kernel sum to 1.
    gaussian_kernel = gaussian_kernel / tf.reduce_sum(gaussian_kernel)

    # Reshape to a 2-D depthwise convolutional weight and filter the input.
    gaussian_kernel = tf.reshape(gaussian_kernel, [kernel_size, kernel_size, 1, 1])
    gaussian_filter = tf.nn.conv2d(input, gaussian_kernel, strides=[1, 1, 1, 1], padding='SAME')

    return gaussian_filter

def gradient_square(input):
    # Laplacian operator, then element-wise square of the response.
    filter = tf.reshape(tf.constant([[0., 1., 0.], [1., -4., 1.], [0., 1., 0.]]), [3, 3, 1, 1])
    d = tf.nn.conv2d(input, filter, strides=[1, 1, 1, 1], padding='SAME')
    return d ** 2
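
Putting the two helpers together, the weight map w(x,y) of one source image is just the sum of the three Gaussian-filtered squared gradients, exactly as used in edge_loss above; a minimal sketch (the function name weight_map is my own):

def weight_map(img):
    # w(x,y) = G_{3x3}[(grad I)^2] + G_{5x5}[(grad I)^2] + G_{7x7}[(grad I)^2]
    g2 = gradient_square(img)  # squared Laplacian response, [batch, H, W, 1]
    return (get_gaussian_kernel(g2, kernel_size=3)
            + get_gaussian_kernel(g2, kernel_size=5)
            + get_gaussian_kernel(g2, kernel_size=7))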

Generator Architecture

Now let's look at the so-called "generator path interaction structure"; I put the diagram below, take a look. The design simply makes it more convenient to extract all kinds of features from the source images. Inside the generator (the middle part) there is an extra route: between the two main paths I place an interaction convolutional layer, with the interaction kernel size set to 1x1; the input to the interaction layer is the concatenation of the outputs of the previous convolutional layer from both paths.

I think it is now worth dissecting the generator code (to make this post reasonably self-contained). As the figure above shows, there are five layers in total, and the last layer merges the two paths to realize the "all-in-one" fusion:

def fusion_model(self, img_ir, img_vi):
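    # img_ir: infrared-path input (VIS + 2*IR stacked, 3 channels);
    # img_vi: visible-path input (IR + 2*VIS stacked, 3 channels).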
    with tf.variable_scope('fusion_model'):
        with tf.variable_scope('layer1'):
            weights = tf.get_variable("w1", [5, 5, 3, 16], initializer=tf.truncated_normal_initializer(stddev=1e-3))
            weights = weights_spectral_norm(weights)
            bias = tf.get_variable("b1", [16], initializer=tf.constant_initializer(0.0))
            conv1_ir = tf.contrib.layers.batch_norm(tf.nn.conv2d(img_ir, weights, strides=[1, 1, 1, 1], padding='SAME') + bias, decay=0.9, updates_collections=None, epsilon=1e-5, scale=True)
            conv1_ir = lrelu(conv1_ir)
        with tf.variable_scope('layer1_vi'):
            weights = tf.get_variable("w1_vi", [5, 5, 3, 16], initializer=tf.truncated_normal_initializer(stddev=1e-3))
            weights = weights_spectral_norm(weights)
            bias = tf.get_variable("b1_vi", [16], initializer=tf.constant_initializer(0.0))
            conv1_vi = tf.contrib.layers.batch_norm(tf.nn.conv2d(img_vi, weights, strides=[1, 1, 1, 1], padding='SAME') + bias, decay=0.9, updates_collections=None, epsilon=1e-5, scale=True)
            conv1_vi = lrelu(conv1_vi)

        conv_1_midle = tf.concat([conv1_ir,conv1_vi], axis=-1)
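        # Concatenation of the two paths' layer-1 outputs (note: conv_1_midle is not consumed by any later layer in this snippet).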

        ####################  Layer2  ###########################
        with tf.variable_scope('layer2'):
            weights = tf.get_variable("w2", [3, 3, 16, 16], initializer=tf.truncated_normal_initializer(stddev=1e-3))
            weights = weights_spectral_norm(weights)
            bias = tf.get_variable("b2", [16], initializer=tf.constant_initializer(0.0))
            conv2_ir = tf.contrib.layers.batch_norm(tf.nn.conv2d(conv1_ir, weights, strides=[1, 1, 1, 1], padding='SAME') + bias, decay=0.9, updates_collections=None, epsilon=1e-5, scale=True)
            conv2_ir = lrelu(conv2_ir)
        with tf.variable_scope('layer2_vi'):
            weights = tf.get_variable("w2_vi", [3, 3, 16, 16], initializer=tf.truncated_normal_initializer(stddev=1e-3))
            weights = weights_spectral_norm(weights)
            bias = tf.get_variable("b2_vi", [16], initializer=tf.constant_initializer(0.0))
            conv2_vi = tf.contrib.layers.batch_norm(tf.nn.conv2d(conv1_vi, weights, strides=[1, 1, 1, 1], padding='SAME') + bias, decay=0.9, updates_collections=None, epsilon=1e-5, scale=True)
            conv2_vi = lrelu(conv2_vi)

        conv_2_midle = tf.concat([conv2_ir, conv2_vi], axis=-1)
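        # Interaction: the 1x1 'layer2_3' convs below feed each path with this concatenation of both paths' layer-2 outputs.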

        with tf.variable_scope('layer2_3'):
            weights = tf.get_variable("w2_3", [1, 1, 32, 16], initializer=tf.truncated_normal_initializer(stddev=1e-3))
            weights = weights_spectral_norm(weights)
            bias = tf.get_variable("b2_3", [16], initializer=tf.constant_initializer(0.0))
            conv2_3_ir = tf.contrib.layers.batch_norm(tf.nn.conv2d(conv_2_midle, weights, strides=[1, 1, 1, 1], padding='SAME') + bias, decay=0.9, updates_collections=None, epsilon=1e-5, scale=True)
            conv2_3_ir = lrelu(conv2_3_ir)
        with tf.variable_scope('layer2_3_vi'):
            weights = tf.get_variable("w2_3_vi", [1, 1, 32, 16], initializer=tf.truncated_normal_initializer(stddev=1e-3))
            weights = weights_spectral_norm(weights)
            bias = tf.get_variable("b2_3_vi", [16], initializer=tf.constant_initializer(0.0))
            conv2_3_vi = tf.contrib.layers.batch_norm(tf.nn.conv2d(conv_2_midle, weights, strides=[1, 1, 1, 1], padding='SAME') + bias, decay=0.9, updates_collections=None, epsilon=1e-5, scale=True)
            conv2_3_vi = lrelu(conv2_3_vi)

        ####################  Layer3  ###########################
        conv_12_ir = tf.concat([conv1_ir, conv2_ir, conv2_3_ir], axis=-1)
        conv_12_vi = tf.concat([conv1_vi, conv2_vi, conv2_3_vi], axis=-1)

        with tf.variable_scope('layer3'):
            weights = tf.get_variable("w3", [3, 3, 48, 16], initializer=tf.truncated_normal_initializer(stddev=1e-3))
            weights = weights_spectral_norm(weights)
            bias = tf.get_variable("b3", [16], initializer=tf.constant_initializer(0.0))
            conv3_ir = tf.contrib.layers.batch_norm(tf.nn.conv2d(conv_12_ir, weights, strides=[1, 1, 1, 1], padding='SAME') + bias, decay=0.9, updates_collections=None, epsilon=1e-5, scale=True)
            conv3_ir = lrelu(conv3_ir)
        with tf.variable_scope('layer3_vi'):
            weights = tf.get_variable("w3_vi", [3, 3, 48, 16], initializer=tf.truncated_normal_initializer(stddev=1e-3))
            weights = weights_spectral_norm(weights)
            bias = tf.get_variable("b3_vi", [16], initializer=tf.constant_initializer(0.0))
            conv3_vi = tf.contrib.layers.batch_norm(tf.nn.conv2d(conv_12_vi, weights, strides=[1, 1, 1, 1], padding='SAME') + bias, decay=0.9, updates_collections=None, epsilon=1e-5, scale=True)
            conv3_vi = lrelu(conv3_vi)

        conv_3_midle = tf.concat([conv3_ir, conv3_vi], axis=-1)
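        # Second interaction point: feeds the 1x1 'layer3_4' convs of both paths.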

        with tf.variable_scope('layer3_4'):
            weights = tf.get_variable("w3_4", [1, 1, 32, 16], initializer=tf.truncated_normal_initializer(stddev=1e-3))
            weights = weights_spectral_norm(weights)
            bias = tf.get_variable("b3_4", [16], initializer=tf.constant_initializer(0.0))
            conv3_4_ir = tf.contrib.layers.batch_norm(tf.nn.conv2d(conv_3_midle, weights, strides=[1, 1, 1, 1], padding='SAME') + bias, decay=0.9, updates_collections=None, epsilon=1e-5, scale=True)
            conv3_4_ir = lrelu(conv3_4_ir)
        with tf.variable_scope('layer3_4_vi'):
            weights = tf.get_variable("w3_4_vi", [1, 1, 32, 16],initializer=tf.truncated_normal_initializer(stddev=1e-3))
            weights = weights_spectral_norm(weights)
            bias = tf.get_variable("b3_4_vi", [16], initializer=tf.constant_initializer(0.0))
            conv3_4_vi = tf.contrib.layers.batch_norm(tf.nn.conv2d(conv_3_midle, weights, strides=[1, 1, 1, 1], padding='SAME') + bias, decay=0.9, updates_collections=None, epsilon=1e-5, scale=True)
            conv3_4_vi = lrelu(conv3_4_vi)

        ####################  Layer4  ###########################
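        # Dense connections: each path's layer 4 sees its own layer-1, layer-2 and layer-3 outputs plus the interaction output (4 x 16 = 64 channels).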
        conv_123_ir = tf.concat([conv1_ir, conv2_ir, conv3_ir, conv3_4_ir], axis=-1)
        conv_123_vi = tf.concat([conv1_vi, conv2_vi, conv3_vi, conv3_4_vi], axis=-1)

        with tf.variable_scope('layer4'):
            weights = tf.get_variable("w4", [3, 3, 64, 16], initializer=tf.truncated_normal_initializer(stddev=1e-3))
            weights = weights_spectral_norm(weights)
            bias = tf.get_variable("b4", [16], initializer=tf.constant_initializer(0.0))
            conv4_ir = tf.contrib.layers.batch_norm(tf.nn.conv2d(conv_123_ir, weights, strides=[1, 1, 1, 1], padding='SAME') + bias, decay=0.9, updates_collections=None, epsilon=1e-5, scale=True)
            conv4_ir = lrelu(conv4_ir)
        with tf.variable_scope('layer4_vi'):
            weights = tf.get_variable("w4_vi", [3, 3, 64, 16], initializer=tf.truncated_normal_initializer(stddev=1e-3))
            weights = weights_spectral_norm(weights)
            bias = tf.get_variable("b4_vi", [16], initializer=tf.constant_initializer(0.0))
            conv4_vi = tf.contrib.layers.batch_norm(tf.nn.conv2d(conv_123_vi, weights, strides=[1, 1, 1, 1], padding='SAME') + bias, decay=0.9, updates_collections=None, epsilon=1e-5, scale=True)
            conv4_vi = lrelu(conv4_vi)

        conv_ir_vi = tf.concat([conv1_ir, conv1_vi, conv2_ir, conv2_vi, conv3_ir, conv3_vi, conv4_ir, conv4_vi], axis=-1)
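        # Final fusion: concatenate all eight 16-channel feature maps from both paths (128 channels) for the 1x1 output conv.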

        with tf.variable_scope('layer5'):
            weights = tf.get_variable("w5", [1, 1, 128, 1], initializer=tf.truncated_normal_initializer(stddev=1e-3))
            weights = weights_spectral_norm(weights)
            bias = tf.get_variable("b5", [1], initializer=tf.constant_initializer(0.0))
            conv5_ir = tf.nn.conv2d(conv_ir_vi, weights, strides=[1, 1, 1, 1], padding='SAME') + bias
            conv5_ir = tf.nn.tanh(conv5_ir)
    return conv5_ir
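
Note that the final layer skips batch normalization and ends with tanh, so the fused output lies in [-1, 1]. Also, every convolution weight passes through weights_spectral_norm, i.e. spectral normalization, a common way to stabilize GAN training.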

As you can see from the generator code, the biggest difference from FusionGAN or GANMcC is the interaction convolutional layers I mentioned above. With the interaction, my aim is to extract the various kinds of information in the two source images more comprehensively, which is how I get the "all-in-one" fused image I am after.

Summary

In this Part 1, what I mainly wanted to share is the principle and implementation code of adaptive feature enhancement, plus the principle and implementation code of generator path interaction, hoping to give deep learning beginners some ideas in the image fusion direction. This also covers the use of GANs: the GAN is just a shell, and what really matters is the design of the loss functions inside. In Part 2 I will walk through the complete code and put it on GitHub so that friends working on deep-learning-based image fusion can study it together.
