STDFusionNet: An Infrared and Visible Image Fusion Network Based on Salient Target Detection

I. Abstract

Preserve the thermal targets of the infrared image and the texture details of the visible image; a salient target mask is used to annotate the regions of interest in the infrared image. The salient target mask is used only in the training phase.

II. Related Work

(1) Traditional fusion methods

Multi-scale transform: decompose the source images into a series of multi-scale representations; fuse these representations according to fusion rules; apply the corresponding inverse transform to obtain the fused image.

Saliency-based methods: weight calculation (a multi-scale transform decomposes the images into base and detail layers; saliency maps of the base and detail layers are computed and then converted into weight maps) and salient target extraction (which preserves the completeness and pixel intensity of the targets).

Sparse representation: an over-complete dictionary is learned from a large number of images; sparse representation coefficients are obtained with this learned dictionary; the fused sparse coefficients are then used to reconstruct the image.

Optimization: the desired fusion result is produced by minimizing an objective function. (The objective should account for both intensity fidelity and texture-detail preservation; the former constrains the fused result to have the desired brightness, while the latter constrains it to have the desired texture details.)

Hybrid: combines the advantages of the methods above.

(2) Deep-learning-based image fusion

A pre-trained CNN measures the activity level of the source images and generates weight maps; the process is built on an image pyramid. (Because the CNN is not trained specifically for image fusion, fusion performance is limited.)

Autoencoder-based deep learning methods: an autoencoder is pre-trained for feature extraction and image reconstruction (the feature fusion itself still relies on traditional hand-crafted rules).

DenseFuse: introduces dense blocks into the encoder and decoder; the fusion layer uses an L1-norm strategy and an addition strategy.

NestFuse: noting that a network without downsampling cannot extract multi-scale features from the source images, it builds on a nest-connection network (spatial attention and channel attention models are used to extract the information to be fused; the attention mechanism is not learnable).

The unsupervised distribution-estimation ability of GANs makes them well suited to the image fusion task.

DDcGAN: dual discriminators, with both the infrared and the visible image taking part in the adversarial game (hard to train).

A general fusion framework: two convolutional networks extract the features.

The proportions of gradient and intensity are maintained by constructing a loss function and adjusting the weights of its terms to adapt to different fusion tasks.

U2Fusion:

The proposed method: a salient target mask is introduced to guide the network; detection of the salient thermal targets and preservation of the background texture are achieved by enforcing consistency of intensity and gradient within the corresponding regions. Salient target features are selectively extracted from the infrared image, and background texture features are selectively extracted from the visible image, so that salient target detection and fusion of the desired information are realized implicitly.

Pseudo-Siamese network:

  • Network1 and Network2 can be different neural networks (e.g., an LSTM on one side and a CNN on the other) or networks of the same type
  • Their weights are not shared
  • A pseudo-Siamese network is suited to handling two inputs that differ to some extent (see the sketch after this list)
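
A minimal sketch of the weight non-sharing described above (my own illustration, not code from the paper): the two branches share the same layer definition, but because they are built in different variable scopes, TensorFlow creates separate, independently trained weights for each branch.

    # Pseudo-Siamese sketch (illustrative only): identical layer definition,
    # separate variable scopes -> the weights are NOT shared between branches.
    import tensorflow as tf
    tf.compat.v1.disable_eager_execution()

    def conv_branch(x, scope_name):
        with tf.compat.v1.variable_scope(scope_name):
            weights = tf.compat.v1.get_variable("w", [3, 3, 1, 16],
                                                initializer=tf.compat.v1.truncated_normal_initializer(stddev=1e-3))
            bias = tf.compat.v1.get_variable("b", [16], initializer=tf.constant_initializer(0.0))
            return tf.nn.leaky_relu(tf.nn.conv2d(x, weights, strides=[1, 1, 1, 1], padding='SAME') + bias)

    ir_image = tf.compat.v1.placeholder(tf.float32, [None, None, None, 1])
    vi_image = tf.compat.v1.placeholder(tf.float32, [None, None, None, 1])
    ir_feature = conv_branch(ir_image, 'ir_branch')  # creates ir_branch/w, ir_branch/b
    vi_feature = conv_branch(vi_image, 'vi_branch')  # creates vi_branch/w, vi_branch/b, independent of ir_branch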

  Loss function:

It determines what kinds of information the fused image preserves and in what proportions. It consists of a pixel loss and a gradient loss: the pixel loss constrains the pixel intensities of the fused image to be consistent with the source images, while the gradient loss forces the fused image to contain more detailed information.

Pixel loss = salient pixel loss + background pixel loss.

Gradient loss = salient gradient loss + background gradient loss.
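
A minimal sketch of how these four terms could be written in TensorFlow (my own reading of the description, not the authors' released code; the L1 penalty, the Laplacian-style gradient operator, and the weight alpha are assumptions):

    import tensorflow as tf

    def gradient(x):
        # Laplacian-style gradient operator; the operator actually used in the paper may differ.
        kernel = tf.reshape(tf.constant([[0., 1., 0.], [1., -4., 1.], [0., 1., 0.]]), [3, 3, 1, 1])
        return tf.nn.conv2d(x, kernel, strides=[1, 1, 1, 1], padding='SAME')

    def std_fusion_loss(fused, ir, vi, mask, alpha=1.0):
        bg = 1.0 - mask                                                              # background mask = inverted salient mask
        pixel_sal = tf.reduce_mean(tf.abs(mask * (fused - ir)))                      # salient pixel loss (follow the infrared image)
        pixel_bg = tf.reduce_mean(tf.abs(bg * (fused - vi)))                         # background pixel loss (follow the visible image)
        grad_sal = tf.reduce_mean(tf.abs(mask * (gradient(fused) - gradient(ir))))   # salient gradient loss
        grad_bg = tf.reduce_mean(tf.abs(bg * (gradient(fused) - gradient(vi))))      # background gradient loss
        return (pixel_sal + pixel_bg) + alpha * (grad_sal + grad_bg)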

III. Network Architecture

The loss function determines what kinds of information the fused image preserves and in what proportions.

The mask is only needed to construct the loss function during training; it is not needed in the testing phase.

(1) Feature extraction network (separate extraction networks for the infrared and the visible image)

    def vi_feature_extraction_network(self, vi_image):
        with tf.compat.v1.variable_scope('vi_extraction_network'):  # open the variable scope for the visible branch
            with tf.compat.v1.variable_scope('conv1'):
                weights = tf.compat.v1.get_variable("w", [5, 5, 1, 16],  # 5x5 kernel, 1 input channel, 16 output channels
                                                    initializer=tf.truncated_normal_initializer(stddev=1e-3))  # convolution kernel weight variable
                #weights = weights_spectral_norm(weights)
                bias = tf.compat.v1.get_variable("b", [16], initializer=tf.constant_initializer(0.0))  # 16 bias elements, initialized to 0
                conv1 = tf.nn.conv2d(vi_image, weights, strides=[1, 1, 1, 1], padding='SAME') + bias
                # conv1 = tf.contrib.layers.batch_norm(conv1, decay=0.9, updates_collections=None, epsilon=1e-5, scale=True)
                conv1 = tf.nn.leaky_relu(conv1)
            block1_input = conv1
            # state size: 16
            with tf.compat.v1.variable_scope('block1'):  # residual block
                with tf.compat.v1.variable_scope('conv1'):
                    weights = tf.compat.v1.get_variable("w", [1, 1, 16, 16],
                                                        initializer=tf.truncated_normal_initializer(stddev=1e-3))
                    #weights = weights_spectral_norm(weights)
                    bias = tf.compat.v1.get_variable("b", [16], initializer=tf.constant_initializer(0.0))
                    conv1 = tf.nn.conv2d(block1_input, weights, strides=[1, 1, 1, 1], padding='SAME') + bias
                    conv1 = tf.nn.leaky_relu(conv1)

                with tf.compat.v1.variable_scope('conv2'):
                    weights = tf.compat.v1.get_variable("w", [3, 3, 16, 16],
                                                        initializer=tf.truncated_normal_initializer(stddev=1e-3))
                    #weights = weights_spectral_norm(weights)
                    bias = tf.compat.v1.get_variable("b", [16], initializer=tf.constant_initializer(0.0))
                    conv2 = tf.nn.conv2d(conv1, weights, strides=[1, 1, 1, 1], padding='SAME') + bias
                    conv2 = tf.nn.leaky_relu(conv2)
                with tf.compat.v1.variable_scope('conv3'):
                    weights = tf.compat.v1.get_variable("w", [1, 1, 16, 16],
                                                        initializer=tf.truncated_normal_initializer(stddev=1e-3))
                    #weights = weights_spectral_norm(weights)
                    bias = tf.compat.v1.get_variable("b", [16], initializer=tf.constant_initializer(0.0))
                    conv3 = tf.nn.conv2d(conv2, weights, strides=[1, 1, 1, 1], padding='SAME') + bias

                block1_output = tf.nn.leaky_relu(conv3 + block1_input)
            block2_input = block1_output
            with tf.compat.v1.variable_scope('block2'):
                with tf.compat.v1.variable_scope('conv1'):
                    weights = tf.compat.v1.get_variable("w", [1, 1, 16, 16],
                                                        initializer=tf.truncated_normal_initializer(stddev=1e-3))
                    #weights = weights_spectral_norm(weights)
                    bias = tf.compat.v1.get_variable("b", [16], initializer=tf.constant_initializer(0.0))
                    conv1 = tf.nn.conv2d(block2_input, weights, strides=[1, 1, 1, 1], padding='SAME') + bias
                    conv1 = tf.nn.leaky_relu(conv1)

                with tf.compat.v1.variable_scope('conv2'):
                    weights = tf.compat.v1.get_variable("w", [3, 3, 16, 16],
                                                        initializer=tf.truncated_normal_initializer(stddev=1e-3))
                    #weights = weights_spectral_norm(weights)
                    bias = tf.compat.v1.get_variable("b", [16], initializer=tf.constant_initializer(0.0))
                    conv2 = tf.nn.conv2d(conv1, weights, strides=[1, 1, 1, 1], padding='SAME') + bias
                    conv2 = tf.nn.leaky_relu(conv2)
                with tf.compat.v1.variable_scope('conv3'):
                    weights = tf.compat.v1.get_variable("w", [1, 1, 16, 32],
                                                        initializer=tf.truncated_normal_initializer(stddev=1e-3))
                    #weights = weights_spectral_norm(weights)
                    bias = tf.compat.v1.get_variable("b", [32], initializer=tf.constant_initializer(0.0))
                    conv3 = tf.nn.conv2d(conv2, weights, strides=[1, 1, 1, 1], padding='SAME') + bias
                with tf.compat.v1.variable_scope('identity_conv'):
                    weights = tf.compat.v1.get_variable("w", [1, 1, 16, 32],
                                                        initializer=tf.truncated_normal_initializer(stddev=1e-3))
                    #weights = weights_spectral_norm(weights)
                    identity_conv = tf.nn.conv2d(block2_input, weights, strides=[1, 1, 1, 1], padding='SAME')
                block2_output = tf.nn.leaky_relu(conv3 + identity_conv)
                block3_input = block2_output
            with tf.compat.v1.variable_scope('block3'):
                with tf.compat.v1.variable_scope('conv1'):
                    weights = tf.compat.v1.get_variable("w", [1, 1, 32, 32],
                                                        initializer=tf.truncated_normal_initializer(stddev=1e-3))
                    #weights = weights_spectral_norm(weights)
                    bias = tf.compat.v1.get_variable("b", [32], initializer=tf.constant_initializer(0.0))
                    conv1 = tf.nn.conv2d(block3_input, weights, strides=[1, 1, 1, 1], padding='SAME') + bias
                    conv1 = tf.nn.leaky_relu(conv1)

                with tf.compat.v1.variable_scope('conv2'):
                    weights = tf.compat.v1.get_variable("w", [3, 3, 32, 32],
                                                        initializer=tf.truncated_normal_initializer(stddev=1e-3))
                    #weights = weights_spectral_norm(weights)
                    bias = tf.compat.v1.get_variable("b", [32], initializer=tf.constant_initializer(0.0))
                    conv2 = tf.nn.conv2d(conv1, weights, strides=[1, 1, 1, 1], padding='SAME') + bias
                    conv2 = tf.nn.leaky_relu(conv2)
                with tf.compat.v1.variable_scope('conv3'):
                    weights = tf.compat.v1.get_variable("w", [1, 1, 32, 64],
                                                        initializer=tf.truncated_normal_initializer(stddev=1e-3))
                    #weights = weights_spectral_norm(weights)
                    bias = tf.compat.v1.get_variable("b", [64], initializer=tf.constant_initializer(0.0))
                    conv3 = tf.nn.conv2d(conv2, weights, strides=[1, 1, 1, 1], padding='SAME') + bias
                with tf.compat.v1.variable_scope('identity_conv'):
                    weights = tf.compat.v1.get_variable("w", [1, 1, 32, 64],
                                                        initializer=tf.truncated_normal_initializer(stddev=1e-3))
                    #weights = weights_spectral_norm(weights)
                    identity_conv = tf.nn.conv2d(block3_input, weights, strides=[1, 1, 1, 1], padding='SAME')
                block3_output = tf.nn.leaky_relu(conv3 + identity_conv)
                encoding_feature = block3_output
        return encoding_feature

(2) Feature reconstruction network

  def feature_reconstruction_network(self, feature):
        with tf.compat.v1.variable_scope('reconstruction_network'):
            block1_input = feature
            with tf.compat.v1.variable_scope('block1'):
                with tf.compat.v1.variable_scope('conv1'):
                    weights = tf.compat.v1.get_variable("w", [1, 1, 128, 128],
                                                        initializer=tf.truncated_normal_initializer(stddev=1e-3))
                    #weights = weights_spectral_norm(weights)
                    bias = tf.compat.v1.get_variable("b", [128], initializer=tf.constant_initializer(0.0))
                    conv1 = tf.nn.conv2d(block1_input, weights, strides=[1, 1, 1, 1], padding='SAME') + bias
                    conv1 = tf.nn.leaky_relu(conv1)

                with tf.compat.v1.variable_scope('conv2'):
                    weights = tf.compat.v1.get_variable("w", [3, 3, 128, 128],
                                                        initializer=tf.truncated_normal_initializer(stddev=1e-3))
                    #weights = weights_spectral_norm(weights)
                    bias = tf.compat.v1.get_variable("b", [128], initializer=tf.constant_initializer(0.0))
                    conv2 = tf.nn.conv2d(conv1, weights, strides=[1, 1, 1, 1], padding='SAME') + bias
                    conv2 = tf.nn.leaky_relu(conv2)
                with tf.compat.v1.variable_scope('conv3'):
                    weights = tf.compat.v1.get_variable("w", [1, 1, 128, 64],
                                                        initializer=tf.truncated_normal_initializer(stddev=1e-3))
                    #weights = weights_spectral_norm(weights)
                    bias = tf.compat.v1.get_variable("b", [64], initializer=tf.constant_initializer(0.0))
                    conv3 = tf.nn.conv2d(conv2, weights, strides=[1, 1, 1, 1], padding='SAME') + bias
                with tf.compat.v1.variable_scope('identity_conv'):
                    weights = tf.compat.v1.get_variable("w", [1, 1, 128, 64],
                                                        initializer=tf.truncated_normal_initializer(stddev=1e-3))
                    #weights = weights_spectral_norm(weights)
                    identity_conv = tf.nn.conv2d(block1_input, weights, strides=[1, 1, 1, 1], padding='SAME')
                block1_output = tf.nn.elu(conv3 + identity_conv)
            block2_input = block1_output
            with tf.compat.v1.variable_scope('block2'):
                with tf.compat.v1.variable_scope('conv1'):
                    weights = tf.compat.v1.get_variable("w", [1, 1, 64, 64],
                                                        initializer=tf.truncated_normal_initializer(stddev=1e-3))
                    #weights = weights_spectral_norm(weights)
                    bias = tf.compat.v1.get_variable("b", [64], initializer=tf.constant_initializer(0.0))
                    conv1 = tf.nn.conv2d(block2_input, weights, strides=[1, 1, 1, 1], padding='SAME') + bias
                    conv1 = tf.nn.leaky_relu(conv1)

                with tf.compat.v1.variable_scope('conv2'):
                    weights = tf.compat.v1.get_variable("w", [3, 3, 64, 64],
                                                        initializer=tf.truncated_normal_initializer(stddev=1e-3))
                    #weights = weights_spectral_norm(weights)
                    bias = tf.compat.v1.get_variable("b", [64], initializer=tf.constant_initializer(0.0))
                    conv2 = tf.nn.conv2d(conv1, weights, strides=[1, 1, 1, 1], padding='SAME') + bias
                    conv2 = tf.nn.leaky_relu(conv2)
                with tf.compat.v1.variable_scope('conv3'):
                    weights = tf.compat.v1.get_variable("w", [1, 1, 64, 32],
                                                        initializer=tf.truncated_normal_initializer(stddev=1e-3))
                    #weights = weights_spectral_norm(weights)
                    bias = tf.compat.v1.get_variable("b", [32], initializer=tf.constant_initializer(0.0))
                    conv3 = tf.nn.conv2d(conv2, weights, strides=[1, 1, 1, 1], padding='SAME') + bias
                with tf.compat.v1.variable_scope('identity_conv'):
                    weights = tf.compat.v1.get_variable("w", [1, 1, 64, 32],
                                                        initializer=tf.truncated_normal_initializer(stddev=1e-3))
                    #weights = weights_spectral_norm(weights)
                    identity_conv = tf.nn.conv2d(block2_input, weights, strides=[1, 1, 1, 1], padding='SAME')
                block2_output = tf.nn.elu(conv3 + identity_conv)
                block3_input = block2_output
            with tf.compat.v1.variable_scope('block3'):
                with tf.compat.v1.variable_scope('conv1'):
                    weights = tf.compat.v1.get_variable("w", [1, 1, 32, 32],
                                                        initializer=tf.truncated_normal_initializer(stddev=1e-3))
                    #weights = weights_spectral_norm(weights)
                    bias = tf.compat.v1.get_variable("b", [32], initializer=tf.constant_initializer(0.0))
                    conv1 = tf.nn.conv2d(block3_input, weights, strides=[1, 1, 1, 1], padding='SAME') + bias
                    conv1 = tf.nn.leaky_relu(conv1)

                with tf.compat.v1.variable_scope('conv2'):
                    weights = tf.compat.v1.get_variable("w", [3, 3, 32, 32],
                                                        initializer=tf.truncated_normal_initializer(stddev=1e-3))
                    #weights = weights_spectral_norm(weights)
                    bias = tf.compat.v1.get_variable("b", [32], initializer=tf.constant_initializer(0.0))
                    conv2 = tf.nn.conv2d(conv1, weights, strides=[1, 1, 1, 1], padding='SAME') + bias
                    conv2 = tf.nn.leaky_relu(conv2)
                with tf.compat.v1.variable_scope('conv3'):
                    weights = tf.compat.v1.get_variable("w", [1, 1, 32, 16],
                                                        initializer=tf.truncated_normal_initializer(stddev=1e-3))
                    #weights = weights_spectral_norm(weights)
                    bias = tf.compat.v1.get_variable("b", [16], initializer=tf.constant_initializer(0.0))
                    conv3 = tf.nn.conv2d(conv2, weights, strides=[1, 1, 1, 1], padding='SAME') + bias
                with tf.compat.v1.variable_scope('identity_conv'):
                    weights = tf.compat.v1.get_variable("w", [1, 1, 32, 16],
                                                        initializer=tf.truncated_normal_initializer(stddev=1e-3))
                    #weights = weights_spectral_norm(weights)
                    identity_conv = tf.nn.conv2d(block3_input, weights, strides=[1, 1, 1, 1], padding='SAME')
                block3_output = tf.nn.leaky_relu(conv3 + identity_conv)
                block4_input = block3_output
            with tf.compat.v1.variable_scope('block4'):
                with tf.compat.v1.variable_scope('conv1'):
                    weights = tf.compat.v1.get_variable("w", [1, 1, 16, 16],
                                                        initializer=tf.truncated_normal_initializer(stddev=1e-3))
                    #weights = weights_spectral_norm(weights)
                    bias = tf.compat.v1.get_variable("b", [16], initializer=tf.constant_initializer(0.0))
                    conv1 = tf.nn.conv2d(block4_input, weights, strides=[1, 1, 1, 1], padding='SAME') + bias
                    conv1 = tf.nn.leaky_relu(conv1)

                with tf.compat.v1.variable_scope('conv2'):
                    weights = tf.compat.v1.get_variable("w", [3, 3, 16, 16],
                                                        initializer=tf.truncated_normal_initializer(stddev=1e-3))
                    #weights = weights_spectral_norm(weights)
                    bias = tf.compat.v1.get_variable("b", [16], initializer=tf.constant_initializer(0.0))
                    conv2 = tf.nn.conv2d(conv1, weights, strides=[1, 1, 1, 1], padding='SAME') + bias
                    conv2 = tf.nn.leaky_relu(conv2)
                with tf.compat.v1.variable_scope('conv3'):
                    weights = tf.compat.v1.get_variable("w", [1, 1, 16, 1],
                                                        initializer=tf.truncated_normal_initializer(stddev=1e-3))
                    #weights = weights_spectral_norm(weights)
                    bias = tf.compat.v1.get_variable("b", [1], initializer=tf.constant_initializer(0.0))
                    conv3 = tf.nn.conv2d(conv2, weights, strides=[1, 1, 1, 1], padding='SAME') + bias
                with tf.compat.v1.variable_scope('identity_conv'):
                    weights = tf.compat.v1.get_variable("w", [1, 1, 16, 1],
                                                        initializer=tf.truncated_normal_initializer(stddev=1e-3))
                    #weights = weights_spectral_norm(weights)
                    identity_conv = tf.nn.conv2d(block4_input, weights, strides=[1, 1, 1, 1], padding='SAME')
                block4_output = tf.nn.tanh(conv3 + identity_conv)
                fusion_image = block4_output
        return fusion_image

Although the two input images are processed with the same network structure, their properties differ, so the parameters of the two branches are trained independently.

Feature reconstruction: the activation function of the last layer is Tanh (as in block4 of the code above), so the fused output lies in [-1, 1].
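
Reading the two networks together: the reconstruction network expects 128 input channels, which matches the concatenation of two 64-channel feature maps. A sketch of the overall forward pass under that assumption (the ir_feature_extraction_network name is hypothetical; only the visible branch is excerpted above):

    def fusion_forward(self, ir_image, vi_image):
        # Assumes an infrared branch symmetric to vi_feature_extraction_network (hypothetical name).
        ir_feature = self.ir_feature_extraction_network(ir_image)   # [N, H, W, 64]
        vi_feature = self.vi_feature_extraction_network(vi_image)   # [N, H, W, 64]
        feature = tf.concat([ir_feature, vi_feature], axis=-1)      # [N, H, W, 128], the reconstruction input
        return self.feature_reconstruction_network(feature)         # tanh output in [-1, 1]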

The mask highlights the salient regions of the infrared image: the salient regions are annotated with the labelme toolbox and converted into a binary salient target mask. Inverting the salient target mask gives the background mask. The salient target mask and the texture background mask are then multiplied pixel-wise with the infrared and visible images to obtain the source salient target region and the source background texture region, respectively. Multiplying the fused image pixel-wise with the salient target mask and the texture background mask gives the fused salient target region and the fused background region. Finally, the source salient region, source background region, fused salient region, and fused background region are used to construct the specific loss functions that guide the network to implicitly perform salient target detection and information fusion (see the sketch below).
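
A small sketch of the mask bookkeeping described above, assuming the labelme annotation has already been exported as a grayscale PNG (the file names and the 0.5 threshold are illustrative, not from the paper):

    import cv2
    import numpy as np

    # Load the exported annotation and binarize it into the salient target mask.
    mask = (cv2.imread('salient_mask.png', cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.0 > 0.5).astype(np.float32)
    bg_mask = 1.0 - mask                              # inverted mask = background (texture) mask

    ir = cv2.imread('ir.png', cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.0
    vi = cv2.imread('vi.png', cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.0

    ir_salient_region = mask * ir                     # source salient target region
    vi_background_region = bg_mask * vi               # source background texture region
    # Applying the same pixel-wise products to the fused image yields the fused salient
    # region and the fused background region that enter the loss terms sketched earlier.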

IV. Experiments

Dataset: contains the source images as well as the mask images.
