A Collection of Deep Learning-Based Infrared and Visible Image Fusion Papers and Code

First, a mind map of recently collected deep learning-based image fusion papers:
[Mind map image]
This post collects papers and code for deep learning-based infrared and visible image fusion.
Other posts in this image fusion series:

  1. Complete collection of image fusion papers and code
  2. Image fusion survey papers
  3. Evaluation metrics for infrared and visible image fusion
  4. Common image fusion datasets
  5. General image fusion framework papers and code
  6. Deep learning-based infrared and visible image fusion papers and code (this post)
  7. More detailed infrared and visible image fusion papers and code
  8. Deep learning-based multi-exposure image fusion papers and code
  9. Deep learning-based multi-focus image fusion papers and code
  10. Deep learning-based pansharpening papers and code
  11. Deep learning-based medical image fusion papers and code
  12. Color image fusion
  13. SeAFusion: the first image fusion framework coupled with a high-level vision task
  14. DIVFusion: the first framework coupling mutually reinforcing low-light enhancement and image fusion

Autoencoder-Based Image Fusion Frameworks

1. DenseFuse: A Fusion Approach to Infrared and Visible Images [DenseFuse(TIP 2019)] [Paper] [Code]

2. NestFuse: An Infrared and Visible Image Fusion Architecture Based on Nest Connection and Spatial/Channel Attention Models [NestFuse(TIM 2020)] [Paper] [Code]

3. RFN-Nest: An end-to-end residual fusion network for infrared and visible images [RFN-Nest (IF 2021)] [Paper] [Code]

4. Classification Saliency-Based Rule for Visible and Infrared Image Fusion [CSF (TCI 2021)] [Paper] [Code]

5. DRF: Disentangled Representation for Visible and Infrared Image Fusion [DRF(TIM 2021)] [Paper] [Code]

6. SEDRFuse: A Symmetric Encoder–Decoder With Residual Block Network for Infrared and Visible Image Fusion [SEDRFuse (TIM 2021)] [Paper] [Code]

7. Learning a Deep Multi-Scale Feature Ensemble and an Edge-Attention Guidance for Image Fusion [EAGIF (TCSVT 2021)] [Paper]

CNN-Based Image Fusion Frameworks

1. A Bilevel Integrated Model With Data-Driven Layer Ensemble for Multi-Modality Image Fusion [D2LE (TIP 2019)] [Paper]

2. Different Input Resolutions and Arbitrary Output Resolution: A Meta Learning-Based Deep Framework for Infrared and Visible Image Fusion [Meta Learning(TIP 2021)] [Paper]

3. Searching a Hierarchically Aggregated Fusion Architecture for Fast Multi-Modality Image Fusion [HAF(ACM MM 2021)] [Paper] [Code]

4. RXDNFuse: An aggregated residual dense network for infrared and visible image fusion [RXDNFuse(IF 2021)] [Paper]

5. STDFusionNet: An Infrared and Visible Image Fusion Network Based on Salient Target Detection [STDFusionNet(TIM 2021)] [Paper] [Code]

6. Image fusion in the loop of high-level vision tasks: A semantic-aware real-time infrared and visible image fusion network [SeAFusion(IF 2022)] [Paper] [Code]

7. PIAFusion: A progressive infrared and visible image fusion network based on illumination aware [PIAFusion(IF 2022)] [Paper] [Code]

8. Unsupervised Misaligned Infrared and Visible Image Fusion via Cross-Modality Image Generation and Registration [UMF-CMGR(IJCAI 2022)] [Paper] [Code]

9. DetFusion: A Detection-driven Infrared and Visible Image Fusion Network [DetFusion(ACM MM 2022)] [Paper] [Code]

10. DIVFusion: Darkness-free infrared and visible image fusion [DIVFusion(IF 2023)] [Paper] [Code]

GAN-Based Image Fusion Frameworks

1. FusionGAN: A generative adversarial network for infrared and visible image fusion [FusionGAN(IF 2019)] [Paper] [Code]

2. Infrared and visible image fusion via detail preserving adversarial learning [Detail-GAN(IF 2021)] [Paper] [Code]

3. Learning a Generative Model for Fusing Infrared and Visible Images via Conditional Generative Adversarial Network with Dual Discriminators. [DDcGAN (IJCAI 2019)] [Paper] [Code]

4. DDcGAN: A Dual-Discriminator Conditional Generative Adversarial Network for Multi-Resolution Image Fusion [DDcGAN(TIP 2020)] [Paper] [Code]

5. GANMcC: A Generative Adversarial Network With Multiclassification Constraints for Infrared and Visible Image Fusion [GANMcC(TIM 2020)] [Paper] [Code]

6. Image fusion based on generative adversarial network consistent with perception [Perception-GAN(IF 2021)] [Paper] [Code]

7. Semantic-supervised Infrared and Visible Image Fusion via a Dual-discriminator Generative Adversarial Network [SDDGAN(TMM 2021)] [Paper] [Code]

8. AttentionFGAN: Infrared and Visible Image Fusion Using Attention-Based Generative Adversarial Networks [AttentionFGAN(TMM 2021)] [Paper]

9. GAN-FM: Infrared and Visible Image Fusion Using GAN With Full-Scale Skip Connection and Dual Markovian Discriminators [GAN-FM(TCI 2021)] [Paper] [Code]

10. Multigrained Attention Network for Infrared and Visible Image Fusion [MgAN-Fuse(TIM 2021)] [Paper]

11. Infrared and Visible Image Fusion via Texture Conditional Generative Adversarial Network [TC-GAN(TCSVT 2021)] [Paper] [Code]

12. Unsupervised Misaligned Infrared and Visible Image Fusion via Cross-Modality Image Generation and Registration [UMF-CMGR(IJCAI 2022)] [Paper] [Code]

For questions, contact 2458707789@qq.com (please include your name and institution).

### YOLO for Infrared and Visible Image Processing

YOLO (You Only Look Once) is an efficient real-time object detection algorithm that performs well across many vision tasks. For infrared and visible imagery, YOLO can be applied to cross-modality object detection and recognition.

#### Data preprocessing

To adapt the model to different input types, infrared and visible images are usually normalized; given the differing characteristics of the two imaging modalities, brightness, contrast, and similar parameters may need adjustment[^1]. Moreover, since infrared and visible images capture different wavelength ranges, how to combine the information from the two effectively must also be considered.

#### Cross-modality feature extraction

A dual-stream or multi-branch architecture can be used to learn representations from the two domains separately: for example, design a shared-weight base network as the backbone and attach a modality-specific task head to each branch. The benefit is that existing large-scale single-modality pretrained models can be reused, with transfer learning accelerating convergence[^3].

#### Training strategy

Once a suitable framework is in place, enough labeled samples are needed for supervised training. Two important choices are the crop-window size and the batch size, which depend on the compute budget of the target application. Note also that class imbalance may arise, where one class has far more instances than the others; a weighted loss function can mitigate its effect[^4].

#### Experimental setup

For domains such as military or security surveillance, the test set should cover as many scene variations under complex backgrounds as possible. For example, the TNO multispectral image fusion dataset contains abundant footage captured under nighttime conditions, which is valuable for evaluating model robustness and generalization[^2].

The sketch below runs a pretrained YOLOv5 model on each modality separately. The infrared-specific preprocessing (histogram equalization plus channel replication) is an illustrative choice, and the direct-call inference style assumes the `yolov5` wrapper forwards calls to the underlying torch module.

```python
import cv2
import torch
from yolov5 import YOLOv5

# Initialize a YOLOv5 model and load pretrained weights.
model = YOLOv5('yolov5s.pt')

def preprocess_image(image_path, mode='visible', size=640):
    """Preprocess the input image based on its modality."""
    img = cv2.imread(image_path)
    if img is None:
        raise FileNotFoundError(image_path)
    if mode == 'infrared':
        # Infrared frames are often single-channel: equalize contrast and
        # replicate to three channels so the RGB-trained backbone accepts them.
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        img = cv2.cvtColor(cv2.equalizeHist(gray), cv2.COLOR_GRAY2BGR)
    # Standard normalization and resizing: [0, 1] range, CHW layout, batch dim.
    img = cv2.resize(img, (size, size))
    return torch.from_numpy(img).permute(2, 0, 1).float().div(255.0).unsqueeze(0)

# Load and preprocess the test images.
ir_img_tensor = preprocess_image('./path_to_infrared.jpg', 'infrared')
vis_img_tensor = preprocess_image('./path_to_visible.png', 'visible')

with torch.no_grad():
    detections_ir = model(ir_img_tensor)[0]
    detections_vis = model(vis_img_tensor)[0]

print(detections_ir.shape, detections_vis.shape)
```
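The dual-stream idea described above can be sketched in plain PyTorch. Everything below is illustrative: the layer sizes, class count, and module names are assumptions, not taken from any published detector — the point is only the structure of modality-specific stems feeding a shared-weight backbone.

```python
import torch
import torch.nn as nn

class DualStreamDetector(nn.Module):
    """Toy two-stream model: per-modality stems, shared backbone, one head."""

    def __init__(self, num_classes=4):
        super().__init__()
        # One lightweight stem per modality (infrared is single-channel).
        self.ir_stem = nn.Conv2d(1, 16, 3, padding=1)
        self.vis_stem = nn.Conv2d(3, 16, 3, padding=1)
        # Shared backbone: the same weights process both modalities,
        # so a single-modality pretrained network could be loaded here.
        self.backbone = nn.Sequential(
            nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, num_classes)

    def forward(self, x, mode):
        stem = self.ir_stem if mode == 'infrared' else self.vis_stem
        feat = self.backbone(stem(x)).flatten(1)
        return self.head(feat)

model = DualStreamDetector()
ir = torch.randn(2, 1, 64, 64)   # batch of single-channel infrared crops
vis = torch.randn(2, 3, 64, 64)  # batch of RGB visible crops
print(model(ir, 'infrared').shape, model(vis, 'visible').shape)
# → torch.Size([2, 4]) torch.Size([2, 4])
```

Because only the stems differ per modality, the shared backbone sees features in a common 16-channel space, which is what allows pretrained weights to be reused across both streams.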