- 2023 TNNLS
- X. Wang, Z. Guan, W. Qian, J. Cao, C. Wang and R. Ma, "STFuse: Infrared and Visible Image Fusion via Semisupervised Transfer Learning," in IEEE Transactions on Neural Networks and Learning Systems, doi: 10.1109/TNNLS.2023.3328060
summary
Goal: transfer knowledge from an information-rich source domain to the target domain.
Motivation: to obtain a fused image carrying the complementary information of the source images, IVIF lacks ground truth and has to rely on prior knowledge.
Approach: semisupervised transfer learning that borrows supervised knowledge from the multifocus image fusion (MFIF) task; a guidance loss filters out task-specific attribute knowledge so that the shared knowledge can interact with the IVIF task.
Advantages: reduces the limitation that missing ground truth places on fusion performance; under supervised-knowledge constraints, the complementary representation is more instructive than prior knowledge alone.
Contributions: (1) a semisupervised transfer-learning framework for IVIF; (2) a gradient-based transfer-learning mechanism that guides supervised knowledge into the unsupervised setting and promotes cross-task use of shared knowledge; (3) a CEM module that refines each branch's features with self-attention and mutual attention, improving the modeling and integration of complementary features while reducing redundant information.
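The mutual-attention idea behind the CEM can be sketched roughly as follows (a minimal numpy illustration; the function names, feature shapes, and single-head form are my own assumptions, not the paper's code): each modality branch queries the other branch's features, so complementary information flows across modalities.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax over the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # scaled dot-product attention: (n, d) queries against (m, d) keys/values
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores, axis=-1) @ v

def mutual_attention(feat_ir, feat_vis):
    # cross/mutual attention: each branch attends to the *other* branch,
    # pulling in complementary features instead of re-weighting its own
    ir_refined = attention(feat_ir, feat_vis, feat_vis)
    vis_refined = attention(feat_vis, feat_ir, feat_ir)
    return ir_refined, vis_refined

rng = np.random.default_rng(0)
ir, vis = rng.normal(size=(16, 32)), rng.normal(size=(16, 32))
ir_r, vis_r = mutual_attention(ir, vis)
print(ir_r.shape, vis_r.shape)  # (16, 32) (16, 32)
```

In the paper this is combined with per-branch self-attention (query, key, and value all from the same branch) before fusion; the sketch above only shows the cross-branch step.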
background
- sensors; image fusion for digital photography; MMIF, IVIF, MIF
- deep-learning image fusion methods (GAN-, CNN-, AE-based, end-to-end)
- unsupervised learning, self-supervised learning
- semisupervised transfer learning
some details
experiments
datasets: TNO, FLIR, M3FD
baselines: GANMcC, SDDGAN, CSF, SMoA, IPLF, SDNet, PIAFusion, YDTR, MUFusion, SwinFusion
metrics:entropy (EN), standard deviation (SD), visual information fidelity (VIF), nonlinear correlation information (QNCIE), improved structural similarity index (QY), and mutual information (MI)
detection: YOLOv5
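The reference-free metrics above can be sketched for an 8-bit grayscale fused image (a minimal numpy version with histogram-based EN and MI; the paper's exact implementations and binning choices may differ):

```python
import numpy as np

def entropy(img, bins=256):
    # Shannon entropy (EN) of the intensity histogram, in bits
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def std_dev(img):
    # standard deviation (SD): higher values suggest more contrast
    return float(np.asarray(img, dtype=np.float64).std())

def mutual_information(a, b, bins=256):
    # mutual information (MI) between a source image and the fused image,
    # estimated from the joint intensity histogram
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins,
                                 range=[[0, 256], [0, 256]])
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px[:, None] * py[None, :])[nz])).sum())

rng = np.random.default_rng(0)
fused = rng.integers(0, 256, size=(64, 64))
print(entropy(fused), std_dev(fused), mutual_information(fused, fused))
```

A sanity check on the estimator: MI of an image with itself equals its entropy, and both SD and EN are zero for a constant image. VIF, QNCIE, and QY need more machinery (wavelet decompositions, nonlinear correlation coefficients, region-wise SSIM) and are omitted here.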
references
X. Wang, Z. Guan, W. Qian, J. Cao, C. Wang and R. Ma, "STFuse: Infrared and Visible Image Fusion via Semisupervised Transfer Learning," in IEEE Transactions on Neural Networks and Learning Systems, doi: 10.1109/TNNLS.2023.3328060.
C. Cheng, T. Xu, and X.-J. Wu, "MUFusion: A general unsupervised image fusion network based on memory unit," Inf. Fusion, vol. 92, pp. 80–92, Apr. 2023.
J. Ma, L. Tang, F. Fan, J. Huang, X. Mei, and Y. Ma, "SwinFusion: Cross-domain long-range learning for general image fusion via Swin transformer," IEEE/CAA J. Autom. Sinica, vol. 9, no. 7, pp. 1200–1217, Jul. 2022.
W. Tang, F. He, and Y. Liu, "YDTR: Infrared and visible image fusion via Y-shape dynamic transformer," IEEE Trans. Multimedia, early access, Jul. 20, 2022, doi: 10.1109/TMM.2022.3192661.
J. Ma, H. Zhang, Z. Shao, P. Liang, and H. Xu, "GANMcC: A generative adversarial network with multiclassification constraints for infrared and visible image fusion," IEEE Trans. Instrum. Meas., vol. 70, pp. 1–14, 2021.
H. Zhou, W. Wu, Y. Zhang, J. Ma, and H. Ling, "Semantic-supervised infrared and visible image fusion via a dual-discriminator generative adversarial network," IEEE Trans. Multimedia, vol. 25, pp. 635–648, 2023.
H. Xu, H. Zhang, and J. Ma, "Classification saliency-based rule for visible and infrared image fusion," IEEE Trans. Comput. Imag., vol. 7, pp. 824–836, 2021.
J. Liu, Y. Wu, Z. Huang, R. Liu, and X. Fan, "SMoA: Searching a modality-oriented architecture for infrared and visible image fusion," IEEE Signal Process. Lett., vol. 28, pp. 1818–1822, 2021.
D. Zhu, W. Zhan, Y. Jiang, X. Xu, and R. Guo, "IPLF: A novel image pair learning fusion network for infrared and visible image," IEEE Sensors J., vol. 22, no. 9, pp. 8808–8817, May 2022.
H. Zhang and J. Ma, "SDNet: A versatile squeeze-and-decomposition network for real-time image fusion," Int. J. Comput. Vis., vol. 129, no. 10, pp. 2761–2785, Oct. 2021.
L. Tang, J. Yuan, H. Zhang, X. Jiang, and J. Ma, "PIAFusion: A progressive infrared and visible image fusion network based on illumination aware," Inf. Fusion, vol. 83, pp. 79–92, Jul. 2022.