A script example for multi-source image fusion training (training a multi-input CNN model in MATLAB)

Introduction:

For standard CNN image training, both MATLAB and PyTorch ship many reference examples, but those examples take a single H×W×3 image as input. In some scenarios, researchers want the model to draw on imaging data from multiple views or different sources and produce one combined output, so that it can account for more feature detail and improve prediction accuracy. This post borrows the training approach from the paper "Noninvasive Detection of Salt Stress in Cotton Seedlings by Combining Multicolor Fluorescence–Multispectral Reflectance Imaging with EfficientNet-OB2", which splits nine different data sources into separate inputs of a modified EfficientNet and thereby achieves better results.

[Figure from the original paper: how the inputs are distinguished — merged input (left) vs. split inputs (right)]

Paper source:

Noninvasive Detection of Salt Stress in Cotton Seedlings by Combining Multicolor Fluorescence–Multispectral Reflectance Imaging with EfficientNet-OB2 | Plant Phenomics: https://spj.science.org/doi/10.34133/plantphenomics.0125#sec-1

Code walkthrough and demo

The script first builds the 9-input modified EfficientNet, then uses imageDatastore and arrayDatastore to wrap the image data and the label data, and finally merges everything into one multi-source input with the combine function. Training then proceeds with an ordinary trainNetwork call. Note that the file ordering inside each source (input) folder must correspond one-to-one across sources: if the files are out of order, the samples will be misaligned after combine, so it is best to give corresponding files in every folder the same file name.
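Before combining, that alignment assumption can be verified with a short sketch. This is a hypothetical check, not part of the original script; it assumes corresponding files share the same name in every PCk folder, so that imageDatastore lists them in the same order:

```matlab
% Sketch: verify that the nine source folders pair up file-by-file.
root = "C:\Users\94562\Desktop\cotton\Environment_train";
ref = imageDatastore(fullfile(root,"PC1"),"IncludeSubfolders",true, ...
    "LabelSource","foldernames");
[~,refNames] = cellfun(@fileparts, ref.Files, 'UniformOutput', false);
for k = 2:9
    ds = imageDatastore(fullfile(root,"PC"+k),"IncludeSubfolders",true, ...
        "LabelSource","foldernames");
    [~,names] = cellfun(@fileparts, ds.Files, 'UniformOutput', false);
    assert(isequal(refNames, names), "PC%d is misaligned with PC1", k);
end
disp("All nine sources are aligned.")
```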

lgraph=Build_ENetOB2(N,PC,9); % build the 9-input modified EfficientNet (Build_ENetOB2 is the author's custom function)

%% Training set
imdsTrainPC1 = imageDatastore("C:\Users\94562\Desktop\cotton\Environment_train\PC1","IncludeSubfolders",true,"LabelSource","foldernames");
imdsTrainPC2 = imageDatastore("C:\Users\94562\Desktop\cotton\Environment_train\PC2","IncludeSubfolders",true,"LabelSource","foldernames");
imdsTrainPC3 = imageDatastore("C:\Users\94562\Desktop\cotton\Environment_train\PC3","IncludeSubfolders",true,"LabelSource","foldernames");
imdsTrainPC4 = imageDatastore("C:\Users\94562\Desktop\cotton\Environment_train\PC4","IncludeSubfolders",true,"LabelSource","foldernames");
imdsTrainPC5 = imageDatastore("C:\Users\94562\Desktop\cotton\Environment_train\PC5","IncludeSubfolders",true,"LabelSource","foldernames");
imdsTrainPC6 = imageDatastore("C:\Users\94562\Desktop\cotton\Environment_train\PC6","IncludeSubfolders",true,"LabelSource","foldernames");
imdsTrainPC7 = imageDatastore("C:\Users\94562\Desktop\cotton\Environment_train\PC7","IncludeSubfolders",true,"LabelSource","foldernames");
imdsTrainPC8 = imageDatastore("C:\Users\94562\Desktop\cotton\Environment_train\PC8","IncludeSubfolders",true,"LabelSource","foldernames");
imdsTrainPC9 = imageDatastore("C:\Users\94562\Desktop\cotton\Environment_train\PC9","IncludeSubfolders",true,"LabelSource","foldernames");
imdsTrainM=arrayDatastore(imdsTrainPC1.Labels); % labels as a datastore, shared by all nine sources

%% Validation set
TimdsTrainPC1 = imageDatastore("C:\Users\94562\Desktop\cotton\Environment_450_300_check\PC1","IncludeSubfolders",true,"LabelSource","foldernames");
TimdsTrainPC2 = imageDatastore("C:\Users\94562\Desktop\cotton\Environment_450_300_check\PC2","IncludeSubfolders",true,"LabelSource","foldernames");
TimdsTrainPC3 = imageDatastore("C:\Users\94562\Desktop\cotton\Environment_450_300_check\PC3","IncludeSubfolders",true,"LabelSource","foldernames");
TimdsTrainPC4 = imageDatastore("C:\Users\94562\Desktop\cotton\Environment_450_300_check\PC4","IncludeSubfolders",true,"LabelSource","foldernames");
TimdsTrainPC5 = imageDatastore("C:\Users\94562\Desktop\cotton\Environment_450_300_check\PC5","IncludeSubfolders",true,"LabelSource","foldernames");
TimdsTrainPC6 = imageDatastore("C:\Users\94562\Desktop\cotton\Environment_450_300_check\PC6","IncludeSubfolders",true,"LabelSource","foldernames");
TimdsTrainPC7 = imageDatastore("C:\Users\94562\Desktop\cotton\Environment_450_300_check\PC7","IncludeSubfolders",true,"LabelSource","foldernames");
TimdsTrainPC8 = imageDatastore("C:\Users\94562\Desktop\cotton\Environment_450_300_check\PC8","IncludeSubfolders",true,"LabelSource","foldernames");
TimdsTrainPC9 = imageDatastore("C:\Users\94562\Desktop\cotton\Environment_450_300_check\PC9","IncludeSubfolders",true,"LabelSource","foldernames");
TimdsTrainM=arrayDatastore(TimdsTrainPC1.Labels); % validation labels as a datastore


% Combine the nine image sources plus the label datastore into one
% multi-input datastore; ReadOrder='associated' reads the i-th item of
% every underlying datastore together as a single training sample.
imdsTrainPC=combine(imdsTrainPC1,imdsTrainPC2,imdsTrainPC3,imdsTrainPC4,imdsTrainPC5,imdsTrainPC6,imdsTrainPC7,imdsTrainPC8,imdsTrainPC9,imdsTrainM,ReadOrder='associated');
read(imdsTrainPC)  % sanity check: preview one combined sample
reset(imdsTrainPC) % rewind the datastore after the preview read
TimdsTrainPC=combine(TimdsTrainPC1,TimdsTrainPC2,TimdsTrainPC3,TimdsTrainPC4,TimdsTrainPC5,TimdsTrainPC6,TimdsTrainPC7,TimdsTrainPC8,TimdsTrainPC9,TimdsTrainM,ReadOrder='associated');


%[imdsTrain, imdsValidation] = splitEachLabel(imdsTrainPC,0.7,"randomized"); % note: splitEachLabel works on an imageDatastore, not a CombinedDatastore; split before combining

opts = trainingOptions("adam",...
    "ExecutionEnvironment","gpu",...
    "InitialLearnRate",0.01,...
    "MaxEpochs",300,...
    "Shuffle","every-epoch",...
    "MiniBatchSize",128,...
    "ValidationData",TimdsTrainPC,...
    "ValidationFrequency",50,...
    "Plots","training-progress");

[net, traininfo] = trainNetwork(imdsTrainPC,lgraph,opts);
%resultN=predict(net,TimdsTrainPC); % raw class scores, if needed
Yt = TimdsTrainPC1.Labels;        % ground-truth validation labels
Yp = classify(net,TimdsTrainPC);  % predicted labels
figure
confusionchart(Yt,Yp);
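With the true and predicted labels in hand, overall validation accuracy is one extra line (a small addition, assuming Yt and Yp as produced by the script above):

```matlab
% Overall accuracy on the validation set
acc = mean(Yp == Yt);
fprintf("Validation accuracy: %.2f%%\n", 100*acc);
```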

