Improving Lesion Segmentation for Diabetic Retinopathy using Adversarial Learning

ICIAR 2019
Code [PyTorch]

Abstract & Introduction

IDRiD provides pixel-level lesion annotations for microaneurysms, hemorrhages, soft exudates and hard exudates.

  • Use the HEDNet edge detector to solve a semantic segmentation task on this dataset.

  • Then propose an end-to-end system for pixel-level segmentation of DR lesions by incorporating HEDNet into a Conditional Generative Adversarial Network (cGAN).

  • Design a loss function that adds an adversarial loss to the segmentation loss.

Image-level or patch-level ground truth is not helpful for doctors when explaining a diagnosis ⇒ hence the importance of pixel-level segmentation.

We cannot explain to a patient or to others why a diagnosis was made based only on image-level or patch-level predictions, but with lesion-level annotations we can give a more convincing diagnosis grounded in specific lesion evidence.

The cGAN is added so that the segmented output looks more plausible, which in turn improves performance.

Method

  • Preprocessing

Illumination correction and contrast enhancement techniques are applied.

  • Brightness Balance: because the images are sampled from different lesions and tissues, illumination is inconsistent across the dataset. (Each training and test image is rescaled so that its mean pixel intensity equals the mean pixel intensity of the training set.)

  • Contrast Enhancement: contrast enhancement ensures that pixel intensities cover a wide range of values, which makes details more visible. CLAHE (Contrast-Limited Adaptive Histogram Equalization) is used.

  • Denoising: assuming white Gaussian noise, a non-local means denoising algorithm is applied. In addition, a bilateral filter (edge-preserving denoising) is applied, which replaces each pixel's intensity with a weighted average of nearby pixel intensities, minimizing noise while preserving edge information.
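
A minimal OpenCV sketch of this preprocessing pipeline, assuming brightness balance is a simple global rescaling to the training-set mean intensity and that CLAHE is applied to the luminance channel; the bilateral-filter parameters (`d`, sigma values) are illustrative assumptions, while the CLAHE and non-local means settings follow the implementation details given later in the post.

```python
import cv2
import numpy as np

def preprocess(img_bgr, train_mean_intensity):
    """Rough sketch: brightness balance, CLAHE contrast enhancement, denoising."""
    # Brightness balance: rescale so the mean intensity matches the training-set mean (assumption).
    img = img_bgr.astype(np.float32)
    img = img * (train_mean_intensity / (img.mean() + 1e-8))
    img = np.clip(img, 0, 255).astype(np.uint8)

    # Contrast enhancement: CLAHE on the L channel of LAB (8x8 tiles, clip limit 40).
    lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=40.0, tileGridSize=(8, 8))
    l = clahe.apply(l)
    img = cv2.cvtColor(cv2.merge((l, a, b)), cv2.COLOR_LAB2BGR)

    # Denoising: non-local means (filter strength 10), then an edge-preserving bilateral filter.
    img = cv2.fastNlMeansDenoisingColored(img, None, h=10, hColor=10,
                                          templateWindowSize=7, searchWindowSize=21)
    img = cv2.bilateralFilter(img, d=9, sigmaColor=75, sigmaSpace=75)
    return img
```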

  • Network

See the HEDNet paper for the details of the architecture.

  • Loss

Lesions occupy only a small fraction of a diseased image, so the ground truth (binary: lesion vs. non-lesion) is severely imbalanced. A weighted BCE loss is therefore used, of the form $L_{\mathrm{wBCE}} = -\big[\beta\, y \log p + (1 - y) \log(1 - p)\big]$, averaged over pixels.

Here $y \in \{0, 1\}$ indicates whether a pixel is a lesion, $p$ is the predicted probability of the positive class (lesion), and $\beta$ weights the positive class.
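
A minimal PyTorch sketch of this weighted BCE, assuming the generator outputs raw logits; `pos_weight` plays the role of $\beta$ (set to 10 per the hyperparameters below).

```python
import torch
import torch.nn as nn

# beta = 10 weights the rare positive (lesion) class more heavily.
beta = 10.0
weighted_bce = nn.BCEWithLogitsLoss(pos_weight=torch.tensor(beta))

logits = torch.randn(4, 1, 512, 512)                      # generator output (raw logits)
target = torch.randint(0, 2, (4, 1, 512, 512)).float()    # binary lesion mask
loss = weighted_bce(logits, target)
```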

Inspired by SSNet: the size and shape of lesions easily lead to false-positive and false-negative labeling.

False positive: diagnosing a disease that is not actually present.

False negative: failing to detect a disease that is present.

A cGAN can effectively improve generalization.

SSNet's generator is a GCN (Global Convolutional Network); this paper uses HEDNet as the generator.

The cGAN is used to discriminate the output. The network structure is the same as infoGAN, and the discriminator uses the PatchGAN architecture (the input image is split into small patches, and a cross-entropy loss is applied to each patch to classify it as real or fake).
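
A minimal sketch of a PatchGAN-style conditional discriminator of the kind described, assuming it receives the fundus image concatenated with a lesion mask and emits one real/fake logit per patch; the channel widths and depth are illustrative choices, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class PatchDiscriminator(nn.Module):
    """Conditional PatchGAN discriminator: image + mask in, per-patch real/fake logits out."""
    def __init__(self, in_channels=3 + 1, base=64):
        super().__init__()
        def block(cin, cout, stride):
            return nn.Sequential(
                nn.Conv2d(cin, cout, kernel_size=4, stride=stride, padding=1),
                nn.BatchNorm2d(cout),
                nn.LeakyReLU(0.2, inplace=True),
            )
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, base, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            block(base, base * 2, 2),
            block(base * 2, base * 4, 2),
            block(base * 4, base * 8, 1),
            nn.Conv2d(base * 8, 1, 4, stride=1, padding=1),  # one logit per receptive-field patch
        )

    def forward(self, image, mask):
        # Condition on the input image by concatenating it with the segmentation mask.
        return self.net(torch.cat([image, mask], dim=1))
```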

The final generator loss is a weighted combination of the binary cross-entropy loss and the GAN loss: $L_G = L_{\mathrm{wBCE}} + \lambda\, L_{\mathrm{GAN}}$.

The final segmentation should not only be close to the ground truth but also look real to the discriminator (the latter is what the GAN term contributes).
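
A minimal sketch of this combined generator objective, with $\lambda = 0.01$ taken from the implementation details below; `hednet` and `disc` stand in for the HEDNet generator and the PatchGAN discriminator sketched above.

```python
import torch
import torch.nn as nn

lam = 0.01                                     # weight of the GAN term in the generator loss
seg_criterion = nn.BCEWithLogitsLoss(pos_weight=torch.tensor(10.0))
adv_criterion = nn.BCEWithLogitsLoss()         # real/fake loss on the discriminator's patch logits

def generator_loss(hednet, disc, image, gt_mask):
    pred_logits = hednet(image)                # lesion logits from the generator
    seg_loss = seg_criterion(pred_logits, gt_mask)

    # Adversarial term: the generator wants the discriminator to label its output as "real" (1).
    patch_logits = disc(image, torch.sigmoid(pred_logits))
    gan_loss = adv_criterion(patch_logits, torch.ones_like(patch_logits))

    return seg_loss + lam * gan_loss
```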

For example, a slide from Hung-yi Lee's lecture notes shows two outputs that each differ from the ground truth by only one pixel, yet the first output looks much more realistic.

Experiments

Dataset

IDRiD, with four different tasks: lesion segmentation of Microaneurysms (MA), Soft Exudates (SE), Hard Exudates (EX) and Hemorrhages (HE).

54 training images and 27 test images. Not every image contains all four lesion types.

The 54 training images are randomly split into 80% for training and 20% for validation.

Resolution: 4288 × 2848

Implementation Details

  • Hyperparameters

The patch size is 128 for SE, EX and HE, and 64 for MA.

The weighted BCE loss uses $\beta = 10$, i.e. the lesion class is weighted heavily.

The GAN loss weight in the total loss is $\lambda = 0.01$ (so, in terms of weight, the GAN loss contributes only a small fraction).

SGD with initial lr = 0.001; for HEDNet the lr is divided by 10 every 200 epochs, and for the discriminator every 100 epochs.

momentum = 0.9, L2 weight decay = 0.0005

Batch size 4 for training and validation, 1 for testing.

Total epochs: 5000.
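
A sketch of the optimizer and learning-rate schedule these hyperparameters imply, assuming `StepLR` is how "divide the lr by 10 every N epochs" is realized; the two modules below are placeholders for the actual generator and discriminator.

```python
import torch
import torch.nn as nn

# Placeholders; in practice these are the HEDNet generator and the PatchGAN discriminator.
hednet = nn.Conv2d(3, 1, kernel_size=1)
disc = nn.Conv2d(4, 1, kernel_size=1)

# Both networks use SGD with lr=0.001, momentum=0.9, weight decay=5e-4.
opt_g = torch.optim.SGD(hednet.parameters(), lr=1e-3, momentum=0.9, weight_decay=5e-4)
opt_d = torch.optim.SGD(disc.parameters(), lr=1e-3, momentum=0.9, weight_decay=5e-4)

# HEDNet: lr / 10 every 200 epochs; discriminator: lr / 10 every 100 epochs.
sched_g = torch.optim.lr_scheduler.StepLR(opt_g, step_size=200, gamma=0.1)
sched_d = torch.optim.lr_scheduler.StepLR(opt_d, step_size=100, gamma=0.1)

for epoch in range(5000):
    # ... one epoch of adversarial training (batch size 4), with optimizer.step() calls ...
    sched_g.step()
    sched_d.step()
```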

  • Preprocessing

CLAHE uses 8×8 tiles with a contrast (clip) limit of 40.

The filter strength of the non-local means denoising is 10.

Finally the input image is normalized with mean = (0.485, 0.456, 0.406) and std = (0.229, 0.224, 0.225).

  • Data Augmentation

Random crop to 512 × 512.

Random rotation, with a maximum rotation angle of 20 degrees.
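
A sketch of the normalization and augmentation with torchvision transforms, as assumed here; note that for segmentation the same geometric transforms must be applied to image and mask with identical parameters, which this simple `Compose` on the image alone does not handle.

```python
from torchvision import transforms

# Augmentation + normalization as described above (image side only).
train_transform = transforms.Compose([
    transforms.RandomCrop(512),               # random 512x512 crop
    transforms.RandomRotation(20),            # random rotation up to 20 degrees
    transforms.ToTensor(),
    transforms.Normalize(mean=(0.485, 0.456, 0.406),
                         std=(0.229, 0.224, 0.225)),
])
```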

Evaluation Metrics & Results

HEDNet+cGAN achieves the best scores (F1, AP) on EX, because hard exudates are small, shiny, white or yellowish-white deposits lying deep to the retinal vessels, with sharp edges that give high image contrast.

Results on MA are less impressive, mainly because microaneurysms are very small, have lower contrast, and are more similar to blood vessels.

Still, the results show that adding the cGAN generally performs better than not adding it.

EX is the best-performing category, and again the cGAN version beats the plain version.

Judging from the final visualizations, though, the results do not look all that impressive.
