https://blog.csdn.net/qian2213762498/article/details/87884869
1. How to optimize IoU directly
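The post does not spell out how IoU is optimized here; one common approach (the Lovász hinge is another popular option) is to relax IoU into a differentiable "soft" Jaccard index and use it directly as the training loss. A minimal PyTorch sketch for binary segmentation, not the author's exact implementation:

```python
import torch

def soft_iou_loss(logits, targets, eps=1e-6):
    """Differentiable (soft) IoU / Jaccard loss for binary segmentation.

    logits:  raw network outputs, shape (N, H, W)
    targets: binary ground-truth masks in {0, 1}, same shape
    """
    probs = torch.sigmoid(logits)
    inter = (probs * targets).sum(dim=(1, 2))
    union = (probs + targets - probs * targets).sum(dim=(1, 2))
    iou = (inter + eps) / (union + eps)
    return (1.0 - iou).mean()

# A near-perfect prediction should give a loss near 0.
logits = torch.full((1, 4, 4), 10.0)   # sigmoid ~ 1 everywhere
targets = torch.ones(1, 4, 4)
print(soft_iou_loss(logits, targets).item())  # close to 0
```

In practice this is often combined with BCE (e.g. `bce + soft_iou`) for more stable early training.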
2. Hard examples: focal loss + OHEM
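Both techniques named above are standard and can be sketched in PyTorch as below; the hyperparameters (`gamma`, `alpha`, `keep_ratio`) are illustrative defaults, not values from the post:

```python
import torch
import torch.nn.functional as F

def binary_focal_loss(logits, targets, gamma=2.0, alpha=0.25):
    """Focal loss (Lin et al., 2017): down-weights easy examples by
    (1 - p_t)^gamma so gradients concentrate on hard pixels."""
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p_t = torch.exp(-bce)                              # p if y=1, else 1-p
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    return (alpha_t * (1 - p_t) ** gamma * bce).mean()

def ohem_loss(logits, targets, keep_ratio=0.25):
    """Online hard example mining: back-propagate only through the
    top-k highest-loss pixels in the batch."""
    loss = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    loss = loss.reshape(-1)
    k = max(1, int(keep_ratio * loss.numel()))
    top, _ = loss.topk(k)
    return top.mean()
```

The two are complementary: focal loss re-weights every pixel smoothly, OHEM discards easy pixels outright.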
3. U-Net improvements
(1) Instead of transposed convolutions, we use upsampling + 3*3 convolutions (for the reasoning, see the Distill article "Deconvolution and Checkerboard Artifacts"; Distill comes strongly recommended, the quality of its posts is exceptional).
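The replacement block can be sketched as follows (channel sizes are illustrative); it upsamples without the uneven kernel overlap that gives transposed convolutions their checkerboard artifacts:

```python
import torch
import torch.nn as nn

class UpsampleConv(nn.Module):
    """Checkerboard-free upsampling: nearest-neighbor resize followed by a
    3x3 convolution, used in place of nn.ConvTranspose2d in the decoder."""
    def __init__(self, in_ch, out_ch, scale=2):
        super().__init__()
        self.up = nn.Upsample(scale_factor=scale, mode="nearest")
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)

    def forward(self, x):
        return self.conv(self.up(x))

x = torch.randn(1, 64, 16, 16)
y = UpsampleConv(64, 32)(x)
print(y.shape)  # torch.Size([1, 32, 32, 32])
```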
(2) To raise the resolution of every feature map, I removed the pool that follows conv1 in the original ResNet (tested in practice; it needs a GPU with more memory).
(3) Decoder additions:
attention, which recalibrates feature maps with very few extra parameters (Concurrent Spatial and Channel Squeeze & Excitation in Fully Convolutional Networks, https://arxiv.org/pdf/1803.02579.pdf);
to further encourage robustness across scales, we can introduce hypercolumns to directly concatenate the feature maps from every scale;
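Both decoder additions can be sketched in PyTorch. The scSE block follows the structure of the cited paper; `hypercolumn` is a hypothetical helper that just illustrates the multi-scale concatenation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SCSEBlock(nn.Module):
    """Concurrent spatial & channel squeeze-and-excitation (scSE):
    recalibrates a feature map with very few extra parameters."""
    def __init__(self, ch, reduction=16):
        super().__init__()
        # channel SE: squeeze spatially, excite per channel
        self.cse = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(ch, ch // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(ch // reduction, ch, 1),
            nn.Sigmoid(),
        )
        # spatial SE: squeeze channels, excite per pixel
        self.sse = nn.Sequential(nn.Conv2d(ch, 1, 1), nn.Sigmoid())

    def forward(self, x):
        return x * self.cse(x) + x * self.sse(x)

def hypercolumn(feats, size):
    """Upsample decoder feature maps from every scale to `size` and
    concatenate them along the channel axis."""
    ups = [F.interpolate(f, size=size, mode="bilinear", align_corners=False)
           for f in feats]
    return torch.cat(ups, dim=1)

x = torch.randn(1, 32, 16, 16)
print(SCSEBlock(32)(x).shape)               # (1, 32, 16, 16), same shape, recalibrated
feats = [torch.randn(1, c, s, s) for c, s in [(16, 64), (32, 32), (64, 16)]]
print(hypercolumn(feats, (64, 64)).shape)   # (1, 112, 64, 64): channels concatenated
```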
(4)Semi-supervised:
Semi-supervised Skin Lesion Segmentation via Transformation Consistent Self-ensembling Model
https://arxiv.org/pdf/1808.03887.pdf
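A heavily simplified sketch of the transformation-consistency idea on unlabeled images: the prediction on a transformed input should match the transformed prediction on the original input. The full paper uses a self-ensembling (EMA teacher) model, a ramp-up weight, and richer transforms; here only a horizontal flip, for illustration:

```python
import torch
import torch.nn.functional as F

def consistency_loss(model, x_unlabeled):
    """For a transform T (here a horizontal flip), penalize the
    disagreement between f(T(x)) and T(f(x)) with an MSE term.
    No labels are needed, so this term can use unlabeled data."""
    flip = lambda t: torch.flip(t, dims=[-1])             # horizontal flip
    with torch.no_grad():
        target = flip(torch.sigmoid(model(x_unlabeled)))  # T(f(x)), no gradient
    pred = torch.sigmoid(model(flip(x_unlabeled)))        # f(T(x))
    return F.mse_loss(pred, target)
```

During training this loss is added to the supervised loss on the labeled subset, weighted by a schedule.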
(5) Training:
Honestly, I think training really is case by case: heuristics that work on task A often turn out less effective on task B. So I'll just introduce one trick that works in most settings: Cosine Annealing with Snapshot Ensembles (https://arxiv.org/abs/1704.00109).
It sounds fancy, but in practice it just means warm-restarting the learning rate at regular intervals, so that in a given amount of training time you end up with several converged local minima instead of one, leaving you with many more models to ensemble.
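A sketch using PyTorch's built-in scheduler; the cycle length and snapshot policy are illustrative, not the post's settings:

```python
import torch

# Cosine-annealed LR with warm restarts; a model snapshot is saved at the end
# of each cycle (a converged local minimum) for ensembling at test time.
model = torch.nn.Linear(10, 1)                     # stand-in for the real network
opt = torch.optim.SGD(model.parameters(), lr=0.1)
sched = torch.optim.lr_scheduler.CosineAnnealingWarmRestarts(opt, T_0=10)

snapshots = []
for epoch in range(30):                            # 3 cycles of 10 epochs each
    # ... one epoch of training here ...
    sched.step()
    if (epoch + 1) % 10 == 0:                      # end of a cycle: LR is about to restart
        snapshots.append({k: v.clone() for k, v in model.state_dict().items()})

print(len(snapshots))  # 3 snapshots to ensemble (average predictions, not weights)
```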
Finally, a plug for my own (the author's) repo: liaopeiyuan/ml-arsenal-public (https://github.com/liaopeiyuan/ml-arsenal-public), which contains the source code for every Kaggle competition I've taken part in. It currently includes two top-1% solutions: TGS Salt and Quick Draw Doodle.