https://github.com/MarkMoHR/Awesome-Edge-Detection-Papers
SegFix: model-agnostic post-processing that refines predicted boundaries
During segmentation, can low-confidence pixels be handled separately? First determine each one's direction, find its corresponding interior pixel, and replace the boundary pixel with that interior pixel.
Learn the direction from boundary pixels to interior points;
Literature overview:
Improving semantic segmentation performance via:
Semantic segmentation with boundary neural fields. CVPR 2016
Semantic image segmentation with task-specific edge detection using cnns and a discriminatively trained domain transform CVPR 2016
Devil in the details: Towards accurate single and multiple human parsing.
Gated shape cnns for semantic segmentation CVPR 2019
Boundary-aware feature propagation for scene segmentation 2019
Boundary detection models:
High-for-low and low-for-high: Efficient boundary detection from deep object features and its applications to high-level vision ICCV 2015
Contour detection and hierarchical image segmentation. PAMI 2010
Fast edge detection using structured forests arXiv 2014
Distance- and direction-based segmentation: [3,24,53]; [16,6,10,48]
Deep watershed transform for instance segmentation CVPR2017
Boundary-aware instance segmentation. CVPR2017
Object instance annotation with deep extreme level set evolution CVPR2019
Personlab: Person pose estimation and instance segmentation with a bottom-up, partbased, geometric embedding model. ECCV2018
A distance map regularized cnn for cardiac cine MR image segmentation. 2019
Multi-task learning for segmentation of building footprints with deep neural networks ICIP 2019
Masklab: Instance segmentation by refining object detection with semantic and direction features CVPR2018
Level Set for Segmentation: [1,14,53,31]
-
Angle regression:
-
Devil is in the edges: Learning semantic boundaries from noisy annotations CVPR 2019
Deep watershed transform for instance segmentation CVPR 2017
Existing models such as DeepLabv3, Gated-SCNN, and HRNet do not refine boundaries well; moreover, the farther a prediction is from a boundary, the more accurate it tends to be, i.e., interior predictions are more reliable.
Our method has two steps: locate object boundaries (a boundary detection model predicts a binary boundary map); then learn the direction from each boundary pixel to the interior, and shift boundary pixels a certain distance inward along that direction;
Network framework:
(1) First train a model that picks out boundary pixels (a boundary segmentation map) and their corresponding interior pixels (an offset map; each boundary pixel is walked inward along its offset to reach a higher-confidence pixel);
Generate the boundary map from the distance map: pixels whose value in the distance_map falls below a given threshold are defined as boundary.
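A minimal numpy sketch of this refinement step, assuming a per-pixel label map, a precomputed distance map, and (dy, dx) offsets pointing toward the interior. All names and the threshold are illustrative, not the official SegFix implementation:

```python
import numpy as np

def segfix_refine(labels, distance_map, offsets, boundary_thresh=3):
    """Illustrative SegFix-style refinement (not the official code).

    labels: (H, W) predicted class labels.
    distance_map: (H, W) distance from each pixel to the nearest boundary.
    offsets: (H, W, 2) per-pixel (dy, dx) pointing toward the interior.
    boundary_thresh: pixels with distance below this count as boundary.
    """
    H, W = labels.shape
    # 1) boundary map: distance below the threshold => boundary pixel
    boundary = distance_map < boundary_thresh
    refined = labels.copy()
    ys, xs = np.nonzero(boundary)
    for y, x in zip(ys, xs):
        # 2) walk the boundary pixel along its offset to an interior pixel
        dy, dx = offsets[y, x]
        iy = int(np.clip(y + int(round(dy)), 0, H - 1))
        ix = int(np.clip(x + int(round(dx)), 0, W - 1))
        # 3) replace the low-confidence boundary label with the interior one
        refined[y, x] = labels[iy, ix]
    return refined
```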
The effectiveness of the method depends on three factors: (1) the boundary prediction, i.e., whether a pixel is a boundary pixel; (2) the predicted direction at the boundary, i.e., whether it points inward or outward; (3) the prediction accuracy at the interior pixels.
Results:
The choice of boundary width (3, 5, or 10) has little effect on the results.
Devil is in the Edges: Learning Semantic Boundaries from Noisy Annotations
Training procedure:
Training has two steps: first, use active contours (energy minimization) on the predictions and the annotated contours to obtain a refined contour; then compute three losses from the refined contour (binary segmentation loss, NMS loss, and edge loss) to optimize the backbone:
Active contours + energy minimization (did not fully understand):
Compute three losses from the refined contour:
The angle loss:
Compute the normal direction on the predicted edges and on the GT edges, then penalize the angle between them;
See: Deep watershed transform for instance segmentation
**NMS loss:** did not understand
https://nv-tlabs.github.io/STEAL/
Edges mostly live in the high-frequency content, so consider how to separate an image's low- and high-frequency components (original image minus the Gaussian-filtered image; or via the frequency domain) and recombine them to predict edges.
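As an illustration of the spatial-domain split above (original minus Gaussian-blurred original), a small numpy sketch with a hand-rolled separable Gaussian; `sigma` and the helper names are illustrative:

```python
import numpy as np

def gaussian_kernel1d(sigma, radius=None):
    """Normalized 1-D Gaussian kernel (illustrative helper)."""
    if radius is None:
        radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1, dtype=np.float64)
    k = np.exp(-(x ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def split_frequencies(img, sigma=2.0):
    """Split a grayscale image into low- and high-frequency parts.

    Low frequency: separable Gaussian blur of the image.
    High frequency: original minus the blur (edges concentrate here).
    Borders are handled with edge padding.
    """
    img = img.astype(np.float64)
    k = gaussian_kernel1d(sigma)
    r = len(k) // 2
    # blur along rows, then along columns (separable Gaussian)
    padded = np.pad(img, ((0, 0), (r, r)), mode="edge")
    low = np.apply_along_axis(lambda m: np.convolve(m, k, mode="valid"), 1, padded)
    padded = np.pad(low, ((r, r), (0, 0)), mode="edge")
    low = np.apply_along_axis(lambda m: np.convolve(m, k, mode="valid"), 0, padded)
    high = img - low
    return low, high
```

By construction `low + high` reconstructs the original exactly, so the two bands can be processed separately and recombined.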
Semantic edge detection: [36,34]
CASENet: Deep category-aware semantic edge detection. CVPR 2017
Holistically-nested edge detection ICCV 2015
Active contours
https://www.youtube.com/watch?v=jrA-r4BOn0c
Deep Watershed Transform for Instance Segmentation
Reference: https://github.com/min2209/dwt
Our model takes the RGB image (a) and the semantic segmentation map (e) as input and predicts, for each foreground pixel, the direction to its nearest boundary (f); the direction is encoded as a 2-channel unit vector;
Angle loss:
Compute a mean-squared angle loss between the predicted and the ground-truth unit vectors:
Squared-angle loss: first compute the cosine of the angle, then recover the angle, then square it.
errorAngles = tf.acos(tf.reduce_sum(pred * gt, reduction_indices=[1], keep_dims=True))
lossAngleTotal = tf.reduce_sum((tf.abs(errorAngles * errorAngles)) * ssWeight)
Computing the cosine of the angle: pred * gt. Since pred and gt are unit vectors, |pred||gt|cosθ = pred·gt, so cosθ = pred·gt.
Then recover the angle with tf.acos.
Then compute the squared-angle loss: tf.reduce_sum((tf.abs(errorAngles * errorAngles)) * ssWeight).
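The same loss can be sketched in plain numpy (the function name and the optional weight argument, which mirrors ssWeight above, are illustrative):

```python
import numpy as np

def angle_loss(pred, gt, weight=None):
    """Squared-angle loss between predicted and GT unit direction vectors.

    pred, gt: arrays of shape (N, 2) whose rows are unit vectors.
    weight: optional per-pixel weight of shape (N,), like ssWeight.
    """
    # cos(theta) = pred . gt because both vectors have unit length
    cos_theta = np.sum(pred * gt, axis=1)
    # clip guards arccos against values just outside [-1, 1] from rounding
    theta = np.arccos(np.clip(cos_theta, -1.0, 1.0))
    sq = theta ** 2
    if weight is not None:
        sq = sq * weight
    return float(np.sum(sq))
```

Clipping before arccos matters in practice: floating-point dot products of unit vectors can land at 1 + 1e-16, which would otherwise produce NaN.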