Context Contrasted Feature and Gated Multi-scale Aggregation for Scene Segmentation

Introduction

The authors argue that obtaining discriminative semantic features and fusing multiple scales are the keys to better performance:

  • The paper proposes a novel context contrasted feature that highlights local information.
  • In addition, the authors propose a new gated sum that selectively fuses multi-scale features at every position; the gate controls how information flows across the different scales.

Segmentation Network

The segmentation framework is an FCN, and the authors use skip layers to fuse multi-scale features.
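As a rough illustration of such FCN-style skip fusion (a minimal sketch only, not the authors' exact architecture; the stage channel counts and the number of classes are assumptions):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SkipFusionFCN(nn.Module):
    """Minimal FCN-style skip fusion: each backbone stage gets a 1x1 score
    layer, and the per-stage score maps are upsampled to a common resolution
    and summed. Channel sizes here are illustrative assumptions."""
    def __init__(self, stage_channels=(256, 512, 1024, 2048), num_classes=19):
        super().__init__()
        self.score_layers = nn.ModuleList(
            nn.Conv2d(c, num_classes, kernel_size=1) for c in stage_channels
        )

    def forward(self, stage_feats):
        # stage_feats: list of feature maps from shallow to deep stages
        target = stage_feats[0].shape[-2:]
        fused = 0
        for feat, score_layer in zip(stage_feats, self.score_layers):
            s = score_layer(feat)
            fused = fused + F.interpolate(s, size=target, mode='bilinear',
                                          align_corners=False)
        return fused
```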

CCL

Semantic (context) information is important for classifying the whole scene, and DCNNs can already produce reasonably good semantic information. However, these semantic features tend to be abstract representations of the entire image, which is unsuitable for scene segmentation:

  1. These features mainly focus on the dominant objects and cannot guarantee rich semantics for inconspicuous objects.
  2. Their spatial resolution is insufficient.

Compared with object segmentation, scene segmentation involves much richer relations between objects, so combining semantic information indiscriminately harms the final prediction, especially when the background is complex.

[Figure: example scene with an inconspicuous vehicle at position A]
In this image, the vehicle at A is an inconspicuous object. Rich semantic information can be collected around A, and it is clearly different from that of other pixels, but on its own it cannot capture global relations, e.g., between the vehicle and the road, so robust high-level features cannot be obtained. On the other hand, if semantic features are fused indiscriminately, the features at the vehicle are likely to be overwhelmed by those of the surrounding people, the vehicle's information is ignored, and the final prediction is wrong. Moreover, the semantics at different positions tend toward a continuous representation of the dominant features, which makes it hard to collect useful high-level features at A. To solve this, the authors propose CCL. How does it work?
The authors separate the prediction of local information from the context features, and fuse the two by computing a contrast between them. This not only exploits the useful context information, but also makes the local features stand out as foreground against the context.
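A minimal sketch of one such context contrasted local block, with the "contrast" realized as a local branch minus a large-dilation context branch; the dilation rates, channel counts, and the BatchNorm/ReLU choices are assumptions rather than the authors' exact settings:

```python
import torch
import torch.nn as nn

class ContextContrastedLocal(nn.Module):
    """Context contrasted local (CCL) block sketch:
    local branch   -> small receptive field (dilation 1)
    context branch -> large receptive field (larger dilation)
    output         -> local minus context ("contrast"), so that local
                      evidence stands out against its surround."""
    def __init__(self, in_ch, out_ch, context_dilation=4):
        super().__init__()
        self.local = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1, dilation=1),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))
        self.context = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=context_dilation,
                      dilation=context_dilation),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))

    def forward(self, x):
        return self.local(x) - self.context(x)
```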
[Figure: CCL blocks and multi-level fusion]
Gated sum dynamically selects context contrasted local features from different levels. CCL first generates context contrasted local features at every block, i.e., context-local1, context-local2, ..., context-local6 in the first row; these are fused first at the feature level and then at the score level (a sketch of this chain follows). CCL mainly aims to discriminate high-level features, whereas CRF targets low-level features.
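Reusing the ContextContrastedLocal sketch above, the multi-level chain described here might look roughly like this; the number of levels, channel sizes, and exact wiring are assumptions:

```python
import torch.nn as nn

class CCLCascade(nn.Module):
    """Chain of CCL blocks: each block produces a context contrasted local
    feature (context-local1 ... context-localN) plus a per-level score map,
    to be fused later (first at feature level, then at score level)."""
    def __init__(self, in_ch=2048, mid_ch=512, num_classes=19, num_levels=6):
        super().__init__()
        self.blocks = nn.ModuleList()
        self.score_heads = nn.ModuleList()
        ch = in_ch
        for _ in range(num_levels):
            self.blocks.append(ContextContrastedLocal(ch, mid_ch))
            self.score_heads.append(nn.Conv2d(mid_ch, num_classes, 1))
            ch = mid_ch  # later blocks consume the previous CCL output

    def forward(self, x):
        feats, scores = [], []
        for block, head in zip(self.blocks, self.score_heads):
            x = block(x)
            feats.append(x)
            scores.append(head(x))
        return feats, scores
```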

Gated sum

Gated sum operates on the score maps. When score maps of different scales are fused, the weights assigned to the different scales differ at every position, and the gate is exactly this per-position weight. To obtain the gating information, the authors design a skip layer (conv + sigmoid) that extracts an information map from the feature map; the information map has size H×W, the same spatial size as the feature map. To model the relation between different scales, an RNN is applied over the sequence of information maps produced by the skip layers.
[Figure: gate generation with skip layers and an RNN across levels]

First, each feature map $F_p^n$, $n = 1, \dots, N$, is used to produce an information map $I_p^n$, where $N$ is the number of levels; $F_p^n$ has size $H \times W \times \#C$, while $I_p^n$ has size $H \times W \times 1$. An RNN then runs over the $N$ information maps, and its outputs are concatenated into $H_p = (h_p^1, h_p^2, \dots, h_p^N)$, which is fused with the global feature to form
[equation image: unnormalized gate values]
Each map is then normalized over the $N$ levels to form
[equation image: normalized gates]
These are the final gates; the resulting gate maps weight the score maps, i.e., the final prediction is the gate-weighted sum of the score maps.
[equation image: gated sum of the score maps]
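A minimal sketch of the gated sum under the assumptions stated in the text: a conv + sigmoid skip layer turns each level's feature map into an H×W information map, an RNN (here a GRU over the level dimension at every position) relates the levels, its outputs are softmax-normalized over levels to give the gates, and the gates weight the per-level score maps. The GRU choice and layer sizes are assumptions, and the fusion with the global feature is omitted for brevity:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedSum(nn.Module):
    """Gated multi-scale aggregation sketch.
    For each level n: information map I^n = sigmoid(conv1x1(F^n)), size HxWx1.
    A GRU runs over the N levels at every spatial position, its hidden states
    form H_p = (h_p^1, ..., h_p^N), which are projected to one value per level,
    softmax-normalized over levels (the gates), and used to weight and sum the
    per-level score maps."""
    def __init__(self, feat_channels, hidden=16):
        super().__init__()
        self.skip = nn.ModuleList(nn.Conv2d(c, 1, 1) for c in feat_channels)
        self.rnn = nn.GRU(input_size=1, hidden_size=hidden, batch_first=True)
        self.to_gate = nn.Linear(hidden, 1)

    def forward(self, feats, scores):
        # feats, scores: lists of N tensors, all at the same HxW resolution
        B, _, H, W = scores[0].shape
        N = len(feats)
        info = [torch.sigmoid(skip(f)) for skip, f in zip(self.skip, feats)]
        seq = torch.stack(info, dim=1)                        # B x N x 1 x H x W
        seq = seq.permute(0, 3, 4, 1, 2).reshape(B * H * W, N, 1)
        h, _ = self.rnn(seq)                                  # (B*H*W) x N x hidden
        gate = F.softmax(self.to_gate(h).squeeze(-1), dim=-1) # (B*H*W) x N
        gate = gate.reshape(B, H, W, N).permute(0, 3, 1, 2)   # B x N x H x W
        fused = sum(gate[:, n:n + 1] * scores[n] for n in range(N))
        return fused
```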

Overall network architecture
[Figure: overall network architecture]
