Today I read *Semantics-aware Visual Localization under Challenging Perceptual Conditions*, co-authored by Tayyab Naseer, Gabriel L. Oliveira, Thomas Brox, and Wolfram Burgard.
Its main highlight is the use of FAST-Net, a network with up-convolutional layers, to build a new and more robust scene descriptor that combines features from salient regions with an existing holistic (whole-image) description. This makes long-term visual localization for robot navigation more robust in dynamic environments.
The up-convolutional network structure assigns a per-pixel probability to each semantic class; more precisely, the authors focus on "geometrically stable image regions" (the paper mainly targets visual localization under rain, snow, and other dynamic influences).
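To make the idea concrete, here is a minimal sketch (not the authors' code) of how per-pixel class probabilities from a segmentation network could be turned into a binary "geometrically stable" mask and used to aggregate a convolutional feature map into a compact descriptor. The class index in `STABLE`, the threshold, and the pooling scheme are all assumptions for illustration:

```python
import numpy as np

# Assumed index set of "geometrically stable" classes (e.g. buildings).
STABLE = [0]

def stable_mask(probs, threshold=0.25):
    """probs: (H, W, C) per-pixel softmax output -> boolean (H, W) mask
    that is True where the stable classes dominate."""
    return probs[..., STABLE].sum(axis=-1) > threshold

def masked_descriptor(features, mask):
    """features: (H, W, D) conv activations; average-pool only over the
    masked (stable) pixels, then L2-normalize the result."""
    m = mask[..., None].astype(features.dtype)
    denom = max(m.sum(), 1.0)  # guard against an empty mask
    d = (features * m).sum(axis=(0, 1)) / denom
    return d / (np.linalg.norm(d) + 1e-8)

# Toy example with random probabilities and features.
H, W, C, D = 4, 4, 3, 8
rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(C), size=(H, W))
feats = rng.normal(size=(H, W, D))
desc = masked_descriptor(feats, stable_mask(probs))
```

In the paper this stable-region descriptor is then combined with a holistic image descriptor; a simple concatenation of the two vectors would be one plausible way to realize that combination.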
The paper's main contributions:
“ – We present a learning approach for robust binary segmentation and feature aggregation of deep networks.
– We show that our method outperforms off-the-shelf features from deep networks for robust place recognition over a variety of datasets. Our approach runs online at 14 Hz on a single GPU.
– We present a coarsely labeled dataset for semantic saliency in dynamic and perceptually changing urban environments which captures long-term weather, seasonal, and structural changes."