[The Krusty Krab Is Open Today] 10-17 Paper-Reading Notes

Starting today, I must read at least five papers every day!

Today's papers are rather scattered, and most are not what I need.

StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation[paper]

Surface Networks[paper]

Feature Space Transfer for Data Augmentation[paper]

Abstract

The problem of data augmentation in feature space is considered. A new architecture, denoted the FeATure TransfEr Network (FATTEN), is proposed for the modeling of feature trajectories induced by variations of object pose. This architecture exploits a parametrization of the pose manifold in terms of pose and appearance. This leads to a deep encoder/decoder network architecture, where the encoder factors into an appearance and a pose predictor. Unlike previous attempts at trajectory transfer, FATTEN can be efficiently trained end-to-end, with no need to train separate feature transfer functions. This is realized by supplying the decoder with information about a target pose and the use of a multi-task loss that penalizes category- and pose-mismatches. As a result, FATTEN discourages discontinuous or non-smooth trajectories that fail to capture the structure of the pose manifold, and generalizes well on object recognition tasks involving large pose variation. Experimental results on the artificial ModelNet database show that it can successfully learn to map source features to target features of a desired pose, while preserving class identity. Most notably, by using feature space transfer for data augmentation (w.r.t. pose and depth) on SUN-RGBD objects, we demonstrate considerable performance improvements on one/few-shot object recognition in a transfer learning setup, compared to current state-of-the-art methods.
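The factored encoder/decoder described in the abstract is easy to picture in code. Below is a minimal PyTorch sketch of the idea, assuming 2048-d input features and hypothetical layer sizes, module names, and a discrete pose vocabulary; none of these specifics come from the paper itself.

```python
# Minimal sketch of the FATTEN idea (not the authors' implementation):
# the encoder factors a feature into appearance and pose codes, and the
# decoder regenerates a feature for a requested target pose. All sizes,
# names, and the discrete-pose assumption are illustrative guesses.
import torch
import torch.nn as nn

class FATTENSketch(nn.Module):
    def __init__(self, feat_dim=2048, app_dim=512, pose_dim=16,
                 n_classes=40, n_poses=12):
        super().__init__()
        # Encoder: appearance predictor and pose predictor.
        self.appearance = nn.Sequential(nn.Linear(feat_dim, app_dim), nn.ReLU())
        self.pose = nn.Sequential(nn.Linear(feat_dim, pose_dim), nn.ReLU())
        # Decoder: appearance code plus a target-pose code, mapped back
        # to feature space.
        self.decoder = nn.Sequential(
            nn.Linear(app_dim + pose_dim, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, feat_dim))
        # Heads for the multi-task loss: category and pose of the output.
        self.cls_head = nn.Linear(feat_dim, n_classes)
        self.pose_head = nn.Linear(feat_dim, n_poses)

    def forward(self, feat, target_pose_code):
        app = self.appearance(feat)  # pose-invariant appearance code
        out = self.decoder(torch.cat([app, target_pose_code], dim=1))
        return out, self.cls_head(out), self.pose_head(out)

# Training would penalize category- and pose-mismatches on the output, e.g.
# loss = mse(out, target_feat) + ce(cls_logits, label) + ce(pose_logits, pose)
```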

FLIPDIAL: A Generative Model for Two-Way Visual Dialogue[paper]

Abstract

We present FLIPDIAL, a generative model for Visual Dialogue that simultaneously plays the role of both participants in a visually-grounded dialogue. Given context in the form of an image and an associated caption summarising the contents of the image, FLIPDIAL learns both to answer and to ask questions, and is capable of generating entire sequences of dialogue (question-answer pairs) that are diverse and relevant to the image. To do this, FLIPDIAL relies on a simple but surprisingly powerful idea: it uses convolutional neural networks (CNNs) to encode entire dialogues directly, implicitly capturing dialogue context, and conditional VAEs to learn the generative model. FLIPDIAL outperforms the state-of-the-art model in the sequential answering task (1VD) on the VisDial dataset by 5 points in Mean Rank using the generated answers. We are the first to extend this paradigm to full two-way visual dialogue (2VD), where our model is capable of generating both questions and answers in sequence based on a visual input, for which we propose a set of novel evaluation measures and metrics.
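As a rough illustration of the "CNN over entire dialogues plus conditional VAE" recipe, here is a hedged PyTorch sketch. The embedding sizes, module names, and the flat 1-D treatment of the dialogue are my assumptions, not the paper's architecture.

```python
# Hedged sketch of the FLIPDIAL recipe: encode the whole dialogue with a
# CNN, then use a conditional VAE (conditioned on image+caption context)
# to generate dialogue. Shapes and names are illustrative assumptions.
import torch
import torch.nn as nn

class DialogueCVAESketch(nn.Module):
    def __init__(self, emb=256, ctx=512, z_dim=64, turns=10):
        super().__init__()
        self.turns, self.emb = turns, emb
        # CNN over the dialogue, viewed as an (emb x turns) sequence of
        # question-answer embeddings.
        self.enc = nn.Sequential(
            nn.Conv1d(emb, 256, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1))
        self.to_mu = nn.Linear(256 + ctx, z_dim)
        self.to_logvar = nn.Linear(256 + ctx, z_dim)
        # Decoder regenerates all turn embeddings from (z, context).
        self.dec = nn.Linear(z_dim + ctx, emb * turns)

    def forward(self, dialogue, context):
        # dialogue: (B, emb, turns); context: (B, ctx) from image + caption
        h = self.enc(dialogue).squeeze(-1)
        h = torch.cat([h, context], dim=1)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        out = self.dec(torch.cat([z, context], dim=1))
        return out.view(-1, self.emb, self.turns), mu, logvar
```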

MapNet: An Allocentric Spatial Memory for Mapping Environments[paper]

Revisiting Deep Intrinsic Image Decompositions[paper]


Abstract

While invaluable for many computer vision applications, decomposing a natural image into intrinsic reflectance and shading layers represents a challenging, underdetermined inverse problem. As opposed to strict reliance on conventional optimization or filtering solutions with strong prior assumptions, deep learning based approaches have also been proposed to compute intrinsic image decompositions when granted access to sufficient labeled training data. The downside is that current data sources are quite limited, and broadly speaking fall into one of two categories: either dense fully-labeled images in synthetic/narrow settings, or weakly-labeled data from relatively diverse natural scenes. In contrast to many previous learning-based approaches, which are often tailored to the structure of a particular dataset (and may not work well on others), we adopt core network structures that universally reflect loose prior knowledge regarding the intrinsic image formation process and can be largely shared across datasets. We then apply flexibly supervised loss layers that are customized for each source of ground truth labels. The resulting deep architecture achieves state-of-the-art results on all of the major intrinsic image benchmarks, and runs considerably faster than most at test time.
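The "loose prior knowledge regarding the intrinsic image formation process" that the abstract alludes to is, in the standard Lambertian model, I = R ⊙ S: the image equals pixelwise reflectance times shading, usually enforced in log space. A small sketch of such a reconstruction term (tensor names are assumed, not the paper's):

```python
# The Lambertian formation prior: log I = log R + log S, per pixel.
# A hedged sketch of a reconstruction loss built on it; not the paper's
# exact loss layers, which are customized per dataset.
import torch

def formation_loss(img, reflectance, shading, eps=1e-6):
    # Clamp to avoid log(0) on dark pixels, then compare in log space.
    log_i = torch.log(img.clamp(min=eps))
    log_r = torch.log(reflectance.clamp(min=eps))
    log_s = torch.log(shading.clamp(min=eps))
    return torch.mean((log_i - (log_r + log_s)) ** 2)
```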
