Candidates for the CVPR 2021 Best Paper Award Announced: Chinese Researchers Take Up Half the List, Including Kaiming He, Dacheng Tao, and Chunhua Shen

Author: Tsinghua AMiner Team

The candidate list for the CVPR 2021 Best Paper Award has been announced. Chinese researchers account for half of it, with Kaiming He, Dacheng Tao, Chunhua Shen and others making the list. There are 32 candidate papers in total, all available for direct download!

CVPR received 7,015 valid submissions this year, of which 1,663 were accepted, for an acceptance rate of 23.7%.

The organizers recently released the candidate list for the Best Paper Award, comprising 32 papers. Of these, 18 have Chinese authors, who come from universities and research institutions both in China and abroad.

Universities in China include Peking University, Harbin Institute of Technology, Wuhan University, Zhejiang University, The Chinese University of Hong Kong, and The University of Hong Kong, among others;

Overseas universities and research institutions include the National University of Singapore, the University of Alberta, the Max Planck Institutes, ETH Zurich, UC San Diego, University College London, the University of Sydney, the University of Adelaide, Binghamton University, UNC Chapel Hill, Cornell University, MIT, the University of Washington, the University of Amsterdam, and Caltech, among others;

Industry research labs include Tencent, SenseTime Research, Microsoft Research Asia, Facebook AI Research, Uber ATG, and Adobe Research, among others.

The full list of papers is as follows:

1. Privacy-Preserving Image Features via Adversarial Affine Subspace Embeddings

Authors: Mihai Dusmanu (ETH Zurich); Johannes L Schönberger (Microsoft); Sudipta Sinha (Microsoft); Marc Pollefeys (ETH Zurich / Microsoft)

https://www.aminer.cn/pub/5ee3527191e011cb3bff76f2

2. Learning Calibrated Medical Image Segmentation via Multi-Rater Agreement Modeling

Authors: Wei Ji (University of Alberta); Shuang Yu (Tencent); Junde Wu (Harbin Institute of Technology); Kai Ma (Tencent); Cheng Bian (Tencent); Qi Bi (University of Amsterdam)

3. Diffusion Probabilistic Models for 3D Point Cloud Generation

Authors: Shitong Luo (Peking University); Wei Hu (Peking University)

https://www.aminer.cn/pub/603f689d91e011cacfbda368/diffusion-probabilistic-models-for-d-point-cloud-generation

4. Task Programming: Learning Data Efficient Behavior Representations

Authors: Jennifer J. Sun (Caltech); Ann Kennedy (Northwestern University); Eric Zhan (Caltech); David J. Anderson (Caltech); Yisong Yue (Caltech); Pietro Perona (California Institute of Technology)

https://www.aminer.cn/pub/5fc4e74a91e011abfa2fb161/task-programming-learning-data-efficient-behavior-representations

5. PoseAug: A Differentiable Pose Augmentation Framework for 3D Human Pose Estimation

Authors: Kehong Gong (National University of Singapore); Jianfeng Zhang (NUS); Jiashi Feng (NUS)

https://www.aminer.cn/pub/60950a9d91e011e1dbbca82f/poseaug-a-differentiable-pose-augmentation-framework-for-d-human-pose-estimation

6. SCANimate: Weakly Supervised Learning of Skinned Clothed Avatar Networks

Authors: Shunsuke Saito (Facebook); Jinlong Yang (Max Planck Institute for Intelligent Systems); Qianli Ma (Max Planck Institute for Intelligent Systems); Michael J. Black (Max Planck Institute for Intelligent Systems)

https://www.aminer.cn/pub/606eebd991e011aa47b6accd/scanimate-weakly-supervised-learning-of-skinned-clothed-avatar-networks

7. On Self-Contact and Human Pose

Authors: Lea Müller (Max Planck Institute for Intelligent Systems); Ahmed A A Osman (Max Planck Institute for Intelligent Systems); Siyu Tang (ETH Zurich); Chun-Hao Paul Huang (Max Planck Institute for Intelligent Systems); Michael J. Black (Max Planck Institute for Intelligent Systems)

https://www.aminer.cn/pub/606ee64191e011aa47b6ac44/on-self-contact-and-human-pose

8. Binary TTC: A Temporal Geofence for Autonomous Navigation

Authors: Abhishek Badki (University of California, Santa Barbara); Orazio Gallo (NVIDIA Research); Jan Kautz (NVIDIA); Pradeep Sen (UC Santa Barbara)

https://www.aminer.cn/pub/60000fb491e011b170d7be98/binary-ttc-a-temporal-geofence-for-autonomous-navigation

9. Rethinking and Improving the Robustness of Image Style Transfer

Authors: Pei Wang (UC San Diego); Yijun Li (Adobe Research); Nuno Vasconcelos (UC San Diego)

https://www.aminer.cn/pub/607590c491e0110f6fe6860d/rethinking-and-improving-the-robustness-of-image-style-transfer

10. Audio-Visual Instance Discrimination with Cross-Modal Agreement

Authors: Pedro Morgado (University of California, San Diego); Nuno Vasconcelos (UCSD, USA); Ishan Misra (Facebook AI Research)

https://www.aminer.cn/pub/5ea8009091e0111d387ee980/audio-visual-instance-discrimination-with-cross-modal-agreement

11. Point2Skeleton: Learning Skeletal Representations from Point Clouds

Authors: Cheng Lin (The University of Hong Kong); Changjian Li (University College London); Yuan Liu (The University of Hong Kong); Nenglun Chen (The University of Hong Kong); Yi King Choi (The University of Hong Kong); Wenping Wang (The University of Hong Kong)

https://www.aminer.cn/pub/5fc76ad491e0114897921166/point-skeleton-learning-skeletal-representations-from-point-clouds

12. Human POSEitioning System (HPS): 3D Human Pose Estimation and Self-Localization in Large Scenes From Body-Mounted Sensors

Authors: Vladimir Guzov (Max Planck Institute for Informatics); Aymen Mir (Max Planck Institute for Informatics); Torsten Sattler (Czech Technical University in Prague); Gerard Pons-Moll (MPII, Germany)

https://www.aminer.cn/pub/6065b54891e011d10ad615c2/human-poseitioning-system-hps-d-human-pose-estimation-and-self-localization-in

13. Where and What? Examining Interpretable Disentangled Representations

Authors: Xinqi Zhu (University of Sydney); Chang Xu (University of Sydney); Dacheng Tao (The University of Sydney)

https://www.aminer.cn/pub/6075908591e0110f6fe6860c/where-and-what-examining-interpretable-disentangled-representations

14. Learning To Recover 3D Scene Shape From a Single Image

Authors: Wei Yin (University of Adelaide); Jianming Zhang (Adobe Research); Oliver Wang (Adobe Systems Inc); Simon Niklaus (Adobe Research); Long T Mai (Adobe Research); Simon Chen (Adobe Research); Chunhua Shen (University of Adelaide)

https://www.aminer.cn/pub/5fdc7d6c91e01104c918105a/learning-to-recover-d-scene-shape-from-a-single-image

15. GIRAFFE: Representing Scenes As Compositional Generative Neural Feature Fields

Authors: Michael Niemeyer (Max Planck Institute for Intelligent Systems, Tübingen and University of Tübingen); Andreas Geiger (MPI-IS and University of Tübingen)

https://www.aminer.cn/pub/5fbe6df791e011e6e11b3e44/giraffe-representing-scenes-as-compositional-generative-neural-feature-fields

16. Polygonal Building Extraction by Frame Field Learning

Authors: Nicolas Girard (Inria Sophia-Antipolis); Dmitriy Smirnov (MIT); Justin M Solomon (MIT); Yuliya Tarabalka (Inria Sophia-Antipolis)

https://www.aminer.cn/pub/5eabf34c91e011664ffd29ad/polygonal-building-segmentation-by-frame-field-learning

17. NeuralRecon: Real-Time Coherent 3D Reconstruction From Monocular Video

Authors: Jiaming Sun (SenseTime); Yiming Xie (SenseTime); Linghao Chen (Zhejiang University); Xiaowei Zhou (Zhejiang University); Hujun Bao (Zhejiang University)

https://www.aminer.cn/pub/606703a591e011f2d6d47d37/neuralrecon-real-time-coherent-d-reconstruction-from-monocular-video

18. CoCosNet v2: Full-Resolution Correspondence Learning for Image Translation

Authors: Xingran Zhou (Zhejiang University); Bo Zhang (Microsoft Research Asia); Ting Zhang (MSRA); Pan Zhang (USTC); Jianmin Bao (Microsoft Research Asia); Dong Chen (Microsoft Research Asia); Zhongfei Zhang (Binghamton University); Fang Wen (Microsoft Research Asia)

https://www.aminer.cn/pub/5fca1f2b91e011654d99e872/full-resolution-correspondence-learning-for-image-translation

19. Less Is More: ClipBERT for Video-and-Language Learning via Sparse Sampling

Authors: Jie Lei (UNC Chapel Hill); Linjie Li (Microsoft); Luowei Zhou (Microsoft); Zhe Gan (Microsoft); Tamara Berg (UNC Chapel Hill, USA); Mohit Bansal (University of North Carolina at Chapel Hill); Jingjing Liu (Microsoft)

https://www.aminer.cn/pub/60265e6791e011821e023c85/less-is-more-clipbert-for-video-and-language-learning-via-sparse-sampling

20. Neural Body: Implicit Neural Representations With Structured Latent Codes for Novel View Synthesis of Dynamic Humans

Authors: Sida Peng (Zhejiang University); Yuanqing Zhang (Zhejiang University); Yinghao Xu (Chinese University of Hong Kong); Qianqian Wang (Cornell); Qing Shuai (Zhejiang University); Hujun Bao (Zhejiang University); Xiaowei Zhou (Zhejiang University)

https://www.aminer.cn/pub/5fef230091e0113b265a0293/neural-body-implicit-neural-representations-with-structured-latent-codes-for-novel-view

21. Exploring Simple Siamese Representation Learning

Authors: Xinlei Chen (FAIR); Kaiming He (Facebook AI Research)

https://www.aminer.cn/pub/5fbcce8d91e01127d58eecf3/exploring-simple-siamese-representation-learning

22. Guided Interactive Video Object Segmentation Using Reliability-Based Attention Maps

Authors: Yuk Heo (Korea University); Yeong Jun Koh (Chungnam National University); Chang-Su Kim (Korea University)

https://www.aminer.cn/pub/6081493d91e011bce6b8af25/guided-interactive-video-object-segmentation-using-reliability-based-attention-maps

23. GeoSim: Realistic Video Simulation via Geometry-Aware Composition for Self-Driving

Authors: Yun Chen (Uber ATG); Frieda Rong (Uber ATG); Shivam Duggal (Delhi Technological University); Shenlong Wang (Uber ATG, University of Toronto); Xinchen Yan (Uber ATG); Sivabalan Manivasagam (University of Toronto); Shangjie Xue (MIT); Ersin Yumer (Uber ATG); Raquel Urtasun (Uber ATG)

24. Neural Lumigraph Rendering

Authors: Petr Kellnhofer (Stanford University); Lars C Jebe (Raxium); Andrew Jones (Raxium); Ryan Spicer (Raxium); Kari Pulli (University of Oulu); Gordon Wetzstein (Stanford University)

https://www.aminer.cn/pub/6059c75191e011ed950a5bb3/neural-lumigraph-rendering

25. Event-Based Synthetic Aperture Imaging With a Hybrid Network

Authors: Xiang Zhang (Wuhan University); Wei Liao (Wuhan University); Lei Yu (Wuhan University); Wen Yang (Wuhan University); Gui-Song Xia (Wuhan University)

26. Energy-Based Learning for Scene Graph Generation

Authors: Mohammed Suhail (University of British Columbia); Abhay Mittal (Amazon); Behjat Siddiquie (Amazon); Christopher Broaddus (Amazon); Jayan Eledath (Amazon); Gerard Medioni (USC); Leonid Sigal (University of British Columbia)

https://www.aminer.cn/pub/6040adb991e011a0653f06a5/energy-based-learning-for-scene-graph-generation

27. Learning High Fidelity Depths of Dressed Humans by Watching Social Media Dance Videos

Authors: Yasamin Jafarian (University of Minnesota); Hyun Soo Park (The University of Minnesota)

https://www.aminer.cn/pub/6045eb1b91e011e6352430e5/learning-high-fidelity-depths-of-dressed-humans-by-watching-social-media-dance

28. MP3: A Unified Model To Map, Perceive, Predict and Plan

Authors: Sergio Casas (Uber ATG / University of Toronto); Abbas Sadat (Uber ATG); Raquel Urtasun (Uber ATG)

https://www.aminer.cn/pub/6006bb5291e0111a1b6a2348/mp-a-unified-model-to-map-perceive-predict-and-plan

29. NeX: Real-Time View Synthesis With Neural Basis Expansion

Authors: Suttisak Wizadwongsa (Vidyasirimedhi Institute of Science and Technology); Pakkapon Phongthawee (Vidyasirimedhi Institute of Science and Technology); Jiraphon Yenphraphai (Vidyasirimedhi Institute of Science and Technology); Supasorn Suwajanakorn (Vidyasirimedhi Institute of Science and Technology)

https://www.aminer.cn/pub/6048ac7391e0115491a5ccb3/nex-real-time-view-synthesis-with-neural-basis-expansion

30. NewtonianVAE: Proportional Control and Goal Identification From Pixels via Physical Latent Spaces

Authors: Miguel Jaques (University of Edinburgh); Michael Burke (Monash University); Timothy Hospedales (Edinburgh University)

https://www.aminer.cn/pub/5eda19c991e01187f5d6d86f/newtonianvae-proportional-control-and-goal-identification-from-pixels-via-physical-latent-spaces

31. Fast End-to-End Learning on Protein Surfaces

Authors: Freyr Sverrisson (EPFL); Jean Feydy (Imperial College London); Bruno Correia (EPFL); Michael Bronstein (Imperial College London / Twitter)

32. Real-Time High-Resolution Background Matting

Authors: Shanchuan Lin (University of Washington); Andrey Ryabtsev (University of Washington); Soumyadip Sengupta (University of Washington); Brian Curless (University of Washington); Steve Seitz (University of Washington); Ira Kemelmacher-Shlizerman (University of Washington)

https://www.aminer.cn/pub/5fd8bece91e0119b22c1f503/real-time-high-resolution-background-matting
