ENAS Parameter Sharing: CVPR2020 Paper/Code Interpretation

This article highlights CVPR2020, including the parameter-sharing method ENAS and recent papers in object detection, image recognition, 3D reconstruction, and other areas. It also covers progress in related fields such as pedestrian detection, image segmentation, and 3D point cloud processing, and discusses how to improve model performance and robustness.
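The parameter-sharing idea behind ENAS can be sketched in a few lines: every candidate architecture is a subgraph of one supernet, and all candidates read their weights from the same shared pool, so training one child also trains the others. The toy NumPy sketch below illustrates only this sharing mechanism; the random architecture sampling stands in for the ENAS controller RNN, and all names (`shared`, `sample_architecture`, `forward`) are hypothetical, not taken from the paper's code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical minimal supernet: LAYERS layers, each with CANDIDATES
# candidate ops whose weight matrices live in one shared pool. Every
# sampled child architecture reuses these same matrices, which is the
# core of ENAS-style parameter sharing.
LAYERS, CANDIDATES, DIM = 3, 2, 4
shared = [[rng.normal(size=(DIM, DIM)) for _ in range(CANDIDATES)]
          for _ in range(LAYERS)]

def sample_architecture():
    """Pick one candidate op per layer (random here; ENAS uses an RNN controller)."""
    return [int(rng.integers(CANDIDATES)) for _ in range(LAYERS)]

def forward(x, arch):
    """Run the child network defined by `arch`, reading weights from the shared pool."""
    for layer, op in enumerate(arch):
        x = np.tanh(x @ shared[layer][op])
    return x

# Two different children share parameters: an update to the pool affects both.
arch_a, arch_b = sample_architecture(), sample_architecture()
x = rng.normal(size=(1, DIM))
out_a, out_b = forward(x, arch_a), forward(x, arch_b)
print(out_a.shape)  # (1, 4)
```

In the real method, the controller and the shared weights are trained alternately, and the shared pool is what makes evaluating thousands of candidate architectures cheap.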


CVPR2020 latest information and paper download thread (Papers/Codes/Projects/Paper Readings/Demos/live streams/paper-sharing sessions, etc.)

Official site: http://cvpr2020.thecvf.com/

Venue and dates: Seattle, Washington, June 14-19, 2020

Acceptance notification: February 24, 2020

Related questions:

Master table of contents

1. CVPR2020 accepted papers (continuously updated)

Summary by category

Contents

Object Detection

Few-Shot Object Detection with Attention-RPN and Multi-Relation Detector

Paper: https://arxiv.org/abs/1908.01998

AugFPN: Improving Multi-scale Feature Learning for Object Detection

Paper: https://arxiv.org/abs/1912.05384

Hit-Detector: Hierarchical Trinity Architecture Search for Object Detection

Paper: https://arxiv.org/abs/2003.11818

Code: https://github.com/ggjy/HitDet.pytorch

Multi-task Collaborative Network for Joint Referring Expression Comprehension and Segmentation

Paper: https://arxiv.org/abs/2003.08813

CentripetalNet: Pursuing High-quality Keypoint Pairs for Object Detection

Paper: https://arxiv.org/abs/2003.09119

Code: https://github.com/KiveeDong/CentripetalNet

Face Recognition

Object Tracking

3D Point Clouds / 3D Reconstruction / 3D Detection / 3D Segmentation / Depth Estimation

3D Point Clouds & Reconstruction

PointAugment: an Auto-Augmentation Framework for Point Cloud Classification

Paper: https://arxiv.org/abs/2002.10876

Code: https://github.com/liruihui/PointAugment/

Learning multiview 3D point cloud registration

Paper: https://arxiv.org/abs/2001.05119

C-Flow: Conditional Generative Flow Models for Images and 3D Point Clouds

Paper: https://arxiv.org/abs/1912.07009

RandLA-Net: Efficient Semantic Segmentation of Large-Scale Point Clouds

Paper: https://arxiv.org/abs/1911.11236

Total3DUnderstanding: Joint Layout, Object Pose and Mesh Reconstruction for Indoor Scenes from a Single Image

Paper: https://arxiv.org/abs/2002.12212

Implicit Functions in Feature Space for 3D Shape Reconstruction and Completion

Paper: https://arxiv.org/abs/2003.01456

In Perfect Shape: Certifiably Optimal 3D Shape Reconstruction from 2D Landmarks

Paper: https://arxiv.org/pdf/1911.11924.pdf

Attentive Context Normalization for Robust Permutation-Equivariant Learning

Paper: https://arxiv.org/abs/1907.02545

Authors: Weiwei Sun, Wei Jiang, Eduard Trulls, Andrea Tagliasacchi, Kwang Moo Yi

PQ-NET: A Generative Part Seq2Seq Network for 3D Shapes

Paper: https://arxiv.org/abs/1911.10949

SG-NN: Sparse Generative Neural Networks for Self-Supervised Scene Completion of RGB-D Scans

Paper: https://arxiv.org/abs/1912.00036

Cascade Cost Volume for High-Resolution Multi-View Stereo and Stereo Matching

Paper: https://arxiv.org/abs/1912.06378

Code: https://github.com/alibaba/cascade-stereo

3D Reconstruction

Leveraging 2D Data to Learn Textured 3D Mesh Generation

Paper: https://arxiv.org/abs/2004.04180

ARCH: Animatable Reconstruction of Clothed Humans

Paper: https://arxiv.org/abs/2004.04572

Learning 3D Semantic Scene Graphs from 3D Indoor Reconstructions

Paper: https://arxiv.org/abs/2004.03967

Image Recognition

Image Feature Matching

Image Captioning

Normalized and Geometry-Aware Self-Attention Network for Image Captioning

Paper: https://arxiv.org/abs/2003.08897

Image Processing

Single Image Reflection Removal through Cascaded Refinement
