CVPR 2023 | 3D Vision Paper Collection (with Code)

This collection covers sub-tasks including 3D reconstruction, point clouds, and scene reconstruction / view synthesis / novel view synthesis.


Download the full collection: click here to go to the download page




1. 3D Vision

[1] Learning a 3D Morphable Face Reflectance Model from Low-cost Data

[Code]ReflectanceMM

[2] Tri-Perspective View for Vision-Based 3D Semantic Occupancy Prediction

[Code]https://github.com/wzzheng/TPVFormer

[3] LinK: Linear Kernel for LiDAR-based 3D Perception

[Code]https://github.com/MCG-NJU/LinK

2. Point Clouds

[1] Unsupervised Deep Probabilistic Approach for Partial Point Cloud Registration

[Code]https://github.com/gfmei/UDPReg

 

[2] Deep Graph-based Spatial Consistency for Robust Non-rigid Point Cloud Registration

[Code]https://github.com/qinzheng93/GraphSCNet

[3] Controllable Mesh Generation Through Sparse Latent Point Diffusion Models

[Code]SLIDE: Controllable Mesh Generation Through Sparse Latent Point Diffusion Models

[4] Parameter is Not All You Need: Starting from Non-Parametric Networks for 3D Point Cloud Analysis

[Code]https://github.com/ZrrSkywalker/Point-NN

[5] Rotation-Invariant Transformer for Point Cloud Matching

[Code]https://github.com/haoyu94/RoITr

 

[6] GraVoS: Voxel Selection for 3D Point-Cloud Detection

[Code]None

[7] DSVT: Dynamic Sparse Voxel Transformer with Rotated Sets

[Code]https://github.com/Haiyang-W/DSVT

[8] PointCert: Point Cloud Classification with Deterministic Certified Robustness Guarantees

[Code]None

[9] ACL-SPC: Adaptive Closed-Loop system for Self-Supervised Point Cloud Completion

[Code]https://github.com/Sangminhong/ACL-SPC_PyTorch

 

[10] DeepMapping2: Self-Supervised Large-Scale LiDAR Map Optimization

[Code]https://ai4ce.github.io/DeepMapping2/

[11] Frequency-Modulated Point Cloud Rendering with Easy Editing

[Code]https://github.com/yizhangphd/FreqPCR (CVPR 2023 Highlight)

[12] Self-Supervised Image-to-Point Distillation via Semantically Tolerant Contrastive Loss

[Code]None

[13] ProxyFormer: Proxy Alignment Assisted Point Cloud Completion with Missing Part Sensitive Transformer

[Code]https://github.com/I2-Multimedia-Lab/ProxyFormer

 

[14] Point Cloud Forecasting as a Proxy for 4D Occupancy Forecasting

[Code]https://github.com/tarashakhurana/4d-occ-forecasting

[15] CLIP2: Contrastive Language-Image-Point Pretraining from Real-World Point Cloud Data

[Code]None

[16] Recognizing Rigid Patterns of Unlabeled Point Clouds by Complete and Continuous Isometry Invariants with no False Negatives and no False Positives

[Code]None

[17] NeuralPCI: Spatio-temporal Neural Field for 3D Point Cloud Multi-frame Non-linear Interpolation

[Code]https://github.com/ispc-lab/NeuralPCI

[18] Unsupervised Inference of Signed Distance Functions from Single Sparse Point Clouds without Learning Priors

[Code]https://github.com/chenchao15/NeuralTPS

[19] Robust Multiview Point Cloud Registration with Reliable Pose Graph Initialization and History Reweighting

[Code]https://github.com/WHU-USI3DV/SGHR

[20] Learning Human-to-Robot Handovers from Point Clouds

[Code]Learning Human-to-Robot Handovers from Point Clouds

[21] Rethinking the Approximation Error in 3D Surface Fitting for Point Cloud Normal Estimation

[Code]https://github.com/hikvision-research/3DVision

[22] PartManip: Learning Cross-Category Generalizable Part Manipulation Policy from Point Cloud Observations

[Code]PartManip

[23] NerVE: Neural Volumetric Edges for Parametric Curve Extraction from Point Cloud

[Code]https://dongdu3.github.io/projects/2023/NerVE/

[24] Self-positioning Point-based Transformer for Point Cloud Understanding

[Code]https://github.com/mlvlab/SPoTr

[25] Binarizing Sparse Convolutional Networks for Efficient Point Cloud Analysis

[Code]None

[26] MEnsA: Mix-up Ensemble Average for Unsupervised Multi Target Domain Adaptation on 3D Point Clouds

[Code]https://github.com/sinAshish/MEnsA_mtda

3. 3D Reconstruction

[1] PanoHead: Geometry-Aware 3D Full-Head Synthesis in 360°

[Code]PanoHead: Geometry-Aware 3D Full-Head Synthesis in 360°

[2] Transforming Radiance Field with Lipschitz Network for Photorealistic 3D Scene Stylization

[Code]None

[3] TAPS3D: Text-Guided 3D Textured Shape Generation from Pseudo Supervision

[Code]https://github.com/plusmultiply/TAPS3D

 

[4] MV-JAR: Masked Voxel Jigsaw and Reconstruction for LiDAR-Based Self-Supervised Pre-Training

[Code]https://github.com/SmartBot-PJLab/MV-JAR

[5] PartNeRF: Generating Part-Aware Editable 3D Shapes without 3D Supervision

[Code]PartNeRF: Generating Part-Aware Editable 3D Shapes without 3D Supervision

[6] SDFusion: Multimodal 3D Shape Completion, Reconstruction, and Generation

[Code]https://yccyenchicheng.github.io/SDFusion/

[7] Masked Wavelet Representation for Compact Neural Radiance Fields

[Code]https://github.com/daniel03c1/masked_wavelet_nerf

 

[8] Decoupling Human and Camera Motion from Videos in the Wild

[Code]https://vye16.github.io/slahmr/

[9] Structural Multiplane Image: Bridging Neural View Synthesis and 3D Reconstruction

[Code]None

[10] NEF: Neural Edge Fields for 3D Parametric Curve Reconstruction from Multi-view Images

[Code]https://yunfan1202.github.io/NEF/

[11] Shape, Pose, and Appearance from a Single Image via Bootstrapped Radiance Field Inversion

[Code]https://github.com/google-research/nerf-from-image

 

[12] MobileBrick: Building LEGO for 3D Reconstruction on Mobile Devices

[Code]http://code.active.vision/MobileBrick/

[13] Unsupervised 3D Shape Reconstruction by Part Retrieval and Assembly

[Code]http

[14] NeuDA: Neural Deformable Anchor for High-Fidelity Implicit Surface Reconstruction

[Code]NeuDA

[15] HairStep: Transfer Synthetic to Real Using Strand and Depth Maps for Single-View 3D Hair Modeling

[Code]https://paulyzheng.github.io/research/hairstep/

 

[16] MACARONS: Mapping And Coverage Anticipation with RGB Online Self-Supervision

[Code]https://imagine.enpc.fr/~guedona/MACARONS/

[17] Disentangling Orthogonal Planes for Indoor Panoramic Room Layout Estimation with Cross-Scale Distortion Awareness

[Code]https://github.com/zhijieshen-bjtu/DOPNet

[18] Im2Hands: Learning Attentive Implicit Representation of Interacting Two-Hand Shapes

[Code]https://jyunlee.github.io/projects/implicit-two-hands/

[19] ECON: Explicit Clothed humans Obtained from Normals

[Code]ECON: Explicit Clothed humans Optimized via Normal integration

 

[20] Structured 3D Features for Reconstructing Relightable and Animatable Avatars

[Code]https://enriccorona.github.io/s3f/

[21] Structured 3D Features for Reconstructing Controllable Avatars

[Code]None

[22] BundleSDF: Neural 6-DoF Tracking and 3D Reconstruction of Unknown Objects

[Code]BundleSDF: Neural 6-DoF Tracking and 3D Reconstruction of Unknown Objects

[23] Seeing Through the Glass: Neural 3D Reconstruction of Object Inside a Transparent Container

[Code]https://github.com/hirotong/ReNeuS

[24] HexPlane: A Fast Representation for Dynamic Scenes

[Code]HexPlane

[25] PAniC-3D: Stylized Single-view 3D Reconstruction from Portraits of Anime Characters

[Code]https://github.com/ShuhongChen/panic3d-anime-reconstruction

[26] 3D Line Mapping Revisited

[Code]https://github.com/cvg/limap

[27] Multi-View Azimuth Stereo via Tangent Space Consistency

[Code]https://github.com/xucao-42/mvas

4. Scene Reconstruction / View Synthesis / Novel View Synthesis

[1] SINE: Semantic-driven Image-based NeRF Editing with Prior-guided Editing Field

[Code]https://zju3dv.github.io/sine/

[2] ShadowNeuS: Neural SDF Reconstruction by Shadow Ray Supervision

[Code]https://github.com/gerwang/ShadowNeuS

[3] Balanced Spherical Grid for Egocentric View Synthesis

[Code]EgoNeRF

 

[4] Robust Dynamic Radiance Fields

[Code]RoDynRF: Robust Dynamic Radiance Fields

[5] Semantic Ray: Learning a Generalizable Semantic Field with Cross-Reprojection Attention

[Code]https://liuff19.github.io/S-Ray/

[6] MobileNeRF: Exploiting the Polygon Rasterization Pipeline for Efficient Neural Field Rendering on Mobile Architectures

[Code]MobileNeRF

[7] I2-SDF: Intrinsic Indoor Scene Reconstruction and Editing via Raytracing in Neural SDFs

[Code]https://jingsenzhu.github.io/i2-sdf/

 

[8] Learning Detailed Radiance Manifolds for High-Fidelity and 3D-Consistent Portrait Synthesis from Monocular Image

[Code]https://yudeng.github.io/GRAMInverter/

[9] Nerflets: Local Radiance Fields for Efficient Structure-Aware 3D Scene Representation from 2D Supervision

[Code]https://jetd1.github.io/nerflets-web/

[10] Local-to-Global Registration for Bundle-Adjusting Neural Radiance Fields

[Code]https://rover-xingyu.github.io/L2G-NeRF/

[11] DP-NeRF: Deblurred Neural Radiance Field with Physical Scene Priors

[Code]https://dogyoonlee.github.io/dpnerf/

 

[12] SPIn-NeRF: Multiview Segmentation and Perceptual Inpainting with Neural Radiance Fields

[Code]SPIn-NeRF

[13] 3D Video Loops from Asynchronous Input

[Code]https://limacv.github.io/VideoLoop3D_web/

[14] NeRFLiX: High-Quality Neural View Synthesis by Learning a Degradation-Driven Inter-viewpoint MiXer

[Code]https://redrock303.github.io/nerflix/

[15] NeRF-Gaze: A Head-Eye Redirection Parametric Model for Gaze Estimation

[Code]None

 

[16] Renderable Neural Radiance Map for Visual Navigation

[Code]https://rllab-snu.github.io/projects/RNR-Map/

[17] Real-Time Neural Light Field on Mobile Devices

[Code]https://snap-research.github.io/MobileR2L/

[18] Latent-NeRF for Shape-Guided Generation of 3D Shapes and Textures

[Code]https://github.com/eladrich/latent-nerf

[19] NoPe-NeRF: Optimising Neural Radiance Field with No Pose Prior

[Code]Nope-NeRF

 

[20] SPARF: Neural Radiance Fields from Sparse and Noisy Poses

[Code]https://prunetruong.com/sparf.github.io/

 

[21] EventNeRF: Neural Radiance Fields from a Single Colour Event Camera

[Code]EventNeRF: Neural Radiance Fields from a Single Colour Event Camera

 

[22] Grid-guided Neural Radiance Fields for Large Urban Scenes

[Code]https://city-super.github.io/gridnerf/

 

[23] HandNeRF: Neural Radiance Fields for Animatable Interacting Hands

[Code]None

 

[24] ABLE-NeRF: Attention-Based Rendering with Learnable Embeddings for Neural Radiance Field

[Code]None

 

[25] Progressively Optimized Local Radiance Fields for Robust View Synthesis

[Code]Progressively Optimized Local Radiance Fields for Robust View Synthesis

 

[26] GM-NeRF: Learning Generalizable Model-based Neural Radiance Fields from Multi-view Images

[Code]https://github.com/JanaldoChen/GM-NeRF

 

[27] MAIR: Multi-view Attention Inverse Rendering with 3D Spatially-Varying Lighting Estimation

[Code]None

 

[28] Interactive Segmentation of Radiance Fields

[Code]https://rahul-goel.github.io/isrf/

 

[29] Ref-NPR: Reference-Based Non-Photorealistic Radiance Fields for Controllable Scene Stylization

[Code]Ref-NPR: Reference-Based Non-Photorealistic Radiance Fields

 

[30] DiffRF: Rendering-Guided 3D Radiance Field Diffusion

[Code]https://sirwyver.github.io/DiffRF/

 

[31] Magic3D: High-Resolution Text-to-3D Content Creation

[Code]Magic3D: High-Resolution Text-to-3D Content Creation

 

[32] JAWS: Just A Wild Shot for Cinematic Transfer in Neural Radiance Fields

[Code]JAWS

 

[33] SUDS: Scalable Urban Dynamic Scenes

[Code]https://haithemturki.com/suds/

 

[34] NeRF-DS: Neural Radiance Fields for Dynamic Specular Objects

[Code]https://github.com/JokerYan/NeRF-DS

 

[35] FlexNeRF: Photorealistic Free-viewpoint Rendering of Moving Humans from Sparse Views

[Code]FlexNeRF: Photorealistic Free-viewpoint Rendering of Moving Humans from Sparse Views

 

[36] DyLiN: Making Light Field Networks Dynamic

[Code]DyLiN: Making Light Field Networks Dynamic

 

[37] Efficient View Synthesis and 3D-based Multi-Frame Denoising with Multiplane Feature Representations

[Code]None

 

[38] NeRF-Supervised Deep Stereo

[Code]NeRF-Supervised Deep Stereo

 

[39] Consistent View Synthesis with Pose-Guided Diffusion Models

[Code]Consistent View Synthesis with Pose-Guided Diffusion Models

 

[40] Enhanced Stable View Synthesis

[Code]None

 

[41] NeFII: Inverse Rendering for Reflectance Decomposition with Near-Field Indirect Illumination

[Code]None

 

[42] F²-NeRF: Fast Neural Radiance Field Training with Free Camera Trajectories

[Code]F2-NeRF

 

[43] Neural Fields meet Explicit Geometric Representation for Inverse Rendering of Urban Scenes

[Code]https://nv-tlabs.github.io/fegr/

 

[44] GINA-3D: Learning to Generate Implicit Neural Assets in the Wild

[Code]None

 

[45] MonoHuman: Animatable Human Neural Field from Monocular Video

[Code]MonoHuman: Animatable Human Neural Field from Monocular Video

 

[46] One-Shot High-Fidelity Talking-Head Synthesis with Deformable Neural Radiance Field

[Code]https://www.waytron.net/hidenerf/

 

[47] Neural Lens Modeling

[Code]NeuralLens

 

[48] Neural Residual Radiance Fields for Streamably Free-Viewpoint Videos

[Code]None

 

[49] POEM: Reconstructing Hand in a Point Embedded Multi-view Stereo

[Code]https://github.com/lixiny/POEM

 

[50] Lift3D: Synthesize 3D Training Data by Lifting 2D GAN to 3D Generative Radiance Field

[Code]https://len-li.github.io/lift3d-web/
