[ICCV-23] Paper List - 3D Generation-related

Table of Contents

Oral Papers

3D from multi-view and sensors

Generative AI

Poster Papers

3D Generation (Neural generative models)

3D from a single image and shape-from-x

3D Editing

Face and gestures

Stylization

Dataset


Oral Papers

3D from multi-view and sensors

  • Zip-NeRF: Anti-Aliased Grid-Based Neural Radiance Fields
  • Tri-MipRF: Tri-Mip Representation for Efficient Anti-Aliasing Neural Radiance Fields
  • LERF: Language Embedded Radiance Fields
  • Mixed Neural Voxels for Fast Multi-view Video Synthesis
  • Multi-Modal Neural Radiance Field for Monocular Dense SLAM with a Light-Weight ToF Sensor
  • Diffusion-Guided Reconstruction of Everyday Hand-Object Interaction Clips
  • Instruct-NeRF2NeRF: Editing 3D Scenes with Instructions
  • Neural Haircut: Prior-Guided Strand-Based Hair Reconstruction
  • ScanNet++: A High-Fidelity Dataset of 3D Indoor Scenes
  • EgoLoc: Revisiting 3D Object Localization from Egocentric Videos with Visual Queries

  • Zip-NeRF: Anti-Aliased Grid-Based Neural Radiance Fields (pdf): Instant-NGP introduced grid-based NeRF to greatly speed up training and rendering. However, grid-based NeRFs typically suffer from aliasing artifacts. mip-NeRF 360 addressed aliasing by replacing ray sampling with cone sampling, but its approach does not combine well with Instant-NGP's grid representation; Zip-NeRF is proposed to bridge the two.
  • LERF: Language Embedded Radiance Fields (pdf): LERF is a follow-up to DFF. On top of DINO features, it additionally embeds CLIP features into the radiance field, enabling fine-grained localization and classification within NeRF.
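The cone-sampling idea behind mip-NeRF 360 and Zip-NeRF can be sketched roughly: instead of querying single points along a ray, each pixel is treated as a cone, and samples are drawn from cross-sections whose radius grows with distance. The following is a minimal illustrative sketch only, not the papers' actual multisampling scheme; all function and parameter names here are hypothetical.

```python
import numpy as np

def cone_samples(origin, direction, pixel_radius, t_vals, n_multisamples=6, rng=None):
    """Toy multisampling inside a pixel's viewing cone (illustrative only).

    Plain NeRF samples one point per depth t: origin + t * direction.
    Here we instead draw several jittered points inside a disk whose
    radius grows linearly with t -- the cone's cross-section -- which is
    what gives a scale-aware (anti-aliased) signal to the field.
    """
    rng = np.random.default_rng() if rng is None else rng
    direction = direction / np.linalg.norm(direction)
    # Build an orthonormal basis (u, v) perpendicular to the ray direction.
    helper = np.array([1.0, 0.0, 0.0])
    if abs(direction[0]) > 0.9:
        helper = np.array([0.0, 1.0, 0.0])
    u = np.cross(direction, helper)
    u /= np.linalg.norm(u)
    v = np.cross(direction, u)

    points = []
    for t in t_vals:
        radius = pixel_radius * t          # cross-section grows with depth
        center = origin + t * direction
        # Uniform jittered samples in the disk at this depth.
        angles = rng.uniform(0.0, 2 * np.pi, n_multisamples)
        radii = radius * np.sqrt(rng.uniform(0.0, 1.0, n_multisamples))
        offsets = radii[:, None] * (np.cos(angles)[:, None] * u
                                    + np.sin(angles)[:, None] * v)
        points.append(center + offsets)
    return np.stack(points)  # shape: (len(t_vals), n_multisamples, 3)
```

In plain NeRF this would collapse to a single point per depth; sampling a disk that widens with distance is the key difference that lets a grid-based field reason about pixel footprint instead of an infinitesimal ray.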

Generative AI

  • TexFusion: Synthesizing 3D Textures with Text-Guided Image Diffusion Models
  • Generative Novel View Synthesis with 3D-Aware Diffusion Models
  • VQ3D: Learning a 3D-Aware Generative Model on ImageNet

Poster Papers

3D Generation (Neural generative models)

  • GRAM-HD: 3D-Consistent Image Generation at High Resolution with Generative Radiance Manifolds
  • Generative Multiplane Neural Radiance for 3D-Aware Image Generation
  • Get3DHuman: Lifting StyleGAN-Human into a 3D Generative Model Using Pixel-Aligned Reconstruction Priors
  • Towards High-Fidelity Text-Guided 3D Face Generation and Manipulation Using only Images
  • ATT3D: Amortized Text-to-3D Object Synthesis
  • Fantasia3D: Disentangling Geometry and Appearance for High-quality Text-to-3D Content Creation
  • GETAvatar: Generative Textured Meshes for Animatable Human Avatars
  • Mimic3D: Thriving 3D-Aware GANs via 3D-to-2D Imitation
  • DreamBooth3D: Subject-Driven Text-to-3D Generation
  • 3D-aware Image Generation using 2D Diffusion Models
  • Single-Stage Diffusion NeRF: A Unified Approach to 3D Generation and Reconstruction

3D from a single image and shape-from-x

  • Accurate 3D Face Reconstruction with Facial Component Tokens
  • HiFace: High-Fidelity 3D Face Reconstruction by Learning Static and Dynamic Details
  • Zero-1-to-3: Zero-shot One Image to 3D Object
  • Deformable Model-Driven Neural Rendering for High-Fidelity 3D Reconstruction of Human Heads Under Low-View Settings

3D Editing

  • Vox-E: Text-Guided Voxel Editing of 3D Objects
  • FaceCLIPNeRF: Text-driven 3D Face Manipulation using Deformable Neural Radiance Fields
  • SKED: Sketch-guided Text-based 3D Editing
  • Seal-3D: Interactive Pixel-Level Editing for Neural Radiance Fields

Face and gestures

  • Speech4Mesh: Speech-Assisted Monocular 3D Facial Reconstruction for Speech-Driven 3D Facial Animation
  • Imitator: Personalized Speech-driven 3D Facial Animation
  • EmoTalk: Speech-Driven Emotional Disentanglement for 3D Face Animation
  • SPACE: Speech-driven Portrait Animation with Controllable Expression

Stylization

  • Diffusion in Style
  • Creative Birds: Self-Supervised Single-View 3D Style Transfer
  • StyleDomain: Efficient and Lightweight Parameterizations of StyleGAN for One-shot and Few-shot Domain Adaptation
  • StylerDALLE: Language-Guided Style Transfer Using a Vector-Quantized Tokenizer of a Large-Scale Generative Model
  • X-Mesh: Towards Fast and Accurate Text-driven 3D Stylization via Dynamic Textual Guidance
  • Locally Stylized Neural Radiance Fields
  • DS-Fusion: Artistic Typography via Discriminated and Stylized Diffusion
  • Multi-Directional Subspace Editing in Style-Space
  • StyleDiffusion: Controllable Disentangled Style Transfer via Diffusion Models
  • All-to-Key Attention for Arbitrary Style Transfer
  • DeformToon3D: Deformable Neural Radiance Fields for 3D Toonification
  • Anti-DreamBooth: Protecting Users from Personalized Text-to-image Synthesis
  • Neural Collage Transfer: Artistic Reconstruction via Material Manipulation

Dataset

  • H3WB: Human3.6M 3D WholeBody Dataset and Benchmark
  • SynBody: Synthetic Dataset with Layered Human Models for 3D Human Perception and Modeling
  • Human-centric Scene Understanding for 3D Large-scale Scenario
