CVPR / SIGGRAPH / ICCV 2023 NeRF Papers (Personal Study Notes)

A fairly complete collection of links to the major conferences: CVF Open Access

CVPR 2023 schedule: CVPR 2023 Schedule

ICCV 2023 schedule: ICCV 2023 Open Access Repository

SIGGRAPH 2023 schedule: Full Program | SIGGRAPH 2023

(Specifically focused on NeRF) Your NeRF Guide To SIGGRAPH 2023! | Neural Radiance Fields

Contents

ICCV 2023

IntrinsicNeRF: Learning Intrinsic Neural Radiance Fields for Editable Novel View Synthesis

SceneRF: Self-Supervised Monocular 3D Scene Reconstruction with Radiance Fields

CopyRNeRF: Protecting the CopyRight of Neural Radiance Fields

Tri-MipRF: Tri-Mip Representation for Efficient Anti-Aliasing Neural Radiance Fields

Lighting up NeRF via Unsupervised Decomposition and Enhancement

SIGGRAPH 2023 

NeRO: Neural Geometry and BRDF Reconstruction of Reflective Objects from Multiview Images

BakedSDF: Meshing Neural SDFs for Real-Time View Synthesis

3D Gaussian Splatting for Real-Time Radiance Field Rendering

ViP-NeRF: Visibility Prior for Sparse Input Neural Radiance Fields

Nerfstudio: A Modular Framework for Neural Radiance Field Development

Relighting Neural Radiance Fields with Shadow and Highlight Hints

DE-NeRF: DEcoupled Neural Radiance Fields for View-Consistent Appearance Editing and High-Frequency Environmental Relighting

SketchFaceNeRF: Sketch-based Facial Generation and Editing in Neural Radiance Fields

HumanRF: High-Fidelity Neural Radiance Fields for Humans in Motion

NeRSemble: Multi-view Radiance Field Reconstruction of Human Heads

NOFA: NeRF-based One-shot Facial Avatar Reconstruction

LatentAvatar: Learning Latent Expression Code for Expressive Neural Head Avatar

CVPR 2023

ICCV 2023

2023.10.2-10.6

IntrinsicNeRF: Learning Intrinsic Neural Radiance Fields for Editable Novel View Synthesis

paper:IntrinsicNeRF

code:(not released)GitHub - zju3dv/IntrinsicNeRF: code for "IntrinsicNeRF: Learning Intrinsic Neural Radiance Fields for Editable Novel View Synthesis", ICCV 2023

SceneRF: Self-Supervised Monocular 3D Scene Reconstruction with Radiance Fields

paper:https://arxiv.org/pdf/2212.02501.pdf

code:GitHub - astra-vision/SceneRF: [ICCV 2023] Official implementation of "SceneRF: Self-Supervised Monocular 3D Scene Reconstruction with Radiance Fields"

project page:SceneRF: Self-Supervised Monocular 3D Scene Reconstruction with Radiance Fields

CopyRNeRF: Protecting the CopyRight of Neural Radiance Fields

paper:https://arxiv.org/abs/2307.11526

code:——

project page:CopyRNeRF: Protecting the CopyRight of Neural Radiance Fields

Tri-MipRF: Tri-Mip Representation for Efficient Anti-Aliasing Neural Radiance Fields

paper:https://arxiv.org/pdf/2307.11335.pdf

code: GitHub - wbhu/Tri-MipRF: Tri-MipRF: Tri-Mip Representation for Efficient Anti-Aliasing Neural Radiance Fields, ICCV'23 (Oral, Best Paper Finalist)

project page:Tri-MipRF: Tri-Mip Representation for Efficient Anti-Aliasing Neural Radiance Fields

Lighting up NeRF via Unsupervised Decomposition and Enhancement 

paper:https://arxiv.org/pdf/2307.10664.pdf

code:GitHub - onpix/LLNeRF: [ICCV2023] Lighting up NeRF via Unsupervised Decomposition and Enhancement

project page:Lighting up NeRF via Unsupervised Decomposition and Enhancement

(One of the most polished and beautiful project pages I have seen...)

SIGGRAPH 2023 

2023.8.6-8.10

(I declare SIGGRAPH posters to be the most delightful part of any conference! Having browsed through them, they mostly all look like this!)

NeRO: Neural Geometry and BRDF Reconstruction of Reflective Objects from Multiview Images

paper:https://arxiv.org/abs/2305.17398

code:GitHub - liuyuan-pal/NeRO: [SIGGRAPH2023] NeRO: Neural Geometry and BRDF Reconstruction of Reflective Objects from Multiview Images

project page:NeRO: Neural Geometry and BRDF Reconstruction of Reflective Objects from Multiview Images

<For reflective objects!> In terms of reconstructed geometry, it outperforms Ref-NeRF.

First, by applying the split-sum approximation and integrated directional encoding to approximate the shading effects of direct and indirect light, the geometry of reflective objects can be reconstructed accurately without any object masks. Then, with the geometry fixed, more accurate sampling is used to recover the environment lighting and the object's BRDF. On top of that, relighting is also possible.

It reconstructs the surface and BRDF of reflective objects and outputs a high-accuracy mesh.
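
To make the split-sum idea above a bit more concrete, here is a minimal PyTorch sketch of a specular term approximated as (prefiltered environment light) x (environment-BRDF term). This is only an illustration of the split-sum structure, not NeRO's implementation: NeRO learns the light integrals with MLPs and integrated directional encoding, while the env-BRDF term below uses Karis's analytic mobile approximation as a stand-in, and all function names are hypothetical.

```python
import torch

def split_sum_specular(prefiltered_light, n_dot_v, roughness, f0):
    """Split-sum shading sketch.

    prefiltered_light: (..., 3) environment light prefiltered along the reflection
        direction at the given roughness (in NeRO this comes from a learned MLP).
    n_dot_v: (..., 1) clamped cosine between surface normal and view direction.
    roughness: (..., 1) surface roughness in [0, 1].
    f0: (..., 3) Fresnel reflectance at normal incidence.
    """
    # Analytic environment-BRDF approximation (Karis 2014), standing in for the
    # second integral of the split-sum approximation.
    c0 = torch.tensor([-1.0, -0.0275, -0.572, 0.022], device=roughness.device)
    c1 = torch.tensor([1.0, 0.0425, 1.04, -0.04], device=roughness.device)
    r = roughness * c0 + c1                                  # (..., 4)
    a004 = torch.minimum(r[..., 0:1] * r[..., 0:1],
                         torch.pow(2.0, -9.28 * n_dot_v)) * r[..., 0:1] + r[..., 1:2]
    A = -1.04 * a004 + r[..., 2:3]
    B = 1.04 * a004 + r[..., 3:4]
    return prefiltered_light * (f0 * A + B)
```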

BakedSDF: Meshing Neural SDFs for Real-Time View Synthesis

paper:https://arxiv.org/abs/2302.14859

code:——

project page:BakedSDF

It first optimizes a hybrid volume-surface neural scene representation so that it has a well-behaved level set corresponding to surfaces in the scene. This representation is then baked into a high-quality triangle mesh equipped with a simple, fast view-dependent appearance model based on spherical Gaussians. Finally, the baked representation is optimized to best reproduce the captured viewpoints, yielding a model that supports real-time view synthesis on commodity hardware using accelerated polygon rasterization pipelines.
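
As a rough illustration of the spherical-Gaussian view-dependent appearance model mentioned above, the PyTorch sketch below evaluates a per-point color as a diffuse term plus a sum of spherical Gaussian lobes queried with the view direction. Tensor layouts, the number of lobes, and all names are assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def sg_appearance(diffuse, sg_axis, sg_sharpness, sg_color, view_dir):
    """Evaluate diffuse + spherical-Gaussian view-dependent color.

    diffuse:      (..., 3) baked view-independent color.
    sg_axis:      (..., N, 3) unit lobe axes (N lobes per point).
    sg_sharpness: (..., N, 1) lobe sharpness (lambda).
    sg_color:     (..., N, 3) lobe colors.
    view_dir:     (..., 3) viewing direction.
    """
    d = F.normalize(view_dir, dim=-1).unsqueeze(-2)            # (..., 1, 3)
    cos = (d * sg_axis).sum(dim=-1, keepdim=True)              # (..., N, 1)
    lobes = sg_color * torch.exp(sg_sharpness * (cos - 1.0))   # (..., N, 3)
    return diffuse + lobes.sum(dim=-2)                         # (..., 3)
```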

3D Gaussian Splatting for Real-Time Radiance Field Rendering

paper:3D Gaussian Splatting for Real-Time Radiance Field Rendering | ACM Transactions on Graphics

code:GitHub - shumash/gaussian-splatting: Original reference implementation of "3D Gaussian Splatting for Real-Time Radiance Field Rendering"

project page:3D Gaussian Splatting for Real-Time Radiance Field Rendering

First, starting from the sparse points produced during camera calibration, the scene is represented with 3D Gaussians, which preserve the desirable properties of continuous volumetric radiance fields for scene optimization while avoiding unnecessary computation in empty space. Second, interleaved optimization and density control of the 3D Gaussians, in particular optimization of the anisotropic covariance, yields an accurate representation of the scene. Third, a fast visibility-aware rendering algorithm supports anisotropic splatting, which both accelerates training and enables real-time rendering.

These three key elements achieve state-of-the-art visual quality while maintaining competitive training times and, more importantly, allow high-quality real-time (≥ 100 fps) novel view synthesis at 1080p resolution.
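
The "anisotropic covariance" above is parameterized per Gaussian by a quaternion rotation and a per-axis scale, so that Sigma = R S S^T R^T stays positive semi-definite during optimization. Below is a minimal PyTorch sketch of that construction; the (w, x, y, z) quaternion order and tensor layouts are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def build_covariance(quat, scale):
    """Build per-Gaussian 3D covariance Sigma = R S S^T R^T.

    quat:  (N, 4) unnormalized quaternions in (w, x, y, z) order.
    scale: (N, 3) positive per-axis scales (e.g. exp of a learned parameter).
    """
    q = F.normalize(quat, dim=-1)
    w, x, y, z = q.unbind(-1)
    R = torch.stack([
        1 - 2 * (y * y + z * z), 2 * (x * y - w * z),     2 * (x * z + w * y),
        2 * (x * y + w * z),     1 - 2 * (x * x + z * z), 2 * (y * z - w * x),
        2 * (x * z - w * y),     2 * (y * z + w * x),     1 - 2 * (x * x + y * y),
    ], dim=-1).reshape(-1, 3, 3)
    M = R @ torch.diag_embed(scale)      # (N, 3, 3)
    return M @ M.transpose(-1, -2)       # symmetric, positive semi-definite
```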

ViP-NeRF: Visibility Prior for Sparse Input Neural Radiance Fields

paper:https://arxiv.org/abs/2305.00041

code:https://github.com/NagabhushanSN95/ViP-NeRF

project page:ViP-NeRF: Visibility Prior for Sparse Input Neural Radiance Fields 

When the input images are sparse, the performance of neural radiance fields (NeRF) on novel view synthesis degrades significantly. NeRF training is therefore regularized with a prior on the visibility of pixels across the input frames. The visibility prior is computed using plane sweep volumes and requires no pre-training.
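
A minimal sketch of what such a visibility regularizer could look like in PyTorch is shown below: the visibility of a point from a second camera is approximated by the accumulated transmittance along the connecting ray and pulled towards the plane-sweep prior. This only illustrates the idea; the paper's actual prior computation and loss differ in the details, and all names are hypothetical.

```python
import torch

def ray_visibility(sigmas, deltas):
    """Visibility of a surface point as seen from a second camera.

    sigmas: (R, S) densities sampled along the ray from the point towards the camera.
    deltas: (R, S) lengths of the corresponding ray segments.
    Returns (R,) accumulated transmittance (1 = fully visible, 0 = occluded).
    """
    alpha = 1.0 - torch.exp(-sigmas * deltas)
    trans = torch.cumprod(1.0 - alpha + 1e-10, dim=-1)
    return trans[..., -1]

def visibility_prior_loss(pred_visibility, prior_visibility):
    # L1 regularizer pulling NeRF-predicted visibility towards the plane-sweep prior.
    return (pred_visibility - prior_visibility).abs().mean()
```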

Nerfstudio: A Modular Framework for Neural Radiance Field Development

paper:https://arxiv.org/abs/2302.04264

code:GitHub - nerfstudio-project/nerfstudio: A collaboration friendly studio for NeRFs

project page:nerfstudio

A modular PyTorch framework for NeRF development.

Relighting Neural Radiance Fields with Shadow and Highlight Hints

paper:https://dl.acm.org/doi/pdf/10.1145/3588432.3591482

code:——

Relighting; SDF-based geometry.

A second multilayer perceptron models the local and global light transport at each point. In addition to the density feature, current position, normal (derived from the signed distance function), view direction, and light position, shadow and highlight hints are fed in to help the network model the corresponding high-frequency light-transport effects.
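
The description above translates naturally into the input layout of that second MLP. The sketch below is a plausible PyTorch rendition only: layer widths, hint dimensionality, and all names are assumptions rather than the paper's exact architecture.

```python
import torch
import torch.nn as nn

class RelightingMLP(nn.Module):
    """Per-point light-transport MLP conditioned on shadow and highlight hints."""

    def __init__(self, feat_dim=256, n_hints=1, hidden=256):
        super().__init__()
        # density feature + position(3) + normal(3) + view dir(3) + light pos(3)
        # + shadow hint(n_hints) + highlight hint(n_hints)
        in_dim = feat_dim + 3 + 3 + 3 + 3 + 2 * n_hints
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),  # RGB radiance under the given point light
        )

    def forward(self, feat, x, normal, view_dir, light_pos,
                shadow_hint, highlight_hint):
        h = torch.cat([feat, x, normal, view_dir, light_pos,
                       shadow_hint, highlight_hint], dim=-1)
        return torch.sigmoid(self.net(h))
```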

DE-NeRF: DEcoupled Neural Radiance Fields for View-Consistent Appearance Editing and High-Frequency Environmental Relighting

paper:DE-NeRF: DEcoupled Neural Radiance Fields for View-Consistent Appearance Editing and High-Frequency Environmental Relighting | ACM SIGGRAPH 2023 Conference Proceedings

code:——

The view-independent and view-dependent appearance of a scene are decoupled through a hybrid lighting representation. Specifically, a signed distance function is first trained to reconstruct an explicit mesh for the input scene. The decoupled NeRF then learns to attach view-independent appearance to the reconstructed mesh by defining learnable, disentangled features for geometry and view-independent appearance on the mesh vertices. Lighting is approximated with an explicit learnable environment map together with an implicit lighting network, supporting both low-frequency and high-frequency relighting.
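
The "learnable features on mesh vertices" part can be pictured as per-vertex feature vectors optimized jointly with the networks and gathered at each surface sample via barycentric interpolation. The PyTorch sketch below illustrates just that interpolation step; tensor layouts and names are assumptions.

```python
import torch

def interpolate_vertex_features(vertex_feats, faces, face_ids, barycentric):
    """Gather per-vertex learnable features at surface samples.

    vertex_feats: (V, C) learnable feature vectors on the extracted mesh vertices.
    faces:        (F, 3) vertex indices of each triangle.
    face_ids:     (N,)   index of the triangle hit by each surface sample.
    barycentric:  (N, 3) barycentric coordinates of each sample in its triangle.
    """
    tri_feats = vertex_feats[faces[face_ids]]                  # (N, 3, C)
    return (barycentric.unsqueeze(-1) * tri_feats).sum(dim=1)  # (N, C)
```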

SketchFaceNeRF: Sketch-based Facial Generation and Editing in Neural Radiance Fields

paper:https://orca.cardiff.ac.uk/id/eprint/159468/1/NeRFFaceSketch_SIG23.pdf

code:——

HumanRF: High-Fidelity Neural Radiance Fields for Humans in Motion

paper:https://arxiv.org/abs/2305.06356

code:GitHub - synthesiaresearch/humanrf: Official code for "HumanRF: High-Fidelity Neural Radiance Fields for Humans in Motion"

project page:HumanRF: High-Fidelity Neural Radiance Fields for Humans in Motion

HumanRF performs temporally stable novel view synthesis of humans in motion. By adaptively partitioning the temporal domain into 4D decomposed feature grids, it reconstructs long sequences at state-of-the-art quality with a high compression rate.
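
A minimal sketch of a low-rank 4D decomposition of the kind described (four 3D feature grids combined with four 1D feature vectors over the complementary axis) is given below. HumanRF's adaptive temporal partitioning and exact decomposition are more involved; grid layouts, coordinate conventions, and all names here are assumptions.

```python
import torch
import torch.nn.functional as F

def sample_grid3d(grid, coords):
    """grid: (C, D, H, W); coords: (N, 3) in [-1, 1], ordered (x, y, z) with
    x indexing W, y indexing H, z indexing D (grid_sample convention)."""
    out = F.grid_sample(grid.unsqueeze(0), coords.view(1, -1, 1, 1, 3),
                        align_corners=True)                     # (1, C, N, 1, 1)
    return out.view(grid.shape[0], -1).t()                      # (N, C)

def sample_vec1d(vec, t):
    """vec: (C, T); t: (N,) in [-1, 1]. Linear interpolation along the axis."""
    grid = torch.stack([torch.zeros_like(t), t], dim=-1).view(1, -1, 1, 2)
    out = F.grid_sample(vec.unsqueeze(0).unsqueeze(-1), grid,
                        align_corners=True)                     # (1, C, N, 1)
    return out.view(vec.shape[0], -1).t()                       # (N, C)

def decomposed_4d_feature(grids3d, vecs1d, xyzt):
    """grids3d: four (C, D, H, W) grids over the (xyz, xyt, xzt, yzt) projections;
    vecs1d: four (C, T) vectors over the complementary axis (t, z, y, x);
    xyzt: (N, 4) normalized coordinates in [-1, 1]."""
    x, y, z, t = xyzt.unbind(-1)
    projs = [torch.stack(p, dim=-1)
             for p in ((x, y, z), (x, y, t), (x, z, t), (y, z, t))]
    comps = [t, z, y, x]
    feat = 0.0
    for g, v, p3, p1 in zip(grids3d, vecs1d, projs, comps):
        feat = feat + sample_grid3d(g, p3) * sample_vec1d(v, p1)
    return feat                                                 # (N, C)
```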

NeRSemble: Multi-view Radiance Field Reconstruction of Human Heads 

paper:https://arxiv.org/abs/2305.03027

code:https://github.com/tobias-kirschstein/nersemble

project page:NeRSemble: Multi-view Radiance Field Reconstruction of Human Heads

NOFA: NeRF-based One-shot Facial Avatar Reconstruction

paper:https://dl.acm.org/doi/pdf/10.1145/3588432.3591555

code:——

A one-shot 3D facial avatar reconstruction framework. It uses an efficient encoder-decoder network plus a compensation network to reconstruct a canonical neural volume from the input image, and a 3DMM-based deformation field to model facial dynamics.
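
The "canonical volume plus 3DMM-based deformation field" structure can be sketched as a canonical radiance field queried at points warped by an expression-conditioned offset network. The PyTorch sketch below is schematic only: the expression-code dimension, network sizes, and names are assumptions, and the real pipeline additionally relies on the encoder-decoder and compensation networks described above.

```python
import torch
import torch.nn as nn

class DeformedCanonicalVolume(nn.Module):
    """Canonical radiance field warped by an expression-conditioned deformation."""

    def __init__(self, exp_dim=64, hidden=128):
        super().__init__()
        self.deform = nn.Sequential(       # predicts an offset into canonical space
            nn.Linear(3 + exp_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )
        self.canonical = nn.Sequential(    # canonical (RGB, density) field
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),
        )

    def forward(self, x, exp_code):
        # x: (N, 3) sample positions; exp_code: (1, exp_dim) 3DMM expression code.
        offset = self.deform(torch.cat([x, exp_code.expand(x.shape[0], -1)], dim=-1))
        out = self.canonical(x + offset)
        rgb = torch.sigmoid(out[..., :3])
        sigma = torch.relu(out[..., 3:])
        return rgb, sigma
```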

LatentAvatar: Learning Latent Expression Code for Expressive Neural Head Avatar

paper:https://arxiv.org/abs/2305.01190

code:——

CVPR 2023

To be continued...
