Note 1

SIREN

Abstract

Implicitly defined, continuous, differentiable signal representations parameterized by neural networks have emerged as a powerful paradigm, offering many possible benefits over conventional representations. However, current network architectures for such implicit neural representations are incapable of modeling signals with fine detail, and fail to represent a signal’s spatial and temporal derivatives, despite the fact that these are essential to many physical signals defined implicitly as the solution to partial differential equations. We propose to leverage periodic activation functions for implicit neural representations and demonstrate that these networks, dubbed sinusoidal representation networks or SIRENs, are ideally suited for representing complex natural signals and their derivatives. We analyze SIREN activation statistics to propose a principled initialization scheme and demonstrate the representation of images, wavefields, video, sound, and their derivatives. Further, we show how SIRENs can be leveraged to solve challenging boundary value problems, such as particular Eikonal equations (yielding signed distance functions), the Poisson equation, and the Helmholtz and wave equations. Lastly, we combine SIRENs with hypernetworks to learn priors over the space of SIREN functions. Please see the project website for a video overview of the proposed method and all applications.

Introduction

We are interested in a class of functions Φ that satisfy equations of the form
F(x, Φ, ∇ₓΦ, ∇ₓ²Φ, …) = 0,  Φ : x ↦ Φ(x).   (1)
This implicit problem formulation takes as input the spatial or spatio-temporal coordinates x ∈ R^m and, optionally, derivatives of Φ with respect to these coordinates. Our goal is then to learn a neural network that parameterizes Φ to map x to some quantity of interest while satisfying the constraint presented in Equation (1). Thus, Φ is implicitly defined by the relation defined by F and we refer to neural networks that parameterize such implicitly defined functions as implicit neural representations. As we show in this paper, a surprisingly wide variety of problems across scientific fields fall into this form, such as modeling many different types of discrete signals in image, video, and audio processing using a continuous and differentiable representation, learning 3D shape representations via signed distance functions [1–4], and, more generally, solving boundary value problems, such as the Poisson, Helmholtz, or wave equations.
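As a rough illustration of how such a network Φ looks in practice (this is a NumPy sketch, not the authors' implementation), the sinusoidal layers and the principled initialization mentioned in the abstract can be written as follows; the frequency factor ω₀ = 30 and the uniform init bounds follow the paper's stated scheme, while the layer widths and batch size are arbitrary choices for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

def siren_layer_init(n_in, n_out, is_first=False, omega_0=30.0):
    # SIREN's principled initialization: the first layer is drawn from
    # U(-1/n, 1/n); hidden layers from U(-sqrt(6/n)/omega_0, sqrt(6/n)/omega_0),
    # which keeps pre-activation statistics stable across depth.
    if is_first:
        bound = 1.0 / n_in
    else:
        bound = np.sqrt(6.0 / n_in) / omega_0
    W = rng.uniform(-bound, bound, size=(n_out, n_in))
    b = np.zeros(n_out)
    return W, b

def siren_forward(params, x, omega_0=30.0):
    # x: (batch, in_dim) coordinates; every layer but the last applies
    # the periodic activation sin(omega_0 * (W h + b)).
    h = x
    for W, b in params[:-1]:
        h = np.sin(omega_0 * (h @ W.T + b))
    W, b = params[-1]
    return h @ W.T + b  # final layer is linear

# A tiny Phi: R^2 -> R, e.g. mapping 2D coordinates to an image intensity.
params = [siren_layer_init(2, 64, is_first=True),
          siren_layer_init(64, 64),
          siren_layer_init(64, 1)]
coords = rng.uniform(-1.0, 1.0, size=(8, 2))
out = siren_forward(params, coords)
```

Training such a network against the constraint F = 0 (or plain regression to signal samples) is what turns it into an implicit neural representation of the signal.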

A continuous parameterization offers several benefits over alternatives, such as discrete grid-based representations. For example, because Φ is defined on the continuous domain of x, it can be significantly more memory efficient than a discrete representation, allowing it to model fine detail that is limited not by the grid resolution but by the capacity of the underlying network architecture. Being differentiable implies that gradients and higher-order derivatives can be computed analytically, for example using automatic differentiation, which again makes these models independent of conventional grid resolutions. Finally, with well-behaved derivatives, implicit neural representations may offer a new toolbox for solving inverse problems.
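The "well-behaved derivatives" point can be illustrated with a toy one-hidden-layer sine network, whose derivative has a simple closed form via the chain rule; in a real SIREN this is what automatic differentiation computes. This is a sketch under assumed toy weights, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(1)
omega = 30.0
# One hidden layer of 16 sine units, scalar input and output.
W1 = rng.uniform(-1.0, 1.0, size=(16, 1))
b1 = np.zeros(16)
W2 = rng.uniform(-np.sqrt(6 / 16) / omega, np.sqrt(6 / 16) / omega, size=(1, 16))

def phi(x):
    # Phi(x) = W2 sin(omega * (W1 x + b1))
    return float(W2 @ np.sin(omega * (W1[:, 0] * x + b1)))

def phi_prime(x):
    # Analytic derivative via the chain rule: the derivative of a sine
    # network is a (phase-shifted) sine network, so it stays smooth.
    return float(W2 @ (np.cos(omega * (W1[:, 0] * x + b1)) * omega * W1[:, 0]))

# Check the closed form against a central finite difference.
x0, eps = 0.3, 1e-6
fd = (phi(x0 + eps) - phi(x0 - eps)) / (2 * eps)
assert abs(fd - phi_prime(x0)) < 1e-4
```

Because the derivative is again a sum of sinusoids rather than a piecewise-constant function (as it would be for ReLU), constraints on ∇Φ or higher derivatives, as in the boundary value problems above, remain well defined.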
