PINN: Physics Informed Neural Networks

Intro

https://en.wikipedia.org/wiki/Physics-informed_neural_networks

Physics-informed neural networks (PINNs) are universal function approximators that can embed knowledge of the physical laws governing a given data set, typically expressed as partial differential equations (PDEs), into the learning process.[1] They address the low data availability of some biological and engineering systems, which leaves most state-of-the-art machine learning techniques insufficiently robust in those scenarios.[1] Prior knowledge of general physical laws acts during the training of neural networks (NNs) as a regularization agent that limits the space of admissible solutions, increasing the correctness of the function approximation. Embedding this prior information into a neural network thus enhances the information content of the available data, helping the learning algorithm capture the right solution and generalize well even from a small number of training examples.

==> think of it as the supervised-learning (SL) DNN analogue of RL's expert trajectories

====> significantly shrinks the hypothesis space and cuts training time; enforces a baseline of "correctness" in the approximation results (see the sketch below)

==> how exciting! could be a paradigm for how humans and machines collaborate in the future.
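
To make the regularization view concrete, here is a minimal sketch of the composite PINN training objective in PyTorch. Everything here (`net`, `pde_residual`, the weight `lambda_pde`) is an illustrative placeholder, not taken from the sources quoted in these notes: the physics prior is simply an extra residual penalty added to the usual data-fitting loss.

```python
import torch

# Minimal sketch of a PINN objective, assuming a generic network `net` and a
# user-supplied `pde_residual` function (both hypothetical names): the physics
# term penalizes the PDE residual at collocation points, which shrinks the
# space of admissible solutions the optimizer can settle on.
def composite_loss(net, x_data, u_data, x_colloc, pde_residual, lambda_pde=1.0):
    # Ordinary supervised data-fitting term
    data_loss = torch.mean((net(x_data) - u_data) ** 2)
    # Physics term: the governing PDE's residual should vanish at collocation points
    physics_loss = torch.mean(pde_residual(net, x_colloc) ** 2)
    return data_loss + lambda_pde * physics_loss
```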

Physics Informed Deep Learning

Authors | Physics Informed Deep Learning

==> a concise and sufficiently technical poster to present PI-DL by its leading scholars

Authors

Maziar Raissi, Paris Perdikaris, and George Em Karniadakis

Abstract

We introduce physics informed neural networks – neural networks that are trained to solve supervised learning tasks while respecting any given law of physics described by general nonlinear partial differential equations. We present our developments in the context of solving two main classes of problems: data-driven solution and data-driven discovery of partial differential equations. Depending on the nature and arrangement of the available data, we devise two distinct classes of algorithms, namely continuous time and discrete time models. The resulting neural networks form a new class of data-efficient universal function approximators that naturally encode any underlying physical laws as prior information. In the first part, we demonstrate how these networks can be used to infer solutions to partial differential equations, and obtain physics-informed surrogate models that are fully differentiable with respect to all input coordinates and free parameters. In the second part, we focus on the problem of data-driven discovery of partial differential equations.

==> the various example equations cannot survive copy-paste, so check the linked article for them.
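
As a hedged illustration of the abstract's second problem class (data-driven discovery), one common pattern is to expose the unknown PDE coefficient as a trainable scalar optimized jointly with the network. The sketch below assumes a Burgers'-type equation u_t + u·u_x = ν·u_xx with unknown viscosity ν; the class name and architecture are invented for illustration, not taken from the paper.

```python
import torch

class DiscoveryPINN(torch.nn.Module):
    """Learns both the solution u(x, t) and the unknown viscosity nu."""
    def __init__(self):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(2, 20), torch.nn.Tanh(),
            torch.nn.Linear(20, 20), torch.nn.Tanh(),
            torch.nn.Linear(20, 1))
        # Log-parametrization keeps the learned viscosity positive
        self.log_nu = torch.nn.Parameter(torch.tensor(0.0))

    def residual(self, x, t):
        # Residual of u_t + u*u_x - nu*u_xx = 0 at collocation points (x, t);
        # x and t must be created with requires_grad=True
        u = self.net(torch.cat([x, t], dim=1))
        ones = torch.ones_like(u)
        u_t = torch.autograd.grad(u, t, ones, create_graph=True)[0]
        u_x = torch.autograd.grad(u, x, ones, create_graph=True)[0]
        u_xx = torch.autograd.grad(u_x, x, torch.ones_like(u_x), create_graph=True)[0]
        return u_t + u * u_x - torch.exp(self.log_nu) * u_xx
```

During training, the data loss and the mean squared residual are summed as usual; gradients flow into `log_nu`, so the coefficient is "discovered" alongside the solution.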

Status Quo* of PINNs

https://arxiv.org/abs/2201.05624

*[Submitted on 14 Jan 2022 (v1), last revised 13 Feb 2022 (this version, v3)]

Scientific Machine Learning through Physics-Informed Neural Networks: Where we are and What's next

Salvatore Cuomo, Vincenzo Schiano di Cola, Fabio Giampaolo, Gianluigi Rozza, Maziar Raissi, Francesco Piccialli

Physics-Informed Neural Networks (PINN) are neural networks (NNs) that encode model equations, like Partial Differential Equations (PDE), as a component of the neural network itself. PINNs are nowadays used to solve PDEs, fractional equations, and integral-differential equations. This novel methodology has arisen as a multi-task learning framework in which a NN must fit observed data while reducing a PDE residual. This article provides a comprehensive review of the literature on PINNs: while the primary goal of the study was to characterize these networks and their related advantages and disadvantages, the review also attempts to incorporate publications on a larger variety of issues, including physics-constrained neural networks (PCNN), where the initial or boundary conditions are directly embedded in the NN structure rather than in the loss functions. The study indicates that most research has focused on customizing the PINN through different activation functions, gradient optimization techniques, neural network structures, and loss function structures. Despite the wide range of applications for which PINNs have been used, by demonstrating their ability to be more feasible in some contexts than classical numerical techniques like Finite Element Method (FEM), advancements are still possible, most notably theoretical issues that remain unresolved.

pdf download link: 

https://arxiv.org/pdf/2201.05624
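
The review's mention of physics-constrained neural networks (PCNN) deserves a small sketch: instead of penalizing boundary conditions in the loss, the boundary values are hard-wired into the output ansatz. The construction below, assuming homogeneous Dirichlet conditions u(0,t) = u(1,t) = 0, is one common choice, not the specific architecture of any paper cited here.

```python
import torch

class HardBCNet(torch.nn.Module):
    """Output ansatz that satisfies u(0, t) = u(1, t) = 0 exactly."""
    def __init__(self):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(2, 20), torch.nn.Tanh(), torch.nn.Linear(20, 1))

    def forward(self, x, t):
        raw = self.net(torch.cat([x, t], dim=1))
        # The factor x*(1 - x) vanishes at both boundaries, so the boundary
        # conditions hold by construction and need no loss term
        return x * (1.0 - x) * raw
```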

A Frontier Application of PINN

https://www.quantamagazine.org/deep-learning-poised-to-blow-up-famed-fluid-equations-20220412/

Summary:

==> the goal is to find a singularity in solutions of the Euler equations for fluid flow.

==> there are proposed setups, but solving for the singularity by computer simulation is hard: computers cannot work with infinity, and precision loss across approximation steps can produce falsely identified singularities.

==> by manipulating the equations, we can remove the time dependency and obtain a nice cyclic property: the equations produce "self-similar" solutions under similar physical setups, with only the quantities of interest magnified.

==> those rewritten equations must be solved anew, and they contain an unknown magnification-rate parameter;

====> solving them with traditional approaches is hard, if possible at all

====> but such a problem is tailor-made for PINNs; see the sketch below
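
A hedged sketch of why this setup is PINN-friendly: the unknown magnification rate is just one more trainable scalar, optimized jointly with the network representing the self-similar profile. The names `profile` and `lam` and the residual function are placeholders, not the actual 2D self-similar Euler system from the paper.

```python
import torch

# Network for the self-similar profile; `lam` is the unknown magnification rate
profile = torch.nn.Sequential(
    torch.nn.Linear(2, 50), torch.nn.Tanh(), torch.nn.Linear(50, 1))
lam = torch.nn.Parameter(torch.tensor(1.0))

# The optimizer updates the profile network and the scalar rate jointly
optimizer = torch.optim.Adam(list(profile.parameters()) + [lam], lr=1e-3)

def self_similar_residual(profile, lam, pts):
    # Placeholder standing in for the residual R(U, lambda) of the
    # self-similar equations; the real system is in the paper
    return profile(pts) - lam * pts[:, :1]

pts = torch.rand(256, 2)  # collocation points in self-similar coordinates
for step in range(1000):
    optimizer.zero_grad()
    loss = torch.mean(self_similar_residual(profile, lam, pts) ** 2)
    loss.backward()
    optimizer.step()
```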

He, Lai, Wang and Javier Gómez-Serrano, a mathematician at Brown University and the University of Barcelona, established a set of physical constraints to help guide their PINN: conditions related to symmetry and other properties, as well as the equations they wanted to solve (they used a set of 2D equations, rewritten using self-similar coordinates, that are known to be equivalent to the 3D Euler equations at points approaching the cylindrical boundary).

They then trained the neural network to search for solutions — and for the self-similar parameter — that satisfied those constraints. “This method is very flexible,” Lai said. “You can always find a solution as long as you impose the correct constraints.” (In fact, the group showcased that flexibility by testing the method on other problems.)

The team’s answer looked a lot like the solution that Hou and Luo had arrived at in 2013. But the mathematicians hope that their approximation paints a more detailed picture of what’s happening, since it marks the first direct calculation of a self-similar solution for this problem. “The new result specifies more precisely how the singularity is formed,” Sverak said — how certain values will blow up, and how the equations will collapse.

“You’re really extracting the essence of the singularity,” Buckmaster said. “It was very difficult to show this without neural networks. It’s clear as night and day that it’s a much easier approach than traditional methods.”

Gómez-Serrano agrees. “This is going to be part of the standard toolboxes that people are going to have at hand in the future,” he said.

Once again, PINNs have revealed what Karniadakis called “hidden fluid mechanics” — only this time, they made headway on a far more theoretical problem than the ones PINNs are usually used for. “I haven’t seen anybody use PINNs for that,” Karniadakis said.

That’s not the only reason mathematicians are excited. PINNs might also be perfectly situated to find another type of singularity that’s all but invisible to traditional numerical methods. These “unstable” singularities might be the only ones that exist for certain models of fluid dynamics, including the Euler equations without a cylindrical boundary (which are already much more complicated to solve) and the Navier-Stokes equations. “Unstable things do exist. So why not find them?” said Peter Constantin, a mathematician at Princeton.

OK, here is PyTorch code for solving a second-order diffusion equation with a PINN:

```python
import torch
import numpy as np
import matplotlib.pyplot as plt

# Left and right boundaries of the solution domain
x_left = 0.0
x_right = 1.0

# Time and space step sizes
delta_t = 0.001
delta_x = 0.05

# Input and hidden dimensions of the neural network
input_dim = 2
hidden_dim = 10

# Output dimension of the neural network
output_dim = 1

# Initial and boundary conditions of the diffusion equation u_t = u_xx
def u_init(x):
    return torch.sin(np.pi * x)

def u_left(t):
    return torch.zeros_like(t)

def u_right(t):
    return torch.zeros_like(t)

# Neural network model
class PINN(torch.nn.Module):
    def __init__(self):
        super(PINN, self).__init__()
        self.fc1 = torch.nn.Linear(input_dim, hidden_dim)
        self.fc2 = torch.nn.Linear(hidden_dim, hidden_dim)
        self.fc3 = torch.nn.Linear(hidden_dim, output_dim)

    def forward(self, x, t):
        # Concatenate space and time coordinates as the network input
        xt = torch.cat([x, t], dim=1)
        h1 = torch.tanh(self.fc1(xt))
        h2 = torch.tanh(self.fc2(h1))
        out = self.fc3(h2)
        return out

# PINN loss function
def pinn_loss(model, x_left, x_right, delta_t, delta_x):
    # Space and time grids
    x = torch.linspace(x_left, x_right, int((x_right - x_left) / delta_x) + 1).unsqueeze(1)
    t = torch.linspace(0, 1, int(1 / delta_t) + 1).unsqueeze(1)

    # Interior collocation points: all (x, t) pairs away from the boundaries
    X, T = torch.meshgrid(x[1:-1, 0], t[1:, 0], indexing='ij')
    x_internal = X.reshape(-1, 1).requires_grad_(True)
    t_internal = T.reshape(-1, 1).requires_grad_(True)
    u_internal = model(x_internal, t_internal)

    # Predictions on the left and right boundaries
    u_left_boundary = model(torch.full_like(t, x_left), t)
    u_right_boundary = model(torch.full_like(t, x_right), t)

    # Initial- and boundary-condition losses
    init_loss = torch.mean((model(x, torch.zeros_like(x)) - u_init(x)) ** 2)
    left_boundary_loss = torch.mean((u_left_boundary - u_left(t)) ** 2)
    right_boundary_loss = torch.mean((u_right_boundary - u_right(t)) ** 2)

    # Residual of the diffusion equation, u_t - u_xx, via automatic differentiation
    u_t = torch.autograd.grad(u_internal, t_internal,
                              grad_outputs=torch.ones_like(u_internal), create_graph=True)[0]
    u_x = torch.autograd.grad(u_internal, x_internal,
                              grad_outputs=torch.ones_like(u_internal), create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x, x_internal,
                               grad_outputs=torch.ones_like(u_x), create_graph=True)[0]
    f = u_t - u_xx

    # Residual loss
    residual_loss = torch.mean(f ** 2)

    # Total loss
    loss = init_loss + left_boundary_loss + right_boundary_loss + residual_loss
    return loss

# Initialize the model and the optimizer
model = PINN()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)

# Train the model
for i in range(10000):
    optimizer.zero_grad()
    loss = pinn_loss(model, x_left, x_right, delta_t, delta_x)
    loss.backward()
    optimizer.step()
    if i % 1000 == 0:
        print("Iteration {}, Loss = {}".format(i, loss.item()))

# Predict with the trained model on a dense grid
x_test = torch.linspace(x_left, x_right, 101)
t_test = torch.linspace(0, 1, 101)
X, T = torch.meshgrid(x_test, t_test, indexing='ij')
u_test = model(X.reshape(-1, 1), T.reshape(-1, 1)).detach().reshape(X.shape).numpy()

# Plot the prediction
plt.pcolormesh(X.numpy(), T.numpy(), u_test, cmap='coolwarm', shading='auto')
plt.colorbar()
plt.xlabel("x")
plt.ylabel("t")
plt.title("PINN Solution to Diffusion Equation")
plt.show()
```

In this code, we first define parameters such as the left and right boundaries of the solution domain and the time and space step sizes, then the initial and boundary conditions of the diffusion equation, followed by the neural network model and the loss function. The network is trained with the Adam optimizer; finally, the trained model is used for prediction and the results are plotted.
