Solving the Parameter-Inversion Problem of the N-S Equations with PINNs — TF2 Code + Explanation


I work on industrial digital-twin technology. PINNs have recently become very popular thanks to the way they balance physical mechanisms with data. This post summarizes my first steps with PINNs: the original author's code is written for TF 1.0, so I rewrote it in TF 2.0 and am sharing it here for reference. Fellow digital-twin researchers are welcome to discuss!

1. Links

1.1 PINN basics

1.2 The PINN paper and original code

The PINN author Maziar Raissi implemented his code in TensorFlow 1.0. His GitHub repository contains documentation, implementation code, and data for four example problems; see the author's GitHub for details.

2. The N-S Inverse Problem

The author's markdown description of the N-S inverse problem is available in his GitHub project; the gist is as follows:
Using a small number of measurement points and a PINN model, solve the 2D Navier-Stokes equations

$$
\begin{aligned}
u_t + \lambda_1 (u u_x + v u_y) &= -p_x + \lambda_2(u_{xx} + u_{yy}),\\
v_t + \lambda_1 (u v_x + v v_y) &= -p_y + \lambda_2(v_{xx} + v_{yy}),
\end{aligned}
$$
where $u(t, x, y)$ is the velocity in the $x$ direction, $v(t, x, y)$ the velocity in the $y$ direction, and $p(t, x, y)$ the pressure field; none of them is steady.
$\lambda = (\lambda_1, \lambda_2)$ are the two parameters to be inverted, with true values 1.0 and 0.01, respectively.
Solutions of the Navier-Stokes equations satisfy the divergence-free condition

$$
u_x + v_y = 0.
$$

The velocity components $u$ and $v$ can in turn be written as derivatives of a stream function $\psi$:

$$
u = \psi_y, \qquad v = -\psi_x,
$$

which makes the divergence-free condition hold automatically, since $u_x + v_y = \psi_{yx} - \psi_{xy} = 0$.
Combined with a set of measurements

$$
\{t^i, x^i, y^i, u^i, v^i\}_{i=1}^{N},
$$

the task is to identify the two coefficients $\lambda_1, \lambda_2$ of the N-S equations.
The PDE residual terms are defined as

$$
\begin{aligned}
f &:= u_t + \lambda_1 (u u_x + v u_y) + p_x - \lambda_2(u_{xx} + u_{yy}),\\
g &:= v_t + \lambda_1 (u v_x + v v_y) + p_y - \lambda_2(v_{xx} + v_{yy}).
\end{aligned}
$$
Adding the mismatch between the network predictions and the measurements, the PINN loss can be written as

$$
\begin{aligned}
MSE := &\ \frac{1}{N}\sum_{i=1}^{N} \left(|u(t^i,x^i,y^i) - u^i|^2 + |v(t^i,x^i,y^i) - v^i|^2\right) \\
&+ \frac{1}{N}\sum_{i=1}^{N} \left(|f(t^i,x^i,y^i)|^2 + |g(t^i,x^i,y^i)|^2\right).
\end{aligned}
$$

Note that in this inverse problem the same $N$ measurement points serve both as data points (first sum) and as collocation points for the PDE residuals (second sum).

3. Implementation in TF 2.0

3.1 PINN model definition

Import the required packages and define the PINN model by subclassing Keras's Model class. The network takes (x, y, t) as input and outputs (ψ, p). The original author initializes the weights with the Xavier method, which is reproduced here with Keras's glorot_normal initializer.
In addition, an L2 weight-decay penalty is applied to every layer; the intent is to keep the layer weights small and smooth, in the hope of speeding up convergence.
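For reference (my addition, not from the original post), Keras's glorot_normal draws each weight from a truncated normal distribution centered at 0 whose standard deviation is set by the layer's fan-in and fan-out:

$$
\sigma = \sqrt{\frac{2}{n_{\text{in}} + n_{\text{out}}}}
$$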

```python
import tensorflow as tf
from keras.optimizers import Adam
from keras import Sequential, Model
from keras.layers import Dense, Flatten, BatchNormalization
from keras.regularizers import l2

import random
import numpy as np
import scipy.io
import time, datetime


class PhysicsInformedNN(Model):
    # Initialize the class
    def __init__(self, layer, lr=1e-3, theta=0.3):  # theta is the L2 weight-decay factor
        super(PhysicsInformedNN, self).__init__()

        # Initialize the parameters λ1 and λ2 to be inverted
        self.lambda_1 = tf.Variable([0.0], dtype=tf.float32, trainable=True)
        self.lambda_2 = tf.Variable([0.0], dtype=tf.float32, trainable=True)

        # Fully connected layers; the input is an (x, y, t) triple
        self.model = Sequential()
        self.model.add(Flatten(input_shape=(3,)))  # the model input has shape (batch, 3)
        # layer[0] is the input dimension, so the first hidden layer uses layer[1]
        self.model.add(Dense(layer[1], activation='tanh', name="Dense_1",
                             kernel_initializer="glorot_normal",
                             bias_initializer='zeros',
                             kernel_regularizer=l2(theta)
                             ))

        for i in range(2, len(layer) - 1):
            self.model.add(Dense(layer[i], activation='tanh', name="Dense_{}".format(i),
                                 kernel_initializer="glorot_normal",  # Xavier initialization
                                 bias_initializer='zeros',
                                 kernel_regularizer=l2(theta)   # weight decay
                                 ))
            # self.model.add(BatchNormalization(name="BN{}".format(i)))  # optional batch-normalization layer

        self.model.add(Dense(layer[-1], activation=None, name='Dense_end',  # linear output: tanh would clamp ψ and p to (-1, 1)
                             kernel_initializer="glorot_normal",
                             bias_initializer='zeros',
                             kernel_regularizer=l2(theta)
                             ))
        self.optimizer = Adam(learning_rate=lr)
```

3.2 Model call methods

TF2's tf.GradientTape is used to record the forward computation, because the PDE residuals in the loss require the derivatives of the network outputs with respect to the inputs, obtained here by automatic differentiation.
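Before walking through the method, here is a minimal, self-contained illustration of the nested-tape pattern used below (my addition, not from the original post), computing a second derivative of a toy function:

```python
import tensorflow as tf

x = tf.Variable([1.0, 2.0])
with tf.GradientTape() as outer_tape:
    with tf.GradientTape() as inner_tape:
        y = x ** 3                          # y = x^3, elementwise
    dy_dx = inner_tape.gradient(y, x)       # 3x^2 -> [3., 12.]
d2y_dx2 = outer_tape.gradient(dy_dx, x)     # 6x   -> [6., 12.]
print(dy_dx.numpy(), d2y_dx2.numpy())
```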

```python
    # Forward pass of the fully connected network
    def call(self, X):
        y = self.model(X)  # calling the Sequential directly is the idiomatic form
        return y

    # PINN forward pass: returns predictions and PDE residuals
    # (note: this shadows keras.Model.predict, which has a different signature)
    def predict(self, x, y, t):
        lambda_1 = self.lambda_1
        lambda_2 = self.lambda_2

        # The neural network predicts ψ(t, x, y) and p(t, x, y);
        # inputs are wrapped in tf.Variable so the tapes will track them
        x_var = tf.Variable(tf.cast(x, dtype=tf.float32))
        y_var = tf.Variable(tf.cast(y, dtype=tf.float32))
        t_var = tf.Variable(tf.cast(t, dtype=tf.float32))

        with tf.GradientTape(persistent=True) as tape_uxx:
            with tf.GradientTape(persistent=True) as tape_ux:
                with tf.GradientTape(persistent=True) as tape_psi:
                    psi_and_p = self.call(tf.concat([x_var, y_var, t_var], 1))
                    psi = psi_and_p[:, 0:1]
                    p = psi_and_p[:, 1:2]

                # First-order derivatives of ψ and p
                u = tape_psi.gradient(psi, y_var)
                v = -tape_psi.gradient(psi, x_var)
                p_x = tape_psi.gradient(p, x_var)
                p_y = tape_psi.gradient(p, y_var)

            # Derivatives of u and v (second-order in ψ)
            u_t = tape_ux.gradient(u, t_var)
            u_x = tape_ux.gradient(u, x_var)
            u_y = tape_ux.gradient(u, y_var)
            v_t = tape_ux.gradient(v, t_var)
            v_x = tape_ux.gradient(v, x_var)
            v_y = tape_ux.gradient(v, y_var)

        # Second-order derivatives of u and v (third-order in ψ)
        u_xx = tape_uxx.gradient(u_x, x_var)
        u_yy = tape_uxx.gradient(u_y, y_var)
        v_xx = tape_uxx.gradient(v_x, x_var)
        v_yy = tape_uxx.gradient(v_y, y_var)

        # PDE residual terms f and g
        f_u_pred = u_t + lambda_1 * (u * u_x + v * u_y) + p_x - lambda_2 * (u_xx + u_yy)
        f_v_pred = v_t + lambda_1 * (u * v_x + v * v_y) + p_y - lambda_2 * (v_xx + v_yy)

        u_pred = u
        v_pred = v
        p_pred = p

        # Release the persistent tapes
        del tape_psi
        del tape_ux
        del tape_uxx

        return u_pred, v_pred, p_pred, f_u_pred, f_v_pred
```
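A side note (my variation, not the original author's code): instead of wrapping every input batch in a fresh tf.Variable, the same derivatives can be obtained by watching plain tensors with tape.watch, which avoids creating new variables on every call:

```python
import tensorflow as tf

x = tf.constant([[0.5], [1.0]])   # a plain tensor, not a tf.Variable
with tf.GradientTape() as tape:
    tape.watch(x)                 # explicitly track the non-Variable input
    y = tf.sin(x)
dy_dx = tape.gradient(y, x)       # cos(x)
print(dy_dx.numpy())
```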

3.3 Loss function and parameter updates

The loss consists of four terms. A parameter update computes the gradients of the loss with respect to the network weights $w, b$ as well as $\lambda_1, \lambda_2$, and applies them with the Adam optimizer.
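In symbols (my summary, not from the original post), each step updates the combined parameter vector $\theta$, which stacks the network parameters and the PDE coefficients:

$$
\theta = \{w, b, \lambda_1, \lambda_2\}, \qquad \theta \leftarrow \mathrm{Adam}\big(\theta,\ \nabla_\theta \, \mathrm{Loss}(\theta)\big).
$$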

```python
    # Loss computation
    def loss_function(self, u_pred, v_pred, u_real, v_real, f_u_pred, f_v_pred):
        loss = tf.reduce_sum(tf.square(u_real - u_pred)) + \
            tf.reduce_sum(tf.square(v_real - v_pred)) + \
            tf.reduce_sum(tf.square(f_u_pred)) + \
            tf.reduce_sum(tf.square(f_v_pred))
        return loss


    # One optimization step
    def run_optimizer(self, x, y, t, u_real, v_real):
        optimizer = self.optimizer

        with tf.GradientTape() as Tape:
            # Model predictions and PDE residuals
            u_pred, v_pred, p_pred, f_u_pred, f_v_pred = self.predict(x, y, t)

            # The loss consists of four terms
            loss = self.loss_function(u_pred, v_pred, u_real, v_real, f_u_pred, f_v_pred)

        trainable_variables = self.trainable_variables
        gradients = Tape.gradient(loss, trainable_variables)
        optimizer.apply_gradients(zip(gradients, trainable_variables))

        return loss
```
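One caveat worth flagging (my observation, not in the original post): in a custom training loop, Keras does not add kernel_regularizer penalties to the loss automatically; they are only collected in self.losses. For the L2 weight decay from section 3.1 to take effect, the collected penalties must be added by hand, which the complete code below does right after loss_function:

```python
# Sketch of the fix applied in the full code: add the L2 penalties
# collected by the kernel_regularizers (one tensor per regularized layer).
loss = self.loss_function(u_pred, v_pred, u_real, v_real, f_u_pred, f_v_pred)
loss += tf.add_n(self.losses)
```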

3.4 Main function

Parameter settings and training-data preparation are not repeated here; see the complete code at the bottom for details.
The model is trained with mini-batches, using the data_iter generator defined in the full code below. Note that nIter counts epochs, and each epoch performs one gradient update per mini-batch:

```python
# ==================== 2. Train the model ====================
    # Instantiate the model
    pinn = PhysicsInformedNN(layer=layers, lr=1e-3, theta=theta)

    start_time = time.time()
    total_time = time.time()
    for step in range(nIter):
        for x_batch,y_batch,t_batch,u_batch,v_batch in data_iter(batch_size, x_train,y_train,t_train,u_train,v_train):
            # Gradient update
            loss = pinn.run_optimizer(x_batch,y_batch,t_batch,u_batch,v_batch)

        # Print
        if step % 10 == 0:
            elapsed = time.time() - start_time
            print('It: %d, Loss: %.3f, λ1: %.3f, λ2: %.5f, Time: %.2f s' %
                  (step+1, loss, pinn.lambda_1.numpy()[0], pinn.lambda_2.numpy()[0], elapsed))
            start_time = time.time()

        if loss < 5:
            break

    print("\n\ntrain time: %.1f s" %(time.time()-total_time))

4. Complete Code

First, a disclaimer: I have not trained this to completion. The original author uses 200,000 iterations, and in my own runs the model was indeed hard to converge; the hyper-parameters and the network structure still leave room for optimization. I would be glad to discuss improvements!

Also, the plotting code is not finished yet; a hedged sketch is appended after the full listing below.

"""
@author: Maziar Raissi
@modify: Steven Yin
"""

import tensorflow as tf
from keras.optimizers import Adam
from keras import Sequential, Model
from keras.layers import Dense, Flatten, BatchNormalization
from keras.regularizers import l2

import os
import random
import numpy as np
import scipy.io
import time, datetime


class PhysicsInformedNN(Model):
    # Initialize the class
    def __init__(self, layer, lr=1e-3, theta=0.3):  # theta is the L2 weight-decay factor
        super(PhysicsInformedNN, self).__init__()

        # Initialize the parameters λ1 and λ2 to be inverted
        self.lambda_1 = tf.Variable([0.0], dtype=tf.float32, trainable=True)
        self.lambda_2 = tf.Variable([0.0], dtype=tf.float32, trainable=True)

        # Fully connected layers; the input is an (x, y, t) triple
        self.model = Sequential()
        self.model.add(Flatten(input_shape=(3,)))  # the model input has shape (batch, 3)
        # layer[0] is the input dimension, so the first hidden layer uses layer[1]
        self.model.add(Dense(layer[1], activation='tanh', name="Dense_1",
                             kernel_initializer="glorot_normal",
                             bias_initializer='zeros',
                             kernel_regularizer=l2(theta)
                             ))

        for i in range(2, len(layer) - 1):
            self.model.add(Dense(layer[i], activation='tanh', name="Dense_{}".format(i),
                                 kernel_initializer="glorot_normal",
                                 bias_initializer='zeros',
                                 kernel_regularizer=l2(theta)
                                 ))
            # self.model.add(BatchNormalization(name="BN{}".format(i)))  # optional batch-normalization layer

        self.model.add(Dense(layer[-1], activation=None, name='Dense_end',  # linear output: tanh would clamp ψ and p to (-1, 1)
                             kernel_initializer="glorot_normal",
                             bias_initializer='zeros',
                             kernel_regularizer=l2(theta)
                             ))
        self.optimizer = Adam(learning_rate=lr)


    # Forward pass of the fully connected network
    def call(self, X):
        y = self.model(X)  # calling the Sequential directly is the idiomatic form
        return y


    # PINN forward pass: returns predictions and PDE residuals
    # (note: this shadows keras.Model.predict, which has a different signature)
    def predict(self, x, y, t):
        lambda_1 = self.lambda_1
        lambda_2 = self.lambda_2

        # The neural network predicts ψ(t, x, y) and p(t, x, y);
        # inputs are wrapped in tf.Variable so the tapes will track them
        x_var = tf.Variable(tf.cast(x, dtype=tf.float32))
        y_var = tf.Variable(tf.cast(y, dtype=tf.float32))
        t_var = tf.Variable(tf.cast(t, dtype=tf.float32))

        with tf.GradientTape(persistent=True) as tape_uxx:
            with tf.GradientTape(persistent=True) as tape_ux:
                with tf.GradientTape(persistent=True) as tape_psi:
                    psi_and_p = self.call(tf.concat([x_var, y_var, t_var], 1))
                    psi = psi_and_p[:, 0:1]
                    p = psi_and_p[:, 1:2]

                # First-order derivatives of ψ and p
                u = tape_psi.gradient(psi, y_var)
                v = -tape_psi.gradient(psi, x_var)
                p_x = tape_psi.gradient(p, x_var)
                p_y = tape_psi.gradient(p, y_var)

            # Derivatives of u and v (second-order in ψ)
            u_t = tape_ux.gradient(u, t_var)
            u_x = tape_ux.gradient(u, x_var)
            u_y = tape_ux.gradient(u, y_var)
            v_t = tape_ux.gradient(v, t_var)
            v_x = tape_ux.gradient(v, x_var)
            v_y = tape_ux.gradient(v, y_var)

        # Second-order derivatives of u and v (third-order in ψ)
        u_xx = tape_uxx.gradient(u_x, x_var)
        u_yy = tape_uxx.gradient(u_y, y_var)
        v_xx = tape_uxx.gradient(v_x, x_var)
        v_yy = tape_uxx.gradient(v_y, y_var)

        # PDE residual terms f and g
        f_u_pred = u_t + lambda_1 * (u * u_x + v * u_y) + p_x - lambda_2 * (u_xx + u_yy)
        f_v_pred = v_t + lambda_1 * (u * v_x + v * v_y) + p_y - lambda_2 * (v_xx + v_yy)

        u_pred = u
        v_pred = v
        p_pred = p

        # Release the persistent tapes
        del tape_psi
        del tape_ux
        del tape_uxx

        return u_pred, v_pred, p_pred, f_u_pred, f_v_pred


    # Loss computation
    def loss_function(self, u_pred, v_pred, u_real, v_real, f_u_pred, f_v_pred):
        loss = tf.reduce_sum(tf.square(u_real - u_pred)) + \
            tf.reduce_sum(tf.square(v_real - v_pred)) + \
            tf.reduce_sum(tf.square(f_u_pred)) + \
            tf.reduce_sum(tf.square(f_v_pred))
        return loss


    # One optimization step
    def run_optimizer(self, x, y, t, u_real, v_real):
        optimizer = self.optimizer

        with tf.GradientTape() as Tape:
            # Model predictions and PDE residuals
            u_pred, v_pred, p_pred, f_u_pred, f_v_pred = self.predict(x, y, t)

            # The loss consists of four terms
            loss = self.loss_function(u_pred, v_pred, u_real, v_real, f_u_pred, f_v_pred)
            # Keras does not apply kernel_regularizer penalties automatically in a
            # custom loop; add the collected L2 terms so the weight decay takes effect
            loss += tf.add_n(self.losses)

        trainable_variables = self.trainable_variables
        gradients = Tape.gradient(loss, trainable_variables)
        optimizer.apply_gradients(zip(gradients, trainable_variables))

        return loss


    # Error evaluation
    def error(self, u_star, v_star, p_star, u_pred, v_pred, p_pred, lambda_1_value, lambda_2_value):
        # Error
        error_u = np.linalg.norm(u_star - u_pred, 2) / np.linalg.norm(u_star, 2)
        error_v = np.linalg.norm(v_star - v_pred, 2) / np.linalg.norm(v_star, 2)
        error_p = np.linalg.norm(p_star - p_pred, 2) / np.linalg.norm(p_star, 2)

        error_lambda_1 = np.abs(lambda_1_value - 1.0) * 100
        error_lambda_2 = np.abs(lambda_2_value - 0.01) / 0.01 * 100

        return error_u, error_v, error_p, error_lambda_1, error_lambda_2


# Mini-batch iterator
def data_iter(batch_size, x, y, t, u, v):
    num_examples = len(x)
    indices = list(range(num_examples))
    # Shuffle so samples are drawn in random order
    random.shuffle(indices)
    for i in range(0, num_examples, batch_size):
        j = tf.constant(indices[i: min(i + batch_size, num_examples)])
        yield tf.gather(x, j), tf.gather(y, j), tf.gather(t, j), tf.gather(u, j), tf.gather(v, j)
        
        
if __name__ == "__main__":

    # ==================== 0. Global parameter settings ====================
    nIter = 20000
    N_train = 5000 # number of randomly selected training samples
    batch_size = 100
    layers = [3,32,32,32,32,32,2]
    theta = 0.3 # L2 weight-decay penalty factor


    # ==================== 1. Prepare the training data ====================
    # Load Data
    data = scipy.io.loadmat('../Data/cylinder_nektar_wake.mat')
           
    U_star = data['U_star'] # N x 2 x T
    P_star = data['p_star'] # N x T
    t_star = data['t'] # T x 1
    X_star = data['X_star'] # N x 2
    
    N = X_star.shape[0]     # number of spatial points (x ∈ [1, 8]; 5000 points)
    T = t_star.shape[0]     # number of time snapshots
    
    # Rearrange Data
    XX = np.tile(X_star[:, 0:1], (1, T)) # N x T (2-D slices keep the (N, T) layout so x, y pair correctly with UU, VV, TT)
    YY = np.tile(X_star[:, 1:2], (1, T)) # N x T
    TT = np.tile(t_star, (1,N)).T # N x T
    
    UU = U_star[:,0,:] # N x T
    VV = U_star[:,1,:] # N x T
    PP = P_star # N x T
    
    x = XX.flatten()[:,None] # NT x 1
    y = YY.flatten()[:,None] # NT x 1
    t = TT.flatten()[:,None] # NT x 1
    
    u = UU.flatten()[:,None] # NT x 1
    v = VV.flatten()[:,None] # NT x 1
    p = PP.flatten()[:,None] # NT x 1

    idx = np.random.choice(N*T, N_train, replace=False)
    x_train = tf.constant(x[idx,:], dtype=tf.float32)
    y_train = tf.constant(y[idx,:], dtype=tf.float32)
    t_train = tf.constant(t[idx,:], dtype=tf.float32)
    u_train = tf.constant(u[idx,:], dtype=tf.float32)
    v_train = tf.constant(v[idx,:], dtype=tf.float32)
    p_train = tf.constant(p[idx,:], dtype=tf.float32)

    # mini-batch training


    # ==================== 2. Train the model ====================
    # Instantiate the model
    pinn = PhysicsInformedNN(layer=layers, lr=1e-3, theta=theta)

    start_time = time.time()
    total_time = time.time()
    for step in range(nIter):
        for x_batch,y_batch,t_batch,u_batch,v_batch in data_iter(batch_size, x_train,y_train,t_train,u_train,v_train):
            # Gradient update
            loss = pinn.run_optimizer(x_batch,y_batch,t_batch,u_batch,v_batch)

        # Print
        if step % 10 == 0:
            elapsed = time.time() - start_time
            print('It: %d, Loss: %.3f, λ1: %.3f, λ2: %.5f, Time: %.2f s' %
                  (step+1, loss, pinn.lambda_1.numpy()[0], pinn.lambda_2.numpy()[0], elapsed))
            start_time = time.time()

        if loss < 5:
            break

    print("\n\ntrain time: %.1f s" %(time.time()-total_time))


    # ==================== 3. Model evaluation ====================
    # Test Data
    snap = np.array([100])
    x_star = X_star[:, 0:1]
    y_star = X_star[:, 1:2]
    t_star = TT[:, snap]

    u_star = U_star[:, 0, snap]
    v_star = U_star[:, 1, snap]
    p_star = P_star[:, snap]

    # Prediction
    u_pred, v_pred, p_pred, f_u_pred, f_v_pred = pinn.predict(x_star, y_star, t_star)
    lambda_1_value = pinn.lambda_1.numpy()
    lambda_2_value = pinn.lambda_2.numpy()

    error_u, error_v, error_p, error_lambda_1, error_lambda_2 = \
        pinn.error(u_star, v_star, p_star, u_pred, v_pred, p_pred, lambda_1_value, lambda_2_value)

    print('Error u: %e' % error_u)
    print('Error v: %e' % error_v)
    print('Error p: %e' % error_p)
    print('Error l1: %.5f%%' % error_lambda_1)
    print('Error l2: %.5f%%' % error_lambda_2)


    # ==================== 4. Save results & plot ====================
    # Save the model
    now_time = datetime.datetime.now().strftime("%Y%m%d_%H%M%S")
    os.makedirs("./models", exist_ok=True)  # make sure the output directory exists
    pinn.model.save("./models/pinn_model_%s.h5" %now_time)

    # Plot Results
```
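The plotting part is still open; as a starting point, here is a minimal sketch (my addition, assuming matplotlib is available; p_pred, p_star, X_star, and now_time come from the script above). It scatters the predicted and exact pressure fields at the evaluated snapshot:

```python
import matplotlib.pyplot as plt

# Hedged sketch, not the original author's plotting code.
fig, axes = plt.subplots(1, 2, figsize=(11, 4), sharex=True, sharey=True)
for ax, field, title in zip(axes,
                            [p_pred.numpy().flatten(), p_star.flatten()],
                            ['Predicted pressure', 'Exact pressure']):
    sc = ax.scatter(X_star[:, 0], X_star[:, 1], c=field, cmap='jet', s=3)
    ax.set_title(title)
    ax.set_xlabel('x')
    ax.set_ylabel('y')
    fig.colorbar(sc, ax=ax)
plt.tight_layout()
plt.savefig("./models/pinn_pressure_%s.png" % now_time)
plt.show()
```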