PINN Code -- 2-D Continuous-Time Model: Navier-Stokes Equations (CPU/GPU + TensorFlow 2.10)

《Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations》
Authors: Maziar Raissi et al.
Original code: https://github.com/maziarraissi/PINNs
Original environment: TensorFlow 1.x

The previous post walked through a TensorFlow 2.x port of the PINN code for the one-dimensional Schrödinger equation:
PINN Code -- Schrödinger Equation (CPU/GPU + TensorFlow 2.10)

In practice, however, two-dimensional continuous-time models are far more common, so, task in hand, I went back to adapting Maziar Raissi's PINN code.
Prerequisite: download the dataset, i.e. the folder named Data, from https://github.com/maziarraissi/PINNs. (Downloading the whole PINNs repository is recommended.)

The Navier-Stokes equations, in the parameterized form the PINN identifies (lambda_1 and lambda_2 are the unknown scalars to be learned):

    u_t + lambda_1 (u u_x + v u_y) = -p_x + lambda_2 (u_xx + u_yy)
    v_t + lambda_1 (u v_x + v v_y) = -p_y + lambda_2 (v_xx + v_yy)

The velocity field is written through a stream function psi, with u = psi_y and v = -psi_x, so the continuity equation u_x + v_y = 0 holds automatically; the residuals f_u and f_v in the code below are exactly these two equations moved to one side.

The code is shared in two formats (block-by-block and complete). The content is identical; the block-by-block version just carries some explanation. Use the table of contents to jump to whichever you need instead of scrolling.

I. Environment Setup

I have run this in both CPU and GPU environments without problems. The setups are listed below.

GPU:

  • Windows + Anaconda + PyCharm
  • Python 3.9
  • CUDA 11.2 + cuDNN 8.1 + TensorFlow 2.10

Note that 2.10 is the last TensorFlow release with native Windows GPU support; if your version is newer than 2.10, you will need to reconfigure the environment.

CPU:

  • Windows + PyCharm
  • Python 3.9
  • TensorFlow 2.11

On CPU, newer TensorFlow 2.x releases are fine as well; you are not tied to 2.11.
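
As a quick sanity check of whichever environment you end up with, you can print the installed TensorFlow version and the GPUs it sees:

import tensorflow as tf

print(tf.__version__)                          # e.g. '2.10.1'
print(tf.config.list_physical_devices('GPU'))  # empty list on a CPU-only setup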

II. Code, Block by Block

1. Imports

"""
@author: Maziar Raissi
"""
import os
import sys
sys.path.insert(0, '../../Utilities/')

import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
import scipy.io
from scipy.interpolate import griddata
import time
from itertools import product, combinations
import tensorflow_probability as tfp
from mpl_toolkits.axes_grid1 import make_axes_locatable
import matplotlib.gridspec as gridspec

Install whatever is missing.

np.random.seed(1234)
tf.random.set_seed(1234)
# Disable eager execution (the code below builds TF1-style graphs and sessions)
tf.compat.v1.disable_eager_execution()

# Early in a run TF logs how ops are placed on devices; if you don't care, raise the log level so only errors are printed
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'

2. The PINN Model

As the saying goes, it is better to teach someone to fish than to hand them a fish, so here is my porting strategy. It boils down to two points:

(1) For compatibility, rewrite every function flagged as incompatible in the form tf.compat.v1.<name>; for example, the author's tf.Session simply becomes tf.compat.v1.Session. A few functions cannot be fixed quite so mechanically, so here is a reference table:

Original code                          My version
tf.Session                             tf.compat.v1.Session
tf.placeholder                         tf.compat.v1.placeholder
tf.AdamOptimizer()                     tf.compat.v1.train.AdamOptimizer()
tf.global_variables_initializer()      tf.compat.v1.global_variables_initializer()
tf.truncated_normal                    tf.random.truncated_normal
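
To see the whole mapping at once, here is a tiny self-contained sketch (a toy graph written for this post, not part of the original code) that uses only the compat-mode names from the table:

import tensorflow as tf

tf.compat.v1.disable_eager_execution()

x_ph = tf.compat.v1.placeholder(tf.float32, shape=[None, 1])
w = tf.Variable(tf.random.truncated_normal([1, 1], stddev=0.1))
loss = tf.reduce_sum(tf.square(tf.matmul(x_ph, w)))
train_op = tf.compat.v1.train.AdamOptimizer().minimize(loss)

with tf.compat.v1.Session() as sess:
    sess.run(tf.compat.v1.global_variables_initializer())
    _, loss_val = sess.run([train_op, loss], feed_dict={x_ph: [[1.0], [2.0]]})
    print(loss_val)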

(2) Libraries and functions removed outright in TensorFlow 2.x get deleted. For example, the author sets up and uses an L-BFGS-B optimizer; his original code is:

self.optimizer = tf.contrib.opt.ScipyOptimizerInterface(self.loss, method = 'L-BFGS-B', options = {'maxiter': 50000, 'maxfun': 50000, 'maxcor': 50, 'maxls': 50, 'ftol' : 1.0 * np.finfo(float).eps})   
self.optimizer.minimize(self.sess,  feed_dict = tf_dict, fetches = [self.loss, self.lambda_1, self.lambda_2],loss_callback = self.callback)

Under TensorFlow 1.x this is fine, but TensorFlow 2.x removed tf.contrib, and the Adam optimizer is good enough here on its own. My lazy rule is "the less trouble the better", so I simply deleted everything related to the L-BFGS-B optimizer.
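
If you ever want second-order refinement back, tensorflow_probability (imported above, though otherwise unused in this port) provides tfp.optimizer.lbfgs_minimize. Wiring it into the graph-mode model below takes extra plumbing, so here is only a minimal, self-contained sketch on a toy quadratic, to be run as a separate script (it uses eager mode, which the main code disables):

import tensorflow as tf
import tensorflow_probability as tfp

target = tf.constant([1.0, -2.0, 3.0])

def value_and_gradients(x):
    # lbfgs_minimize expects a function returning (objective, gradient)
    with tf.GradientTape() as tape:
        tape.watch(x)
        loss = tf.reduce_sum((x - target) ** 2)
    return loss, tape.gradient(loss, x)

results = tfp.optimizer.lbfgs_minimize(
    value_and_gradients, initial_position=tf.zeros(3), max_iterations=100)
print(results.converged.numpy(), results.position.numpy())  # True [ 1. -2.  3.]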

Here is the code that defines the PINN model:

class PhysicsInformedNN:
    # Initialize the class
    def __init__(self, x, y, t, u, v, layers):

        X = np.concatenate([x, y, t], 1)

        self.lb = X.min(0)
        self.ub = X.max(0)

        self.X = X

        self.x = X[:, 0:1]
        self.y = X[:, 1:2]
        self.t = X[:, 2:3]

        self.u = u
        self.v = v

        self.layers = layers

        # Initialize NN
        self.weights, self.biases = self.initialize_NN(layers)

        # Initialize the learnable PDE parameters (lambda_1: convection coefficient, lambda_2: viscosity)
        self.lambda_1 = tf.Variable([0.0], dtype=tf.float32)
        self.lambda_2 = tf.Variable([0.0], dtype=tf.float32)

        # tf placeholders and graph
        self.sess = tf.compat.v1.Session(config=tf.compat.v1.ConfigProto(allow_soft_placement=True, log_device_placement=True))

        self.x_tf = tf.compat.v1.placeholder(tf.float32, shape=[None, self.x.shape[1]])
        self.y_tf = tf.compat.v1.placeholder(tf.float32, shape=[None, self.y.shape[1]])
        self.t_tf = tf.compat.v1.placeholder(tf.float32, shape=[None, self.t.shape[1]])

        self.u_tf = tf.compat.v1.placeholder(tf.float32, shape=[None, self.u.shape[1]])
        self.v_tf = tf.compat.v1.placeholder(tf.float32, shape=[None, self.v.shape[1]])

        self.u_pred, self.v_pred, self.p_pred, self.f_u_pred, self.f_v_pred = self.net_NS(self.x_tf, self.y_tf, self.t_tf)

        self.loss = tf.reduce_sum(tf.square(self.u_tf - self.u_pred)) + \
                    tf.reduce_sum(tf.square(self.v_tf - self.v_pred)) + \
                    tf.reduce_sum(tf.square(self.f_u_pred)) + \
                    tf.reduce_sum(tf.square(self.f_v_pred))

        self.optimizer_Adam = tf.compat.v1.train.AdamOptimizer()
        self.train_op_Adam = self.optimizer_Adam.minimize(self.loss)

        init = tf.compat.v1.global_variables_initializer()
        self.sess.run(init)

    def initialize_NN(self, layers):
        weights = []
        biases = []
        num_layers = len(layers)
        for l in range(0, num_layers - 1):
            W = self.xavier_init(size=[layers[l], layers[l + 1]])
            b = tf.Variable(tf.zeros([1, layers[l + 1]], dtype=tf.float32), dtype=tf.float32)
            weights.append(W)
            biases.append(b)
        return weights, biases

    def xavier_init(self, size):
        in_dim = size[0]
        out_dim = size[1]
        xavier_stddev = np.sqrt(2 / (in_dim + out_dim))
        return tf.Variable(tf.random.truncated_normal([in_dim, out_dim], stddev=xavier_stddev), dtype=tf.float32)

    def neural_net(self, X, weights, biases):
        num_layers = len(weights) + 1

        H = 2.0 * (X - self.lb) / (self.ub - self.lb) - 1.0
        for l in range(0, num_layers - 2):
            W = weights[l]
            b = biases[l]
            H = tf.tanh(tf.add(tf.matmul(H, W), b))
        W = weights[-1]
        b = biases[-1]
        Y = tf.add(tf.matmul(H, W), b)
        return Y

    def net_NS(self, x, y, t):
        lambda_1 = self.lambda_1
        lambda_2 = self.lambda_2

        psi_and_p = self.neural_net(tf.concat([x, y, t], 1), self.weights, self.biases)
        psi = psi_and_p[:, 0:1]
        p = psi_and_p[:, 1:2]

        u = tf.gradients(psi, y)[0]
        v = -tf.gradients(psi, x)[0]

        u_t = tf.gradients(u, t)[0]
        u_x = tf.gradients(u, x)[0]
        u_y = tf.gradients(u, y)[0]
        u_xx = tf.gradients(u_x, x)[0]
        u_yy = tf.gradients(u_y, y)[0]

        v_t = tf.gradients(v, t)[0]
        v_x = tf.gradients(v, x)[0]
        v_y = tf.gradients(v, y)[0]
        v_xx = tf.gradients(v_x, x)[0]
        v_yy = tf.gradients(v_y, y)[0]

        p_x = tf.gradients(p, x)[0]
        p_y = tf.gradients(p, y)[0]

        f_u = u_t + lambda_1 * (u * u_x + v * u_y) + p_x - lambda_2 * (u_xx + u_yy)
        f_v = v_t + lambda_1 * (u * v_x + v * v_y) + p_y - lambda_2 * (v_xx + v_yy)

        return u, v, p, f_u, f_v

    def callback(self, loss, lambda_1, lambda_2):
        # Leftover from the removed L-BFGS-B loss_callback; unused in this port
        print('Loss: %.3e, l1: %.3f, l2: %.5f' % (loss, lambda_1, lambda_2))

    def train(self, nIter):

        tf_dict = {self.x_tf: self.x, self.y_tf: self.y, self.t_tf: self.t,
                   self.u_tf: self.u, self.v_tf: self.v}

        start_time = time.time()
        for it in range(nIter):
            self.sess.run(self.train_op_Adam, tf_dict)

            # Print
            if it % 10 == 0:
                elapsed = time.time() - start_time
                loss_value = self.sess.run(self.loss, tf_dict)
                lambda_1_value = self.sess.run(self.lambda_1)
                lambda_2_value = self.sess.run(self.lambda_2)
                print('It: %d, Loss: %.3e, l1: %.3f, l2: %.5f, Time: %.2f' %
                      (it, loss_value, lambda_1_value, lambda_2_value, elapsed))
                start_time = time.time()

    def predict(self, x_star, y_star, t_star):

        tf_dict = {self.x_tf: x_star, self.y_tf: y_star, self.t_tf: t_star}

        u_star = self.sess.run(self.u_pred, tf_dict)
        v_star = self.sess.run(self.v_pred, tf_dict)
        p_star = self.sess.run(self.p_pred, tf_dict)

        return u_star, v_star, p_star

Some of this was already discussed in the Schrödinger-equation post, so here I only flag a few points that deserve attention:

(1) Data initialization

__init__ does the initialization. The point to watch, and an easy one to miss, is that the input data dimensions must be consistent. In the author's code, every array is reshaped into a column vector before entering the model, and the placeholders are shaped to match (see below): [None, x.shape[1]] keeps them flexible enough to adapt to many scenarios.

self.x_tf = tf.compat.v1.placeholder(tf.float32, shape=[None, self.x.shape[1]])

In practice, though, if your data are images, you should avoid flattening matrices into vectors, or you lose spatial information. In that case the placeholders in __init__ cannot be vector-shaped; they have to follow your data. For instance, if a preprocessed image is a matrix X of shape H x W (no RGB channels), the placeholder shape should be at least [None, H, W], as sketched below.
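
A hypothetical example (H and W are illustrative names, not from the original code):

# Keep 2-D image inputs intact instead of flattening them
H, W = 64, 64  # hypothetical image height and width
img_tf = tf.compat.v1.placeholder(tf.float32, shape=[None, H, W])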

(2) The computation graph

self.sess = tf.compat.v1.Session(config=tf.compat.v1.ConfigProto(allow_soft_placement=True, log_device_placement=True))

A Session is the user's interactive interface to TensorFlow; it provides run() to execute the computation graph. In TensorFlow 1.x you create a session with tf.Session(), execute it with run(), and finally close it with close() to release resources.

allow_soft_placement is a boolean: when True, any op that cannot run on the GPU is automatically moved to the CPU. log_device_placement is also a boolean: when True, the run log records and prints the device used by every node; it defaults to False to cut the noise. I set it to True here, and you can change it to suit yourself.

Note that TensorFlow 2.x itself has no Session at all; I use tf.compat.v1.Session purely for compatibility.

3. Training and Testing the Model

Check whether a GPU is available:

# List all physical GPU devices
gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
    print(f"Available GPU devices: {gpus}")
else:
    print("No GPU available.")

Parameter settings:

N_train = 5000
layers = [3, 20, 20, 20, 20, 20, 20, 20, 20, 2]  # 3 inputs (x, y, t), 8 hidden layers of width 20, 2 outputs (psi, p)

Load the dataset (mind the path):

# Load data
data = scipy.io.loadmat('../Data/cylinder_nektar_wake.mat')

U_star = data['U_star']  # N x 2 x T
P_star = data['p_star']  # N x T
t_star = data['t']  # T x 1
X_star = data['X_star']  # N x 2

N = X_star.shape[0]
T = t_star.shape[0]

# Rearrange Data
XX = np.tile(X_star[:, 0:1], (1, T))  # N x T
YY = np.tile(X_star[:, 1:2], (1, T))  # N x T
TT = np.tile(t_star, (1, N)).T  # N x T

UU = U_star[:, 0, :]  # N x T
VV = U_star[:, 1, :]  # N x T
PP = P_star  # N x T

x = XX.flatten()[:, None]  # NT x 1
y = YY.flatten()[:, None]  # NT x 1
t = TT.flatten()[:, None]  # NT x 1

u = UU.flatten()[:, None]  # NT x 1
v = VV.flatten()[:, None]  # NT x 1
p = PP.flatten()[:, None]  # NT x 1

One caveat in data handling: keep the dtypes consistent. TensorFlow treats numbers without a decimal point as ints and numbers with one as floats, and mismatched dtypes will break the computation. So after loading, check the dtypes, or cast everything explicitly, for example:
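# Optional but explicit: cast everything to float32 to match the model's placeholders
x, y, t = x.astype(np.float32), y.astype(np.float32), t.astype(np.float32)
u, v, p = u.astype(np.float32), v.astype(np.float32), p.astype(np.float32)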

The paper describes how the data were generated (the original is worth reading for the spatio-temporal discretization), but in the code the data simply arrive from a .mat file, which is rather sneaky (just kidding) and makes that setup easy to overlook.

Train on noise-free data:

# Training data
idx = np.random.choice(N * T, N_train, replace=False)
x_train = x[idx, :]
y_train = y[idx, :]
t_train = t[idx, :]
u_train = u[idx, :]
v_train = v[idx, :]
# Training
model = PhysicsInformedNN(x_train, y_train, t_train, u_train, v_train, layers)
model.train(10)   # only 10 iterations here, as a smoke test

Test on noise-free data:

# Test data
snap = np.array([100])
x_star = X_star[:, 0:1]
y_star = X_star[:, 1:2]
t_star = TT[:, snap]

u_star = U_star[:, 0, snap]
v_star = U_star[:, 1, snap]
p_star = P_star[:, snap]

# Prediction
u_pred, v_pred, p_pred = model.predict(x_star, y_star, t_star)
lambda_1_value = model.sess.run(model.lambda_1)
lambda_2_value = model.sess.run(model.lambda_2)

# Errors
error_u = np.linalg.norm(u_star - u_pred, 2) / np.linalg.norm(u_star, 2)
error_v = np.linalg.norm(v_star - v_pred, 2) / np.linalg.norm(v_star, 2)
error_p = np.linalg.norm(p_star - p_pred, 2) / np.linalg.norm(p_star, 2)

error_lambda_1 = np.abs(lambda_1_value - 1.0) * 100
error_lambda_2 = np.abs(lambda_2_value - 0.01) / 0.01 * 100

print('Error u: %e' % error_u)
print('Error v: %e' % error_v)
print('Error p: %e' % error_p)
print('Error l1: %.5f%%' % error_lambda_1)
print('Error l2: %.5f%%' % error_lambda_2)

Prepare the plotting data:

# Predictions for plotting
lb = X_star.min(0)
ub = X_star.max(0)
nn = 200
x = np.linspace(lb[0], ub[0], nn)
y = np.linspace(lb[1], ub[1], nn)
X, Y = np.meshgrid(x, y)

# Interpolate the results onto a regular grid with griddata
UU_star = griddata(X_star, u_pred.flatten(), (X, Y), method='cubic')
VV_star = griddata(X_star, v_pred.flatten(), (X, Y), method='cubic')
PP_star = griddata(X_star, p_pred.flatten(), (X, Y), method='cubic')
P_exact = griddata(X_star, p_star.flatten(), (X, Y), method='cubic')
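
These interpolated arrays are not used again in this post (only the vorticity figure is drawn in section 4). If you want a quick look at them, a minimal sketch along these lines works (my own addition; pressure_comparison.png is an illustrative filename):

os.makedirs('./figures', exist_ok=True)
fig, axes = plt.subplots(1, 2, figsize=(10, 4))
for ax_, field, title in [(axes[0], PP_star, 'Predicted pressure'),
                          (axes[1], P_exact, 'Exact pressure')]:
    im = ax_.imshow(field, interpolation='nearest', cmap='rainbow',
                    extent=[lb[0], ub[0], lb[1], ub[1]],
                    origin='lower', aspect='auto')
    fig.colorbar(im, ax=ax_)
    ax_.set_title(title)
plt.savefig('./figures/pressure_comparison.png')
plt.close()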

Train and test on noisy data:

noise = 0.01
u_train = u_train + noise * np.std(u_train) * np.random.randn(u_train.shape[0], u_train.shape[1])
v_train = v_train + noise * np.std(v_train) * np.random.randn(v_train.shape[0], v_train.shape[1])

# Training
model = PhysicsInformedNN(x_train, y_train, t_train, u_train, v_train, layers)
model.train(20)   # 20 iterations here, again just a smoke test

lambda_1_value_noisy = model.sess.run(model.lambda_1)
lambda_2_value_noisy = model.sess.run(model.lambda_2)

error_lambda_1_noisy = np.abs(lambda_1_value_noisy - 1.0) * 100
error_lambda_2_noisy = np.abs(lambda_2_value_noisy - 0.01) / 0.01 * 100

print('Error l1: %.5f%%' % error_lambda_1_noisy)
print('Error l2: %.5f%%' % error_lambda_2_noisy)

4. Plotting

Unlike the original code, I only reproduce the Vorticity panel. Note also that np.asscalar() has been deprecated since NumPy 1.16.0, so part of the plotting code needs updating.

Load the vorticity dataset:

data_vort = scipy.io.loadmat('../Data/cylinder_nektar_t0_vorticity.mat')

x_vort = data_vort['x']
y_vort = data_vort['y']
w_vort = data_vort['w']
# modes = np.asscalar(data_vort['modes'])
# nel = np.asscalar(data_vort['nel'])
# np.asscalar() is deprecated since NumPy 1.16.0, so use .item() instead
modes = data_vort['modes'].item()
nel = data_vort['nel'].item()

xx_vort = np.reshape(x_vort, (modes + 1, modes + 1, nel), order='F')
yy_vort = np.reshape(y_vort, (modes + 1, modes + 1, nel), order='F')
ww_vort = np.reshape(w_vort, (modes + 1, modes + 1, nel), order='F')

box_lb = np.array([1.0, -2.0])
box_ub = np.array([8.0, 2.0])

Draw the single figure:

fig, ax = plt.subplots(figsize=(8, 8))
    
gs0 = gridspec.GridSpec(1, 1)
gs0.update(top=1 - 0.06, bottom=1 - 2 / 4 + 0.12, left=0.0, right=1.0, wspace=0)
# Before creating the new subplot, manually remove the overlapping one (if it exists)
if 'ax' in locals():
    ax.remove()
ax = plt.subplot(gs0[:, :])

for i in range(0, nel):
    h = ax.pcolormesh(xx_vort[:, :, i], yy_vort[:, :, i], ww_vort[:, :, i], cmap='seismic', shading='gouraud', vmin=-3, vmax=3)
divider = make_axes_locatable(ax)
cax = divider.append_axes("right", size="5%", pad=0.05)
fig.colorbar(h, cax=cax)

ax.plot([box_lb[0], box_lb[0]], [box_lb[1], box_ub[1]], 'k', linewidth=1)
ax.plot([box_ub[0], box_ub[0]], [box_lb[1], box_ub[1]], 'k', linewidth=1)
ax.plot([box_lb[0], box_ub[0]], [box_lb[1], box_lb[1]], 'k', linewidth=1)
ax.plot([box_lb[0], box_ub[0]], [box_ub[1], box_ub[1]], 'k', linewidth=1)

ax.set_aspect('equal', 'box')
ax.set_xlabel('$x$')
ax.set_ylabel('$y$')
ax.set_title('Vorticity', fontsize=10)

os.makedirs('./figures', exist_ok=True)  # savefig fails if the target folder is missing
plt.savefig('./figures/NavierStokes.png')  # save the figure
plt.close()

III. Complete Code

"""
@author: Maziar Raissi
"""
import os
import sys
sys.path.insert(0, '../../Utilities/')

import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
import scipy.io
from scipy.interpolate import griddata
import time
from itertools import product, combinations
import tensorflow_probability as tfp
from mpl_toolkits.axes_grid1 import make_axes_locatable
import matplotlib.gridspec as gridspec

np.random.seed(1234)
tf.random.set_seed(1234)
# Disable eager execution (the code builds TF1-style graphs and sessions)
tf.compat.v1.disable_eager_execution()

# Early in a run TF logs how ops are placed on devices; raise the log level so only errors are printed
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'


class PhysicsInformedNN:
    # Initialize the class
    def __init__(self, x, y, t, u, v, layers):

        X = np.concatenate([x, y, t], 1)

        self.lb = X.min(0)
        self.ub = X.max(0)

        self.X = X

        self.x = X[:, 0:1]
        self.y = X[:, 1:2]
        self.t = X[:, 2:3]

        self.u = u
        self.v = v

        self.layers = layers

        # Initialize NN
        self.weights, self.biases = self.initialize_NN(layers)

        # Initialize the learnable PDE parameters (lambda_1: convection coefficient, lambda_2: viscosity)
        self.lambda_1 = tf.Variable([0.0], dtype=tf.float32)
        self.lambda_2 = tf.Variable([0.0], dtype=tf.float32)

        # tf placeholders and graph
        self.sess = tf.compat.v1.Session(config=tf.compat.v1.ConfigProto(allow_soft_placement=True, log_device_placement=True))

        self.x_tf = tf.compat.v1.placeholder(tf.float32, shape=[None, self.x.shape[1]])
        self.y_tf = tf.compat.v1.placeholder(tf.float32, shape=[None, self.y.shape[1]])
        self.t_tf = tf.compat.v1.placeholder(tf.float32, shape=[None, self.t.shape[1]])

        self.u_tf = tf.compat.v1.placeholder(tf.float32, shape=[None, self.u.shape[1]])
        self.v_tf = tf.compat.v1.placeholder(tf.float32, shape=[None, self.v.shape[1]])

        self.u_pred, self.v_pred, self.p_pred, self.f_u_pred, self.f_v_pred = self.net_NS(self.x_tf, self.y_tf, self.t_tf)

        self.loss = tf.reduce_sum(tf.square(self.u_tf - self.u_pred)) + \
                    tf.reduce_sum(tf.square(self.v_tf - self.v_pred)) + \
                    tf.reduce_sum(tf.square(self.f_u_pred)) + \
                    tf.reduce_sum(tf.square(self.f_v_pred))

        self.optimizer_Adam = tf.compat.v1.train.AdamOptimizer()
        self.train_op_Adam = self.optimizer_Adam.minimize(self.loss)

        init = tf.compat.v1.global_variables_initializer()
        self.sess.run(init)

    def initialize_NN(self, layers):
        weights = []
        biases = []
        num_layers = len(layers)
        for l in range(0, num_layers - 1):
            W = self.xavier_init(size=[layers[l], layers[l + 1]])
            b = tf.Variable(tf.zeros([1, layers[l + 1]], dtype=tf.float32), dtype=tf.float32)
            weights.append(W)
            biases.append(b)
        return weights, biases

    def xavier_init(self, size):
        in_dim = size[0]
        out_dim = size[1]
        xavier_stddev = np.sqrt(2 / (in_dim + out_dim))
        return tf.Variable(tf.random.truncated_normal([in_dim, out_dim], stddev=xavier_stddev), dtype=tf.float32)

    def neural_net(self, X, weights, biases):
        num_layers = len(weights) + 1

        H = 2.0 * (X - self.lb) / (self.ub - self.lb) - 1.0
        for l in range(0, num_layers - 2):
            W = weights[l]
            b = biases[l]
            H = tf.tanh(tf.add(tf.matmul(H, W), b))
        W = weights[-1]
        b = biases[-1]
        Y = tf.add(tf.matmul(H, W), b)
        return Y

    def net_NS(self, x, y, t):
        lambda_1 = self.lambda_1
        lambda_2 = self.lambda_2

        psi_and_p = self.neural_net(tf.concat([x, y, t], 1), self.weights, self.biases)
        psi = psi_and_p[:, 0:1]
        p = psi_and_p[:, 1:2]

        u = tf.gradients(psi, y)[0]
        v = -tf.gradients(psi, x)[0]

        u_t = tf.gradients(u, t)[0]
        u_x = tf.gradients(u, x)[0]
        u_y = tf.gradients(u, y)[0]
        u_xx = tf.gradients(u_x, x)[0]
        u_yy = tf.gradients(u_y, y)[0]

        v_t = tf.gradients(v, t)[0]
        v_x = tf.gradients(v, x)[0]
        v_y = tf.gradients(v, y)[0]
        v_xx = tf.gradients(v_x, x)[0]
        v_yy = tf.gradients(v_y, y)[0]

        p_x = tf.gradients(p, x)[0]
        p_y = tf.gradients(p, y)[0]

        f_u = u_t + lambda_1 * (u * u_x + v * u_y) + p_x - lambda_2 * (u_xx + u_yy)
        f_v = v_t + lambda_1 * (u * v_x + v * v_y) + p_y - lambda_2 * (v_xx + v_yy)

        return u, v, p, f_u, f_v

    def callback(self, loss, lambda_1, lambda_2):
        # Leftover from the removed L-BFGS-B loss_callback; unused in this port
        print('Loss: %.3e, l1: %.3f, l2: %.5f' % (loss, lambda_1, lambda_2))

    def train(self, nIter):

        tf_dict = {self.x_tf: self.x, self.y_tf: self.y, self.t_tf: self.t,
                   self.u_tf: self.u, self.v_tf: self.v}

        start_time = time.time()
        for it in range(nIter):
            self.sess.run(self.train_op_Adam, tf_dict)

            # Print
            if it % 10 == 0:
                elapsed = time.time() - start_time
                loss_value = self.sess.run(self.loss, tf_dict)
                lambda_1_value = self.sess.run(self.lambda_1)
                lambda_2_value = self.sess.run(self.lambda_2)
                print('It: %d, Loss: %.3e, l1: %.3f, l2: %.5f, Time: %.2f' %
                      (it, loss_value, lambda_1_value, lambda_2_value, elapsed))
                start_time = time.time()

    def predict(self, x_star, y_star, t_star):

        tf_dict = {self.x_tf: x_star, self.y_tf: y_star, self.t_tf: t_star}

        u_star = self.sess.run(self.u_pred, tf_dict)
        v_star = self.sess.run(self.v_pred, tf_dict)
        p_star = self.sess.run(self.p_pred, tf_dict)

        return u_star, v_star, p_star


if __name__ == "__main__":

    # List all physical GPU devices
    gpus = tf.config.experimental.list_physical_devices('GPU')
    if gpus:
        print(f"Available GPU devices: {gpus}")
    else:
        print("No GPU available.")

    N_train = 5000

    layers = [3, 20, 20, 20, 20, 20, 20, 20, 20, 2]  # 3 inputs (x, y, t), 8 hidden layers of width 20, 2 outputs (psi, p)

    # Load data
    data = scipy.io.loadmat('../Data/cylinder_nektar_wake.mat')

    U_star = data['U_star']  # N x 2 x T
    P_star = data['p_star']  # N x T
    t_star = data['t']  # T x 1
    X_star = data['X_star']  # N x 2

    N = X_star.shape[0]
    T = t_star.shape[0]

    # Rearrange Data
    XX = np.tile(X_star[:, 0:1], (1, T))  # N x T
    YY = np.tile(X_star[:, 1:2], (1, T))  # N x T
    TT = np.tile(t_star, (1, N)).T  # N x T

    UU = U_star[:, 0, :]  # N x T
    VV = U_star[:, 1, :]  # N x T
    PP = P_star  # N x T

    x = XX.flatten()[:, None]  # NT x 1
    y = YY.flatten()[:, None]  # NT x 1
    t = TT.flatten()[:, None]  # NT x 1

    u = UU.flatten()[:, None]  # NT x 1
    v = VV.flatten()[:, None]  # NT x 1
    p = PP.flatten()[:, None]  # NT x 1

    # ------------- Noiseless Data -------------

    # Training data
    idx = np.random.choice(N * T, N_train, replace=False)
    x_train = x[idx, :]
    y_train = y[idx, :]
    t_train = t[idx, :]
    u_train = u[idx, :]
    v_train = v[idx, :]

    # Training
    model = PhysicsInformedNN(x_train, y_train, t_train, u_train, v_train, layers)
    model.train(10)   # only 10 iterations, as a smoke test

    # Test data
    snap = np.array([100])
    x_star = X_star[:, 0:1]
    y_star = X_star[:, 1:2]
    t_star = TT[:, snap]

    u_star = U_star[:, 0, snap]
    v_star = U_star[:, 1, snap]
    p_star = P_star[:, snap]

    # Prediction
    u_pred, v_pred, p_pred = model.predict(x_star, y_star, t_star)
    lambda_1_value = model.sess.run(model.lambda_1)
    lambda_2_value = model.sess.run(model.lambda_2)

    # Errors
    error_u = np.linalg.norm(u_star - u_pred, 2) / np.linalg.norm(u_star, 2)
    error_v = np.linalg.norm(v_star - v_pred, 2) / np.linalg.norm(v_star, 2)
    error_p = np.linalg.norm(p_star - p_pred, 2) / np.linalg.norm(p_star, 2)

    error_lambda_1 = np.abs(lambda_1_value - 1.0) * 100
    error_lambda_2 = np.abs(lambda_2_value - 0.01) / 0.01 * 100

    print('Error u: %e' % error_u)
    print('Error v: %e' % error_v)
    print('Error p: %e' % error_p)
    print('Error l1: %.5f%%' % error_lambda_1)
    print('Error l2: %.5f%%' % error_lambda_2)

    # Predictions for plotting
    lb = X_star.min(0)
    ub = X_star.max(0)
    nn = 200
    x = np.linspace(lb[0], ub[0], nn)
    y = np.linspace(lb[1], ub[1], nn)
    X, Y = np.meshgrid(x, y)

    # Interpolate the results onto a regular grid with griddata
    UU_star = griddata(X_star, u_pred.flatten(), (X, Y), method='cubic')
    VV_star = griddata(X_star, v_pred.flatten(), (X, Y), method='cubic')
    PP_star = griddata(X_star, p_pred.flatten(), (X, Y), method='cubic')
    P_exact = griddata(X_star, p_star.flatten(), (X, Y), method='cubic')

    # ------------ Noisy Data --------------
    noise = 0.01
    u_train = u_train + noise * np.std(u_train) * np.random.randn(u_train.shape[0], u_train.shape[1])
    v_train = v_train + noise * np.std(v_train) * np.random.randn(v_train.shape[0], v_train.shape[1])

    # Training
    model = PhysicsInformedNN(x_train, y_train, t_train, u_train, v_train, layers)
    model.train(20)   # 20 iterations, again just a smoke test

    lambda_1_value_noisy = model.sess.run(model.lambda_1)
    lambda_2_value_noisy = model.sess.run(model.lambda_2)

    error_lambda_1_noisy = np.abs(lambda_1_value_noisy - 1.0) * 100
    error_lambda_2_noisy = np.abs(lambda_2_value_noisy - 0.01) / 0.01 * 100

    print('Error l1: %.5f%%' % error_lambda_1_noisy)
    print('Error l2: %.5f%%' % error_lambda_2_noisy)

    # ----------------------------------------
    # ------------ Plotting --------------
    # ----------------------------------------

    data_vort = scipy.io.loadmat('../Data/cylinder_nektar_t0_vorticity.mat')

    x_vort = data_vort['x']
    y_vort = data_vort['y']
    w_vort = data_vort['w']
    # modes = np.asscalar(data_vort['modes'])
    # nel = np.asscalar(data_vort['nel'])
    # np.asscalar() is deprecated since NumPy 1.16.0, so use .item() instead
    modes = data_vort['modes'].item()
    nel = data_vort['nel'].item()

    xx_vort = np.reshape(x_vort, (modes + 1, modes + 1, nel), order='F')
    yy_vort = np.reshape(y_vort, (modes + 1, modes + 1, nel), order='F')
    ww_vort = np.reshape(w_vort, (modes + 1, modes + 1, nel), order='F')

    box_lb = np.array([1.0, -2.0])
    box_ub = np.array([8.0, 2.0])

    fig, ax = plt.subplots(figsize=(8, 8))
    # for axis in ax:
    #     axis.axis('off')

    # ------------ Row 0: Vorticity --------------
    gs0 = gridspec.GridSpec(1, 1)
    gs0.update(top=1 - 0.06, bottom=1 - 2 / 4 + 0.12, left=0.0, right=1.0, wspace=0)
    # Before creating the new subplot, manually remove the overlapping one (if it exists)
    if 'ax' in locals():
        ax.remove()
    ax = plt.subplot(gs0[:, :])

    for i in range(0, nel):
        h = ax.pcolormesh(xx_vort[:, :, i], yy_vort[:, :, i], ww_vort[:, :, i], cmap='seismic', shading='gouraud',
                          vmin=-3, vmax=3)
    divider = make_axes_locatable(ax)
    cax = divider.append_axes("right", size="5%", pad=0.05)
    fig.colorbar(h, cax=cax)

    ax.plot([box_lb[0], box_lb[0]], [box_lb[1], box_ub[1]], 'k', linewidth=1)
    ax.plot([box_ub[0], box_ub[0]], [box_lb[1], box_ub[1]], 'k', linewidth=1)
    ax.plot([box_lb[0], box_ub[0]], [box_lb[1], box_lb[1]], 'k', linewidth=1)
    ax.plot([box_lb[0], box_ub[0]], [box_ub[1], box_ub[1]], 'k', linewidth=1)

    ax.set_aspect('equal', 'box')
    ax.set_xlabel('$x$')
    ax.set_ylabel('$y$')
    ax.set_title('Vorticity', fontsize=10)

    os.makedirs('./figures', exist_ok=True)  # savefig fails if the target folder is missing
    plt.savefig('./figures/NavierStokes.png')  # save the figure
    plt.close()

IV. Results

The main goal was just to confirm the code runs, so I trained for only about 10 iterations; the results below are for reference only. (The original work runs Adam for vastly more iterations, so do not expect accurate parameter estimates from a run this short.)

Loss:
[Figure: console output of the training loss]
Result:
[Figure: vorticity plot]
As you can see, the canvas size is not chosen very well; adjust it as you like. The original code lays the figures out in two rows, but I was lazy and only drew the Vorticity panel.
