Paper Reproduction: PINN for Thin-Plate Bending Deflection Prediction (SciANN Code)

References:

[1] Tang Mingjian, Tang Hesheng. Physics-informed deep learning for solving forward and inverse mechanics problems of rectangular thin plates [J]. Chinese Journal of Computational Mechanics (Jisuan Lixue Xuebao), 2022, 39(1).

[2] Haghighat E, Juanes R. SciANN: A Keras/TensorFlow wrapper for scientific computations and physics-informed deep learning using artificial neural networks [J]. Computer Methods in Applied Mechanics and Engineering, 2021, 373: 113552.

1. Thin-Plate Bending Theory

1.1 Strain-deflection relations for a rectangular thin plate

\varepsilon_x = -\frac{\partial^2 w}{\partial x^2}\,z, \qquad \varepsilon_y = -\frac{\partial^2 w}{\partial y^2}\,z, \qquad \gamma = -2\,\frac{\partial^2 w}{\partial x\,\partial y}\,z

1.2 Constitutive equations

\sigma_x = \frac{E}{1-\nu^2}\left(\varepsilon_x + \nu\,\varepsilon_y\right), \qquad \sigma_y = \frac{E}{1-\nu^2}\left(\varepsilon_y + \nu\,\varepsilon_x\right), \qquad \tau = \frac{E}{2(1+\nu)}\,\gamma

1.3 Differential equation of the plate's elastic surface

D\nabla^4 w = q

where the flexural rigidity is D = \frac{E\delta^3}{12(1-\nu^2)} and \delta is the plate thickness.
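
The step that connects Secs. 1.1-1.2 to the governing equation in Sec. 1.3 is the through-thickness integration of the stresses into bending moments, e.g. M_x = -D(\partial^2 w/\partial x^2 + \nu\,\partial^2 w/\partial y^2); substituting the moments into the plate's moment-equilibrium equation then yields D\nabla^4 w = q. The following SymPy sketch (my own addition, not part of the reproduced article) verifies this intermediate result symbolically:

# Sanity check (added sketch, not from the article): integrating sigma_x from
# Secs. 1.1-1.2 through the thickness recovers M_x = -D*(w_xx + nu*w_yy),
# the intermediate step that leads to D*nabla^4(w) = q.
import sympy as sp

x, y, z, E, nu, delta = sp.symbols('x y z E nu delta', positive=True)
w = sp.Function('w')(x, y)

eps_x = -sp.diff(w, x, 2) * z                 # Sec. 1.1
eps_y = -sp.diff(w, y, 2) * z
sig_x = E/(1 - nu**2) * (eps_x + nu*eps_y)    # Sec. 1.2

Mx = sp.integrate(sig_x * z, (z, -delta/2, delta/2))
D  = E*delta**3 / (12*(1 - nu**2))
print(sp.simplify(Mx + D*(sp.diff(w, x, 2) + nu*sp.diff(w, y, 2))))   # -> 0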

2. SciANN Code

2.1 Import third-party libraries

import numpy as np
import matplotlib.pyplot as plt
import sciann as sn
import time
from numpy import pi
from sciann.utils.math import diff, sin, cos

2.2 Building the dataset from the Navier solution for a rectangular thin plate simply supported on all four edges (Xu Zhilun, Theory of Elasticity, Vol. 2, 5th ed.)

2.2.1 Material and geometric parameters

E = 2.1*10**5   # Young's modulus
v = 0.3         # Poisson's ratio
q = 50          # uniformly distributed load
a = 1           # plate side length in x
b = 1           # plate side length in y
d = 1           # plate thickness
D = E*d**3/(12.0*(1-v**2))   # flexural rigidity, approx. 1.92e4

2.2.2 Sampling points (20×20 grid)

x_data, y_data = np.meshgrid(
    np.linspace(0, 1, 20), 
    np.linspace(0, 1, 20)
)

x_data = x_data.reshape(-1,1)
y_data = y_data.reshape(-1,1)
xy = np.concatenate([x_data,y_data],axis=1)
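
As an optional check (my addition, not in the original code), the matplotlib import from Sec. 2.1 can be used to confirm that the 400 collocation points cover the unit plate uniformly:

# optional visualization of the 20x20 sampling grid (added sketch)
plt.scatter(x_data, y_data, s=5)
plt.xlabel('x')
plt.ylabel('y')
plt.title('20x20 sampling grid')
plt.show()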

2.2.3 Compute the analytical solution and build the dataset

w1 = []
fx1 = []
fy1 = []
fxy1 = []
x_real = np.linspace(0, 1, 20)
y_real = np.linspace(0, 1, 20)

# first term (m = n = 1) of the Navier series solution for a uniform load
for y in y_real:
    for x in x_real:
        w2 = 4*q / pi**6 / D *np.sin(pi * x)*np.sin(pi * y)          ## deflection
        fx2 = 24*q/pi**4/d**2*(1+v)*np.sin(pi * x)*np.sin(pi * y)    ## normal stress in x (at z = d/2)
        fy2 = 24*q/pi**4/d**2*(1+v)*np.sin(pi * x)*np.sin(pi * y)    ## normal stress in y (at z = d/2)
        fxy2 = -24*q/pi**4/d**2*(1-v)*np.cos(pi * x)*np.cos(pi * y)  ## shear stress (at z = d/2)
        w1.append(w2)
        fx1.append(fx2)
        fy1.append(fy2)
        fxy1.append(fxy2)

w1 = np.array(w1).reshape(-1,1)
fx1 = np.array(fx1).reshape(-1,1)
fy1 = np.array(fy1).reshape(-1,1)
fxy1 = np.array(fxy1).reshape(-1,1)
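
As a cross-check (my own sketch, not in the original article), the loop above can be reproduced with NumPy broadcasting on the flattened grid from Sec. 2.2.2. Both routes give identical arrays, because the loop also iterates y in the outer loop and x in the inner loop, matching the row-major flattening of the meshgrid:

# vectorized equivalent of the loop in Sec. 2.2.3 (added sketch)
w1_vec   = 4*q/pi**6/D * np.sin(pi*x_data)*np.sin(pi*y_data)
fx1_vec  = 24*q/pi**4/d**2*(1+v) * np.sin(pi*x_data)*np.sin(pi*y_data)
fy1_vec  = 24*q/pi**4/d**2*(1+v) * np.sin(pi*x_data)*np.sin(pi*y_data)
fxy1_vec = -24*q/pi**4/d**2*(1-v) * np.cos(pi*x_data)*np.cos(pi*y_data)
assert np.allclose(w1_vec, w1) and np.allclose(fxy1_vec, fxy1)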

## standardize to zero mean and unit variance
w3 = (w1-w1.mean())/w1.std()
fx3 = (fx1-fx1.mean())/fx1.std()
fy3 = (fy1-fy1.mean())/fy1.std()
fxy3 = (fxy1-fxy1.mean())/fxy1.std()

## assemble into a single training array: columns are [x, y, w, sigma_x, sigma_y, tau_xy]
train_data = np.concatenate([xy, w3, fx3, fy3, fxy3], axis=1)

2.3 Build the neural networks

2.3.1 Inputs

x = sn.Variable('x',dtype='float64')
y = sn.Variable('y',dtype='float64')

2.3.2 Outputs

Each output field (the deflection w and the three stress components) is approximated by a network with 3 hidden layers, 40 neurons per hidden layer, and the hyperbolic tangent as activation function.

w_1 = sn.Functional('w', [x,y], 3*[40], 'tanh')             # deflection network, used in Sec. 2.4
sigam_x1 = sn.Functional('sigam_x', [x,y], 3*[40], 'tanh')
sigam_y1 = sn.Functional('sigam_y', [x,y], 3*[40], 'tanh')
sigam_xy1 = sn.Functional('sigam_xy', [x,y], 3*[40], 'tanh')

2.4 Loss function

2.4.1 Undoing the standardization

def invstd(X, mu, sig):
    # map a standardized quantity back to physical scale
    return X*sig + mu

w = invstd(w_1, w1.mean(), w1.std())
sigam_x = invstd(sigam_x1, fx1.mean(), fx1.std())
sigam_y = invstd(sigam_y1, fy1.mean(), fy1.std())
sigam_xy = invstd(sigam_xy1, fxy1.mean(), fxy1.std())

# scale used to nondimensionalize the constitutive residuals in Sec. 2.4.3
WBAR = np.min([fx1.std(), fy1.std(), fxy1.std()])
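
A quick numerical sanity check (my own addition, using the arrays built in Sec. 2.2.3) that invstd exactly undoes the standardization:

# round-trip check of the standardization / inverse-standardization pair (added)
assert np.allclose(invstd(w3, w1.mean(), w1.std()), w1)
assert np.allclose(invstd(fxy3, fxy1.mean(), fxy1.std()), fxy1)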

2.4.2 Derivative terms

z = d/2.0        # strains and stresses are evaluated at the plate surface z = d/2
w_x1 = diff(w, x)
w_y1 = diff(w, y)

w_x2 = diff(w, x, order=2)
w_y2 = diff(w, y, order=2)
w_xy = diff(w_x1, y)

# strains at z = d/2 according to Sec. 1.1
u_x = -z*w_x2
u_y = -z*w_y2
u_xy = -2.0*z*w_xy

# fourth-order derivatives for the biharmonic operator in Sec. 1.3
w_x4 = diff(w, x, order=4)
w_y4 = diff(w, y, order=4)
w_x2y2 = diff(w_x2, y, order=2)

2.4.3 Assemble the loss terms

On top of the data-driven neural-network model, the plate's governing differential equation is added to the loss function; to encode as much physical information as possible, the stress-strain constitutive relations are also included in the loss.

L1 = sn.Data(w_1)        ## deflection
L2 = sn.Data(sigam_x1)   ## normal stress in x
L3 = sn.Data(sigam_y1)   ## normal stress in y
L4 = sn.Data(sigam_xy1)  ## shear stress
L5 = sn.Tie(-w_x4, w_y4 + 2.0*w_x2y2 - q/D)  ## governing equation D*nabla^4(w) = q, rearranged

## constitutive equations, scaled by WBAR
L6 = sn.Tie(sigam_x/WBAR, E/(1.0-v**2)*(u_x+v*u_y)/WBAR)
L7 = sn.Tie(sigam_y/WBAR, E/(1.0-v**2)*(u_y+v*u_x)/WBAR)
L8 = sn.Tie(sigam_xy/WBAR, E/(2.0*(1+v))*u_xy/WBAR)

2.5 Optimization model

model = sn.SciModel(
    inputs=[x, y],
    targets=[L1, L2, L3, L4, L5,L6,L7,L8],
    loss_func="mse",
    plot_to_file='Ban.png'
)

2.6 Training

# data targets: columns of train_data are [x, y, w, sigma_x, sigma_y, tau_xy]
data_L1 = train_data[:,2]
data_L2 = train_data[:,3]
data_L3 = train_data[:,4]
data_L4 = train_data[:,5]
# physics targets: the PDE and constitutive residuals are driven to zero
data_L5 = 'zeros'
data_L6 = 'zeros'
data_L7 = 'zeros'
data_L8 = 'zeros'
target_data = [data_L1, data_L2, data_L3, data_L4, data_L5, data_L6, data_L7, data_L8]

input_data = [train_data[:,0], train_data[:,1]]

t = time.time()
history = model.train(
    x_true=input_data,
    y_true=target_data,
    epochs=2500,
    batch_size=32,
    shuffle=True,
    learning_rate=0.001,
    reduce_lr_after=100,
    stop_loss_value=1e-6,
    verbose=0
)
t = time.time() - t
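
After training, the deflection network can be evaluated on the training grid and compared against the Navier solution, and the loss history can be plotted with the matplotlib import from Sec. 2.1. This is a sketch assuming SciANN's Functional.eval(model, inputs) interface and a Keras-style history object returned by model.train, not part of the original article:

print('training time: %.1f s' % t)

# predicted deflection on the 20x20 grid, mapped back to physical scale (assumed eval API)
w_pred = invstd(w_1.eval(model, input_data), w1.mean(), w1.std())
print('max |w_pred - w_Navier| =', np.abs(w_pred.reshape(-1,1) - w1).max())

# total training loss over epochs
plt.semilogy(history.history['loss'])
plt.xlabel('epoch')
plt.ylabel('loss')
plt.show()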