【Neural Networks + Math】(3) Solving a Single-Variable Differential Problem with a Neural Network (Second-Order Derivatives)

Background

See the previous post for details.
This post uses a neural network to solve a more complex second-order differential problem; the example problem is taken from the referenced post.

Problem Description

[Figure: the differential equation and boundary conditions to be solved]

The domain is chosen as [0, 2].
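As a concrete sketch of preparing the sample points (the post does not show this step, so `np.linspace` here is an assumption), the collocation points on [0, 2] could be generated as:

```python
import numpy as np

# Evenly spaced sample points on the chosen domain [0, 2].
# grid = 10 matches the training configuration used in this post.
grid = 10
x_space = np.linspace(0, 2, grid)
```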

Model Code

A neural network approximates φ(x). Automatic differentiation yields its first and second derivatives, which are substituted into the equation to form the loss used for training. This approach generalizes to Nth-order differential problems: it is quick to set up, and the model architecture is unaffected by the order. Traditional analytical methods, by contrast, are very sensitive to the order, with solution difficulty rising steeply as the order increases.
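The core mechanism, obtaining first and second derivatives by automatic differentiation, can be sketched with nested `tf.GradientTape` contexts. This is a minimal illustration, not the blogger's `Nx_Net` code; the test function y = x³ is chosen only because its derivatives are known in closed form:

```python
import tensorflow as tf

def first_and_second_derivative(f, x):
    """Return f'(x) and f''(x) using nested automatic differentiation."""
    with tf.GradientTape() as outer:
        outer.watch(x)
        with tf.GradientTape() as inner:
            inner.watch(x)
            y = f(x)
        dy = inner.gradient(y, x)   # first derivative, recorded on the outer tape
    d2y = outer.gradient(dy, x)     # second derivative
    return dy, d2y

# Sanity check on y = x**3: y' = 3x**2, y'' = 6x
x = tf.constant([[0.5], [1.0], [2.0]])
dy, d2y = first_and_second_derivative(lambda t: t ** 3, x)
```

In a loss of the kind described above, `dy` and `d2y` are substituted into the differential equation and the squared residual becomes the equation-loss term.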

The environment is TF2 with Python 3.7; the automatic-differentiation results give the derivative values. The training code follows (it does not include the definition of the net class, which is available for a fee; please message the blogger):

import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.keras.optimizers import Adam

# Randomly shuffle the sample points
seed = np.random.randint(0, 2021, 1)[0]
np.random.seed(seed)
np.random.shuffle(x_space)
y_space = psy_analytic(x_space)
x_space = tf.reshape(x_space, (-1, 1))
x_space = tf.cast(x_space, tf.float32)  # default is float64, which raises a dtype-mismatch error, so cast explicitly
net = Nx_Net(x_space, tf.reduce_min(x_space), tf.reduce_max(x_space), w=w, activation=activation)
if retrain:
    net.model_load()
optimizer = Adam(learning_rate=lr)
for epoch in range(epochs):
    grad, loss, loss_data, loss_equation, loss_border = net.train_step()
    optimizer.apply_gradients(zip(grad, net.trainable_variables))

    if epoch % 100 == 0:
        print("loss:{}\tloss_data:{}\tloss_equation:{}\tloss_border:{}\tepoch:{}".format(
            loss, loss_data, loss_equation, loss_border, epoch))
net.model_save()
predict = net.net_call(x_space)
plt.plot(x_space, y_space, 'o', label="True")
plt.plot(x_space, predict, 'x', label="Pred")
plt.legend(loc=1)
plt.title("predictions")
plt.show()

Training configuration:

retrain = False
activation = 'tanh'
grid = 10
epochs = 20000
lr = 0.001
w = (1, 1, 1)

Training log:

loss:1.4774293899536133	loss_data:0.3582383990287781	loss_equation:0.0964028537273407	loss_border:1.022788166999817	epoch:0
loss:1.139674186706543	loss_data:0.31288060545921326	loss_equation:0.13169381022453308	loss_border:0.6950997114181519	epoch:100
loss:1.0643255710601807	loss_data:0.3342372477054596	loss_equation:0.14209073781967163	loss_border:0.587997555732727	epoch:200
loss:0.981413722038269	loss_data:0.34834182262420654	loss_equation:0.13856054842472076	loss_border:0.49451133608818054	epoch:300
loss:0.8012645840644836	loss_data:0.3721367120742798	loss_equation:0.12421827018260956	loss_border:0.3049095869064331	epoch:400
loss:0.5379026532173157	loss_data:0.42624107003211975	loss_equation:0.04168599843978882	loss_border:0.0699755847454071	epoch:500
loss:0.5068864822387695	loss_data:0.43383288383483887	loss_equation:0.02313089184463024	loss_border:0.04992268607020378	epoch:600
loss:0.5024876594543457	loss_data:0.43435603380203247	loss_equation:0.020750800147652626	loss_border:0.04738083481788635	epoch:700
loss:0.5008866786956787	loss_data:0.4343510568141937	loss_equation:0.019994685426354408	loss_border:0.04654095694422722	epoch:800
loss:0.4999883472919464	loss_data:0.43428468704223633	loss_equation:0.019583197310566902	loss_border:0.046120475977659225	epoch:900
……
loss:0.49528592824935913	loss_data:0.43690577149391174	loss_equation:0.015979139134287834	loss_border:0.04240100085735321	epoch:19600
loss:0.4952779710292816	loss_data:0.4368930459022522	loss_equation:0.015986021608114243	loss_border:0.04239888861775398	epoch:19700
loss:0.49783745408058167	loss_data:0.44942647218704224	loss_equation:0.007159555796533823	loss_border:0.04125141352415085	epoch:19800
loss:0.49526429176330566	loss_data:0.4368693232536316	loss_equation:0.01600196212530136	loss_border:0.042392998933792114	epoch:19900
model saved in  net.weights

Process finished with exit code 0

The resulting fit:

[Figure: predicted values vs. the true solution over [0, 2]]

Incorporating the boundary conditions through a constructed trial-solution form (building the boundary values into the network's output) reduces the loss further:

loss:0.4743349552154541	loss_data:0.44026637077331543	loss_equation:0.015440814197063446	loss_border:0.01862775906920433	epoch:19500
loss:0.47433316707611084	loss_data:0.4401574730873108	loss_equation:0.015557961538434029	loss_border:0.01861770637333393	epoch:19600
loss:0.47433045506477356	loss_data:0.44023728370666504	loss_equation:0.015468712896108627	loss_border:0.018624447286128998	epoch:19700
loss:0.47436419129371643	loss_data:0.43909740447998047	loss_equation:0.016729842871427536	loss_border:0.01853696070611477	epoch:19800
loss:0.47432589530944824	loss_data:0.4402168393135071	loss_equation:0.015486991964280605	loss_border:0.01862206496298313	epoch:19900
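One common way to build boundary conditions into the solution, as described above, is a trial-solution form that satisfies them exactly. This is a sketch only: the boundary values A and B are hypothetical, since the actual conditions are given in the problem figure:

```python
import tensorflow as tf

# Hypothetical Dirichlet boundary values y(0) = A, y(2) = B
# (the real ones come from the problem statement in the figure).
A, B = 0.0, 1.0

def trial_solution(net, x):
    """Trial form on [0, 2] that meets the boundary conditions exactly:
    the factor x * (2 - x) vanishes at both endpoints, so the network
    output cannot disturb the boundary values."""
    return A * (1 - x / 2) + B * (x / 2) + x * (2 - x) * net(x)
```

Under this construction the boundary conditions hold for any network output, so in principle the boundary loss term only needs to handle whatever the trial form does not already enforce.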

Conclusion

The fit is good, which supports both the feasibility of the theory in practice and the correctness of the code. Given the advantages of solving equations with neural networks, the approach extends to Nth-order differential equations (for now only single-variable problems were tried; multivariate partial differential equations, PDEs, were not studied). Traditional analytical methods cannot match it in solvability, solution complexity, or solution speed, although at low orders they may still outperform the neural-network approach.
