How Mathematics Is Being "Conquered," Step by Step (Machine Learning for Scientific Computing)

Introduction

In recent years, deep learning has achieved great success in computer vision, speech recognition, and natural language processing, thanks to the strong representational power of neural networks. Using artificial neural networks (ANNs) as numerical solvers for various mathematical problems has attracted wide research interest, for example:

  • Interpolation and fitting
  • Solving ordinary differential equations (ODEs)
  • Solving partial differential equations (PDEs)

Interpolation

Problem statement

Sample 20 uniformly spaced points of $f(x)=\sin(x)-0.5$, $x\in(-2\pi,2\pi)$, as interpolation nodes. The numerical results of interpolating with an ANN are shown below:
[Figure: ANN interpolation result]
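For reference, here is a minimal sketch (not part of the script below; the variable names are illustrative) of how 20 equally spaced nodes can be drawn from the 300-point sampling grid:

import numpy as np

# Illustrative sketch: choose 20 equally spaced interpolation nodes
# from a 300-point uniform grid on (-2*pi, 2*pi).
x_grid = np.linspace(-2 * np.pi, 2 * np.pi, 300)
idx = np.linspace(0, len(x_grid) - 1, 20).astype(int)  # 20 evenly spread indices
x_nodes = x_grid[idx]
y_nodes = np.sin(x_nodes) - 0.5  # exact function values at the nodes

The full script below does the same job with its interpolation_node helper and trains the network on these 20 node pairs only.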

Training process
[Figure: animation of the interpolation training process]

Source code

# Author:      Leo 刘
# Environment: macOS Big Sur
# Date:        2021/8/1 11:43 AM
# Email:       517093978@qq.com
# @Software:   PyCharm

import torch
import torch.nn as nn
import numpy as np
import matplotlib.pyplot as plt
from Dynamic_drawing import image2gif

# Build the input set: 300 uniformly spaced points on (-2*pi, 2*pi)
x_data = np.linspace(-2 * np.pi, 2 * np.pi, 300)[:, np.newaxis]
y_data = np.sin(x_data) - 0.5
# y_data = np.sin(x_data) * x_data ** 2 - 0.5
x_data = torch.tensor(x_data).float()
y_data = torch.tensor(y_data).float()


# Define the neural network
class Net(torch.nn.Module):
    # n_feature: input features; n_hidden: hidden units; n_output: output units
    def __init__(self, n_feature, n_hidden, n_output):
        # required boilerplate
        super(Net, self).__init__()
        # input layer -> hidden layer
        self.input = torch.nn.Linear(n_feature, n_hidden)
        # hidden layer -> hidden layer
        self.hidden = torch.nn.Linear(n_hidden, n_hidden)
        # hidden layer -> output layer
        self.predict = torch.nn.Linear(n_hidden, n_output)

    # forward pass
    def forward(self, input):
        # input = torch.relu(self.hidden(input))  # piecewise-linear interpolation
        input = torch.tanh(self.input(input))  # nonlinear interpolation
        input = torch.tanh(self.hidden(input))
        input = self.predict(input)
        return input


def interpolation_node(x_inter_data, y_inter_data, N):
    # Select N equally spaced nodes from the sampled data.
    # Bug fix: index the arguments (not the global x_data) with integer indices.
    data_size, w = x_inter_data.size()
    idx = (1 + data_size / N * np.arange(N)).astype(int)
    x_inter_node = x_inter_data[idx]
    y_inter_node = y_inter_data[idx]
    return x_inter_node, y_inter_node


# Initialize the network
print('Network architecture:')
myNet = Net(1, 10, 1)
print(myNet)

# Set up the optimizer
# optimizer = torch.optim.SGD(myNet.parameters(), lr=0.05)
optimizer = torch.optim.Adam(myNet.parameters(), lr=0.05)
loss_func = nn.MSELoss()

best_loss, best_epoch = 1000, 0
epochs = 2000  # number of training epochs
x_inter_node, y_inter_node = interpolation_node(x_data, y_data, 20)  # interpolation nodes
input = x_inter_node
label = y_inter_node
Loss_list = []
print('Training started:')
for epoch in range(epochs + 1):
    output = myNet(input)
    loss = loss_func(output, label)  # compute the error
    Loss_list.append(loss.item() / len(x_data))
    optimizer.zero_grad()  # clear gradients
    loss.backward()        # backpropagate the error
    optimizer.step()       # update parameters
    torch.save(myNet.state_dict(), 'new_best_interpolation.mdl')
    # log the error
    if epoch % 100 == 0:
        print('epoch:', epoch, 'loss:', loss.item())
        # record the best epoch (checked only over the last fifth of training)
        if epoch > int(4 * epochs / 5):
            if loss.item() < best_loss:
                best_loss = loss.item()
                best_epoch = epoch
                torch.save(myNet.state_dict(), 'new_best_interpolation.mdl')

    # continuous plotting
    if epoch % 10 == 0:
        plt.ion()  # interactive mode on
        plt.close('all')
        myNet.load_state_dict(torch.load('new_best_interpolation.mdl'))
        with torch.no_grad():
            pred = myNet(x_data)
        plt.plot(x_data, y_data, label='real')
        plt.plot(x_data, pred, label='predict')
        plt.title("Training times:" + str(epoch))
        # scatter plot of the interpolation nodes
        colors2 = '#DC143C'    # node color
        area = np.pi * 2 ** 2  # marker area
        plt.scatter(input, label, s=area, c=colors2, alpha=0.9, label='node')
        plt.legend(loc="upper right")
        plt.savefig('./Training_process/Interpolation_' + str(epoch) + '.png')
        if epoch == best_epoch:
            plt.savefig('Best_interpolation.png')  # filename typo fixed
        # plt.show()
        # plt.pause(0.01)
print('=' * 55)
print('Training finished'.center(55))
print('-' * 55)
print('Best epoch:', best_epoch, 'best loss:', best_loss)
plt.close('all')
plt.ioff()  # interactive mode off
plt.title('Error curve')
plt.xlabel('epoch')
plt.ylabel('loss')
plt.plot(range(0, epochs + 1), Loss_list, label='Loss')
plt.savefig('Error_curve_interpolation.png')
# plt.show()
print('Saved the best interpolation plot; see "Best_interpolation.png"')
print('Saved the error curve; see "Error_curve_interpolation.png"')
print('-' * 55)
print('Rendering an animation of the training process')
image2gif.image2gif('Interpolation')
print('=' * 55)

Fitting

Problem statement

Add Gaussian noise to $f(x)=\sin(x)-0.5$, $x\in(-2\pi,2\pi)$, and fit all 300 noisy samples with an ANN. The numerical results are shown below:

[Figure: ANN fitting result]
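Unlike interpolation, which trains on the 20 nodes only, fitting minimizes the mean-squared error over all 300 noisy samples, so the network smooths the noise instead of passing through every point. In symbols, the objective that nn.MSELoss computes in the script below is

$\min_\theta \frac{1}{N}\sum_{i=1}^{N}\big(u_\theta(x_i) - y_i\big)^2, \qquad N = 300, \quad y_i = \sin(x_i) - 0.5 + \varepsilon_i, \quad \varepsilon_i \sim \mathcal{N}(0,\,0.05^2),$

where $u_\theta$ denotes the network with parameters $\theta$.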

Training process
[Figure: animation of the fitting training process]

Source code

# Author:      Leo 刘
# Environment: macOS Big Sur
# Date:        2021/8/1 11:43 AM
# Email:       517093978@qq.com
# @Software:   PyCharm

import torch
import torch.nn as nn
import numpy as np
import matplotlib.pyplot as plt
from Dynamic_drawing import image2gif

# Build the input set: 300 points on (-2*pi, 2*pi) with Gaussian noise
x_data = np.linspace(-2 * np.pi, 2 * np.pi, 300)[:, np.newaxis]
noise = np.random.normal(0, 0.05, x_data.shape)
y_data = np.sin(x_data) - 0.5 + noise
x_data = torch.tensor(x_data).float()
y_data = torch.tensor(y_data).float()


# Define the neural network
class Net(torch.nn.Module):
    # n_feature: input features; n_hidden: hidden units; n_output: output units
    def __init__(self, n_feature, n_hidden, n_output):
        # required boilerplate
        super(Net, self).__init__()
        # input layer -> hidden layer
        self.input = torch.nn.Linear(n_feature, n_hidden)
        # hidden layer -> hidden layer
        self.hidden = torch.nn.Linear(n_hidden, n_hidden)
        # hidden layer -> output layer
        self.predict = torch.nn.Linear(n_hidden, n_output)

    # forward pass
    def forward(self, input):
        # input = torch.relu(self.hidden(input))  # piecewise-linear fitting
        input = torch.tanh(self.input(input))  # nonlinear fitting
        input = torch.tanh(self.hidden(input))
        input = self.predict(input)
        return input


# Initialize the network
myNet = Net(1, 10, 1)
print('Network architecture:')
print(myNet)

# Set up the optimizer
# optimizer = torch.optim.SGD(myNet.parameters(), lr=0.05)
optimizer = torch.optim.Adam(myNet.parameters(), lr=0.05)
loss_func = nn.MSELoss()

best_loss, best_epoch = 1000, 0
epochs = 2000  # number of training epochs
input = x_data
label = y_data
Loss_list = []
print('Training started:')
for epoch in range(epochs + 1):
    output = myNet(input)
    loss = loss_func(output, label)  # compute the error
    Loss_list.append(loss.item() / len(x_data))
    optimizer.zero_grad()  # clear gradients
    loss.backward()        # backpropagate the error
    optimizer.step()       # update parameters
    torch.save(myNet.state_dict(), 'new_best_fitting.mdl')
    # log the error
    if epoch % 100 == 0:
        print('epoch:', epoch, 'loss:', loss.item())
        # record the best epoch (checked only over the last fifth of training)
        if epoch > int(4 * epochs / 5):
            if loss.item() < best_loss:
                best_loss = loss.item()
                best_epoch = epoch
                torch.save(myNet.state_dict(), 'new_best_fitting.mdl')

    # continuous plotting
    if epoch % 10 == 0:
        plt.ion()  # interactive mode on
        plt.close('all')
        myNet.load_state_dict(torch.load('new_best_fitting.mdl'))
        with torch.no_grad():
            pred = myNet(x_data)
        plt.plot(x_data, y_data, label='real')
        plt.plot(x_data, pred, label='predict')
        plt.title("Training times:" + str(epoch))
        plt.legend()
        plt.savefig('./Training_process/Fitting_' + str(epoch) + '.png')
        if epoch == best_epoch:
            plt.savefig('Best_fitting.png')
        # plt.show()
        # plt.pause(0.01)
print('=' * 55)
print('Training finished'.center(55))
print('-' * 55)
print('Best epoch:', best_epoch, 'best loss:', best_loss)
plt.close('all')
plt.ioff()  # interactive mode off
plt.title('Error curve')
plt.xlabel('epoch')
plt.ylabel('loss')
plt.plot(range(0, epochs + 1), Loss_list, label='Loss')
plt.savefig('Error_curve_Fitting.png')
# plt.show()
print('Saved the best fitting plot; see "Best_fitting.png"')
print('Saved the error curve; see "Error_curve_Fitting.png"')
print('-' * 55)
print('Rendering an animation of the training process')
image2gif.image2gif('Fitting')
print('=' * 55)

Ordinary differential equations

Problem statement

Consider the following ODE, written as a residual $f$ that must vanish:

$f(x)=\dfrac{d\phi}{dx} + \left(x + \dfrac{1+3x^2}{1+x+x^3}\right)\phi - x^3 - 2x - x^2\,\dfrac{1+3x^2}{1+x+x^3} = 0, \quad x\in (0,2),$

with the initial condition $\phi(0)=1$ (this is what the (output[0] - 1) ** 2 penalty in the code enforces).

The ANN solution is compared below with the exact solution $\phi(x)=\dfrac{e^{-x^2/2}}{1+x+x^3}+x^2$:

[Figure: ANN solution vs. exact solution of the ODE]
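As a sanity check, the exact solution can be verified symbolically. A minimal sketch using sympy (an addition of this write-up, not part of the original script):

import sympy as sp

x = sp.symbols('x')
phi = sp.exp(-x**2 / 2) / (1 + x + x**3) + x**2  # claimed exact solution
p = x + (1 + 3 * x**2) / (1 + x + x**3)          # coefficient of phi in the ODE
g = x**3 + 2 * x + x**2 * (1 + 3 * x**2) / (1 + x + x**3)  # right-hand side

# residual of phi' + p*phi = g; simplifies to 0, so the ODE holds
print(sp.simplify(sp.diff(phi, x) + p * phi - g))
# initial condition: phi(0) = 1
print(phi.subs(x, 0))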

Training process
[Figure: animation of the ODE training process]

Source code

# Author:      Leo 刘
# Environment: macOS Big Sur
# Date:        2021/5/21 3:07 PM
# Email:       517093978@qq.com
# @Software:   PyCharm

import matplotlib.pyplot as plt
import numpy as np
import torch
from torch import autograd
from Dynamic_drawing import image2gif

# Build the input set
# 100 points on the interval [0, 2]
x_data = np.linspace(0, 2, 100, endpoint=True)[:, np.newaxis]
# the known analytical solution, used for comparison
y_data = np.exp(-0.5 * x_data ** 2) / (1 + x_data + x_data ** 3) + x_data ** 2
x_data = torch.tensor(x_data).float()
y_data = torch.tensor(y_data).float()
# print(x_data.size())


# Define the neural network
class Net(torch.nn.Module):
    # n_feature: input features; n_hidden: hidden units; n_output: output units
    def __init__(self, n_feature, n_hidden, n_output):
        # required boilerplate
        super(Net, self).__init__()
        # input layer -> hidden layer
        self.input = torch.nn.Linear(n_feature, n_hidden)
        # hidden layer -> hidden layer
        self.hidden = torch.nn.Linear(n_hidden, n_hidden)
        # hidden layer -> output layer
        self.predict = torch.nn.Linear(n_hidden, n_output)

    # forward pass
    def forward(self, input):
        # input = torch.relu(self.hidden(input))  # piecewise-linear approximation
        input = torch.tanh(self.input(input))  # smooth nonlinear approximation
        input = torch.tanh(self.hidden(input))
        input = self.predict(input)
        return input


# Initialize the network
print('Network architecture:')
myNet = Net(1, 10, 1)
print(myNet)
# Set up the optimizer
optimizer = torch.optim.Adam(myNet.parameters(), lr=0.05)

best_loss, best_epoch = 100, 0
epochs = 2000  # number of training epochs
input = x_data
# track gradients with respect to the input (needed for d(phi)/dx)
input.requires_grad_()
Loss_list = []
print('Training started:')
for epoch in range(epochs + 1):
    output = myNet(input)
    # d(phi)/dx via automatic differentiation
    grads = autograd.grad(outputs=output, inputs=input,
                          grad_outputs=torch.ones_like(output),
                          create_graph=True, retain_graph=True, only_inputs=True)[0]
    lq = (1 + 3 * (input ** 2)) / (1 + input + input ** 3)

    t_loss = (grads + (input + lq) * output - input ** 3 - 2 * input - lq * input * input) ** 2  # squared ODE residual
    loss_func = torch.mean(t_loss) + (output[0] - 1) ** 2  # mean residual plus the initial-condition penalty phi(0) = 1
    loss = loss_func  # compute the error
    Loss_list.append(loss.item() / len(x_data))
    optimizer.zero_grad()  # clear gradients
    loss.backward()        # backpropagate the error
    optimizer.step()       # update parameters
    torch.save(myNet.state_dict(), 'new_best_ODE.mdl')
    # log the error
    if epoch % 100 == 0:
        print('epoch:', epoch, 'loss:', loss.item())
        # record the best epoch (checked only over the last fifth of training)
        if epoch > int(4 * epochs / 5):
            if loss.item() < best_loss:
                best_loss = loss.item()
                best_epoch = epoch
                torch.save(myNet.state_dict(), 'new_best_ODE.mdl')
    # continuous plotting (a frame is saved every epoch for the animation)
    if epoch % 1 == 0:
        plt.ion()  # interactive mode on
        plt.close('all')
        myNet.load_state_dict(torch.load('new_best_ODE.mdl'))
        with torch.no_grad():
            x_plot = x_data.detach()  # detach so matplotlib can convert to numpy
            pred = myNet(x_plot)
        plt.plot(x_plot, y_data, label='real')
        plt.plot(x_plot, pred, label='predict')
        plt.title("Training times:" + str(epoch))
        plt.legend()
        plt.savefig('./Training_process/ODE_' + str(epoch) + '.png')
        if epoch == best_epoch:
            plt.savefig('Best_ODE.png')
        # plt.show()
        # plt.pause(0.01)
print('=' * 55)
print('Training finished'.center(55))
print('-' * 55)
print('Best epoch:', best_epoch, 'best loss:', best_loss)
plt.close('all')
plt.ioff()  # interactive mode off
plt.title('Error curve')
plt.xlabel('epoch')
plt.ylabel('loss')
plt.plot(range(0, epochs + 1), Loss_list, label='Loss')
plt.savefig('Error_curve_ODE.png')
# plt.show()
print('Saved the best solution plot; see "Best_ODE.png"')
print('Saved the error curve; see "Error_curve_ODE.png"')
print('-' * 55)
print('Rendering an animation of the training process')
image2gif.image2gif('ODE')
print('=' * 55)

Partial differential equations

Problem statement

Consider the Poisson equation:

$-\Delta u(x) = 1, \quad x\in \Omega,$
$u(x) = 0, \quad x\in \partial \Omega,$

where $\Omega = (-1,1) \times (-1,1) \setminus [0,1) \times \{0\}$, i.e. the square $(-1,1)^2$ with a slit along the non-negative $x$-axis. The numerical results computed with the Deep Ritz Method (DRM) are shown below:

[Figure: DRM solution of the Poisson problem]
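The Deep Ritz Method replaces the PDE by an equivalent minimization problem. Up to constant sampling factors, the loss in the script below implements the variational energy, with the Dirichlet boundary condition enforced through a penalty whose weight $\beta = 500$ matches the loss = loss_r + 500 * loss_b line:

$\min_{u_\theta}\ \int_\Omega \left(\frac{1}{2}\,|\nabla u_\theta(x)|^2 - u_\theta(x)\right) dx \;+\; \beta \int_{\partial\Omega} u_\theta(x)^2\, ds.$

Both integrals are approximated by Monte Carlo averages over the points returned by get_interior_points and get_boundary_points.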

Training process
[Figure: animation of the DRM training process]

Source code

# Author:      Leo 刘
# Environment: macOS Big Sur
# Date:        2021/8/1 10:45 PM
# Email:       517093978@qq.com
# @Software:   PyCharm
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch import optim, autograd
from matplotlib import pyplot as plt
from mpl_toolkits.axes_grid1 import make_axes_locatable
from Dynamic_drawing import image2gif

# Try to solve the Poisson equation:
'''  Solve the following PDE
-\Delta u(x) = 1, x\in \Omega,
u(x) = 0, x\in \partial \Omega  
\Omega = (-1,1) * (-1,1) \ [0,1) *{0}
'''


# Activation function proposed in the Deep Ritz paper (ReLU^3); it produced
# outliers here, so tanh is used instead
class PowerReLU(nn.Module):
    """
    Implements sigma(x)^power:
    applies a power of the rectified linear unit element-wise.

    NOTE: inplace may not be working.
    Can set inplace for inplace operation if desired.
    BUT I don't think it is working now.

    INPUT:
        x -- size (N,*) tensor where * is any number of additional
             dimensions
    OUTPUT:
        y -- size (N,*)
    """

    def __init__(self, inplace=False, power=3):
        super(PowerReLU, self).__init__()
        self.inplace = inplace
        self.power = power

    def forward(self, input):
        y = F.relu(input, inplace=self.inplace)
        return torch.pow(y, self.power)


# Define a residual block
class Block(nn.Module):
    """
    Implementation of the block used in the Deep Ritz
    Paper

    Parameters:
    in_N  -- dimension of the input
    width -- number of nodes in the interior middle layer
    out_N -- dimension of the output
    phi   -- activation function used
    """

    def __init__(self, in_N, width, out_N, phi=PowerReLU()):
        super(Block, self).__init__()
        # create the necessary linear layers
        self.L1 = nn.Linear(in_N, width)
        self.L2 = nn.Linear(width, out_N)
        # note: tanh is used in place of the phi argument,
        # since PowerReLU produced outliers (see comment above)
        self.phi = nn.Tanh()

    def forward(self, x):
        # two activated linear layers plus a skip connection
        return self.phi(self.L2(self.phi(self.L1(x)))) + x


# Assemble the residual blocks into a complete residual network
class drrnn(nn.Module):
    """
    drrnn -- Deep Ritz Residual Neural Network

    Implements a network with the architecture used in the
    deep ritz method paper

    Parameters:
        in_N  -- input dimension
        out_N -- output dimension
        m     -- width of layers that form blocks
        depth -- number of blocks to be stacked
        phi   -- the activation function
    """

    def __init__(self, in_N, m, out_N, depth=4, phi=PowerReLU()):
        super(drrnn, self).__init__()
        # set parameters
        self.in_N = in_N
        self.m = m
        self.out_N = out_N
        self.depth = depth
        self.phi = nn.Tanh()
        # list for holding all the blocks
        self.stack = nn.ModuleList()

        # add first layer to list
        self.stack.append(nn.Linear(in_N, m))

        # add middle blocks to list
        for i in range(depth):
            self.stack.append(Block(m, m, m))

        # add output linear layer
        self.stack.append(nn.Linear(m, out_N))

    def forward(self, x):
        # pass through the input layer, the residual blocks, and the output layer
        for i in range(len(self.stack)):
            x = self.stack[i](x)
        return x


# Interior sampling
def get_interior_points(N=512, d=2):
    """
    randomly sample N points from interior of [-1,1]^d
    """
    return torch.rand(N, d) * 2 - 1


# Boundary sampling
def get_boundary_points(N=32):
    """
    randomly sample points from the boundary (N per edge, including the slit)
    """
    index = torch.rand(N, 1)             # uniform on [0, 1)
    index1 = torch.rand(N, 1) * 2 - 1    # uniform on [-1, 1)
    xb1 = torch.cat((index, torch.zeros_like(index)), dim=1)       # slit [0,1) x {0}
    xb2 = torch.cat((index1, torch.ones_like(index1)), dim=1)      # top edge y = 1
    xb3 = torch.cat((index1, torch.full_like(index1, -1)), dim=1)  # bottom edge y = -1
    xb4 = torch.cat((torch.ones_like(index1), index1), dim=1)      # right edge x = 1
    xb5 = torch.cat((torch.full_like(index1, -1), index1), dim=1)  # left edge x = -1
    xb = torch.cat((xb1, xb2, xb3, xb4, xb5), dim=0)

    return xb


# Initialize the network parameters (Xavier normal)
def weights_init(m):
    if isinstance(m, (nn.Conv2d, nn.Linear)):
        nn.init.xavier_normal_(m.weight)
        nn.init.constant_(m.bias, 0.0)


epochs = 50000  # number of training epochs
in_N = 2
m = 10
out_N = 1

# print(torch.cuda.is_available())
# prefer the GPU when available (bug fix: the original always selected the CPU)
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
# Instantiate the residual network model
model = drrnn(in_N, m, out_N).to(device)
# Apply the weights_init initialization to submodules
model.apply(weights_init)
# Adam optimizer with initial learning rate 3e-3
optimizer = optim.Adam(model.parameters(), lr=3e-3)
print('Network architecture:')
print(model)

best_loss, best_epoch = 1000, 0
Loss_list = []
# Start training
print('Training started:')
for epoch in range(epochs + 1):

    # sample the training points
    xr = get_interior_points()
    xb = get_boundary_points()

    xr = xr.to(device)
    xb = xb.to(device)
    # track gradients with respect to the interior points xr
    xr.requires_grad_()
    output_r = model(xr)
    output_b = model(xb)
    # gradient of the network output with respect to xr
    grads = autograd.grad(outputs=output_r, inputs=xr,
                          grad_outputs=torch.ones_like(output_r),
                          create_graph=True, retain_graph=True, only_inputs=True)[0]
    # loss_r: Ritz variational energy; loss_b: boundary penalty; loss: total loss
    # (keepdim=True keeps the shape (N, 1) to match output_r and avoid broadcasting)
    loss_r = 0.5 * torch.sum(torch.pow(grads, 2), dim=1, keepdim=True) - output_r
    loss_r = torch.mean(loss_r)
    loss_b = torch.mean(torch.pow(output_b, 2))
    loss = loss_r + 500 * loss_b
    Loss_list.append(loss.item() / (len(xr) + len(xb)))
    optimizer.zero_grad()  # clear gradients
    loss.backward()        # backpropagate the error
    optimizer.step()       # update the network parameters (weights and biases)
    torch.save(model.state_dict(), 'new_best_Deep_Ritz.mdl')  # save the model
    if epoch % 100 == 0:
        print('epoch:', epoch, 'loss:', loss.item(), 'loss_r:', loss_r.item(), 'loss_b:',
              (500 * loss_b).item())
        # record the best epoch (the energy can be negative, hence the abs)
        if epoch > int(4 * epochs / 5):
            if abs(loss.item()) < best_loss:
                best_loss = abs(loss.item())
                best_epoch = epoch
                torch.save(model.state_dict(), 'new_best_Deep_Ritz.mdl')
    # continuous plotting
    if epoch % 500 == 0:
        plt.ion()  # interactive mode on
        plt.close('all')
        model.load_state_dict(torch.load('new_best_Deep_Ritz.mdl'))
        with torch.no_grad():
            x1 = torch.linspace(-1, 1, 1001)
            x2 = torch.linspace(-1, 1, 1001)
            X, Y = torch.meshgrid(x1, x2)
            # evaluation grid as (x, y) pairs (the original built it from Y and Y.T)
            Z = torch.stack((X.flatten(), Y.flatten()), dim=1)
            Z = Z.to(device)
            pred = model(Z)

        plt.figure()
        pred = pred.cpu().numpy()
        # transpose so that x runs horizontally and y vertically
        pred = pred.reshape(1001, 1001).T
        ax = plt.subplot(1, 1, 1)
        h = plt.imshow(pred, interpolation='nearest', cmap='rainbow',
                       extent=[-1, 1, -1, 1],
                       origin='lower', aspect='auto')

        plt.title("Training times:" + str(epoch))
        divider = make_axes_locatable(ax)
        cax = divider.append_axes("right", size="5%", pad=0.05)
        plt.colorbar(h, cax=cax)
        plt.savefig('./Training_process/Deep_Ritz_' + str(epoch) + '.png')
        if epoch == best_epoch:
            plt.savefig('Best_Deep_Ritz.png')
        # plt.show()
        # plt.pause(0.02)

print('=' * 55)
print('Training finished'.center(55))
print('-' * 55)
print('Best epoch:', best_epoch, 'best loss:', best_loss)
plt.close('all')
plt.ioff()  # interactive mode off
plt.title('Error curve')
plt.xlabel('epoch')
plt.ylabel('loss')
plt.plot(range(0, epochs + 1), Loss_list, label='Loss')
plt.savefig('Error_curve_Deep_Ritz.png')
# plt.show()
print('Saved the best solution plot; see "Best_Deep_Ritz.png"')
print('Saved the error curve; see "Error_curve_Deep_Ritz.png"')
print('-' * 55)
print('Rendering an animation of the training process')
image2gif.image2gif('Deep_Ritz')
print('=' * 55)

Open your Python!

