PINN Learning Notes (3.3)

This post documents the PhyCRNet model construction in my PINN study, focusing on the Loss part: the implementations of Conv2dDerivative and Conv1dDerivative, and how loss_generator computes the loss. It also recaps the data-processing pipeline, in which the data passes through the encoder, ConvLSTM, PixelShuffle, and the output layer.
PhyCRNet model construction is complete

$\rightarrow$ Train Loss

part5 Loss

1、Conv2dDerivative

import torch
import torch.nn as nn


class Conv2dDerivative(nn.Module):
    def __init__(self, DerFilter, resol, kernel_size=3, name=''):
        super(Conv2dDerivative, self).__init__()

        self.resol = resol  # constant in the finite difference (e.g. dx**2)
        self.name = name
        self.input_channels = 1
        self.output_channels = 1
        self.kernel_size = kernel_size

        self.padding = int((kernel_size - 1) / 2)
        # padding=0 here; self.padding is computed but not applied inside this module
        self.filter = nn.Conv2d(self.input_channels, self.output_channels, self.kernel_size, 
            1, padding=0, bias=False)

        # fixed gradient operator: the finite-difference stencil is frozen (not trained)
        self.filter.weight = nn.Parameter(torch.FloatTensor(DerFilter), requires_grad=False)

    def forward(self, input):
        derivative = self.filter(input)
        return derivative / self.resol

filter
filter = Conv2d(1, 1, 3x3, stride=1, padding=0, bias=False)

  • Passing the input through filter yields derivative.
  • The filter's weight is set explicitly to the finite-difference stencil DerFilter.
  • Wrapping the weight in nn.Parameter(..., requires_grad=False) registers it as a module parameter but freezes it, so the stencil is not updated during training.

derivative
derivative = self.filter(input)
The output of the Conv2d filter applied to the input.
resol
The constant in the finite-difference quotient, set per operator:

  • laplace_operator:(dx**2)
  • dx_operator:(dx*1)
  • dy_operator:(dy*1)

Finally, Conv2dDerivative returns $\frac{derivative}{resol}$.
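
A minimal usage sketch follows. The 3x3 five-point Laplacian stencil and the grid spacing dx below are illustrative assumptions (PhyCRNet itself may use different, higher-order stencils); the point is only how DerFilter and resol fit together.

# Minimal usage sketch; the stencil and dx are assumptions for illustration.
dx = 1.0 / 128                        # assumed grid spacing

# second-order five-point Laplacian stencil, shape (out_ch, in_ch, 3, 3)
laplace_filter = [[[[0.,  1., 0.],
                    [1., -4., 1.],
                    [0.,  1., 0.]]]]

laplace_op = Conv2dDerivative(
    DerFilter=laplace_filter,
    resol=(dx ** 2),                  # second derivative -> divide by dx**2
    kernel_size=3,
    name='laplace_operator')

u = torch.rand(1, 1, 64, 64)          # (batch, channel, height, width)
lap_u = laplace_op(u)                 # approximates u_xx + u_yy on the interior
print(lap_u.shape)                    # torch.Size([1, 1, 62, 62]) because padding=0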
2、Conv1dDerivative

class Conv1dDerivative(nn.Module):
    def __init__(self, DerFilter, resol, kernel_size=3, name=''):
        super(Conv1dDerivative, self).__init__()

        self.resol = resol  # constant in the finite difference
        self.name = name
        self.input_channels = 1
        self.output_channels = 1
        self.kernel_size = kernel_size

        self.padding = int((kernel_size - 1) / 2)
        # padding=0 here as well; this is the 1D counterpart of Conv2dDerivative
        self.filter = nn.Conv1d(self.input_channels, self.output_channels, self.kernel_size, 
            1, padding=0, bias=False)

        # fixed gradient operator: the finite-difference stencil is frozen (not trained)
        self.filter.weight = nn.Parameter(torch.FloatTensor(DerFilter), requires_grad=False)

    def forward(self, input):
        derivative = self.filter(input)
        return derivative / self.resol

Conv1dDerivative mirrors Conv2dDerivative, only with a 1D convolution instead of a 2D one; it again returns derivative / resol with a fixed, non-trainable stencil.
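
A minimal usage sketch, assuming a central-difference stencil [-1, 0, 1] along the time axis, a time step dt, and a tensor layout in which the spatial points are flattened into the batch dimension; these shapes and constants are illustrative assumptions, not taken from the PhyCRNet source.

# Minimal usage sketch; the stencil, dt, and tensor layout are assumptions.
dt = 0.002                            # assumed time step

dt_op = Conv1dDerivative(
    DerFilter=[[[-1., 0., 1.]]],      # central difference in time, shape (1, 1, 3)
    resol=(dt * 2),                   # (u_{t+1} - u_{t-1}) / (2*dt)
    kernel_size=3,
    name='partial_t')

# u_seq: (batch, channel=1, time), e.g. each spatial point as one "batch" row
u_seq = torch.rand(4096, 1, 10)       # 64*64 spatial points, 10 time steps
u_t = dt_op(u_seq)                    # time derivative at the 8 interior time steps
print(u_t.shape)                      # torch.Size([4096, 1, 8])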