Weekly Report (7.8-7.14)

Work this week:

  1. Reproduce the FCNVMB network code
  2. Study machine-learning fundamentals
  3. Study pooling layers and write up one blog post

Reproducing the FCNVMB Network Code

Like InversionNet, the FCNVMB network is split into an encoder and a decoder: both networks use convolutions to extract features and transposed convolutions to reconstruct the velocity model from those features. They differ in two ways. First, InversionNet compresses features with strided convolutions, while FCNVMB uses max-pooling layers. Second, InversionNet compresses the input down to a 512-dimensional vector, while FCNVMB compresses it into 1024 feature maps of size 25*19, so FCNVMB's encoder passes far more information to its decoder than InversionNet's does. In addition, FCNVMB uses four skip connections, which feed part of the feature information directly to the decoder and help prevent vanishing gradients.

FCNVMB's convolutional layers use the same 3*3 conv-BN-LeakyReLU block as InversionNet:

import torch
import torch.nn as nn

# Convolution block: 3*3 conv -> BatchNorm -> LeakyReLU
class ConvBlock(nn.Module):
    def __init__(self, in_channel, out_channel, kernel_size=3, stride=1, padding=1, negative_slope=0.2):
        super().__init__()
        conv = nn.Conv2d(in_channel, out_channel, kernel_size=kernel_size, stride=stride, padding=padding)
        bn = nn.BatchNorm2d(out_channel)
        active = nn.LeakyReLU(negative_slope, inplace=True)
        self.layers = nn.Sequential(conv, bn, active)
    def forward(self, inputs):
        outputs = self.layers(inputs)
        return outputs
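As a quick sanity check (re-stating the same conv-BN-LeakyReLU layers inline), a 3*3 convolution with stride 1 and padding 1 changes only the channel count and preserves the spatial size:

```python
import torch
import torch.nn as nn

# Same layers as ConvBlock(29, 64): 3*3 conv, stride 1, padding 1
block = nn.Sequential(
    nn.Conv2d(29, 64, kernel_size=3, stride=1, padding=1),
    nn.BatchNorm2d(64),
    nn.LeakyReLU(0.2, inplace=True),
)
x = torch.randn(1, 29, 400, 301)   # one 29-shot seismic gather
print(block(x).shape)              # torch.Size([1, 64, 400, 301])
```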

A block like this is only a low-level operation; two of them are assembled into the mid-level UnetConv2 module:

class UnetConv2(nn.Module):
    def __init__(self, in_channel, out_channel):
        super().__init__()
        convBlock1 = ConvBlock(in_channel, out_channel, padding=1)
        convBlock2 = ConvBlock(out_channel, out_channel, padding=1)
        self.layers = nn.Sequential(convBlock1, convBlock2)
    def forward(self, inputs):
        outputs = self.layers(inputs)
        return outputs

Both of the high-level modules use this mid-level module. UnetDown extracts features and downsamples:

class UnetDown(nn.Module):
    def __init__(self, in_channel, out_channel):
        super().__init__()
        unetConv2 = UnetConv2(in_channel, out_channel)
        maxpooling = nn.MaxPool2d((2,2), 2, ceil_mode=True)
        self.layers = nn.Sequential(unetConv2, maxpooling)
    def forward(self, inputs):
        outputs = self.layers(inputs)
        return outputs
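The effect of `ceil_mode=True` in the pooling layer above can be checked directly: odd spatial sizes round up (301 -> 151) instead of down:

```python
import torch
import torch.nn as nn

pool = nn.MaxPool2d((2, 2), 2, ceil_mode=True)   # same pooling as UnetDown
x = torch.randn(1, 64, 400, 301)                 # odd width, as in the seismic input
print(pool(x).shape)                             # torch.Size([1, 64, 200, 151])
```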

A note on the max-pooling parameters: the window is 2*2 with stride 2, and `ceil_mode=True` rounds odd spatial sizes up rather than down, so a dimension of 301 pools to 151 instead of 150. UnetUP does the reverse, reconstructing the image from features, i.e. upsampling:

class UnetUP(nn.Module):
    def __init__(self, in_channel, hidden_channel, out_channel):
        super().__init__()
        unetConv2 = UnetConv2(in_channel, hidden_channel)
        deconv = nn.ConvTranspose2d(hidden_channel, out_channel, (2,2), stride=2)
        self.layers = nn.Sequential(unetConv2, deconv)
    def forward(self, inputs):
        outputs = self.layers(inputs)
        return outputs
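The transposed convolution with a 2*2 kernel and stride 2 exactly doubles each spatial dimension, e.g. for the bottleneck shape that appears later in the network:

```python
import torch
import torch.nn as nn

deconv = nn.ConvTranspose2d(512, 256, (2, 2), stride=2)   # as in UnetUP
x = torch.randn(1, 512, 25, 19)                           # bottleneck feature maps
print(deconv(x).shape)                                    # torch.Size([1, 256, 50, 38])
```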

These modules are assembled into the full FCNVMB network. The data shapes change in fairly involved ways: besides the regular modules there are skip connections and an image-cropping step, so for clarity the detailed explanations are given in the code comments.

class FCNVMB(nn.Module):
    def __init__(self):
        super().__init__()
        # Input shape (400, 301, 29): 400 spatial positions, 301 time samples, 29 shots.
        # The actual tensor shape in the network is (batch_size, 29, 400, 301); the
        # (H, W, C) form is used in these comments for readability
        self.unetDown_1 = UnetDown(29, 64)
        # after downsampling: (200, 151, 64)
        self.unetDown_2 = UnetDown(64, 128)
        # after downsampling: (100, 76, 128)
        self.unetDown_3 = UnetDown(128, 256)
        # after downsampling: (50, 38, 256)
        self.unetDown_4 = UnetDown(256, 512)
        # after downsampling: (25, 19, 512)
        self.unetUP_1 = UnetUP(512, 1024, 512)
        # after upsampling: (50, 38, 512), concatenated along the channel dimension with
        # the padded skip connection [(25, 19, 512) -> (50, 38, 512)] from unetDown_4 = (50, 38, 1024)
        self.unetUP_2 = UnetUP(1024, 512, 256)
        # after upsampling: (100, 76, 256) + [(50, 38, 256) -> (100, 76, 256)] from unetDown_3 = (100, 76, 512)
        self.unetUP_3 = UnetUP(512, 256, 128)
        # after upsampling: (200, 152, 128) + [(100, 76, 128) -> (200, 152, 128)] from unetDown_2 = (200, 152, 256)
        self.unetUP_4 = UnetUP(256, 128, 64)
        # after upsampling: (400, 304, 64) + [(200, 151, 64) -> (400, 304, 64)] from unetDown_1 = (400, 304, 128)
        self.unetConv_1 = UnetConv2(128, 64)
        # after convolution: (400, 304, 64)

        # A (201, 301, 64) patch is then cropped from the output above, starting at (1, 1)

        self.conv = nn.Conv2d(64, 1, 1)
        # 1*1 convolution over the channel dimension, producing the (201, 301, 1) velocity model
    
    def __pad(self, inputs, shape_likes):
        # Example: between unetUP_1 and unetUP_2, unetUP_1 outputs (50, 38, 512) while the
        # skip-connection tensor is (25, 19, 512); the channel counts match but the spatial
        # sizes differ, so the skip tensor must be padded to the target shape
        offset1 = (shape_likes.shape[2] - inputs.shape[2])
        # ^ size difference along dim 2: the skip tensor is (batch_size, 512, 25, 19) and the
        # target is (batch_size, 512, 50, 38), so offset1 = 25
        offset2 = (shape_likes.shape[3] - inputs.shape[3])
        # ^ size difference along dim 3: offset2 = 19
        padding = [offset2 // 2, (offset2 + 1) // 2, offset1 // 2, (offset1 + 1) // 2]
        # ^ padding = [left, right, top, bottom]; for an odd offset, integer division puts one
        # extra row/column on the right/bottom; for an even offset both sides get the same amount
        outputs = nn.functional.pad(inputs, padding)
        return outputs
    
    def forward(self, inputs):
        output1 = self.unetDown_1(inputs)
        output2 = self.unetDown_2(output1)
        output3 = self.unetDown_3(output2)
        output4 = self.unetDown_4(output3)
        # shape here: (25, 19, 512)

        outputs = self.unetUP_1(output4)
        # upsample: (50, 38, 512)
        output4 = self.__pad(output4, outputs)
        # pad the skip-connection tensor output4 to the same spatial size
        outputs = torch.cat([outputs, output4], 1)
        # concatenate along the channel dimension: (50, 38, 1024)

        outputs = self.unetUP_2(outputs)
        # shape: (100, 76, 256)
        output3 = self.__pad(output3, outputs)
        # padded to the same spatial size
        outputs = torch.cat([outputs, output3], 1)
        # shape: (100, 76, 512)

        outputs = self.unetUP_3(outputs)
        # shape: (200, 152, 128)
        output2 = self.__pad(output2, outputs)
        # padded to the same spatial size
        outputs = torch.cat([outputs, output2], 1)
        # shape: (200, 152, 256)

        outputs = self.unetUP_4(outputs)
        # shape: (400, 304, 64)
        output1 = self.__pad(output1, outputs)
        # padded to the same spatial size
        outputs = torch.cat([outputs, output1], 1)
        # shape: (400, 304, 128)

        outputs = self.unetConv_1(outputs)
        # convolution; only the channel count changes: (400, 304, 64)

        outputs = nn.functional.pad(outputs, [-1, -2, -1, -198])
        # negative padding crops the image: width 304-1-2 = 301, height 400-1-198 = 201 -> (201, 301, 64)
        outputs = self.conv(outputs)
        # one final 1*1 convolution: (201, 301, 1)
        return outputs
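The `__pad` logic can be exercised standalone with `torch.nn.functional.pad`, padding the (25, 19) skip tensor up to the (50, 38) decoder output used between unetUP_1 and unetUP_2:

```python
import torch
import torch.nn.functional as F

skip = torch.randn(1, 512, 25, 19)     # skip-connection tensor from unetDown_4
target = torch.randn(1, 512, 50, 38)   # decoder output it must match

offset1 = target.shape[2] - skip.shape[2]      # 25
offset2 = target.shape[3] - skip.shape[3]      # 19
padding = [offset2 // 2, (offset2 + 1) // 2,   # left 9, right 10
           offset1 // 2, (offset1 + 1) // 2]   # top 12, bottom 13
padded = F.pad(skip, padding)
print(padded.shape)                            # torch.Size([1, 512, 50, 38])
```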
        

functional.pad

torch.nn.functional.pad is a PyTorch function for adding padding along the axes of a tensor.

torch.nn.functional.pad(input, pad, mode='constant', value=None) → Tensor

input: the input tensor.

pad: a list read in pairs, each pair giving the padding applied to the two sides of one dimension, starting from the last. For example, [1, 2] pads the left by 1 column and the right by 2; [1, 2, 3, 4] additionally pads the top by 3 rows and the bottom by 4, and so on. Values may be negative, which crops from that side instead of extending it.

mode: one of 'constant', 'reflect', 'replicate', 'circular'. 'constant' fills with a constant value; 'reflect' mirrors the data across the border, e.g. padding [0 1 2] by two on each side gives [(2 1) 0 1 2 (1 0)]; 'replicate' repeats the border value; 'circular' wraps the data around, e.g. [(1 2) 0 1 2 (0 1)].

value: the constant fill value; defaults to 0.
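Negative values in `pad` crop rather than extend, which is exactly how FCNVMB trims its (400, 304) output down to (201, 301):

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 64, 400, 304)
y = F.pad(x, [-1, -2, -1, -198])   # width: 304-1-2 = 301, height: 400-1-198 = 201
print(y.shape)                     # torch.Size([1, 64, 201, 301])
```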

Next steps:

  1. Continue studying the principles of seismic inversion and of deep learning.
  2. Build the SEG dataset and train the network