Baidu PaddlePaddle 7-Day Image Segmentation Camp: Course Introduction (with a UNet Model Code Example)

Last week I took one of Baidu PaddlePaddle's courses and found it genuinely useful. I mainly work on detection tasks, and my original motivation for studying a segmentation course was simply to get to know the field, but to my surprise the course completely won me over; I got a great deal out of it and highly recommend it! From beginner to proficient, seven days really is enough. If you are interested, here is the link: https://aistudio.baidu.com/aistudio/course/introduce/1767. Below is the official course introduction:

1. The instructors have years of combined theoretical and hands-on experience: they have published multiple papers at top conferences such as CVPR and ECCV, earned top results in several competitions, and serve as reviewers for top conferences. They are seasoned experts in image segmentation.

2. From classic algorithms to the research frontier, from technical details to the complete pipeline: semantic segmentation, instance segmentation, and panoptic segmentation, tackled one by one.

3. Step-by-step theory plus live line-by-line coding, guiding you to implement your own models from scratch!

Syllabus

DAY 1 (October 19)
1. An overview of image segmentation
2. A first look at semantic segmentation
3. Environment setup and a live demo of PaddlePaddle's dynamic graph mode
4. Data formats and preprocessing for semantic segmentation

DAY 2 (October 20)
1. The FCN (fully convolutional network) architecture in detail
2. Hands-on upsampling operations in PaddlePaddle
3. Implementing FCN in PaddlePaddle
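
As a taste of the Day 2 upsampling topic, here is a minimal NumPy sketch (my own illustration, not course code) of nearest-neighbor upsampling, the simplest way to grow a feature map before moving to learnable methods such as transposed convolution:

```python
import numpy as np

def nearest_upsample(x, scale=2):
    """Nearest-neighbor upsampling of an (H, W) feature map by an integer scale."""
    # repeat each row `scale` times, then each column `scale` times
    return np.repeat(np.repeat(x, scale, axis=0), scale, axis=1)

x = np.array([[1, 2],
              [3, 4]])
print(nearest_upsample(x, 2))
# each input pixel becomes a scale x scale block in the output
```

Transposed convolution (used in the UNet decoder below) does the same spatial growth but with learned weights instead of simple repetition.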

DAY 3 (October 21)
1. The U-Net and PSPNet models in detail
2. Implementing UNet/PSPNet in PaddlePaddle
3. Implementing DilatedResnet in PaddlePaddle
4. Implementing losses and metrics for segmentation networks
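
Day 3's metric topic can be illustrated with a tiny NumPy sketch of mean IoU (my own minimal version, not the course implementation), computed per class over predicted and ground-truth label maps:

```python
import numpy as np

def mean_iou(pred, gt, num_classes):
    """Mean intersection-over-union over all classes present in pred or gt."""
    ious = []
    for c in range(num_classes):
        inter = np.sum((pred == c) & (gt == c))
        union = np.sum((pred == c) | (gt == c))
        if union > 0:                      # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))

gt   = np.array([[0, 0], [1, 1]])
pred = np.array([[0, 1], [1, 1]])
# class 0: inter 1 / union 2 = 0.5; class 1: inter 2 / union 3 = 2/3
print(mean_iou(pred, gt, num_classes=2))   # ~0.5833
```

Production implementations usually accumulate a confusion matrix across the whole validation set first, but the per-class intersection/union idea is the same.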

DAY 4 (October 22)
1. Dilated convolution: principles and details
2. The ASPP module explained
3. The DeepLab series in detail
4. Implementing DeepLabV3/ASPP/MultiGrid
5. Implementing losses and metrics for segmentation networks
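
The key fact behind Day 4's dilated (atrous) convolution is that a kernel of size k with dilation rate d covers an effective extent of k + (k-1)(d-1) pixels without adding any parameters. A quick sketch of that arithmetic (my own illustration):

```python
def effective_kernel_size(k, d):
    """Spatial extent covered by a k x k kernel with dilation rate d."""
    return k + (k - 1) * (d - 1)

# a 3x3 kernel at dilation rates typical of ASPP in DeepLab-style models
for d in (1, 6, 12, 18):
    print(d, effective_kernel_size(3, d))
# rate 1 -> 3, rate 6 -> 13, rate 12 -> 25, rate 18 -> 37
```

This is why ASPP can mix several dilation rates in parallel: each branch sees a different context size at the same parameter cost.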

DAY 5 (October 23)
1. A deep dive into GCNs (graph convolutional networks)
2. Graph-based segmentation methods in detail (GloRe, GCU, GINet)
3. A brief walkthrough of GCN code
4. Implementing GloRe on Pascal Context

DAY 6 (October 24)
1. An overview of instance and panoptic segmentation
2. Instance segmentation: Mask R-CNN and SOLO
3. Panoptic segmentation: PanopticFPN and UPSNet

DAY 7 (October 25)
1. An introduction to mainstream segmentation datasets
2. A discussion of recent research progress
3. Course wrap-up and Q&A

Below is the UNet model code, which instructor Zhu, a true "hand-coding god", walked us through line by line!

# Note: this code uses the Paddle 1.x "fluid" dygraph API taught in the course;
# Paddle 2.x replaces it with paddle.nn.
import numpy as np
import paddle
import paddle.fluid as fluid
from paddle.fluid.dygraph import to_variable
from paddle.fluid.dygraph import Layer
from paddle.fluid.dygraph import Conv2D
from paddle.fluid.dygraph import BatchNorm
from paddle.fluid.dygraph import Pool2D
from paddle.fluid.dygraph import Conv2DTranspose


class Encoder(Layer):
    def __init__(self, num_channels, num_filters):
        super(Encoder, self).__init__()

        # return features before and after pool
        self.conv1 = Conv2D(num_channels, num_filters, filter_size=3, stride=1, padding=1)
        self.bn1 = BatchNorm(num_filters, act='relu')

        self.conv2 = Conv2D(num_filters, num_filters, filter_size=3, stride=1, padding=1)
        self.bn2 = BatchNorm(num_filters, act='relu')

        self.pool = Pool2D(pool_size=2, pool_stride=2, pool_type='max', ceil_mode=True)


    def forward(self, inputs):
        # conv -> bn -> conv -> bn, then pool; return both pre- and post-pool features
        x = self.conv1(inputs)
        x = self.bn1(x)
        x = self.conv2(x)
        x = self.bn2(x)
        x_pooled = self.pool(x)

        return x, x_pooled


class Decoder(Layer):
    def __init__(self, num_channels, num_filters):
        super(Decoder, self).__init__()

        self.up = Conv2DTranspose(num_channels=num_channels, 
                                  num_filters=num_filters, 
                                  filter_size=2,
                                  stride=2)
        self.conv1 = Conv2D(num_channels,
                            num_filters, 
                            filter_size=3, 
                            stride=1, 
                            padding=1)
        self.bn1 = BatchNorm(num_filters, act='relu')

        self.conv2 = Conv2D(num_filters, 
                            num_filters, 
                            filter_size=3, 
                            stride=1, 
                            padding=1)
        self.bn2 = BatchNorm(num_filters, act='relu')

    def forward(self, inputs_prev, inputs):

        # upsample, pad to the skip feature's spatial size, concat along channels,
        # then two 3x3 conv + BN(ReLU) blocks
        x = self.up(inputs)
        h_diff = (inputs_prev.shape[2] - x.shape[2])
        w_diff = (inputs_prev.shape[3] - x.shape[3])
        # pad2d paddings are ordered [top, bottom, left, right]
        x = fluid.layers.pad2d(x, paddings=[h_diff//2, h_diff - h_diff//2, w_diff//2, w_diff - w_diff//2])
        x = fluid.layers.concat([inputs_prev, x], axis=1)
        x = self.conv1(x)
        x = self.bn1(x)
        x = self.conv2(x)
        x = self.bn2(x)

        return x


class UNet(Layer):
    def __init__(self, num_classes=59):
        super(UNet, self).__init__()

        self.down1 = Encoder(num_channels=3, num_filters=64)
        self.down2 = Encoder(num_channels=64, num_filters=128)
        self.down3 = Encoder(num_channels=128, num_filters=256)
        self.down4 = Encoder(num_channels=256, num_filters=512)

        # bottleneck; note these are 1x1 convs, whereas the original U-Net paper uses 3x3 here
        self.mid_conv1 = Conv2D(512, 1024, filter_size=1, padding=0, stride=1)
        self.mid_bn1 = BatchNorm(1024, act='relu')
        self.mid_conv2 = Conv2D(1024, 1024, filter_size=1, padding=0, stride=1)
        self.mid_bn2 = BatchNorm(1024, act='relu')

        self.up4 = Decoder(num_channels=1024, num_filters=512)
        self.up3 = Decoder(num_channels=512, num_filters=256)
        self.up2 = Decoder(num_channels=256, num_filters=128)
        self.up1 = Decoder(num_channels=128, num_filters=64)   

        self.last_conv = Conv2D(num_channels=64, num_filters=num_classes, filter_size=1)

    def forward(self, inputs):
        # encoder path; the prints trace feature shapes for debugging
        x1, x = self.down1(inputs)
        print(x1.shape, x.shape)
        x2, x = self.down2(x)
        print(x2.shape, x.shape)
        x3, x = self.down3(x)
        print(x3.shape, x.shape)
        x4, x = self.down4(x)
        print(x4.shape, x.shape)

        # middle layers
        x = self.mid_conv1(x)
        x = self.mid_bn1(x)
        x = self.mid_conv2(x)
        x = self.mid_bn2(x)

        print(x4.shape, x.shape)
        x = self.up4(x4, x)
        print(x3.shape, x.shape)
        x = self.up3(x3, x)
        print(x2.shape, x.shape)
        x = self.up2(x2, x)
        print(x1.shape, x.shape)
        x = self.up1(x1, x)
        print(x.shape)

        x = self.last_conv(x)

        return x


def main():
    # CUDAPlace(0) requires a GPU; use fluid.CPUPlace() to run on CPU
    with fluid.dygraph.guard(fluid.CUDAPlace(0)):
        model = UNet(num_classes=59)
        x_data = np.random.rand(1, 3, 123, 123).astype(np.float32)
        inputs = to_variable(x_data)
        pred = model(inputs)

        print(pred.shape)


if __name__ == "__main__":
    main()
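
To see why the Decoder needs its pad2d step, here is a pure-Python walk (my own sketch, mirroring the layer arithmetic above) through the spatial sizes for the 123x123 test input. Each Encoder halves the size with ceil rounding, and each Conv2DTranspose exactly doubles it, so on odd sizes the upsampled map and the stored skip feature can disagree by a pixel or two:

```python
import math

size = 123
skips = []
for _ in range(4):                 # four Encoder blocks
    skips.append(size)             # feature size saved before pooling
    size = math.ceil(size / 2)     # 2x2 max-pool, stride 2, ceil_mode=True

print("skip sizes:", skips, "bottleneck:", size)
# skip sizes: [123, 62, 31, 16] bottleneck: 8

for skip in reversed(skips):       # four Decoder blocks
    size *= 2                      # Conv2DTranspose(filter_size=2, stride=2)
    print(f"upsampled {size}, skip {skip}, diff {skip - size}")
    size = skip                    # pad2d adjusts to the skip's size before concat
```

A negative diff means pad2d is effectively asked to crop rather than pad; whether negative paddings are accepted depends on the Paddle version, so sizes that divide cleanly by 16 (the total downsampling factor) are the safe choice in practice.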

 
