model.py

Reference: https://zhuanlan.zhihu.com/p/29024978 (a tutorial I found quite good and recommend).

The structure of the neural network is written in this file, model.py.

This post uses two versions of the DnCNN code as examples; the first is:

https://github.com/SaoYan/DnCNN-PyTorch/blob/master/models.py

 

First, the architecture: the network has 17 layers in total and uses residual learning. (The architecture figure failed to insert; if you're interested, see the original paper.)

import torch
import torch.nn as nn

class DnCNN(nn.Module):
    def __init__(self, channels, num_of_layers=17):
        super(DnCNN, self).__init__()
        kernel_size = 3
        padding = 1
        features = 64
        layers = []
        layers.append(nn.Conv2d(in_channels=channels, out_channels=features, kernel_size=kernel_size, padding=padding, bias=False))  # note: channels here equals the number of channels of the input image
        layers.append(nn.ReLU(inplace=True))
        for _ in range(num_of_layers-2):
            layers.append(nn.Conv2d(in_channels=features, out_channels=features, kernel_size=kernel_size, padding=padding, bias=False))
            layers.append(nn.BatchNorm2d(features))
            layers.append(nn.ReLU(inplace=True))
        layers.append(nn.Conv2d(in_channels=features, out_channels=channels, kernel_size=kernel_size, padding=padding, bias=False))  # the final output channel count matches the input; this holds for basically all such networks
        self.dncnn = nn.Sequential(*layers)

    def forward(self, x):
        out = self.dncnn(x)
        return out
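
As a quick sanity check (a sketch of my own, not from the original repo), you can instantiate the model and push a dummy tensor through it to confirm the output shape matches the input:

import torch

model = DnCNN(channels=1, num_of_layers=17)   # grayscale input, 17 layers
x = torch.randn(4, 1, 40, 40)                 # hypothetical batch of 4 patches of 40x40
out = model(x)
print(out.shape)                              # torch.Size([4, 1, 40, 40]), same shape as the input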

To build a network, you start by writing a class, then def __init__(self, ...) with the quantities you pass in, then super(DnCNN, self).__init__(). For me these three lines are boilerplate; I write them at the start of every network, as in the skeleton below.
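
A minimal skeleton of those three boilerplate lines (the class name and the layer are placeholders of my own):

import torch.nn as nn

class MyNet(nn.Module):                   # 1) declare a class inheriting from nn.Module
    def __init__(self, in_channels):      # 2) define __init__ with the quantities you pass in
        super(MyNet, self).__init__()     # 3) call the parent constructor
        self.conv = nn.Conv2d(in_channels, 64, kernel_size=3, padding=1)

    def forward(self, x):
        return self.conv(x)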

Next come the network's parameters, such as the kernel size, the amount of padding, the number of features, and so on. Which parameters need to be defined up front? Take Conv2d, for example:

torch.nn.Conv2d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros')

The ones I wasn't clear about were groups and dilation.

The official docs explain:

  • dilation controls the spacing between the kernel points; also known as the à trous algorithm. It is harder to describe, but this link has a nice visualization of what dilation does.

  • groups controls the connections between inputs and outputs. in_channels and out_channels must both be divisible by groups. For example,

    • At groups=1, all inputs are convolved to all outputs.

    • At groups=2, the operation becomes equivalent to having two conv layers side by side, each seeing half the input channels, and producing half the output channels, and both subsequently concatenated.

    • At groups= in_channels, each input channel is convolved with its own set of filters, of size ⌊out_channels / in_channels⌋.

Chinese-language explanations: http://blog.may-workshop.com/?p=6676 and http://www.ituring.com.cn/article/468202. A small runnable demonstration follows below.
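
To make groups and dilation concrete, here is a small experiment of my own (the shapes in the comments are what PyTorch actually produces):

import torch
import torch.nn as nn

x = torch.randn(1, 4, 8, 8)   # 4 input channels

conv_g1 = nn.Conv2d(4, 8, kernel_size=3, padding=1, groups=1, bias=False)
conv_g2 = nn.Conv2d(4, 8, kernel_size=3, padding=1, groups=2, bias=False)
conv_dw = nn.Conv2d(4, 8, kernel_size=3, padding=1, groups=4, bias=False)

print(conv_g1.weight.shape)   # torch.Size([8, 4, 3, 3]): every filter sees all 4 input channels
print(conv_g2.weight.shape)   # torch.Size([8, 2, 3, 3]): two side-by-side convs, half the inputs each
print(conv_dw.weight.shape)   # torch.Size([8, 1, 3, 3]): depthwise, one input channel per filter

# dilation=2 spaces the 3x3 kernel points apart, enlarging the receptive
# field to 5x5; padding=2 then keeps the spatial size unchanged.
conv_dil = nn.Conv2d(4, 8, kernel_size=3, padding=2, dilation=2, bias=False)
print(conv_dil(x).shape)      # torch.Size([1, 8, 8, 8])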

In short, any parameter that a function call needs to operate on should be defined ahead of time.

Then simply append whatever layers you need, taking care that the channel counts of consecutive layers match, as the example below shows.
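
For instance (a deliberately broken sketch of my own), if consecutive channel counts don't line up, PyTorch raises an error on the first forward pass:

import torch
import torch.nn as nn

bad = nn.Sequential(
    nn.Conv2d(1, 64, kernel_size=3, padding=1),   # produces 64 channels...
    nn.Conv2d(32, 64, kernel_size=3, padding=1),  # ...but this layer expects 32
)
try:
    bad(torch.randn(1, 1, 8, 8))
except RuntimeError as e:
    print(e)  # reports that the input has 64 channels where 32 were expected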

The other version of DnCNN is adapted from the first one:

import torch
import torch.nn as nn
from torch.nn import init  # needed for the weight initialization below

class DnCNN(nn.Module):
    def __init__(self, depth=17, n_channels=64, image_channels=1, use_bnorm=True, kernel_size=3):
        super(DnCNN, self).__init__()
        # note: use_bnorm is accepted but not actually used in this excerpt
        kernel_size = 3
        padding = 1
        layers = []

        layers.append(nn.Conv2d(in_channels=image_channels, out_channels=n_channels, kernel_size=kernel_size, padding=padding, bias=True))
        layers.append(nn.ReLU(inplace=True))
        for _ in range(depth-2):
            layers.append(nn.Conv2d(in_channels=n_channels, out_channels=n_channels, kernel_size=kernel_size, padding=padding, bias=False))
            layers.append(nn.BatchNorm2d(n_channels, eps=0.0001, momentum=0.95))
            layers.append(nn.ReLU(inplace=True))
        layers.append(nn.Conv2d(in_channels=n_channels, out_channels=image_channels, kernel_size=kernel_size, padding=padding, bias=False))
        self.dncnn = nn.Sequential(*layers)
        self._initialize_weights()

    def forward(self, x):
        y = x
        out = self.dncnn(x)   # the layer stack predicts the noise residual
        return y - out        # subtract it from the input to get the denoised image

    def _initialize_weights(self):
        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                init.orthogonal_(m.weight)
                print('init weight')
                if m.bias is not None:
                    init.constant_(m.bias, 0)
            elif isinstance(m, nn.BatchNorm2d):
                init.constant_(m.weight, 1)
                init.constant_(m.bias, 0)
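
Note the forward pass: the layer stack predicts the noise, and y - out subtracts it from the input, so this version returns the denoised image directly. A small usage sketch of my own (the noise level 0.1 is made up):

import torch

model = DnCNN(depth=17, n_channels=64, image_channels=1)
model.eval()

clean = torch.rand(1, 1, 40, 40)                 # hypothetical clean patch in [0, 1]
noisy = clean + 0.1 * torch.randn_like(clean)    # additive Gaussian noise
with torch.no_grad():
    denoised = model(noisy)                      # forward() already subtracts the predicted noise
print(denoised.shape)                            # torch.Size([1, 1, 40, 40])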

You can see that this version adds a _initialize_weights step. Why initialize the weights explicitly?

A few explanations I found online: https://lcylmhlcy.github.io/2018/09/07/pytorch-init/

https://www.cnblogs.com/xiaojianliu/articles/9623546.html#_label0

https://www.cnblogs.com/marsggbo/p/7462682.html

In short, it's best to include this step when building a network; it generally leads to better results. An alternative initialization pattern is sketched below.
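
The version above uses orthogonal initialization inside the class; another common pattern (my sketch, using Kaiming initialization, which suits conv layers followed by ReLU) is to apply an init function to every submodule with model.apply:

import torch.nn as nn
from torch.nn import init

def weights_init(m):
    if isinstance(m, nn.Conv2d):
        init.kaiming_normal_(m.weight, nonlinearity='relu')
        if m.bias is not None:
            init.constant_(m.bias, 0)
    elif isinstance(m, nn.BatchNorm2d):
        init.constant_(m.weight, 1)
        init.constant_(m.bias, 0)

model = DnCNN(depth=17, n_channels=64, image_channels=1)
model.apply(weights_init)   # recursively visits every submodule (overriding the orthogonal init above)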

 

There are many versions of DnCNN; I prefer the style above because it is simple and clear. But there is also a more complex version, and I've found that many neural networks are written in that longer style. If you're interested, look into what difference the long and short styles make:

https://github.com/Liyong8490/DnCNN-pytorch/blob/master/models_DnCNN.py

With a rough grasp of these points, you can write your own model.py from a given network architecture.
