Several classes in PyTorch that commonly need to be overridden


As a beginner, the categories below are the ones I have run into so far in my studies; feel free to add any others I have missed.

1 The Dataset class

Sometimes we need to use our own dataset rather than one of the datasets bundled with PyTorch, and in that case we have to subclass Dataset. For the full walkthrough see the referenced post on rewriting the Dataset class; a few key points are excerpted below.

After we inherit from the Dataset class, we need to override the __len__ method, which returns the size of the dataset, and the __getitem__ method, which supports integer indexing from 0 up to len(self).

import numpy as np
import torch
from torch.autograd import Variable
from torch.utils.data import Dataset, DataLoader


class DealDataset(Dataset):
    """
    Downloading and initializing the data can both be done here.
    """
    def __init__(self):
        xy = np.loadtxt('../dataSet/diabetes.csv.gz', delimiter=',', dtype=np.float32)  # read the data with numpy
        self.x_data = torch.from_numpy(xy[:, 0:-1])
        self.y_data = torch.from_numpy(xy[:, [-1]])
        self.len = xy.shape[0]
    
    def __getitem__(self, index):
        return self.x_data[index], self.y_data[index]

    def __len__(self):
        return self.len

# Instantiate the class to get a Dataset-typed object; next we just pass it to DataLoader and we're done.
dealDataset = DealDataset()

train_loader2 = DataLoader(dataset=dealDataset,
                          batch_size=32,
                          shuffle=True)


for epoch in range(2):
    for i, data in enumerate(train_loader2):
        # read one batch from train_loader; each batch holds 32 samples
        inputs, labels = data

        # wrap the data in Variable (optional since PyTorch 0.4, where Variable was merged into Tensor)
        inputs, labels = Variable(inputs), Variable(labels)

        # this is where the model would run; we use print as a stand-in
        print("epoch:", epoch, "batch:", i, "inputs", inputs.data.size(), "labels", labels.data.size())

2 transform

Excerpted from the referenced post on custom transforms (自定义transform详解).

import random
import numpy as np
from PIL import Image


class AddPepperNoise(object):
    """Add salt-and-pepper noise.

    Args:
        snr (float): signal-to-noise ratio
        p (float): probability of applying the transform
    """

    def __init__(self, snr, p=0.9):
        assert isinstance(snr, float) and isinstance(p, float)    # 2020-07-26 fix: or --> and
        self.snr = snr
        self.p = p
 
    def __call__(self, img):
        """
        Args:
            img (PIL Image): PIL Image
        Returns:
            PIL Image: PIL image.
        """
        if random.uniform(0, 1) < self.p:      # apply with probability p
            img_ = np.array(img).copy()
            h, w, c = img_.shape
            signal_pct = self.snr              # signal-to-noise ratio; 0.9 means 90% of pixels stay as signal
            noise_pct = (1 - self.snr)         # noise fraction, e.g. 0.1
            mask = np.random.choice((0, 1, 2), size=(h, w, 1), p=[signal_pct, noise_pct/2., noise_pct/2.])
            mask = np.repeat(mask, c, axis=2)
            img_[mask == 1] = 255   # salt noise
            img_[mask == 2] = 0     # pepper noise
            return Image.fromarray(img_.astype('uint8')).convert('RGB')
        else:
            return img
Source: CSDN blogger /home/liupc, CC 4.0 BY-SA. Original post: https://blog.csdn.net/pengchengliu/article/details/108683509
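
Since AddPepperNoise implements __call__ on a PIL image, it can be chained with the built-in torchvision transforms. A minimal usage sketch (the resize size and probabilities below are made-up values, not from the original post):

from torchvision import transforms

train_transform = transforms.Compose([
    transforms.Resize((224, 224)),
    AddPepperNoise(0.9, p=0.5),   # the custom transform defined above
    transforms.ToTensor(),
])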

3 Network structures

This mainly covers custom pooling layers, activation functions, and so on; I will just paste the code here.

import torch
from torch import nn
import torch.nn.functional as F


# Why go to the trouble of writing Reshape and GlobalAvgPool2d as classes? Because we want to add them to
# nn.Sequential, and anything added there must be an instance of nn.Module (or a subclass of it).
class Reshape(nn.Module):
    def __init__(self, *args):
        super(Reshape, self).__init__()

    def forward(self, x):
        return x.view(x.shape[0], -1)


class GlobalAvgPool2d(nn.Module):
    # Global average pooling can be implemented by setting the pooling window to the input's height and width
    def __init__(self):
        super(GlobalAvgPool2d, self).__init__()

    def forward(self, x):
        return F.avg_pool2d(x, kernel_size=x.size()[2:])
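

# Quick sanity check of GlobalAvgPool2d (shapes chosen arbitrarily, just for illustration):
# every feature map collapses to 1x1.
gap = GlobalAvgPool2d()
print(gap(torch.rand(2, 8, 5, 5)).shape)   # torch.Size([2, 8, 1, 1])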


class CNN(nn.Module):
    def __init__(self):
        super(CNN, self).__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 64, 3),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2, 2),
            nn.Conv2d(64, 256, 3),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(3, 3),
            Reshape(),    # what gets added is really an instance of our custom module class
            nn.Linear(256*7*7, 4096),
            nn.ReLU(inplace=True),
            nn.Linear(4096, 1024),
            nn.ReLU(inplace=True),
            nn.Linear(1024, 7)
        )

    def forward(self, x):
        return self.cnn(x)
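

# The Linear(256*7*7, 4096) layer implies a 7x7 feature map before Reshape, so the input size is
# constrained. A hedged smoke test: a 48x48 single-channel input satisfies this (48 -> 46 -> 23 -> 21 -> 7);
# the input size here is an assumption, not stated in the original post.
cnn = CNN()
print(cnn(torch.rand(1, 1, 48, 48)).shape)   # torch.Size([1, 7])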


class VGG(nn.Module):
    def __init__(self, conv_arch, fc_features, fc_hidden_units):
        super(VGG, self).__init__()
        self.conv_arch = conv_arch    # architecture of the convolutional part
        self.fc_features = fc_features
        self.fc_hidden_units = fc_hidden_units
        self.net = nn.Sequential()

    def vgg_block(self, num_convs, in_channels, out_channels):
        blk = []
        for i in range(num_convs):
            if i == 0:
                blk.append(nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1))
            else:
                blk.append(nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1))
            blk.append(nn.ReLU())
        blk.append(nn.MaxPool2d(kernel_size=2, stride=2))   # this halves the height and width
        return nn.Sequential(*blk)

    def generate_vgg(self):
        # convolutional part
        for i, (num_convs, in_channels, out_channels) in enumerate(self.conv_arch):
            # each vgg_block halves the height and width
            self.net.add_module("vgg_block_{}".format(i+1), self.vgg_block(num_convs, in_channels, out_channels))

        # fully connected part
        self.net.add_module("fc", nn.Sequential(
            Reshape(),
            nn.Linear(self.fc_features, self.fc_hidden_units),
            nn.ReLU(),
            nn.Dropout(0.5),
            nn.Linear(self.fc_hidden_units, self.fc_hidden_units),
            nn.ReLU(),
            nn.Dropout(0.5),
            nn.Linear(self.fc_hidden_units, 7)
        ))

    def forward(self, x):
        return self.net(x)
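

# A usage sketch for the VGG class above; the conv_arch and fully connected sizes below are
# illustrative assumptions for a 224x224 single-channel input, not values from the original post.
# Five blocks, each halving H and W: 224 -> 112 -> 56 -> 28 -> 14 -> 7.
conv_arch = ((1, 1, 64), (1, 64, 128), (2, 128, 256), (2, 256, 512), (2, 512, 512))
vgg = VGG(conv_arch, fc_features=512 * 7 * 7, fc_hidden_units=4096)
vgg.generate_vgg()                 # builds self.net; must be called before forward
print(vgg(torch.rand(1, 1, 224, 224)).shape)   # torch.Size([1, 7])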

4 Activation functions

class Swish(nn.Module):

    def __init__(self):
        super(Swish, self).__init__()
        self.beta = nn.Parameter(torch.tensor(1.0))

    def forward(self, x):
        return x * torch.sigmoid(self.beta * x)


class Lambda(nn.Module):
    """Wrap an arbitrary function as a module."""

    def __init__(self, f):
        super(Lambda, self).__init__()
        self.f = f

    def forward(self, x):
        return self.f(x)


NONLINEARITIES = {
    "tanh": nn.Tanh(),
    "relu": nn.ReLU(),
    "softplus": nn.Softplus(),
    "elu": nn.ELU(),
    "swish": Swish(),
    "square": Lambda(lambda x: x**2),
    "identity": Lambda(lambda x: x),
}
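
# As an illustration of why such a dictionary is handy: the activation can be selected by name
# when assembling a model (the layer sizes here are arbitrary, just for the example).
act = NONLINEARITIES["swish"]
model = nn.Sequential(nn.Linear(16, 32), act, nn.Linear(32, 1))
print(model(torch.rand(4, 16)).shape)   # torch.Size([4, 1])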

5 Differentiable custom operations

For the details, see the referenced post on subclassing torch.autograd.Function to define your own operations together with their backward functions; only the most important parts of the code are shown here.

Here we define a linear function ourselves (the arguments are Variables).
The math involved:

y = x*w + b   # our own LinearFunction
z = f(y)
Below, grad_output = dz/dy.
By the chain rule for composite functions:
1. dz/dx = dz/dy * dy/dx = grad_output * w
2. dz/dw = dz/dy * dy/dw = grad_output * x
3. dz/db = dz/dy * dy/db = grad_output * 1
from torch.autograd import Function
class LinearFunction(Function):
    # A subclass of torch.autograd.Function.
    # Both forward and backward must be staticmethods.
    @staticmethod
    # The first argument is ctx, the second is input, the rest are optional.
    # ctx plays a role similar to self: attributes saved on ctx can be read in backward().
    # Inside a custom Function's forward(), all Variable arguments have been unpacked
    # into plain tensors by the autograd engine, so input here is a tensor.
    def forward(ctx, input, weight, bias=None):
        print(type(input))
        ctx.save_for_backward(input, weight, bias)  # stash the tensors on ctx for backward()
        output = input.mm(weight.t())  # torch.t() transposes a 2D tensor
        if bias is not None:
            # unsqueeze(0) adds a leading batch dimension; expand_as(output) is equivalent
            # to expand(output.size()) and broadcasts bias to the shape of output
            output += bias.unsqueeze(0).expand_as(output)
        return output

    @staticmethod
    def backward(ctx, grad_output):
        # grad_output is the gradient passed back from the operation one level up
        input, weight, bias = ctx.saved_tensors  # saved_variables in very old PyTorch versions
        grad_input = grad_weight = grad_bias = None
        # gradients with respect to the input, the weight and the bias;
        # only compute each one if the corresponding input requires a gradient
        if ctx.needs_input_grad[0]:
            grad_input = grad_output.mm(weight)      # chain rule
        if ctx.needs_input_grad[1]:
            grad_weight = grad_output.t().mm(input)  # chain rule
        if bias is not None and ctx.needs_input_grad[2]:
            grad_bias = grad_output.sum(0).squeeze(0)

        return grad_input, grad_weight, grad_bias

# It is recommended to wrap the new op in a plain function
def linear(input, weight, bias=None):
    # apply() runs forward() and returns the result.
    # (The legacy style LinearFunction()(input, weight, bias), where the first parentheses
    # create a Function object and the second invoke __call__, is no longer supported
    # in recent PyTorch versions.)
    return LinearFunction.apply(input, weight, bias)

# or simply give the apply method an alias
linear = LinearFunction.apply

# Check that the backward() implementation is correct
from torch.autograd import Variable, gradcheck
# gradcheck takes a tuple of tensors as input and checks whether the gradients
# computed by backward() are close enough to the numerical approximations;
# it returns True if they all pass.
input = (Variable(torch.randn(20, 20).double(), requires_grad=True),
         Variable(torch.randn(30, 20).double(), requires_grad=True))
test = gradcheck(LinearFunction.apply, input, eps=1e-6, atol=1e-4)
print(test)  # prints True if everything is fine

Here we define an operation that multiplies by a constant (the inputs are plain Tensors).

class MulConstant(Function):
    @staticmethod
    def forward(ctx, tensor, constant):
        # ctx is a context object that can be used to stash information
        # for backward computation
        ctx.constant = constant
        return tensor * constant

    @staticmethod
    def backward(ctx, grad_output):
        # We return as many input gradients as there were arguments.
        # Gradients of non-Tensor arguments to forward must be None.
        # constant
        return grad_output * ctx.constant, None  # no Variable involved here
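
A quick sanity check for MulConstant (the shape and the constant are arbitrary); the gradient with respect to the tensor should simply be the constant:

x = torch.randn(3, dtype=torch.double, requires_grad=True)
y = MulConstant.apply(x, 5.0)
y.sum().backward()
print(x.grad)   # tensor([5., 5., 5.], dtype=torch.float64)

from torch.autograd import gradcheck
print(gradcheck(lambda t: MulConstant.apply(t, 5.0), (x,), eps=1e-6, atol=1e-4))  # True
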
  • Using the custom Function to build a Module

Extending a Module is straightforward: override nn.Module's __init__ and forward.

import torch.nn as nn
class Linear(nn.Module):
    def __init__(self, input_features, output_features, bias=True):
        super(Linear, self).__init__()
        self.input_features = input_features
        self.output_features = output_features
        # nn.Parameter is a special kind of Variable, that will get
        # automatically registered as Module's parameter once it's assigned
        # This is important! Parameters require gradients by default.
        self.weight = nn.Parameter(torch.Tensor(output_features, input_features))
        if bias:
            self.bias = nn.Parameter(torch.Tensor(output_features))
        else:
            # You should always register all possible parameters, but the
            # optional ones can be None if you want.
            self.register_parameter('bias', None)
        # Not a very smart way to initialize weights
        self.weight.data.uniform_(-0.1, 0.1)
        if self.bias is not None:
            self.bias.data.uniform_(-0.1, 0.1)
    def forward(self, input):
        # See the autograd section for explanation of what happens here.
        return LinearFunction.apply(input, self.weight, self.bias)
        # (the legacy form LinearFunction()(input, self.weight, self.bias) no longer works in recent PyTorch versions)
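
A small check of the custom Linear module (sizes are arbitrary): the weight and bias registered via nn.Parameter show up in named_parameters(), and forward goes through LinearFunction:

layer = Linear(4, 3)
print(layer(torch.randn(2, 4)).shape)                   # torch.Size([2, 3])
print([name for name, _ in layer.named_parameters()])  # ['weight', 'bias']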

Additional notes

Adding model parameters

Depending on whether gradients are required, these fall into two kinds: nn.Parameter and register_buffer.

class ActNorm(nn.Module):
    """Activation normalization."""

    def __init__(self, in_channel, logdet=True):
        super().__init__()

        # Each channel gets its own loc and scale, used to normalize that channel.
        # nn.Parameter defines a model parameter; unlike register_buffer below,
        # it has requires_grad=True and is updated during training.
        self.loc = nn.Parameter(torch.zeros(1, in_channel, 1, 1))
        self.scale = nn.Parameter(torch.ones(1, in_channel, 1, 1))

        # register_buffer declares an attribute that is part of the module's state
        # (it is saved in state_dict) but is not a model parameter and is not
        # updated by autograd. Here the ActNorm object gets an attribute named
        # "initialized" with value 0.
        self.register_buffer("initialized", torch.tensor(0, dtype=torch.uint8))

        self.logdet = logdet
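
The practical difference between the two shows up when inspecting the module (the channel count below is arbitrary): the parameters are returned by named_parameters() and receive gradients, while the buffer only appears among the buffers and in state_dict():

an = ActNorm(in_channel=3)
print([name for name, _ in an.named_parameters()])   # ['loc', 'scale']
print([name for name, _ in an.named_buffers()])      # ['initialized']
print(list(an.state_dict().keys()))                  # ['loc', 'scale', 'initialized']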