PyTorch: Initialization

4.4 Initialization Strategies

In deep learning, parameter initialization matters a great deal: a good initialization helps the model converge faster and reach a better optimum, while a poor one can cripple training from the start. The layers built on PyTorch's nn.Module already use reasonable default initialization strategies, so we usually do not need to worry about them, although we can replace the defaults with a custom scheme. Custom initialization becomes especially important when we use Parameter directly, because t.Tensor() returns whatever happens to be in memory; the values may be extreme and can cause overflow or vanishing gradients during training. PyTorch's nn.init module is designed specifically for initialization, and if nn.init does not provide a particular strategy, we can also initialize the parameters ourselves.
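As a minimal sketch of the point above (the tensor shape and the choice of xavier_uniform_ are illustrative, not from the original text), a Parameter built from uninitialized memory should be overwritten with an explicit initialization before it is used:

```
import torch as t
import torch.nn as nn

# t.empty(), like t.Tensor(), returns uninitialized memory; the values are arbitrary
param = nn.Parameter(t.empty(3, 4))
# overwrite the garbage values with a sensible initialization before training
nn.init.xavier_uniform_(param)
print(param)
```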

In [55]:

# Initialize with nn.init
from torch.nn import init
linear = nn.Linear(3, 4)

t.manual_seed(1)
# equivalent to linear.weight.data.normal_(0, std)
init.xavier_normal_(linear.weight)

Out[55]:

Parameter containing:
tensor([[ 0.3535,  0.1427,  0.0330],
        [ 0.3321, -0.2416, -0.0888],
        [-0.8140,  0.2040, -0.5493],
        [-0.3010, -0.4769, -0.0311]])

In [56]:

# Initialize directly, without nn.init
import math
t.manual_seed(1)

# Xavier formula: std = gain * sqrt(2 / (fan_in + fan_out)); here fan_in=3, fan_out=4
std = math.sqrt(2) / math.sqrt(7.)
linear.weight.data.normal_(0,std)

Out[56]:

tensor([[ 0.3535,  0.1427,  0.0330],
        [ 0.3321, -0.2416, -0.0888],
        [-0.8140,  0.2040, -0.5493],
        [-0.3010, -0.4769, -0.0311]])

In [57]:

# Initialize all parameters of a model (assumes `net` is an nn.Module defined earlier)
for name, params in net.named_parameters():
    if name.find('linear') != -1:
        # init linear layers: Xavier for weights, zeros for biases
        if name.endswith('weight'):
            init.xavier_normal_(params)
        elif name.endswith('bias'):
            init.constant_(params, 0)
    elif name.find('conv') != -1:
        pass
    elif name.find('norm') != -1:
        pass
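An alternative pattern is to walk the submodules rather than the flat parameter names and dispatch on the layer type with Module.apply. The layer types and init choices below are a hedged sketch, not prescribed by the original text:

```
def init_weights(m):
    # dispatch on module type; each branch uses an nn.init routine
    if isinstance(m, nn.Linear):
        init.xavier_normal_(m.weight)
        if m.bias is not None:
            init.zeros_(m.bias)
    elif isinstance(m, nn.Conv2d):
        init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
    elif isinstance(m, nn.BatchNorm2d):
        init.ones_(m.weight)
        init.zeros_(m.bias)

net.apply(init_weights)  # apply() calls init_weights on every submodule recursively
```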

Concrete examples of the initialization routines

# -*- coding: utf-8 -*-
"""
Created on 2019

@author: fancp
"""

import torch 
import torch.nn as nn

w = torch.empty(3,5)

#1. Uniform distribution - U(a, b)
#torch.nn.init.uniform_(tensor, a=0.0, b=1.0)
print(nn.init.uniform_(w))
# =============================================================================
# tensor([[0.9160, 0.1832, 0.5278, 0.5480, 0.6754],
#         [0.9509, 0.8325, 0.9149, 0.8192, 0.9950],
#         [0.4847, 0.4148, 0.8161, 0.0948, 0.3787]])
# =============================================================================

#2. Normal distribution - N(mean, std)
#torch.nn.init.normal_(tensor, mean=0.0, std=1.0)
print(nn.init.normal_(w))
# =============================================================================
# tensor([[ 0.4388,  0.3083, -0.6803, -1.1476, -0.6084],
#         [ 0.5148, -0.2876, -1.2222,  0.6990, -0.1595],
#         [-2.0834, -1.6288,  0.5057, -0.5754,  0.3052]])
# =============================================================================

#3. Constant - fixed value val
#torch.nn.init.constant_(tensor, val)
print(nn.init.constant_(w, 0.3))
# =============================================================================
# tensor([[0.3000, 0.3000, 0.3000, 0.3000, 0.3000],
#         [0.3000, 0.3000, 0.3000, 0.3000, 0.3000],
#         [0.3000, 0.3000, 0.3000, 0.3000, 0.3000]])
# =============================================================================

#4. All ones
#torch.nn.init.ones_(tensor)
print(nn.init.ones_(w))
# =============================================================================
# tensor([[1., 1., 1., 1., 1.],
#         [1., 1., 1., 1., 1.],
#         [1., 1., 1., 1., 1.]])
# =============================================================================

#5. All zeros
#torch.nn.init.zeros_(tensor)
print(nn.init.zeros_(w))
# =============================================================================
# tensor([[0., 0., 0., 0., 0.],
#         [0., 0., 0., 0., 0.],
#         [0., 0., 0., 0., 0.]])
# =============================================================================

#6. Identity: 1 on the diagonal, 0 elsewhere
#torch.nn.init.eye_(tensor)
print(nn.init.eye_(w))
# =============================================================================
# tensor([[1., 0., 0., 0., 0.],
#         [0., 1., 0., 0., 0.],
#         [0., 0., 1., 0., 0.]])
# =============================================================================

#7. xavier_uniform initialization
#torch.nn.init.xavier_uniform_(tensor, gain=1.0)
#From - Understanding the difficulty of training deep feedforward neural networks - Glorot & Bengio 2010
print(nn.init.xavier_uniform_(w, gain=nn.init.calculate_gain('relu')))
# =============================================================================
# tensor([[-0.1270,  0.3963,  0.9531, -0.2949,  0.8294],
#         [-0.9759, -0.6335,  0.9299, -1.0988, -0.1496],
#         [-0.7224,  0.2181, -1.1219,  0.8629, -0.8825]])
# =============================================================================

#8. xavier_normal initialization
#torch.nn.init.xavier_normal_(tensor, gain=1.0)
print(nn.init.xavier_normal_(w))
# =============================================================================
# tensor([[ 1.0463,  0.1275, -0.3752,  0.1858,  1.1008],
#         [-0.5560,  0.2837,  0.1000, -0.5835,  0.7886],
#         [-0.2417,  0.1763, -0.7495,  0.4677, -0.1185]])
# =============================================================================

#9. kaiming_uniform initialization
#torch.nn.init.kaiming_uniform_(tensor, a=0, mode='fan_in', nonlinearity='leaky_relu')
#From - Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification - He et al. 2015
print(nn.init.kaiming_uniform_(w, mode='fan_in', nonlinearity='relu'))
# =============================================================================
# tensor([[-0.7712,  0.9344,  0.8304,  0.2367,  0.0478],
#         [-0.6139, -0.3916, -0.0835,  0.5975,  0.1717],
#         [ 0.3197, -0.9825, -0.5380, -1.0033, -0.3701]])
# =============================================================================

#10. kaiming_normal initialization
#torch.nn.init.kaiming_normal_(tensor, a=0, mode='fan_in', nonlinearity='leaky_relu')
print(nn.init.kaiming_normal_(w, mode='fan_out', nonlinearity='relu'))
# =============================================================================
# tensor([[-0.0210,  0.5532, -0.8647,  0.9813,  0.0466],
#         [ 0.7713, -1.0418,  0.7264,  0.5547,  0.7403],
#         [-0.8471, -1.7371,  1.3333,  0.0395,  1.0787]])
# =============================================================================

#11. (Semi-)orthogonal matrix
#torch.nn.init.orthogonal_(tensor, gain=1)
#From - Exact solutions to the nonlinear dynamics of learning in deep linear neural networks - Saxe 2013
print(nn.init.orthogonal_(w))
# =============================================================================
# tensor([[-0.0346, -0.7607, -0.0428,  0.4771,  0.4366],
#         [-0.0412, -0.0836,  0.9847,  0.0703, -0.1293],
#         [-0.6639,  0.4551,  0.0731,  0.1674,  0.5646]])
# =============================================================================

#12. Sparse matrix
#torch.nn.init.sparse_(tensor, sparsity, std=0.01)
#From - Deep learning via Hessian-free optimization - Martens 2010
print(nn.init.sparse_(w, sparsity=0.1))
# =============================================================================
# tensor([[ 0.0000,  0.0000, -0.0077,  0.0000, -0.0046],
#         [ 0.0152,  0.0030,  0.0000, -0.0029,  0.0005],
#         [ 0.0199,  0.0132, -0.0088,  0.0060,  0.0000]])
# =============================================================================
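The gain argument passed to xavier_uniform_ above comes from nn.init.calculate_gain. As a small aside not in the original listing, printing the recommended gains for a few common nonlinearities shows where values such as sqrt(2) for ReLU come from:

```
# Recommended gains used by the Xavier/Kaiming formulas
for nonlinearity in ['linear', 'sigmoid', 'tanh', 'relu', 'leaky_relu']:
    print(nonlinearity, nn.init.calculate_gain(nonlinearity))
# linear/sigmoid -> 1.0, tanh -> 5/3, relu -> sqrt(2), leaky_relu -> sqrt(2 / (1 + 0.01**2))
```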

 

PyTorch also allows fully custom initialization, which makes setting up a model's parameters more flexible. Since the initial values affect both how quickly a model converges and the accuracy it reaches, writing a custom scheme is sometimes necessary.

torch.nn.init covers the common cases: random initializations (uniform_, normal_, ...), constant initializations (zeros_, ones_, ...), and well-known model-specific schemes such as Xavier and Kaiming initialization. When none of these fit, we can define the initialization ourselves.

A common pattern is to register the parameters in __init__ and perform the actual initialization in a reset_parameters method, as shown below:

```
import math
import torch
import torch.nn as nn

class CustomModel(nn.Module):
    def __init__(self):
        super(CustomModel, self).__init__()
        self.weight = nn.Parameter(torch.Tensor(1, 100))
        # register bias as None: this module deliberately has no bias parameter
        self.register_parameter('bias', None)
        self.reset_parameters()

    def reset_parameters(self):
        # uniform initialization in [-1/sqrt(fan_in), 1/sqrt(fan_in)]
        stdv = 1. / math.sqrt(self.weight.size(1))
        self.weight.data.uniform_(-stdv, stdv)

model = CustomModel()
```

In this code, register_parameter('bias', None) declares that the module has no bias parameter, so nothing needs to be initialized for it, while the weight is initialized by our own reset_parameters method rather than a library default. Initializing parameters this way lets us tailor the scheme to the network and can improve the effectiveness of training.
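As a quick sanity check (an illustrative addition, not part of the original text), we can confirm that reset_parameters kept every weight inside the intended range and that bias really is absent:

```
stdv = 1. / math.sqrt(model.weight.size(1))
print(model.weight.min().item() >= -stdv, model.weight.max().item() <= stdv)  # True True
print(model.bias)  # None, because bias was registered as None
```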
