PyTorch FAQ (to be updated)

1. The difference between nn.Module and nn.functional

Reference: the book Python深度学习:基于PyTorch (Python Deep Learning: Based on PyTorch)

Layers in nn come in two flavors. One kind inherits from nn.Module and is named nn.Xxx (first letter capitalized), such as nn.Linear, nn.Conv2d, and nn.CrossEntropyLoss. The other kind consists of the functions in nn.functional, named nn.functional.xxx, such as nn.functional.linear, nn.functional.conv2d, and nn.functional.cross_entropy. Functionally the two are equivalent: any layer that can be built on nn.Module can also be implemented with nn.functional, and vice versa, and there is no significant performance difference between them. In actual use, however, they differ mainly as follows:

  • 1) nn.Xxx inherits from nn.Module. An nn.Xxx layer must first be instantiated with its configuration arguments; the resulting object is then called like a function on the input data. It composes well with nn.Sequential, whereas nn.functional.xxx cannot be used inside nn.Sequential.

  • 2) With nn.Xxx you do not have to define and manage the weight and bias parameters yourself; with nn.functional.xxx you must define weight and bias yourself and pass them in manually on every call together with the input, which hurts code reuse.

Taking nn.functional.linear and nn.Linear as an example, you will find that:

  • when calling the former, you must pass in weight and bias manually;
  • with the latter, weight and bias have already been initialized for you in __init__ (or loaded into the named parameters from a weight file), and the module's forward() callback already wraps F.linear, so at call time you only pass in the input.
# linear in the nn.functional package
def linear(input: Tensor, weight: Tensor, bias: Optional[Tensor] = None) -> Tensor:
    r"""
    Applies a linear transformation to the incoming data: :math:`y = xA^T + b`.

    This operator supports :ref:`TensorFloat32<tf32_on_ampere>`.

    Shape:

        - Input: :math:`(N, *, in\_features)` N is the batch size, `*` means any number of
          additional dimensions
        - Weight: :math:`(out\_features, in\_features)`
        - Bias: :math:`(out\_features)`
        - Output: :math:`(N, *, out\_features)`
    """
    if has_torch_function_variadic(input, weight):
        return handle_torch_function(linear, (input, weight), input, weight, bias=bias)
    return torch._C._nn.linear(input, weight, bias)
# the Linear module in the nn package
class Linear(Module):
    r"""Applies a linear transformation to the incoming data: :math:`y = xA^T + b`

    This module supports :ref:`TensorFloat32<tf32_on_ampere>`.

    Args:
        in_features: size of each input sample
        out_features: size of each output sample
        bias: If set to ``False``, the layer will not learn an additive bias.
            Default: ``True``

    Shape:
        - Input: :math:`(N, *, H_{in})` where :math:`*` means any number of
          additional dimensions and :math:`H_{in} = \text{in\_features}`
        - Output: :math:`(N, *, H_{out})` where all but the last dimension
          are the same shape as the input and :math:`H_{out} = \text{out\_features}`.

    Attributes:
        weight: the learnable weights of the module of shape
            :math:`(\text{out\_features}, \text{in\_features})`. The values are
            initialized from :math:`\mathcal{U}(-\sqrt{k}, \sqrt{k})`, where
            :math:`k = \frac{1}{\text{in\_features}}`
        bias:   the learnable bias of the module of shape :math:`(\text{out\_features})`.
                If :attr:`bias` is ``True``, the values are initialized from
                :math:`\mathcal{U}(-\sqrt{k}, \sqrt{k})` where
                :math:`k = \frac{1}{\text{in\_features}}`

    Examples::

        >>> m = nn.Linear(20, 30)
        >>> input = torch.randn(128, 20)
        >>> output = m(input)
        >>> print(output.size())
        torch.Size([128, 30])
    """
    __constants__ = ['in_features', 'out_features']
    in_features: int
    out_features: int
    weight: Tensor

    def __init__(self, in_features: int, out_features: int, bias: bool = True,
                 device=None, dtype=None) -> None:
        factory_kwargs = {'device': device, 'dtype': dtype}
        super(Linear, self).__init__()
        self.in_features = in_features
        self.out_features = out_features
        self.weight = Parameter(torch.empty((out_features, in_features), **factory_kwargs))
        if bias:
            self.bias = Parameter(torch.empty(out_features, **factory_kwargs))
        else:
            self.register_parameter('bias', None)
        self.reset_parameters()

    def reset_parameters(self) -> None:
        init.kaiming_uniform_(self.weight, a=math.sqrt(5))
        if self.bias is not None:
            fan_in, _ = init._calculate_fan_in_and_fan_out(self.weight)
            bound = 1 / math.sqrt(fan_in) if fan_in > 0 else 0
            init.uniform_(self.bias, -bound, bound)

    def forward(self, input: Tensor) -> Tensor:
        return F.linear(input, self.weight, self.bias)

    def extra_repr(self) -> str:
        return 'in_features={}, out_features={}, bias={}'.format(
            self.in_features, self.out_features, self.bias is not None
        )
  • 3) Dropout behaves differently during training and evaluation. If Dropout is defined the nn.Xxx way, calling model.eval() switches its state automatically; nn.functional.xxx has no such mechanism (see the sketch below).
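A minimal sketch of this difference (the module names Net1 and Net2 are just for illustration): nn.Dropout is registered as a submodule, so model.eval() switches it off automatically, while F.dropout only follows the module's mode if you pass self.training through yourself.

# nn.Dropout vs. F.dropout around model.eval()
import torch
import torch.nn as nn
import torch.nn.functional as F

class Net1(nn.Module):
    def __init__(self):
        super().__init__()
        self.dropout = nn.Dropout(p=0.5)  # registered submodule: tracks train/eval state

    def forward(self, x):
        return self.dropout(x)

class Net2(nn.Module):
    def forward(self, x):
        # self.training must be threaded through by hand; F.dropout(x, p=0.5)
        # alone would keep dropping activations even after model.eval()
        return F.dropout(x, p=0.5, training=self.training)

x = torch.ones(2, 4)
print(torch.equal(Net1().eval()(x), x))  # True: dropout disabled by eval()
print(torch.equal(Net2().eval()(x), x))  # True only because training=self.training was passed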

In short, the two are functionally the same, but the official PyTorch recommendation is: use the nn.Xxx form for operations with learnable parameters (e.g. conv2d, linear, batch_norm); for operations without learnable parameters (e.g. maxpool, loss functions, activation functions), choose nn.functional.xxx or nn.Xxx according to personal preference.
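A short sketch of that recommendation in practice (the Net class below is hypothetical): layers with learnable parameters are held as nn.Xxx modules in __init__ so their weights show up in state_dict(), while parameter-free operations are called as nn.functional.xxx in forward. The last two lines also confirm that nn.Linear's forward is just F.linear applied to the module's own weight and bias.

# parametric layers as nn.Xxx, stateless ops as F.xxx
import torch
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, kernel_size=3)  # learnable parameters -> nn.Xxx
        self.fc = nn.Linear(8 * 13 * 13, 10)

    def forward(self, x):
        x = F.relu(self.conv(x))   # activation: no parameters -> F.xxx
        x = F.max_pool2d(x, 2)     # pooling: no parameters -> F.xxx
        x = torch.flatten(x, 1)
        return self.fc(x)

print(Net()(torch.randn(2, 3, 28, 28)).shape)  # torch.Size([2, 10])

m, x = nn.Linear(20, 30), torch.randn(4, 20)
print(torch.allclose(m(x), F.linear(x, m.weight, m.bias)))  # True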

2. A walkthrough of the load_state_dict() source

This section looks at how the load_state_dict function loads the OrderedDict of parameter names and values from a weight file into the parameters of each submodule of the model.

The load_state_dict source:

def load_state_dict(self, state_dict: 'OrderedDict[str, Tensor]',
                    strict: bool = True):
    r"""Copies parameters and buffers from :attr:`state_dict` into
    this module and its descendants. If :attr:`strict` is ``True``, then
    the keys of :attr:`state_dict` must exactly match the keys returned
    by this module's :meth:`~torch.nn.Module.state_dict` function.

    Args:
        state_dict (dict): a dict containing parameters and
            persistent buffers.
        strict (bool, optional): whether to strictly enforce that the keys
            in :attr:`state_dict` match the keys returned by this module's
            :meth:`~torch.nn.Module.state_dict` function. Default: ``True``

    Returns:
        ``NamedTuple`` with ``missing_keys`` and ``unexpected_keys`` fields:
            * **missing_keys** is a list of str containing the missing keys
            * **unexpected_keys** is a list of str containing the unexpected keys
    """
    missing_keys: List[str] = []
    unexpected_keys: List[str] = []
    error_msgs: List[str] = []

    # copy state_dict so _load_from_state_dict can modify it
    metadata = getattr(state_dict, '_metadata', None)
    state_dict = state_dict.copy()
    if metadata is not None:
        # mypy isn't aware that "_metadata" exists in state_dict
        state_dict._metadata = metadata  # type: ignore[attr-defined]

    def load(module, prefix=''):
        local_metadata = {} if metadata is None else metadata.get(prefix[:-1], {})
        module._load_from_state_dict(
            state_dict, prefix, local_metadata, True, missing_keys, unexpected_keys, error_msgs)
        for name, child in module._modules.items():
            if child is not None:
                load(child, prefix + name + '.')

    load(self)
    del load

    if strict:
        if len(unexpected_keys) > 0:
            error_msgs.insert(
                0, 'Unexpected key(s) in state_dict: {}. '.format(
                    ', '.join('"{}"'.format(k) for k in unexpected_keys)))
        if len(missing_keys) > 0:
            error_msgs.insert(
                0, 'Missing key(s) in state_dict: {}. '.format(
                    ', '.join('"{}"'.format(k) for k in missing_keys)))

    if len(error_msgs) > 0:
        raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
                           self.__class__.__name__, "\n\t".join(error_msgs)))
    return _IncompatibleKeys(missing_keys, unexpected_keys)

The source of module._load_from_state_dict, which the inner load(self) function calls, is as follows:

def _load_from_state_dict(self, state_dict, prefix, local_metadata, strict,
						  missing_keys, unexpected_keys, error_msgs):
	for hook in self._load_state_dict_pre_hooks.values():
		hook(state_dict, prefix, local_metadata, strict, missing_keys, unexpected_keys, error_msgs)

	local_name_params = itertools.chain(self._parameters.items(), self._buffers.items())
	local_state = {k: v.data for k, v in local_name_params if v is not None}

	for name, param in local_state.items():
		key = prefix + name
		if key in state_dict:
			input_param = state_dict[key]

			# Backward compatibility: loading 1-dim tensor from 0.3.* to version 0.4+
			if len(param.shape) == 0 and len(input_param.shape) == 1:
				input_param = input_param[0]

			if input_param.shape != param.shape:
				# local shape should match the one in checkpoint
				error_msgs.append('size mismatch for {}: copying a param with shape {} from checkpoint, '
								  'the shape in current model is {}.'
								  .format(key, input_param.shape, param.shape))
				continue

			if isinstance(input_param, Parameter):
				# backwards compatibility for serialized parameters
				input_param = input_param.data
			try:
				param.copy_(input_param)
			except Exception:
				error_msgs.append('While copying the parameter named "{}", '
								  'whose dimensions in the model are {} and '
								  'whose dimensions in the checkpoint are {}.'
								  .format(key, param.size(), input_param.size()))
		elif strict:
			missing_keys.append(key)

	if strict:
		for key, input_param in state_dict.items():
			if key.startswith(prefix):
				input_name = key[len(prefix):]
				input_name = input_name.split('.', 1)[0]  # get the name of param/buffer/child
				if input_name not in self._modules and input_name not in local_state:
					unexpected_keys.append(key)

The load_state_dict code above can be read in two parts:

  1. load(self): this function recursively restores the model's parameters; the source of the _load_from_state_dict it calls is shown above.
  • First, be clear that state_dict holds the previously saved sequence of model parameters, while local_state inside _load_from_state_dict describes the structure of the model defined in your code.
  • Roughly, _load_from_state_dict works like this: suppose we want to restore the parameters of a submodule named conv.weight. The recursion first checks whether conv appears in state_dict and local_state; if it does not, conv is added to unexpected_keys, otherwise it recursively checks whether conv.weight exists, and if both exist it runs param.copy_(input_param), which completes the parameter copy for conv.weight.
  2. if strict:

This part checks whether the copying above left any unexpected_keys or missing_keys; if so, an error is raised and execution stops. With strict=False, however, these mismatches are simply ignored. The sketch below shows both behaviors.
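A minimal sketch of both behaviors (the two toy Sequential models are hypothetical): the recursion in load(self) produces prefixed keys such as '0.weight', and with strict=False the mismatches are returned in the _IncompatibleKeys named tuple instead of raising.

# strict=True raises on mismatched keys; strict=False reports them
import torch.nn as nn

src = nn.Sequential(nn.Linear(4, 8))                    # produces the checkpoint
dst = nn.Sequential(nn.Linear(4, 8), nn.Linear(8, 2))   # has one extra layer

state = src.state_dict()  # keys: '0.weight', '0.bias' (prefixes from the recursion)

try:
    dst.load_state_dict(state)          # strict=True by default
except RuntimeError as e:
    print(e)                            # Missing key(s) in state_dict: "1.weight", "1.bias"

result = dst.load_state_dict(state, strict=False)
print(result.missing_keys)              # ['1.weight', '1.bias']
print(result.unexpected_keys)           # []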

Note:

  • If a submodule of a custom model class (inheriting from nn.Module) is never called in forward, it is best not to initialize it at all, because model.state_dict() will still contain that initialized module's key-value pairs (e.g. linear4.weight).
  • Reading the source shows that local_state = {k: v.data for k, v in local_name_params if v is not None} makes it possible to load a pretrained weight file from one dataset (Pascal VOC) into a model to be trained on another (COCO 2017); see the sketch after this list.
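A hedged sketch of that trick (the file name, the 20- and 80-class head sizes, and make_model are all illustrative): keep only the checkpoint entries whose key and shape match the current model, then call load_state_dict with strict=False so the mismatched head is skipped and keeps its fresh initialization.

# transfer a backbone between models with different heads
import torch
import torch.nn as nn

def make_model(num_classes):
    # shared backbone plus a dataset-specific classification head
    return nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, num_classes))

voc_model = make_model(20)             # stands in for a Pascal VOC checkpoint
coco_model = make_model(80)            # current model, trained on COCO 2017

checkpoint = voc_model.state_dict()    # in practice: torch.load('voc_pretrained.pth')
model_state = coco_model.state_dict()

# keep only entries whose key exists and whose shape matches the current model
filtered = {k: v for k, v in checkpoint.items()
            if k in model_state and v.shape == model_state[k].shape}

result = coco_model.load_state_dict(filtered, strict=False)
print(sorted(filtered.keys()))         # ['0.bias', '0.weight'] -- backbone transferred
print(result.missing_keys)             # ['2.weight', '2.bias'] -- head re-initialized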