The torch.nn implementations call torch.nn.functional under the hood, so the two compute the same thing. The difference: the nn version is a class, so it can be instantiated in a deep learning model's initialization and stored as a layer; the F version is a plain function, so it cannot be declared there — it must be called with an actual input tensor. Take nn.ReLU and F.relu as an example; the source is shown below.
Code:

class ReLU(Module):
    r"""Applies the rectified linear unit function element-wise:

    :math:`\text{ReLU}(x) = (x)^+ = \max(0, x)`

    Args:
        inplace: can optionally do the operation in-place. Default: ``False``

    Shape:
        - Input: :math:`(N, *)` where `*` means, any number of additional
          dimensions
        - Output: :math:`(N, *)`, same shape as the input

    .. image:: ../scripts/activation_images/ReLU.png

    Examples::

        >>> m = nn.ReLU()
        >>> input = torch.randn(2)
        >>> output = m(input)

      An implementation of CReLU - https://arxiv.org/abs/1603.05201

        >>> m = nn.ReLU()
        >>> input = torch.randn(2).unsqueeze(0)
        >>> output = torch.cat((m(input), m(-input)))
    """

    def __init__(self, inplace=False):
        super(ReLU, self).__init__()
        self.inplace = inplace

    def forward(self, input):
        # the module form simply delegates to the functional form
        return F.relu(input, inplace=self.inplace)
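To make the class-vs-function distinction concrete, here is a minimal sketch (the module name `TinyNet` is a hypothetical example, not from the original): the nn.ReLU module is created once in __init__ with no input, while F.relu must be handed the input tensor at call time — and both produce identical results.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        # module form: declared up front as a layer, no input needed yet
        self.act = nn.ReLU()

    def forward(self, x):
        a = self.act(x)   # calling the module (delegates to F.relu internally)
        b = F.relu(x)     # functional form: requires the actual input here
        return a, b

x = torch.tensor([-1.0, 0.0, 2.0])
a, b = TinyNet()(x)
print(torch.equal(a, b))  # True: both forms compute the same ReLU
```

Because the module form is registered as a submodule, it shows up in `model.children()` and in printed model summaries, which is one practical reason to prefer it inside `__init__`.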
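The docstring above also mentions CReLU (https://arxiv.org/abs/1603.05201), which concatenates the ReLU of the input and of its negation. A small sketch using the functional form (the helper name `crelu` is an assumption for illustration):

```python
import torch
import torch.nn.functional as F

def crelu(x, dim=0):
    # CReLU(x) = concat(ReLU(x), ReLU(-x)) along the given dimension,
    # so no activation output is ever fully zeroed out
    return torch.cat((F.relu(x), F.relu(-x)), dim=dim)

x = torch.tensor([-1.0, 2.0])
print(crelu(x).tolist())  # [0.0, 2.0, 1.0, 0.0]
```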