A Detailed Look at the torch.nn.Linear() Method in PyTorch

torch.nn.Linear() is the simplest linear-transformation building block in deep learning: its job is to apply a linear transformation to the incoming data. Let's start with the official documentation and source:

class Linear(Module):
    r"""Applies a linear transformation to the incoming data: :math:`y = xA^T + b`

    This module supports :ref:`TensorFloat32<tf32_on_ampere>`.

    Args:
        in_features: size of each input sample
        out_features: size of each output sample
        bias: If set to ``False``, the layer will not learn an additive bias.
            Default: ``True``

    Shape:
        - Input: :math:`(N, *, H_{in})` where :math:`*` means any number of
          additional dimensions and :math:`H_{in} = \text{in\_features}`
        - Output: :math:`(N, *, H_{out})` where all but the last dimension
          are the same shape as the input and :math:`H_{out} = \text{out\_features}`.

    Attributes:
        weight: the learnable weights of the module of shape
            :math:`(\text{out\_features}, \text{in\_features})`. The values are
            initialized from :math:`\mathcal{U}(-\sqrt{k}, \sqrt{k})`, where
            :math:`k = \frac{1}{\text{in\_features}}`
        bias:   the learnable bias of the module of shape :math:`(\text{out\_features})`.
                If :attr:`bias` is ``True``, the values are initialized from
                :math:`\mathcal{U}(-\sqrt{k}, \sqrt{k})` where
                :math:`k = \frac{1}{\text{in\_features}}`

    Examples::

        >>> m = nn.Linear(20, 30)
        >>> input = torch.randn(128, 20)
        >>> output = m(input)
        >>> print(output.size())
        torch.Size([128, 30])
    """
    __constants__ = ['in_features', 'out_features']
    in_features: int
    out_features: int
    weight: Tensor

    def __init__(self, in_features: int, out_features: int, bias: bool = True) -> None:
        super(Linear, self).__init__()
        self.in_features = in_features
        self.out_features = out_features
        self.weight = Parameter(torch.Tensor(out_features, in_features))
        if bias:
            self.bias = Parameter(torch.Tensor(out_features))
        else:
            self.register_parameter('bias', None)
        self.reset_parameters()

    def reset_parameters(self) -> None:
        init.kaiming_uniform_(self.weight, a=math.sqrt(5))
        if self.bias is not None:
            fan_in, _ = init._calculate_fan_in_and_fan_out(self.weight)
            bound = 1 / math.sqrt(fan_in)
            init.uniform_(self.bias, -bound, bound)

    def forward(self, input: Tensor) -> Tensor:
        return F.linear(input, self.weight, self.bias)

    def extra_repr(self) -> str:
        return 'in_features={}, out_features={}, bias={}'.format(
            self.in_features, self.out_features, self.bias is not None
        )


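A quick sanity check of the Shape section in the docstring above: the layer only constrains the last input dimension to equal in_features, so any number of leading dimensions is allowed and passes through unchanged. A minimal sketch (the tensor sizes are chosen arbitrarily for illustration):

import torch
import torch.nn as nn

m = nn.Linear(20, 30)
# 2-D input: (batch, in_features)
print(m(torch.randn(128, 20)).shape)     # torch.Size([128, 30])
# 3-D input: extra leading dimensions are kept as-is
print(m(torch.randn(128, 5, 20)).shape)  # torch.Size([128, 5, 30])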

Here we mainly look at the __init__() method. When constructing this module you generally pass two or three arguments: in_features: int, out_features: int, and bias: bool = True, where the third controls whether an additive bias is learned. Put simply, the module implements the affine function y = xA^T + b (where ^T denotes the transpose). The call to super(Linear, self).__init__() is the usual boilerplate, after which in_features and out_features are stored. The important detail is the weight: as the code shows, self.weight has shape (out_features, in_features). The A in y = xA^T + b is precisely this weight attribute, so the forward pass multiplies x by the transposed weight (x @ weight.t()), not by the weight as stored; keep this in mind whenever you inspect or manipulate the weight directly. Finally, reset_parameters() initializes the weight with kaiming_uniform_(a=math.sqrt(5)); for that value of a the bound works out to 1/sqrt(in_features), which is exactly the U(-sqrt(k), sqrt(k)) with k = 1/in_features documented in the docstring, and the bias (if present) is drawn from the same range.
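To make the weight layout concrete, here is a small sketch that inspects the registered parameters (the sizes are chosen only for illustration):

import torch.nn as nn

m = nn.Linear(20, 30)
print(m.weight.shape)  # torch.Size([30, 20]): (out_features, in_features)
print(m.bias.shape)    # torch.Size([30]): one bias per output feature

# With bias=False, no bias parameter is learned;
# register_parameter('bias', None) makes m.bias None
m_nobias = nn.Linear(20, 30, bias=False)
print(m_nobias.bias)   # None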

Next, a small example to put this into practice:

import torch

# Randomly initialize a tensor of shape (128, 20)
x = torch.randn(128, 20)
# Construct the linear map y = xA^T + b; the arguments (20, 30) are
# (in_features, out_features), so y.weight has shape (30, 20)
y = torch.nn.Linear(20, 30)
output = y(x)
# Verify the logic above with plain matrix multiplication; as expected,
# the result matches the module's output.
# y.weight is a learnable matrix of shape (30, 20); .t() transposes it.
# Since bias=True (the default), y.bias is a tensor of shape (out_features,),
# i.e. (30,) in this program.
# In terms of shapes: (128 x 20) @ (30 x 20)^T + (30,) broadcast over rows
ans = torch.mm(x, y.weight.t()) + y.bias
print('ans.shape:\n', ans.shape)
print(torch.equal(ans, output))
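Since forward() simply delegates to F.linear (see the source above), the same result can also be reproduced through the functional interface. A minimal sketch, reusing x, y, and output from the example above:

import torch.nn.functional as F

# F.linear(input, weight, bias) computes input @ weight.t() + bias,
# which is exactly what the module's forward() does
out_f = F.linear(x, y.weight, y.bias)
print(torch.equal(out_f, output))  # True: identical to calling y(x)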

My knowledge is limited and mistakes are hard to avoid, so corrections and feedback are very welcome~
