Understanding the attention decoder network architecture with PyTorch

Update, 2019-01-04

The attention in the current PyTorch tutorials is not the original attention from the paper and is problematic; see the discussion at https://github.com/spro/practical-pytorch/issues/84

The code at https://github.com/spro/practical-pytorch/blob/master/seq2seq-translation/seq2seq-translation.ipynb follows the attention described in the original paper.

In the current PyTorch tutorial, the inputs used to compute the attention weights all come from the decoder (see the architecture diagram in the official tutorial, linked below) and do not involve the encoder at all; the scores should really be computed from the current target-language hidden state (q) multiplied by encoder_outputs (k).
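
For reference, here is a minimal sketch (not the tutorial's code) of a dot-product attention step in which the scores actually depend on encoder_outputs; the function name and tensor shapes are assumptions for illustration only.

import torch
import torch.nn.functional as F

def dot_product_attention(hidden, encoder_outputs):
    # hidden: (1, 1, hidden_size), the decoder hidden state (the query q)
    # encoder_outputs: (seq_len, hidden_size), the encoder outputs (the keys/values k)
    scores = torch.mm(hidden[0], encoder_outputs.t())   # (1, seq_len)
    weights = F.softmax(scores, dim=1)                   # (1, seq_len)
    context = torch.mm(weights, encoder_outputs)         # (1, hidden_size)
    return context, weights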

It seems the current PyTorch is still not that mature!!

___________________________________________________________________________________________

The decoder of a plain encoder-decoder model uses only the single final vector output by the encoder (which encodes the source sentence) as its input,

whereas the attention decoder takes all of the encoder's output vectors as input, which clearly covers more information.

At the same time, to determine which encoder output has the most influence on the word currently being decoded, the decoder's previous hidden state and the decoder's input word embedding (the embedding on the target side) are combined to produce the weights over encoder_outputs, and these parameters are updated by backpropagation; a shape-level sketch is given below.
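
Concretely, with the tutorial's default sizes (hidden_size = 256, max_length = 10), the weight computation looks roughly like this shape-level sketch; the variable names here are only illustrative:

import torch
import torch.nn as nn
import torch.nn.functional as F

hidden_size, max_length = 256, 10                      # the tutorial's default sizes
attn = nn.Linear(hidden_size * 2, max_length)

embedded = torch.randn(1, hidden_size)                 # embedding of the current target word
prev_hidden = torch.randn(1, hidden_size)              # decoder's previous hidden state
encoder_outputs = torch.randn(max_length, hidden_size)

scores = attn(torch.cat((embedded, prev_hidden), 1))   # (1, 512) -> (1, 10)
weights = F.softmax(scores, dim=1)                     # one weight per encoder position
context = torch.bmm(weights.unsqueeze(0),              # (1, 1, 10) x (1, 10, 256)
                    encoder_outputs.unsqueeze(0))      # -> (1, 1, 256)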

The network architecture diagram can be found in the official tutorial: https://pytorch.org/tutorials/intermediate/seq2seq_translation_tutorial.html

Walkthrough of the attention_decoder code:

import torch
import torch.nn as nn
import torch.nn.functional as F

MAX_LENGTH = 10  # maximum sentence length used in the tutorial
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")


class AttnDecoderRNN(nn.Module):
    def __init__(self, hidden_size, output_size, dropout_p=0.1, max_length=MAX_LENGTH):
        super(AttnDecoderRNN, self).__init__()
        self.hidden_size = hidden_size
        self.output_size = output_size  # vocabulary size of the other (target) language
        self.dropout_p = dropout_p
        self.max_length = max_length

        self.embedding = nn.Embedding(self.output_size, self.hidden_size)
        self.attn = nn.Linear(self.hidden_size * 2, self.max_length)
        self.attn_combine = nn.Linear(self.hidden_size * 2, self.hidden_size)
        self.dropout = nn.Dropout(self.dropout_p)
        self.gru = nn.GRU(self.hidden_size, self.hidden_size)
        self.out = nn.Linear(self.hidden_size, self.output_size)

    def forward(self, input, hidden, encoder_outputs):  # forward takes the decoder's inputs
        # The decoder input is a word of the target language: either the ground-truth target word,
        # or the highest-probability word from the previous step's output.
        # The initial hidden state is the encoder's last hidden output.
        embedded = self.embedding(input).view(1, 1, -1)
        embedded = self.dropout(embedded)
        # Concatenate the 256-dim embedded vector and the 256-dim hidden vector into a 512-dim vector,
        # map it with a linear layer to 10 dims (the max sentence length), and apply softmax
        # to obtain one attention weight per encoder position.
        attn_weight = F.softmax(
            self.attn(torch.cat((embedded[0], hidden[0]), 1)), dim=1
        )
        # torch.cat concatenates tensors; dim=1 means concatenating along dimension 1.
        # torch.bmm is a batched matrix multiplication; here the encoder outputs are weighted
        # by the attention weights:
        # bmm: (1,1,10) x (1,10,256), weights * vectors, giving the attention (context) vector.
        # unsqueeze inserts a dimension of size 1 (reshapes the tensor).
        attn_applied = torch.bmm(attn_weight.unsqueeze(0),
                                 encoder_outputs.unsqueeze(0))
        output = torch.cat((embedded[0], attn_applied[0]), 1)
        output = self.attn_combine(output).unsqueeze(0)
        output = F.relu(output)
        output, hidden = self.gru(output, hidden)

        output = F.log_softmax(self.out(output[0]), dim=1)
        return output, hidden, attn_weight

    def initHidden(self):
        return torch.zeros(1, 1, self.hidden_size, device=device)
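
A quick sketch of how a single decoding step could be driven with dummy inputs; the vocabulary size and token index here are assumptions for illustration only.

hidden_size, output_size = 256, 5000   # output_size = target-language vocabulary size (illustrative)
decoder = AttnDecoderRNN(hidden_size, output_size).to(device)

decoder_input = torch.tensor([[0]], device=device)        # e.g. the SOS token index
decoder_hidden = decoder.initHidden()                      # in practice, the encoder's last hidden state
encoder_outputs = torch.zeros(MAX_LENGTH, hidden_size, device=device)  # filled by the encoder

output, hidden, attn_weight = decoder(decoder_input, decoder_hidden, encoder_outputs)
print(output.shape, attn_weight.shape)   # torch.Size([1, 5000]) torch.Size([1, 10])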

 
