Tomato's Study Notes: Seq2Seq

Contents

  1. Introduction
  2. Recurrent Neural Networks and Their Variants
  3. Seq2Seq

1. Introduction

        So what is Seq2Seq good for? We want to feed in a sequence X = {x1, x2, ..., xn} and get back a sequence Y = {y1, y2, ..., ym}, where in general n ≠ m; translation is the classic example. Everything in these notes is framed around translation, so from here on I'll call the input sequence the source sentence (each xi being a word vector) and call Y the target sentence.

        So what kind of neural network could handle this? Traditional fully connected or convolutional networks like AlexNet clearly won't do: they aren't suited to sequence output at all. A recurrent neural network can emit a sequence, but the number of outputs is usually fixed by hand, which is inconvenient; on top of that, a plain RNN produces one output per input, so m <= n, which makes it hard to handle cases in translation where the target sentence has more words than the source sentence.

        Still, the RNN gives us the key idea: run the source sentence through an RNN and take the final hidden state as a feature vector representing the whole source sentence (if you suspect a single vector can't represent everything, congratulations, you've just thought your way to the Attention model), then train the decoder on the target sentence conditioned on that feature vector.
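This "final hidden state as a sentence vector" idea can be sketched in a few lines of PyTorch. The sizes below are made up purely for illustration; this is not the model built later in these notes:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
rnn = nn.RNN(input_size=8, hidden_size=16)  # hypothetical sizes

src = torch.randn(5, 1, 8)   # 5 source words, batch of 1, 8-dim word vectors
outputs, h_n = rnn(src)      # h_n has shape (num_layers, batch, hidden_size)

sentence_vec = h_n[-1, 0]    # one 16-dim vector standing in for the whole sentence
print(sentence_vec.shape)    # torch.Size([16])
```

For a single-layer unidirectional RNN this is the same vector as `outputs[-1, 0]`, the output at the last time step.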

2. Recurrent Neural Networks and Their Variants

        [Figure: a standard unrolled RNN, image omitted.] The figure was just something I grabbed off Baidu; basically any RNN diagram you find in a quick search will do, since this is the most basic recurrent network architecture.

        Now let me briefly talk about RNNs from the PyTorch side. PyTorch does provide nn.RNN and nn.RNNCell, but until you understand what they do internally they can be thoroughly confusing; before I figured it out I had a rough time (-_-||).

        Here's some pseudocode to make the internals click; of course it won't run as-is, since the details are elided:

import torch
import torch.nn as nn

def train(rnn, input, hidden):
    hidden = rnn.initHidden()  # usually all zeros; this is the initial hidden vector
    output, hidden = rnn(input, hidden)

# Unpacking what the RNN does internally:
class RNN(nn.Module):
    def __init__(self):
        ...  # omitted

    def forward(self, input, hidden):
        temp_output = input
        output = []
        seq_len, _ = input.shape
        for i in range(seq_len):
            temp_output, hidden = self.cell(temp_output, hidden)  # self.cell: an RNNCell
            output.append(temp_output)
        return output, hidden

    def initHidden(self):
        return torch.zeros(...)

class RNNCell(nn.Module):
    '''omitted'''
    def forward(self, input, hidden):
        # This part is yours to design: a single fully connected layer
        # gives the most basic RNN; add gates to control the information
        # flow and you get LSTM, GRU, and friends.
        ...
    
        
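For reference, here is the same manual loop written against PyTorch's real `nn.RNNCell` API (the sizes are hypothetical):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
cell = nn.RNNCell(input_size=8, hidden_size=16)  # one step of a vanilla RNN

seq = torch.randn(5, 8)      # 5 time steps, each an 8-dim input
hidden = torch.zeros(1, 16)  # the all-zero initial hidden vector

outputs = []
for t in range(seq.shape[0]):
    hidden = cell(seq[t].unsqueeze(0), hidden)  # one recurrence step
    outputs.append(hidden)

print(len(outputs), outputs[-1].shape)  # 5 torch.Size([1, 16])
```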

3.Seq2Seq

        The most common form of Seq2Seq is the Encoder-Decoder model, i.e. the one described in the introduction: encode the source sentence, then tie that encoding to the target sentence for training. As shown below:

        [Figure: Encoder-Decoder architecture, image omitted]

        Both the Encoder and the Decoder here are RNN models. If you'd rather not use RNNs for them, that's fine too, but you're heading straight into hard mode: go look up the Transformer. It expresses both Encoder and Decoder as matrix multiplications, so instead of stepping through one RNNCell at a time it processes the whole sequence at once, which is not only much faster but also a big jump in feature-extraction power.
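To make that "matrix multiplication" point concrete, here is generic scaled dot-product attention, the core operation of the Transformer, sketched with made-up sizes (this is not the code used below): every position is scored against every other position in a single matmul, with no step-by-step recurrence.

```python
import math
import torch
import torch.nn.functional as F

torch.manual_seed(0)
seq_len, d = 4, 8            # hypothetical sizes
Q = torch.randn(seq_len, d)  # queries
K = torch.randn(seq_len, d)  # keys
V = torch.randn(seq_len, d)  # values

scores = Q @ K.T / math.sqrt(d)      # all pairwise scores at once: (seq_len, seq_len)
weights = F.softmax(scores, dim=-1)  # each row sums to 1
out = weights @ V                    # every position attends in parallel

print(out.shape)  # torch.Size([4, 8])
```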

        The Encoder-Decoder code is as follows:

import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import random

trans_sentences = {'SOS glad to meet you EOS':'SOS ni hao EOS', 
                   'SOS nice to meet you EOS':'SOS ni hao EOS',
                   'SOS how old are you EOS':'SOS ni ji sui EOS',
                   'SOS you are handsome EOS':'SOS ni zhen shuai EOS',
                   'SOS you are so good EOS':'SOS ni zhen bang EOS',
                   'SOS you are nice EOS':'SOS ni zhen hao EOS',
                   'SOS I love you EOS':'SOS wo ai ni EOS',
                   'SOS I dislike you EOS':'SOS wo bu xi huan ni EOS',
                   'SOS I have an apple EOS':'SOS wo you yi ge ping guo EOS'}

def getTotalWords(trans_sentences):
    words = []
    for sen in trans_sentences.keys():
        for word in sen.split(' '):
            words.append(word)
    for sen in trans_sentences.values():
        for word in sen.split(' '):
            words.append(word)        
    return sorted(list(set(words)))


def sent2vector(sent):
    words = sent.split(' ')
    vec = []
    for word in words:
        vec.append(wordList.index(word))
    return torch.tensor(vec)


def label2word(label):
    words = []
    for i in label:
        words.append(wordList[i])
    return words


def output2word(output):
    output = output.squeeze()
    if(len(output.shape) == 1):
        maxv, maxi = torch.topk(output, 1, dim=0)
    elif(len(output.shape) == 2):
        maxv, maxi = torch.topk(output, 1, dim=1)
        
    return label2word(maxi)
    

wordList = getTotalWords(trans_sentences)

# simple encoder-decoder component
class EncoderLSTM(nn.Module):
    
    def __init__(self, input_size, hidden_size, output_size):
        super(EncoderLSTM, self).__init__()
        
        self.input_size = input_size
        self.hidden_size = hidden_size
        self.output_size = output_size
        
        self.embed = nn.Embedding(self.input_size, 10)
        self.lstm = nn.LSTM(10, self.hidden_size)
        self.fc = nn.Linear(self.hidden_size, self.output_size)
        
    def HiddenInit(self):
        return (torch.zeros(1, 1, self.hidden_size),
                torch.zeros(1, 1, self.hidden_size))
    
    def forward(self, input, hidden):
        x = self.embed(input)
        if(len(x.shape) != 3):
            x = x.view(x.shape[0], 1, -1)
        
        output, hidden = self.lstm(x, hidden)
        output = self.fc(output[-1,:])
        return output.view(1,1,-1), hidden
        
        
class DecoderLSTM(nn.Module):
    
    def __init__(self, input_size, hidden_size, output_size):
        super(DecoderLSTM, self).__init__()
        
        self.input_size = input_size
        self.hidden_size = hidden_size
        self.output_size = output_size
        
        self.embed = nn.Embedding(self.input_size, 10)
        self.cat2hid = nn.Linear(self.input_size + 10 , self.input_size)
        self.lstm = nn.LSTM(self.input_size, self.hidden_size)
        self.fc = nn.Linear(self.hidden_size, self.output_size)
        
    def HiddenInit(self):
        return (torch.zeros(1, 1, self.hidden_size),
                torch.zeros(1, 1, self.hidden_size))
    
    def forward(self, input, hidden, encoder_embed):
        x = self.embed(input)
        if(len(x.shape) == 2):
            x = x.view(x.shape[0], 1, -1)
        elif(len(x.shape) == 1):
            x = x.view(1, 1, -1)
        
        x = torch.cat((x, encoder_embed), dim = 2)
        x = self.cat2hid(x)
        output, hidden = self.lstm(x, hidden)
        output = self.fc(output)
        return output, hidden

input_size = len(wordList)
hidden_size = len(wordList)
output_size = len(wordList)

encoder = EncoderLSTM(input_size, hidden_size, output_size)
decoder = DecoderLSTM(input_size, hidden_size, output_size)

encoder_opt = optim.Adam(encoder.parameters(), lr=0.01)
decoder_opt = optim.Adam(decoder.parameters(), lr=0.01)
crit = nn.CrossEntropyLoss()

running_loss = 0
for epoch in range(100):
    for input in trans_sentences.keys():
        
        target = trans_sentences[input]
        target_label = sent2vector(target)
        input_vec = sent2vector(input)
        hidden = encoder.HiddenInit()
        decoder_input = target_label[0:1]
        
        encoder_opt.zero_grad()
        decoder_opt.zero_grad()
        
        encoder_embed, decoder_hidden = encoder(input_vec, hidden)
        
        ForcingLearning = random.random() < 0.5  # True: feed the ground-truth target next (teacher forcing); False: feed the model's own prediction
#         ForcingLearning = True
        loss = 0
        predict = []
        decoder_hidden = decoder.HiddenInit()
        if(ForcingLearning == True):
            for i in range(len(target_label) - 1):
                # decoder_output gives the next word; decoder_hidden carries the recurrent state
                decoder_output, decoder_hidden = decoder(decoder_input, decoder_hidden, encoder_embed)
                predict.append(output2word(decoder_output))
                loss += crit(decoder_output.view(decoder_output.shape[0], -1), target_label[i+1:i+2])
                decoder_input = target_label[i+1:i+2]
        else:
            for i in range(len(target_label) - 1):
                decoder_output, decoder_hidden = decoder(decoder_input, decoder_hidden, encoder_embed)
                predict.append(output2word(decoder_output))
                loss += crit(decoder_output.view(decoder_output.shape[0], -1), target_label[i+1:i+2])
                
                _, predict_i = torch.topk(decoder_output.squeeze(), 1, dim=0)
                decoder_input = predict_i

        loss.backward(retain_graph=True)
        encoder_opt.step()
        decoder_opt.step()
        
        running_loss += loss.data
        if(epoch % 25 == 24):
            print("——————————————————————————————————————")
            print("in epoch ( %d ), the average loss is ( %.5f )" % (epoch, running_loss / 25))
            print("The original sentences is [%s], translation is [%s]" % (input, predict))
            running_loss = 0

def predict(model, original):
    
    original = "SOS " + original + " EOS"
    input_vec = sent2vector(original)
    encoder, decoder = model
    decoder_input = sent2vector("SOS")
    
    with torch.no_grad():
        hidden = encoder.HiddenInit()
        encoder_embed, decoder_hidden = encoder(input_vec, hidden)
        
        predict = ["SOS"]
        while (predict[-1] != "EOS"):
                decoder_output, decoder_hidden = decoder(decoder_input, decoder_hidden, encoder_embed)
                predict.append(output2word(decoder_output)[0])
                _, predict_i = torch.topk(decoder_output.squeeze(), 1, dim=0)
                decoder_input = predict_i
        
        print(predict[1:-1])
    
predict((encoder, decoder), "ni ji sui")

# encoder-decoder with attention mechanism
class EncoderLSTM(nn.Module):
    
    def __init__(self, input_size, hidden_size, output_size):
        super(EncoderLSTM, self).__init__()
        
        self.input_size = input_size
        self.hidden_size = hidden_size
        self.output_size = output_size
        
        self.embed = nn.Embedding(self.input_size, 10)
        self.lstm = nn.LSTM(10, self.hidden_size)
        self.fc = nn.Linear(self.hidden_size, self.output_size)
        
    def HiddenInit(self):
        return (torch.zeros(1, 1, self.hidden_size),
                torch.zeros(1, 1, self.hidden_size))
    
    def forward(self, input, hidden):
        x = self.embed(input)
        if(len(x.shape) != 3):
            x = x.view(x.shape[0], 1, -1)
        
        output, hidden = self.lstm(x, hidden)
        output = self.fc(output)
        return output, hidden
        
        
class AttentionDecoderLSTM(nn.Module):
    
    def __init__(self, input_size, hidden_size, output_size, seq_size):
        super(AttentionDecoderLSTM, self).__init__()
        
        self.input_size = input_size
        self.hidden_size = hidden_size
        self.output_size = output_size
        self.seq_size = seq_size
        
        self.embed = nn.Embedding(self.input_size, 10)
        self.attn = nn.Linear(self.hidden_size * 2 + 10 , self.seq_size)
        self.attn_combine = nn.Linear(self.hidden_size + 10, self.input_size)
        
        self.lstm = nn.LSTM(self.input_size, self.hidden_size)
        self.fc = nn.Linear(self.hidden_size, self.output_size)
        
    def HiddenInit(self):
        return (torch.zeros(1, 1, self.hidden_size),
                torch.zeros(1, 1, self.hidden_size))
    
    def forward(self, input, hidden, encoder_embed):
        x = self.embed(input)
        if(len(x.shape) == 2):
            x = x.view(x.shape[0], 1, -1)
        elif(len(x.shape) == 1):
            x = x.view(1, 1, -1)
        
        input = x
        x = torch.cat((hidden[0], hidden[1], x), dim = 2)
        weight = F.softmax(self.attn(x), dim=2)
        
#         print(weight[:, :, 0:encoder_embed.shape[0]].shape, encoder_embed.transpose(0, 1).shape)
        attn_applied = torch.bmm(weight[:, :, 0:encoder_embed.shape[0]], encoder_embed.transpose(0, 1))
        x = torch.cat((attn_applied, input), dim=2)
        x = self.attn_combine(x)
        
        output, hidden = self.lstm(x, hidden)
        output = self.fc(output)
        return output, hidden

input_size = len(wordList)
hidden_size = len(wordList)
output_size = len(wordList)

encoder = EncoderLSTM(input_size, hidden_size, output_size)
decoder = AttentionDecoderLSTM(input_size, hidden_size, output_size, 6)

encoder_opt = optim.Adam(encoder.parameters(), lr=0.01)
decoder_opt = optim.Adam(decoder.parameters(), lr=0.01)
crit = nn.CrossEntropyLoss()

running_loss = 0
for epoch in range(1000):
    for input in trans_sentences.keys():
        
        target = trans_sentences[input]
        target_label = sent2vector(target)
        input_vec = sent2vector(input)
        hidden = encoder.HiddenInit()
        decoder_input = target_label[0:1]
        
        encoder_opt.zero_grad()
        decoder_opt.zero_grad()
        
        encoder_embed, decoder_hidden = encoder(input_vec, hidden)
        
#         ForcingLearning = True if random.random() < 0.5 else False # True means according to the target, False means according to the prediction
        ForcingLearning = True
        loss = 0
        predict = []
        decoder_hidden = decoder.HiddenInit()
        if(ForcingLearning == True):
            for i in range(len(target_label) - 1):
                # decoder_output gives the next word; decoder_hidden carries the recurrent state
                decoder_output, decoder_hidden = decoder(decoder_input, decoder_hidden, encoder_embed)
                predict.append(output2word(decoder_output))
                loss += crit(decoder_output.view(decoder_output.shape[0], -1), target_label[i+1:i+2])
                decoder_input = target_label[i+1:i+2]
        else:
            for i in range(len(target_label) - 1):
                decoder_output, decoder_hidden = decoder(decoder_input, decoder_hidden, encoder_embed)
                predict.append(output2word(decoder_output))
                loss += crit(decoder_output.view(decoder_output.shape[0], -1), target_label[i+1:i+2])
                
                _, predict_i = torch.topk(decoder_output.squeeze(), 1, dim=0)
                decoder_input = predict_i

        loss.backward(retain_graph=True)
        encoder_opt.step()
        decoder_opt.step()
        
        running_loss += loss.data
        if(epoch % 250 == 249):
            print("——————————————————————————————————————")
            print("in epoch ( %d ), the average loss is ( %.5f )" % (epoch, running_loss / 250))
            print("The original sentences is [%s], translation is [%s]" % (input, predict))
            running_loss = 0
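One subtlety in `AttentionDecoderLSTM.forward` above: `self.attn` always produces `seq_size` (here 6) attention weights, while the actual source sentence may be shorter, so the extra columns are sliced off before the `bmm`. A minimal shape check with hypothetical tensors:

```python
import torch

seq_size, src_len, hidden_size = 6, 5, 20  # made-up sizes
weight = torch.softmax(torch.randn(1, 1, seq_size), dim=2)  # fixed-length attention weights
encoder_embed = torch.randn(src_len, 1, hidden_size)        # one encoder output per source word

# (1, 1, src_len) x (1, src_len, hidden_size) -> (1, 1, hidden_size)
attn_applied = torch.bmm(weight[:, :, 0:src_len], encoder_embed.transpose(0, 1))
print(attn_applied.shape)  # torch.Size([1, 1, 20])
```

Note that after slicing, the kept weights no longer sum to exactly 1; renormalizing them, or masking before the softmax, would be the cleaner fix.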
