Sequence to Sequence Learning with Neural Networks, Starting from the RNN

Sequence to Sequence Learning with Neural Networks is a paper published by Google in 2014 and one of the earlier works to use the Seq2Seq structure. It learns to map an input sequence to an output sequence of a different length and achieved very strong results on machine translation. The authors first point out that deep neural networks reach excellent performance on difficult learning tasks, but are not suited to mapping a sequence to a sequence of unknown length, as required by machine translation and speech recognition. The core idea of the model is to use a multi-layer LSTM (Long Short-Term Memory network) to map the input sequence to a fixed-dimensional vector, and then use another multi-layer LSTM to decode the target sequence from that vector. This is the Sequence to Sequence architecture, usually abbreviated Seq2Seq. The rest of this post starts from the basics: what is an RNN?

What is an RNN

An RNN (Recurrent Neural Network) is designed to handle sequential information better. It introduces a state variable to store past information, which is combined with the current input to determine the current output.
Take a language model as an example of sequential information: suppose the words of a text of length $T$ are $x_1, x_2, \dots, x_T$. Then, in this discrete time series, $x_t \ (1 \le t \le T)$ can be viewed as the sequence information at time step $t$.
The basic RNN model is shown below.

[Figure: basic RNN structure, one unit per time step, unrolled over time]
Each time step corresponds to one unit with two inputs: the sequence information at the current time step, $x_t$, and the state information computed at the previous time step, $h_{t-1}$. The operations inside A are typically a fully connected layer followed by an activation function.
$$
\begin{aligned}
h_t &= f_h(x_t, h_{t-1}) = \tanh(W_{hx}x_t + W_{hh}h_{t-1} + b_h) \\
y_t &= f_y(h_t) = \mathrm{softmax}(W_y h_t)
\end{aligned}
$$
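To make the two formulas above concrete, here is a minimal NumPy sketch of a single RNN time step. The dimensions, random weights, and variable names are illustrative assumptions, not part of the original post or paper.

```python
import numpy as np

# Toy sizes, chosen only for illustration.
input_size, hidden_size, vocab_size = 3, 4, 5

rng = np.random.default_rng(0)
W_hx = rng.standard_normal((hidden_size, input_size)) * 0.1   # input-to-hidden weights
W_hh = rng.standard_normal((hidden_size, hidden_size)) * 0.1  # hidden-to-hidden weights
b_h  = np.zeros(hidden_size)                                  # hidden bias
W_y  = rng.standard_normal((vocab_size, hidden_size)) * 0.1   # hidden-to-output weights

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def rnn_step(x_t, h_prev):
    """One step: h_t = tanh(W_hx x_t + W_hh h_{t-1} + b_h), y_t = softmax(W_y h_t)."""
    h_t = np.tanh(W_hx @ x_t + W_hh @ h_prev + b_h)
    y_t = softmax(W_y @ h_t)
    return h_t, y_t

# Unroll over a toy sequence of 6 random input vectors.
h = np.zeros(hidden_size)
for x in rng.standard_normal((6, input_size)):
    h, y = rnn_step(x, h)
```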
However, this simplest RNN structure has a limitation: when a sequence is long enough, it is hard to carry information from early time steps to later ones, as the figure below illustrates to some extent. The LSTM model was proposed to address this.
[Figure: information from early time steps fading over a long sequence in a vanilla RNN]

What is the LSTM model

An LSTM adds a value representing the cell's memory. Its remedy for the short-term memory problem is an internal mechanism called gates, which regulate the flow of information. The gates learn which data in the sequence is important to keep and which to discard, so relevant information can be carried along long sequences when making predictions. Nearly all state-of-the-art RNN-based results have been achieved with LSTMs or their variant, the GRU.
The structure diagram and formulas are as follows:
[Figure: LSTM cell with forget, update, and output gates]
$$
\begin{aligned}
\widetilde c^{<t>} &= \tanh(W_c[a^{<t-1>}, x^{<t>}] + b_c) \\
\Gamma_u &= \sigma(W_u[a^{<t-1>}, x^{<t>}] + b_u) \\
\Gamma_f &= \sigma(W_f[a^{<t-1>}, x^{<t>}] + b_f) \\
\Gamma_o &= \sigma(W_o[a^{<t-1>}, x^{<t>}] + b_o) \\
c^{<t>} &= \Gamma_u * \widetilde c^{<t>} + \Gamma_f * c^{<t-1>} \\
a^{<t>} &= \Gamma_o * \tanh(c^{<t>})
\end{aligned}
$$
The $a$ in the figure is the $h$ mentioned earlier, i.e. the hidden state. As the figure shows, the number of inputs and outputs grows from 2 to 3, with the extra value $C$ representing the memory; inside the unit there are now a forget gate, an update gate (input gate), and an output gate.
The forget, update, and output gates are computed the same way: each gate's weights are multiplied by $[a_{t-1}, x_t]$, a bias is added, and the result is passed through a sigmoid to give a value between 0 and 1 that acts as a proportion.
Starting from the extra input value $C$, two questions arise: how does $C$ update itself at each time step, and how does $C$ take effect?
[Figure: how the cell state $C$ is updated within one time step]

  1. First, multiply $C_{t-1}$ element-wise with $\Gamma_f$, "forgetting" part of the $C$ value passed in from the previous time step.
  2. Use the current time step's other two inputs, $x_t$ and $a_{t-1}$, to compute the candidate update $\widetilde C_t$.
  3. Scale the candidate by the update gate and add it to the "forgotten" $C_{t-1}$; the result is the current time step's value $C_t$.
  4. Activate $C_t$ with tanh to get a value between -1 and 1, then multiply it element-wise with the output gate. This gives the current time step's hidden state, which is both passed on to the next time step and used to compute the current output $y$ (a code sketch of one full step follows this list).
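The four steps above can be put together into a small NumPy sketch of one LSTM time step. All dimensions and weights are made-up placeholders; in a trained network $W_c, W_u, W_f, W_o$ and the biases are learned.

```python
import numpy as np

input_size, hidden_size = 3, 4  # toy sizes for illustration

rng = np.random.default_rng(0)
def init():
    # Each weight matrix acts on the concatenation [a_{t-1}, x_t].
    return rng.standard_normal((hidden_size, hidden_size + input_size)) * 0.1

W_c, W_u, W_f, W_o = init(), init(), init(), init()
b_c = np.zeros(hidden_size); b_u = np.zeros(hidden_size)
b_f = np.zeros(hidden_size); b_o = np.zeros(hidden_size)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, a_prev, c_prev):
    concat = np.concatenate([a_prev, x_t])          # [a_{t-1}, x_t]
    c_tilde = np.tanh(W_c @ concat + b_c)           # candidate memory
    gamma_u = sigmoid(W_u @ concat + b_u)           # update (input) gate
    gamma_f = sigmoid(W_f @ concat + b_f)           # forget gate
    gamma_o = sigmoid(W_o @ concat + b_o)           # output gate
    c_t = gamma_u * c_tilde + gamma_f * c_prev      # steps 1-3: forget, then add the update
    a_t = gamma_o * np.tanh(c_t)                    # step 4: gated hidden state
    return a_t, c_t

a = np.zeros(hidden_size)
c = np.zeros(hidden_size)
for x in rng.standard_normal((6, input_size)):      # toy sequence of 6 steps
    a, c = lstm_step(x, a, c)
```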

Back to the paper

[Figure from the paper: the model reads the input sequence "A B C <EOS>" and generates the output sequence "W X Y Z <EOS>"]
The main idea of the model in the paper is to use a multi-layer LSTM to map the input sequence to a fixed-length vector, and then decode a sequence from that vector with another multi-layer LSTM. The part to the left of W is the encoder and the part to the right is the decoder; the decoder stops when it predicts the end-of-sentence symbol <EOS> (treated as a special word), which allows the model to generate sequences of variable length. As the figure shows, the input sequence ABC followed by the end-of-sequence symbol <EOS> is mapped to the fixed-length vector v, from which the output sequence WXYZ and the end-of-sequence symbol <EOS> are generated.
In the decoder, the output of each time step is fed back as the input of the next time step, which reflects a conditional probability formulation.
$$
P(y_1,\dots,y_{T'}) = \prod_{t=1}^{T'} P(y_t \mid v, y_1,\dots,y_{t-1})
$$
$$
P(y_1, y_2,\dots,y_{T'}) = P(y_1 \mid v)\, P(y_2 \mid v, y_1) \cdots P(y_{T'} \mid v, y_1, y_2,\dots,y_{T'-1})
$$
In this equation, each distribution $P(y_t \mid v, y_1,\dots,y_{t-1})$ is represented by a softmax over all words in the vocabulary.
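As a rough illustration of this factorization, the sketch below runs a toy greedy decoder: starting from the fixed-length vector $v$ as the initial hidden state, each step computes a softmax over the vocabulary and feeds the chosen word back in as the next input until <EOS> is produced. The single-layer cell, the vocabulary, and all sizes are invented for illustration; the paper itself uses a 4-layer LSTM and beam search rather than greedy decoding.

```python
import numpy as np

vocab = ["<EOS>", "W", "X", "Y", "Z"]                         # toy target vocabulary
vocab_size, hidden_size, embed_size = len(vocab), 8, 6

rng = np.random.default_rng(0)
E    = rng.standard_normal((vocab_size, embed_size)) * 0.1    # word embeddings
W_hx = rng.standard_normal((hidden_size, embed_size)) * 0.1
W_hh = rng.standard_normal((hidden_size, hidden_size)) * 0.1
W_y  = rng.standard_normal((vocab_size, hidden_size)) * 0.1

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def greedy_decode(v, max_len=10, eos=0):
    h, prev, out = v, eos, []            # condition on v; start from <EOS> by convention
    log_prob = 0.0
    for _ in range(max_len):
        h = np.tanh(W_hx @ E[prev] + W_hh @ h)
        p = softmax(W_y @ h)             # P(y_t | v, y_1, ..., y_{t-1})
        prev = int(np.argmax(p))         # greedily pick the most likely word
        log_prob += np.log(p[prev])
        if prev == eos:                  # stop when <EOS> is generated
            break
        out.append(vocab[prev])
    return out, log_prob

v = rng.standard_normal(hidden_size)     # stand-in for the encoder's fixed-length vector
print(greedy_decode(v))
```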
In addition, there are several key points:

  • The encoder and decoder use two different LSTMs
    This increases the number of model parameters at negligible computational cost, and it makes it natural to train LSTMs on multiple language pairs.
  • Feeding the input sequence in reverse order yields better results
    In the figure, to translate ABC into WXYZ, the input sequence should be CBA. The paper candidly admits it has no complete theoretical justification. The explanation offered is that reversing the input does not change the average distance between corresponding source and target words (distance here means the difference in time steps), but it does shorten the distance between the first few words of the source sentence and of the target sentence. In other words, it strengthens the link between the first source word A and the first target word W, while the cost of the increased distance between sentence-final words appears insignificant, so reversed input works better. Translating the opening words of a sentence more accurately improves the translation of the whole sentence.
  • Deep LSTMs outperform shallow ones; the paper uses a 4-layer LSTM
  • Beam search is applied in the decoder
    Suppose the vocabulary has 3 words, a, b, c, and the beam size is 2 (a code sketch follows this list).
  1. When generating the 1st word, keep the two words with the highest probability, say a and c; the current sequences are then a and c.
  2. When generating the 2nd word, combine each current sequence, a and c, with every word in the vocabulary to get 6 new sequences: aa, ab, ac, ca, cb, cc; then keep the 2 with the highest scores as the current sequences, say aa and cb.
  3. Repeat this process until the end symbol is reached, and finally output the 2 sequences with the highest scores.
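Below is a small sketch of this procedure on the toy vocabulary from the example (with an added <EOS> token so the search can terminate). The `step_probs` function is a placeholder standing in for the decoder's softmax $P(y_t \mid v, y_1,\dots,y_{t-1})$, and scores are accumulated as log-probabilities, which is a common implementation choice rather than something specified in the post.

```python
import numpy as np

vocab = ["a", "b", "c", "<EOS>"]
rng = np.random.default_rng(0)

def step_probs(prefix):
    """Placeholder for the decoder: probabilities of the next word given the prefix."""
    p = rng.random(len(vocab))
    return p / p.sum()

def beam_search(beam_size=2, max_len=5):
    beams = [([], 0.0)]                                   # (sequence, log-probability)
    for _ in range(max_len):
        candidates = []
        for seq, score in beams:
            if seq and seq[-1] == "<EOS>":                # finished sequences pass through
                candidates.append((seq, score))
                continue
            p = step_probs(seq)
            for i, word in enumerate(vocab):              # expand with every vocabulary word
                candidates.append((seq + [word], score + np.log(p[i])))
        # keep only the beam_size highest-scoring sequences
        beams = sorted(candidates, key=lambda sc: sc[1], reverse=True)[:beam_size]
        if all(seq[-1] == "<EOS>" for seq, _ in beams):   # stop once every beam has ended
            break
    return beams

for seq, score in beam_search():
    print(" ".join(seq), score)
```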

References

https://zhuanlan.zhihu.com/p/46981722
https://towardsdatascience.com/illustrated-guide-to-recurrent-neural-networks-79e5eb8049c9
https://www.cnblogs.com/zuotongbin/p/10698843.html
