Gated Recurrent Unit (GRU)

Outline

- Background
- GRU Network
- GRU vs. LSTM
- Experiment
- References

Background

A gated recurrent unit (GRU) was proposed by Cho et al. [2014] to make each recurrent unit adaptively capture dependencies over different time scales.

It addresses a well-known problem of the vanilla RNN: vanishing gradients. When a plain RNN is unrolled over many time steps, the backpropagated gradient is multiplied by the recurrent Jacobian at every step, so it can shrink exponentially and long-range dependencies become very hard to learn.

Example:
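As a toy illustration (not from the original post), the sketch below unrolls a scalar vanilla RNN, h_t = tanh(w * h_{t-1} + u * x_t), for 30 steps and accumulates dh_T/dh_0 by the chain rule. The constants w, u, and the input value are arbitrary choices; with |w| < 1 the gradient all but disappears.

import numpy as np

# Scalar vanilla RNN: h_t = tanh(w * h_{t-1} + u * x_t)
w, u, T = 0.5, 1.0, 30
h, hs = 0.0, []
for t in range(T):
    h = np.tanh(w * h + u * 0.1)  # constant input x_t = 0.1
    hs.append(h)

# Chain rule: dh_T/dh_0 = prod over t of w * (1 - h_t^2)
grad = 1.0
for h_t in hs:
    grad *= w * (1 - h_t ** 2)
print(grad)  # ~1e-10: the gradient has effectively vanished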

GRU Network
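At each time step the GRU computes an update gate z_t and a reset gate r_t from the current input x_t and the previous hidden state h_{t-1}, then uses them to interpolate between the old state and a candidate state. A standard formulation (bias terms omitted; some papers swap the roles of z_t and 1 - z_t):

\begin{aligned}
z_t &= \sigma(W_z x_t + U_z h_{t-1}) && \text{(update gate)} \\
r_t &= \sigma(W_r x_t + U_r h_{t-1}) && \text{(reset gate)} \\
\tilde{h}_t &= \tanh\!\left(W x_t + U (r_t \odot h_{t-1})\right) && \text{(candidate state)} \\
h_t &= (1 - z_t) \odot h_{t-1} + z_t \odot \tilde{h}_t && \text{(new state)}
\end{aligned}

A minimal NumPy sketch of a single step, transcribing the equations directly (the parameter names are mine):

import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def gru_step(x, h_prev, Wz, Uz, Wr, Ur, W, U):
    z = sigmoid(Wz @ x + Uz @ h_prev)            # update gate
    r = sigmoid(Wr @ x + Ur @ h_prev)            # reset gate
    h_tilde = np.tanh(W @ x + U @ (r * h_prev))  # candidate state
    return (1 - z) * h_prev + z * h_tilde        # blend old and candidate

When z_t is near 0 the unit simply copies its previous state forward, giving gradients a short path through time; when r_t is near 0 the candidate ignores the previous state, as if starting a fresh sequence.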

GRU vs. LSTM
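Both architectures gate the flow of information through time, but they differ in structure: the LSTM keeps a separate memory cell c_t controlled by input, forget, and output gates, while the GRU has no separate cell and only two gates (update and reset), exposing its full state as its output. The GRU therefore has fewer parameters per unit; empirically, Chung et al. [2014] found the two to perform comparably on sequence modeling tasks, with no clear overall winner.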

 

Code Example (TensorFlow 1.x): run a single step of an LSTM cell and a GRU cell on the same one-element input and print their outputs and states.

import tensorflow as tf

# Build one LSTM cell and one GRU cell with the same (arbitrary) hidden size.
lstm_cell = tf.nn.rnn_cell.BasicLSTMCell(num_units=4)
gru_cell = tf.nn.rnn_cell.GRUCell(num_units=4)

# One time step of input: batch size 1, feature size 1.
x = tf.constant([[1]], dtype=tf.float32)

state0_lstm = lstm_cell.zero_state(1, dtype=tf.float32)
output, state = lstm_cell(x, state0_lstm)

state0_gru = gru_cell.zero_state(1, dtype=tf.float32)
output2, state2 = gru_cell(x, state0_gru)

# Variables are created when the cells are called, so build the
# initializer afterwards.
init = tf.global_variables_initializer()

with tf.Session() as sess:
    sess.run(init)
    print(sess.run(output))   # LSTM output h
    print(sess.run(state))    # LSTM state: LSTMStateTuple(c, h)
    print(sess.run(output2))  # GRU output
    print(sess.run(state2))   # GRU state
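Note how the printed values reflect the architectural difference above: the LSTM's state is an LSTMStateTuple (c, h) whose h component equals output, while the GRU's state2 is a single tensor identical to output2, since the GRU exposes its full state as its output.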

Experiment

Chung et al. [2014] (reference 1) compare LSTM, GRU, and plain tanh units on polyphonic music and speech sequence modeling: both gated architectures clearly outperform the tanh RNN, while neither gated unit consistently dominates the other.

References

1. J. Chung, C. Gulcehre, K. Cho, and Y. Bengio. Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling. arXiv:1412.3555, 2014.
2. C. Gulcehre, K. Cho, R. Pascanu, and Y. Bengio. Learned-Norm Pooling for Deep Feedforward and Recurrent Neural Networks. ECML/PKDD, 2014.
3. S. Hochreiter and J. Schmidhuber. Long Short-Term Memory. Neural Computation, 9(8), 1997.

Reposted from: https://www.cnblogs.com/AcceptedLin/p/9778974.html
