An easy-to-follow explanation of tf.nn.bidirectional_dynamic_rnn()


tf.nn.bidirectional_dynamic_rnn

tf.nn.bidirectional_dynamic_rnn(
    cell_fw,
    cell_bw,
    inputs,
    sequence_length=None,
    initial_state_fw=None,
    initial_state_bw=None,
    dtype=None,
    parallel_iterations=None,
    swap_memory=False,
    time_major=False,
    scope=None
)

Inputs

  • cell_fw: an RNNCell instance, used for the forward direction.
  • cell_bw: an RNNCell instance, used for the backward direction.
  • inputs: the RNN input, typically of shape [batch_size, seq_len, embedding_size].
  • sequence_length: optional. An int32/int64 vector of size [batch_size] giving the true length (number of time steps) of each input sample. dynamic_rnn can handle variable-length sequences this way, although in my experiments it made little difference; see the call sketch after this list.
  • initial_state_fw: optional. An initial state for the forward RNN, of shape [batch_size, cell_fw.state_size].
  • initial_state_bw: optional. An initial state for the backward RNN, of shape [batch_size, cell_bw.state_size].
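As a minimal sketch of how sequence_length is passed (the names seq_lens, cell_fw, cell_bw and inputs below are placeholders for illustration, not taken from the post):

seq_lens = tf.placeholder(tf.int32, [None])   # true length of each sample in the batch
outputs, output_states = tf.nn.bidirectional_dynamic_rnn(
    cell_fw, cell_bw, inputs,
    sequence_length=seq_lens,
    dtype=tf.float32)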

Outputs

A tuple (outputs, output_states), where:

  • outputs is a tuple (output_fw, output_bw) containing the forward and backward results; each element holds the outputs of every time step and has shape [batch_size, n_step, n_hidden].
  • output_states is a tuple (output_state_fw, output_state_bw). Each element contains only the state at the final time step of its direction (so there is no n_step dimension) and holds two tensors: C, the cell state at the last step (the top channel inside the LSTM, normally carried over to the next step), and H, the output at the last step. This can be verified from the code below; see also the unpacking sketch after this list.
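For reference, a small sketch of how these pieces are usually unpacked (assuming the variables output and final_state produced by the code below):

output_fw, output_bw = output                    # each [batch_size, n_step, n_hidden]
state_fw, state_bw = final_state                 # each an LSTMStateTuple(c, h)
c_fw, h_fw = state_fw.c, state_fw.h              # each [batch_size, n_hidden]
bi_output = tf.concat([output_fw, output_bw], 2) # [batch_size, n_step, 2*n_hidden]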

Adding an attention mechanism

Idea: take the final output H (since it has fused the information from all time steps), dot it with the output of every time step to obtain attention weights, and then use those weights to sum the per-step outputs into a single final vector; a sketch of this is filled in at the "# Attention" step of the code below.

Code verification

import tensorflow as tf

import numpy as np

tf.reset_default_graph()

# Bi-LSTM(Attention) Parameters
embedding_dim = 2
n_hidden = 5 # number of hidden units in one cell
n_step = 3   # every sentence below has exactly 3 words (missing in the original snippet)
n_class = 2  # binary labels: positive / negative (missing in the original snippet)

# 3-word sentences (so the sequence length is 3)
sentences = ["i love you", "he loves me", "she likes baseball", "i hate you", "sorry for that", "this is awful"]
labels = [1, 1, 1, 0, 0, 0]  # missing in the original snippet; assumed 1 = positive, 0 = negative, only used to build target_batch


word_list = " ".join(sentences).split()
word_list = list(set(word_list))
word_dict = {w: i for i, w in enumerate(word_list)}
vocab_size = len(word_dict)

input_batch = []
for sen in sentences:
    input_batch.append(np.asarray([word_dict[n] for n in sen.split()]))

target_batch = []
for out in labels:
    target_batch.append(np.eye(n_class)[out]) # ONE-HOT : To using Tensor Softmax Loss function

# LSTM Model
X = tf.placeholder(tf.int32, [None, n_step])
Y = tf.placeholder(tf.int32, [None, n_class])
out = tf.Variable(tf.random_normal([n_hidden * 2, n_class])) # output projection weights, [2*n_hidden, n_class]

embedding = tf.Variable(tf.random_uniform([vocab_size, embedding_dim]))
input = tf.nn.embedding_lookup(embedding, X) # [batch_size, len_seq(=3), embedding_dim(=2)]

lstm_fw_cell = tf.nn.rnn_cell.LSTMCell(n_hidden) # 5
lstm_bw_cell = tf.nn.rnn_cell.LSTMCell(n_hidden)

# output : (output_fw, output_bw), each [batch_size, len_seq, n_hidden]
# final_state : (state_fw, state_bw), each an LSTMStateTuple(c, h) of [batch_size, n_hidden]
output, final_state = tf.nn.bidirectional_dynamic_rnn(lstm_fw_cell,lstm_bw_cell, input, dtype=tf.float32)

# Attention
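# A minimal sketch of the attention idea described above (added here; it is not
# part of the original snippet, and the names bi_output / attn_query /
# attn_weights / context are made up for illustration). None of these tensors
# are fetched below, so the printed results are unchanged.
bi_output = tf.concat([output[0], output[1]], 2)                     # [batch_size, n_step, 2*n_hidden]
attn_query = tf.concat([final_state[0].h, final_state[1].h], 1)      # final fw/bw hidden states, [batch_size, 2*n_hidden]
attn_query = tf.expand_dims(attn_query, 2)                           # [batch_size, 2*n_hidden, 1]
attn_scores = tf.squeeze(tf.matmul(bi_output, attn_query), 2)        # dot product with every time step, [batch_size, n_step]
attn_weights = tf.nn.softmax(attn_scores, 1)                         # attention weights over the time steps
context = tf.squeeze(tf.matmul(tf.transpose(bi_output, [0, 2, 1]),
                               tf.expand_dims(attn_weights, 2)), 2)  # weighted sum of all steps, [batch_size, 2*n_hidden]
logits = tf.matmul(context, out)                                     # class scores, [batch_size, n_class]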

# Run the graph (no actual training step here; we only inspect the outputs)
with tf.Session() as sess:
    init = tf.global_variables_initializer()
    sess.run(init)
    #print(input_batch.shape)
    output, final_state = sess.run([output, final_state], feed_dict={X: input_batch, Y: target_batch})
    #print(output.shape())
    print("output_fw-----------------------")
    print(output[0])
    print("output_bw-----------------------")
    print(output[1])
    print("=-----------------*****************************************------------------=")
    print("output_fw_C-----------------------")
    print(final_state[0][0])
    print("output_fw_H-----------------------")
    print(final_state[0][1])
    print("output_bw_C-----------------------")
    print(final_state[1][0])
    print("output_bw_H-----------------------")
    print(final_state[1][1])
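    # Added check (not in the original): H equals the output at the "last"
    # processed step of each direction; for the backward pass this is the first
    # step in input order, because its outputs are reversed back to input order.
    print(np.allclose(output[0][:, -1, :], final_state[0][1]))  # forward:  True
    print(np.allclose(output[1][:, 0, :], final_state[1][1]))   # backward: True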


# Results
output_fw-----------------------
[[[ 0.00394321 -0.02351782 -0.03405523  0.00305696  0.00405202]
  [ 0.06117373 -0.08493181 -0.10548183  0.0173593   0.01642234]
  [ 0.09302159 -0.14045352 -0.18056425  0.02292664  0.02151979]]

 [[ 0.00897689 -0.0114818  -0.01073236  0.0026193   0.00251714]
  [-0.01570433 -0.01093796 -0.02816775 -0.00127712  0.00047153]
  [ 0.03564881 -0.08152001 -0.12945394  0.01309824  0.01412579]]

 [[ 0.0022017  -0.0556691  -0.09676469  0.00533017  0.00813764]
  [-0.02163241 -0.03916193 -0.09379555  0.00117761  0.0033054 ]
  [-0.02534943 -0.03482693 -0.09665176  0.00129206  0.00241509]]

 [[ 0.00394321 -0.02351782 -0.03405523  0.00305696  0.00405202]
  [ 0.0251001  -0.04374754 -0.05323521  0.00777274  0.00770803]
  [ 0.06088063 -0.10670137 -0.14363492  0.01596653  0.01687609]]

 [[-0.02074501 -0.03782491 -0.08010133  0.00106625  0.0042809 ]
  [-0.02030762 -0.02580994 -0.06796815  0.00064477  0.00200086]
  [ 0.01159763 -0.09142994 -0.1786155   0.01095188  0.01238727]]

 [[ 0.03423732 -0.04540066 -0.0511797   0.0099178   0.00961432]
  [ 0.03746802 -0.09071141 -0.12944394  0.01133813  0.0131082 ]
  [ 0.04090166 -0.07015887 -0.0894173   0.00954981  0.00913159]]]
output_bw-----------------------
[[[ 0.02337044  0.12136219  0.05673089 -0.01224817  0.0593852 ]
  [ 0.01874987  0.09703043  0.0713386  -0.05112801  0.09921131]
  [ 0.03311865  0.06603951  0.02200164 -0.0013612   0.04199017]]

 [[ 0.04622315  0.08167503 -0.02331962  0.02814881 -0.01469414]
  [ 0.06475978  0.1021239  -0.02712514  0.05594093 -0.01951348]
  [ 0.02701424  0.06634663  0.03368881 -0.01682826  0.05579737]]

 [[ 0.17932206  0.11006899 -0.17011283  0.17438392 -0.14428209]
  [ 0.13817243  0.08259486 -0.13932668  0.15030426 -0.13008778]
  [ 0.06183491  0.04734965 -0.05742453  0.07186026 -0.05271984]]

 [[ 0.03479569  0.09810638  0.01812333  0.01501928  0.02510414]
  [ 0.02493659  0.08395203  0.03107387 -0.00838013  0.04572592]
  [ 0.03311865  0.06603951  0.02200164 -0.0013612   0.04199017]]

 [[ 0.16409379  0.1282753  -0.13091424  0.17192362 -0.10831039]
  [ 0.09441089  0.10732475 -0.05974932  0.08499315 -0.04574324]
  [ 0.06810828  0.07835089 -0.01157835  0.05307872  0.00895631]]

 [[ 0.0615179   0.09249893 -0.00614712  0.01541615  0.02442793]
  [ 0.07529346  0.07565043 -0.03864583  0.07073426 -0.02094099]
  [ 0.01069495  0.01040417 -0.01082753  0.01262293 -0.00998173]]]
=-----------------*****************************************------------------=
output_fw_C-----------------------
[[ 0.18077272 -0.25065544 -0.2970943   0.05401518  0.04332165]
 [ 0.0666052  -0.14184129 -0.21055089  0.03050351  0.02707803]
 [-0.04982244 -0.06662414 -0.17368984  0.00284229  0.00523079]
 [ 0.11556551 -0.18880183 -0.23572075  0.03740995  0.03326564]
 [ 0.0214364  -0.15807846 -0.28311607  0.0270236   0.02515054]
 [ 0.08471472 -0.13929133 -0.17238948  0.01954564  0.01927523]]
output_fw_H-----------------------
[[ 0.09302159 -0.14045352 -0.18056425  0.02292664  0.02151979]
 [ 0.03564881 -0.08152001 -0.12945394  0.01309824  0.01412579]
 [-0.02534943 -0.03482693 -0.09665176  0.00129206  0.00241509]
 [ 0.06088063 -0.10670137 -0.14363492  0.01596653  0.01687609]
 [ 0.01159763 -0.09142994 -0.1786155   0.01095188  0.01238727]
 [ 0.04090166 -0.07015887 -0.0894173   0.00954981  0.00913159]]
output_bw_C-----------------------
[[ 0.04923753  0.29065222  0.10612877 -0.02204721  0.12080507]
 [ 0.09414361  0.17565773 -0.04700623  0.05542824 -0.02873464]
 [ 0.3784477   0.3159748  -0.2940823   0.28821698 -0.27351445]
 [ 0.07218795  0.23126517  0.03378158  0.02722929  0.05014303]
 [ 0.32902873  0.36530167 -0.22143884  0.27972797 -0.21702379]
 [ 0.13681822  0.22507632 -0.01161141  0.02861859  0.04536273]]
output_bw_H-----------------------
[[ 0.02337044  0.12136219  0.05673089 -0.01224817  0.0593852 ]
 [ 0.04622315  0.08167503 -0.02331962  0.02814881 -0.01469414]
 [ 0.17932206  0.11006899 -0.17011283  0.17438392 -0.14428209]
 [ 0.03479569  0.09810638  0.01812333  0.01501928  0.02510414]
 [ 0.16409379  0.1282753  -0.13091424  0.17192362 -0.10831039]
 [ 0.0615179   0.09249893 -0.00614712  0.01541615  0.02442793]]