Function prototype
tf.nn.dynamic_rnn(
    cell,
    inputs,
    sequence_length=None,
    initial_state=None,
    dtype=None,
    parallel_iterations=None,
    swap_memory=False,
    time_major=False,
    scope=None
)
Parameter explanation:
- cell: an instance of RNNCell.
- inputs: the RNN input.
  - If time_major == False (the default), a Tensor of shape [batch_size, max_time, input_size], or a nested tuple of such elements.
  - If time_major == True, a Tensor of shape [max_time, batch_size, input_size], or a nested tuple of such elements.
- sequence_length: (optional) an int32/int64 vector of size [batch_size]. When the index of the current time step exceeds a sequence's actual length, that step is not computed: the RNN state is copied over from the previous time step, and the output for that step is all zeros.
- initial_state: (optional) the initial state of the RNN. If cell.state_size is an integer (a single-layer RNNCell), this must be a tensor of the appropriate type with shape [batch_size, cell.state_size]. If cell.state_size is a tuple (a multi-layer RNNCell such as MultiRNNCell), it should be a tuple of tensors with shapes [batch_size, s] for s in cell.state_size.
- time_major: the shape format of the inputs and outputs tensors. If True, these tensors must be (and will be) [max_time, batch_size, depth]; if False, they must be (and will be) [batch_size, max_time, depth]. That is, time_major=True means the first dimension of the input and output tensors is max_time; otherwise it is batch_size. Using time_major=True is more efficient because it avoids transposes at the beginning and end of the RNN computation. However, most TensorFlow data is batch-major, so by default this function accepts inputs and emits outputs in batch-major form. A conversion sketch follows this list.
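As an aside, here is a minimal sketch (not part of the original example; the sizes are hypothetical) of feeding dynamic_rnn in time-major form by transposing a batch-major tensor:

import tensorflow as tf

# Hypothetical sizes for illustration: 10 steps, 8 input features.
batch_major = tf.placeholder(tf.float32, [None, 10, 8])  # [batch_size, max_time, input_size]

# Swap the first two axes: -> [max_time, batch_size, input_size].
inputs_tm = tf.transpose(batch_major, perm=[1, 0, 2])

cell = tf.contrib.rnn.BasicRNNCell(num_units=16)
# With time_major=True, outputs also uses the [max_time, batch_size, depth] layout.
outputs, state = tf.nn.dynamic_rnn(cell, inputs_tm, dtype=tf.float32, time_major=True)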
Return value:
A pair (outputs, state), where:
- outputs: the RNN output Tensor.
  - If time_major == False (the default), a Tensor of shape [batch_size, max_time, cell.output_size].
  - If time_major == True, a Tensor of shape [max_time, batch_size, cell.output_size].
- state: the final state.
  - In the general case, state has shape [batch_size, cell.state_size].
  - If the cells are LSTMCells, state is a tuple containing one LSTMStateTuple per cell; each LSTMStateTuple holds a c and an h tensor, so per layer the state can be viewed as [2, batch_size, cell.state_size]. A short access sketch follows this list.
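For the LSTM case, a minimal sketch (hypothetical sizes, a single layer) of how the returned state is accessed:

import tensorflow as tf

X = tf.placeholder(tf.float32, [None, 10, 8])
lstm_cell = tf.contrib.rnn.BasicLSTMCell(num_units=16)
outputs, state = tf.nn.dynamic_rnn(lstm_cell, X, dtype=tf.float32)

# For a single LSTM layer, state is an LSTMStateTuple: state.c is the cell
# (long-term) state and state.h is the hidden (short-term) state, each of
# shape [batch_size, 16].
c_state, h_state = state.c, state.h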
Example walkthrough
import tensorflow as tf
import numpy as np

n_steps = 2
n_inputs = 3
n_neurons = 5  # i.e. the hidden_size

X = tf.placeholder(tf.float32, [None, n_steps, n_inputs])
basic_cell = tf.contrib.rnn.BasicRNNCell(num_units=n_neurons)

seq_length = tf.placeholder(tf.int32, [None])
outputs, states = tf.nn.dynamic_rnn(basic_cell, X, dtype=tf.float32,
                                    sequence_length=seq_length)

init = tf.global_variables_initializer()

X_batch = np.array([
    # step 0     step 1
    [[0, 1, 2], [9, 8, 7]],  # instance 1
    [[3, 4, 5], [0, 0, 0]],  # instance 2 (padded with a zero vector)
    [[6, 7, 8], [6, 5, 4]],  # instance 3
    [[9, 0, 1], [3, 2, 1]],  # instance 4
])
seq_length_batch = np.array([2, 1, 2, 2])

with tf.Session() as sess:
    init.run()
    outputs_val, states_val = sess.run(
        [outputs, states], feed_dict={X: X_batch, seq_length: seq_length_batch})
    print("outputs_val.shape:", outputs_val.shape, "states_val.shape:", states_val.shape)
    print("outputs_val:", outputs_val, "states_val:", states_val)
Output:
outputs_val.shape: (4, 2, 5) states_val.shape: (4, 5)
outputs_val:
[[[ 0.53073734 -0.61281306 -0.5437517 0.7320347 -0.6109526 ]
[ 0.99996936 0.99990636 -0.9867181 0.99726075 -0.99999976]]
[[ 0.9931584 0.5877845 -0.9100412 0.988892 -0.9982337 ]
[ 0. 0. 0. 0. 0. ]]
[[ 0.99992317 0.96815354 -0.985101 0.9995968 -0.9999936 ]
[ 0.99948144 0.9998127 -0.57493806 0.91015154 -0.99998355]]
[[ 0.99999255 0.9998929 0.26732785 0.36024097 -0.99991137]
[ 0.98875254 0.9922327 0.6505734 0.4732064 -0.9957567 ]]]
states_val:
[[ 0.99996936 0.99990636 -0.9867181 0.99726075 -0.99999976]
[ 0.9931584 0.5877845 -0.9100412 0.988892 -0.9982337 ]
[ 0.99948144 0.9998127 -0.57493806 0.91015154 -0.99998355]
[ 0.98875254 0.9922327 0.6505734 0.4732064 -0.9957567 ]]
The RNN network built by the code above is shown in the figure below. In that figure, ellipses represent tensors and rectangles represent RNN cells.
First, time_major in tf.nn.dynamic_rnn() defaults to False, so the input X should be a tensor of shape [batch_size, step, input_size] = [4, 2, 3]. Note that we are calling BasicRNNCell here, so there is only a single recurrent layer. outputs is the output of the last layer at every step, and its shape is [batch_size, step, n_neurons] = [4, 2, 5]. states is the output of each layer at its last valid step. Since the network in this example has only one hidden layer, states is simply that layer's output at the last valid step, so its shape does not depend on step at all: our X consists of 4 instances and the hidden layer has n_neurons = 5 neurons, so states has shape [batch_size, n_neurons] = [4, 5]. Finally, inspecting the data, each row of states is exactly the row of outputs at that instance's last valid step (for instance 2, whose sequence length is 1, that is step 0), as the sketch below verifies.
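To make this concrete, here is a minimal numpy sketch (reusing outputs_val, states_val, and seq_length_batch from the run above) that gathers each instance's last valid step from outputs_val:

import numpy as np

# For each instance i, states_val[i] should equal the output at its last
# valid step, i.e. outputs_val[i, seq_length_batch[i] - 1, :].
idx = np.arange(len(seq_length_batch))
last_valid = outputs_val[idx, seq_length_batch - 1, :]
print(np.allclose(last_valid, states_val))  # expected: True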
Next we move on to the multi-layer case, here with three hidden layers. Note that we are still using BasicRNNCell:
import tensorflow as tf
import numpy as np

n_steps = 2
n_inputs = 3
n_neurons = 5
n_layers = 3

X = tf.placeholder(tf.float32, [None, n_steps, n_inputs])
seq_length = tf.placeholder(tf.int32, [None])

layers = [tf.contrib.rnn.BasicRNNCell(num_units=n_neurons,
                                      activation=tf.nn.relu)
          for layer in range(n_layers)]
multi_layer_cell = tf.contrib.rnn.MultiRNNCell(layers)
outputs, states = tf.nn.dynamic_rnn(multi_layer_cell, X, dtype=tf.float32,
                                    sequence_length=seq_length)

init = tf.global_variables_initializer()

X_batch = np.array([
    # step 0     step 1
    [[0, 1, 2], [9, 8, 7]],  # instance 1
    [[3, 4, 5], [0, 0, 0]],  # instance 2 (padded with a zero vector)
    [[6, 7, 8], [6, 5, 4]],  # instance 3
    [[9, 0, 1], [3, 2, 1]],  # instance 4
])
seq_length_batch = np.array([2, 1, 2, 2])

with tf.Session() as sess:
    init.run()
    outputs_val, states_val = sess.run(
        [outputs, states], feed_dict={X: X_batch, seq_length: seq_length_batch})
    # Note: this first print shows the graph tensors themselves (with their
    # static shapes), not outputs_val.shape / states_val.shape.
    print("outputs_val.shape:", outputs, "states_val.shape:", states)
    print("outputs_val:", outputs_val, "states_val:", states_val)
Output:
outputs_val.shape:
Tensor("rnn/transpose_1:0", shape=(?, 2, 5), dtype=float32)
states_val.shape:
(<tf.Tensor 'rnn/while/Exit_3:0' shape=(?, 5) dtype=float32>,
<tf.Tensor 'rnn/while/Exit_4:0' shape=(?, 5) dtype=float32>,
<tf.Tensor 'rnn/while/Exit_5:0' shape=(?, 5) dtype=float32>)
outputs_val:
[[[0. 0. 0. 0. 0. ]
[0. 0.18740742 0. 0.2997518 0. ]]
[[0. 0.07222144 0. 0.11551574 0. ]
[0. 0. 0. 0. 0. ]]
[[0. 0.13463384 0. 0.21534224 0. ]
[0.03702604 0.18443246 0. 0.34539366 0. ]]
[[0. 0.54511094 0. 0.8718864 0. ]
[0.5382122 0. 0.04396425 0.4040263 0. ]]]
states_val:
(array([[0. , 0.83723307, 0. , 0. , 2.8518028 ],
[0. , 0.1996038 , 0. , 0. , 1.5456247 ],
[0. , 1.1372368 , 0. , 0. , 0.832613 ],
[0. , 0.7904129 , 2.4675028 , 0. , 0.36980057]],
dtype=float32),
array([[0.6524607 , 0. , 0. , 0. , 0. ],
[0.25143963, 0. , 0. , 0. , 0. ],
[0.5010576 , 0. , 0. , 0. , 0. ],
[0. , 0.3166597 , 0.4545995 , 0. , 0. ]],
dtype=float32),
array([[0. , 0.18740742, 0. , 0.2997518 , 0. ],
[0. , 0.07222144, 0. , 0.11551574, 0. ],
[0.03702604, 0.18443246, 0. , 0.34539366, 0. ],
[0.5382122 , 0. , 0.04396425, 0.4040263 , 0. ]],
dtype=float32))
The multi-layer RNN network is shown in the figure below.
As we said, outputs is the output of the last layer, i.e. a tensor of shape [batch_size, step, n_neurons] = [4, 2, 5]. states holds the last-valid-step output of every layer, i.e. a tuple of three tensors, each of shape [batch_size, n_neurons] = [4, 5]. Inspecting the data again, the last array in states is exactly the output of outputs at each instance's last valid step; the structural check below makes this explicit.
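A minimal structural check (reusing the values from the multi-layer run above):

import numpy as np

print(len(states_val))       # 3: one state tensor per layer
print(states_val[-1].shape)  # (4, 5), i.e. [batch_size, n_neurons]

# As in the single-layer case, the top layer's final state equals outputs_val
# gathered at each instance's last valid step.
idx = np.arange(len(seq_length_batch))
print(np.allclose(states_val[-1], outputs_val[idx, seq_length_batch - 1, :]))  # expected: True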
Next we look at the case where the cell factory is built from BasicLSTMCell, again only in the multi-layer setting. All we need to do is replace BasicRNNCell in the script above with BasicLSTMCell, as sketched below.
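A minimal sketch of the changed lines (assuming everything else in the multi-layer script, including the relu activation, is kept as-is):

# Only the cell class changes; the rest of the multi-layer script stays the same.
layers = [tf.contrib.rnn.BasicLSTMCell(num_units=n_neurons,
                                       activation=tf.nn.relu)
          for layer in range(n_layers)]
multi_layer_cell = tf.contrib.rnn.MultiRNNCell(layers)

Running the modified script prints: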
outputs_val.shape:
Tensor("rnn/transpose_1:0", shape=(?, 2, 5), dtype=float32)
states_val.shape:
(LSTMStateTuple(c=<tf.Tensor 'rnn/while/Exit_3:0' shape=(?, 5) dtype=float32>,
h=<tf.Tensor 'rnn/while/Exit_4:0' shape=(?, 5) dtype=float32>),
LSTMStateTuple(c=<tf.Tensor 'rnn/while/Exit_5:0' shape=(?, 5) dtype=float32>,
h=<tf.Tensor 'rnn/while/Exit_6:0' shape=(?, 5) dtype=float32>),
LSTMStateTuple(c=<tf.Tensor 'rnn/while/Exit_7:0' shape=(?, 5) dtype=float32>,
h=<tf.Tensor 'rnn/while/Exit_8:0' shape=(?, 5) dtype=float32>))
outputs_val:
[[[1.2949290e-04 0.0000000e+00 2.7623639e-04 0.0000000e+00 0.0000000e+00]
[9.4675866e-05 0.0000000e+00 2.0214770e-04 0.0000000e+00 0.0000000e+00]]
[[4.3100454e-06 4.2123037e-07 1.4312843e-06 0.0000000e+00 0.0000000e+00]
[0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00]]
[[0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00]
[0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00]]
[[0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00]
[0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00]]]
states_val:
(LSTMStateTuple(
c=array([[0. , 0. , 0.04676079, 0.04284539, 0. ],
[0. , 0. , 0.0115245 , 0. , 0. ],
[0. , 0. , 0. , 0. , 0. ],
[0. , 0. , 0. , 0. , 0. ]],
dtype=float32),
h=array([[0. , 0. , 0.00035096, 0.04284406, 0. ],
[0. , 0. , 0.00142574, 0. , 0. ],
[0. , 0. , 0. , 0. , 0. ],
[0. , 0. , 0. , 0. , 0. ]],
dtype=float32)),
LSTMStateTuple(
c=array([[0.0000000e+00, 1.0477135e-02, 4.9871090e-03, 8.2785974e-04,
0.0000000e+00],
[0.0000000e+00, 2.3306280e-04, 0.0000000e+00, 9.9445322e-05,
5.9535629e-05],
[0.0000000e+00, 0.0000000e+00, 0.0000000e+00, 0.0000000e+00,
0.0000000e+00],
[0.0000000e+00, 0.0000000e+00, 0.0000000e+00, 0.0000000e+00,
0.0000000e+00]], dtype=float32),
h=array([[0.00000000e+00, 5.23016974e-03, 2.47756205e-03, 4.11730434e-04,
0.00000000e+00],
[0.00000000e+00, 1.16522635e-04, 0.00000000e+00, 4.97301044e-05,
2.97713632e-05],
[0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00],
[0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00]], dtype=float32)),
LSTMStateTuple(
c=array([[1.8937115e-04, 0.0000000e+00, 4.0442235e-04, 0.0000000e+00,
0.0000000e+00],
[8.6200516e-06, 8.4243663e-07, 2.8625946e-06, 0.0000000e+00,
0.0000000e+00],
[0.0000000e+00, 0.0000000e+00, 0.0000000e+00, 0.0000000e+00,
0.0000000e+00],
[0.0000000e+00, 0.0000000e+00, 0.0000000e+00, 0.0000000e+00,
0.0000000e+00]], dtype=float32),
h=array([[9.4675866e-05, 0.0000000e+00, 2.0214770e-04, 0.0000000e+00,
0.0000000e+00],
[4.3100454e-06, 4.2123037e-07, 1.4312843e-06, 0.0000000e+00,
0.0000000e+00],
[0.0000000e+00, 0.0000000e+00, 0.0000000e+00, 0.0000000e+00,
0.0000000e+00],
[0.0000000e+00, 0.0000000e+00, 0.0000000e+00, 0.0000000e+00,
0.0000000e+00]], dtype=float32)))
The LSTM network structure is shown in the figure below:
An LSTM cell has two states, $C_t$ and $h_t$, rather than only $h_t$ as in a plain RNN cell.
For a theoretical introduction to LSTM, see the blog post "LSTM理论知识讲解".
In TensorFlow, an LSTM cell's $C_t$ and $h_t$ are bundled together and called an LSTMStateTuple.
Therefore our states contains three LSTMStateTuples, one per layer, each representing that layer's output at its last valid step. Each tuple carries two pieces of information: $h_t$, the short-term memory, and $C_t$, the long-term memory; both have shape [batch_size, n_neurons] = [4, 5]. The $h_t$ in the last LSTMStateTuple of states is exactly the output of outputs at each instance's last valid step, as the sketch below confirms.
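As a final check, a minimal numpy sketch (reusing the values from the LSTM run above):

import numpy as np

# states_val[-1] is the top layer's LSTMStateTuple; its h field holds the
# short-term state, which should match outputs_val at the last valid step.
top = states_val[-1]
print(top.c.shape, top.h.shape)  # (4, 5) (4, 5)
idx = np.arange(len(seq_length_batch))
print(np.allclose(top.h, outputs_val[idx, seq_length_batch - 1, :]))  # expected: True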
Reference blog post: https://blog.csdn.net/junjun150013652/article/details/81331448