First, look at the RNN's computation formula:
output = new_state = tanh(input * W + state * U + B)
or, equivalently:
output = h1 = f(x1 * W + h0 * U + B)
A standard RNN cell has three trainable parameters W, U, and B, a tanh activation, and two input tensors: the input x1 and the hidden state h0.
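The formula above can be sketched in plain NumPy. This is only an illustrative sketch, not TensorFlow's implementation; the name `rnn_step` and the toy shapes are made up for the example:

```python
import numpy as np

def rnn_step(x, h_prev, W, U, b):
    """One step of a vanilla RNN: the output and the new state are the same tensor."""
    h_new = np.tanh(x @ W + h_prev @ U + b)
    return h_new, h_new

# toy shapes: batch 2, input dim 3, hidden/output size 4
rng = np.random.default_rng(0)
x  = rng.standard_normal((2, 3))
h0 = np.zeros((2, 4))          # initial state, all zeros
W  = rng.standard_normal((3, 4))
U  = rng.standard_normal((4, 4))
b  = np.zeros(4)               # bias initialized to zero

output, h1 = rnn_step(x, h0, W, U, b)
print(output.shape)  # (2, 4)
```

Since h0 and b are zero here, the first step reduces to tanh(x @ W), which is easy to check by hand.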
Let's walk through some code:
output_size = 10
batch_size = 32
cell = tf.nn.rnn_cell.BasicRNNCell(num_units=output_size)
input = tf.placeholder(dtype=tf.float32,shape=[batch_size,150])
h0 = cell.zero_state(batch_size=batch_size,dtype=tf.float32)
output, h1 = cell(input, h0)
The code above is the simplest way to define an RNN cell, but what exactly does each argument mean? We know a basic RNN cell has three trainable parameters W, U, B and two input tensors, so when constructing the RNN we have to pin down every parameter's dimensions. In the six lines of code above, which argument corresponds to which parameter? The figure below gives the answer directly.
Note: n in the figure above denotes the input dimension dim.
Combining the figure with the code, we can see:
First: num_units=output_size in line 3 of the code tells us that the final number of output classes is output_size (for example, the 10 possible digits), and that the second dimension of W is output_size;
Second: shape=[batch_size, 150] in line 4 of the code determines the shapes of all the remaining parameters.
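Given those two facts, the remaining shapes follow mechanically from the formula. A NumPy shape check (numbers taken from the snippet above; the arrays are just zero placeholders) makes the bookkeeping concrete:

```python
import numpy as np

batch_size, dim, output_size = 32, 150, 10  # matches the snippet above

x  = np.zeros((batch_size, dim))           # input: [32, 150]
W  = np.zeros((dim, output_size))          # W:     [150, 10]
h0 = np.zeros((batch_size, output_size))   # h0:    [32, 10]
U  = np.zeros((output_size, output_size))  # U:     [10, 10]
B  = np.zeros(output_size)                 # B:     [10]

out = np.tanh(x @ W + h0 @ U + B)
print(out.shape)  # (32, 10)
```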
Verification
Verifying from the computed dimensions
output_size = 20
batch_size = 64
cell = tf.nn.rnn_cell.BasicRNNCell(num_units=output_size)
print(cell.output_size)
input = tf.placeholder(dtype=tf.float32,shape=[batch_size,150])
print('input: ',input)
h0 = cell.zero_state(batch_size=batch_size,dtype=tf.float32)
print('h0: ', h0)
output,h1 = cell(input,h0)
print('output: ',output)
print('h1: ', h1)
output_size = 20  # number of classes
batch_size = 64   # batch size
cell = tf.nn.rnn_cell.BasicRNNCell(num_units=output_size)
input = tf.placeholder(dtype=tf.float32, shape=[batch_size, 150])  # input shape: [64, 150]
h0 = cell.zero_state(batch_size=batch_size, dtype=tf.float32)      # hidden state shape: [64, 20]
output, h1 = cell(input, h0)  # output shape: [64, 20]; new hidden state shape: [64, 20]
W: 150 x 20, U: 20 x 20, B: 20
Applying the same inference to the very first snippet (batch_size=32, output_size=10), the dimensions are: input: [32, 150]; W: [150, 10]; h0: [32, 10]; U: [10, 10]; B: [10]; so the final output shape should be [32, 10].
Printed results:
20
input: Tensor("Placeholder:0", shape=(64, 150), dtype=float32)
h0: Tensor("BasicRNNCellZeroState/zeros:0", shape=(64, 20), dtype=float32)
output: Tensor("basic_rnn_cell/Tanh:0", shape=(64, 20), dtype=float32)
h1: Tensor("basic_rnn_cell/Tanh:0", shape=(64, 20), dtype=float32)
Verifying from the computed values
import tensorflow as tf
import numpy as np
output_size = 4
batch_size = 3
dim = 5
cell = tf.nn.rnn_cell.BasicRNNCell(num_units=output_size)
input = tf.placeholder(dtype=tf.float32, shape=[batch_size, dim])
h0 = cell.zero_state(batch_size=batch_size, dtype=tf.float32)
output, h1 = cell(input, h0)
x = np.array([[1, 2, 1, 1, 1], [2, 0, 0, 1, 1], [2, 1, 0, 1, 0]])
print('x: ',x.shape)
weights = cell.weights[0]  # the cell's kernel, shape (9, 4); see the notes below
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    o, h, w = sess.run([output, h1, weights], feed_dict={input: x})
    print('output: ', o, 'shape:', o.shape)
    print('h1:', h, 'shape: ', h.shape)
    print("weights:")
    print(w)
state = np.zeros(shape=(3, 4))                  # h0, shape (3, 4)
all_input = np.concatenate((x, state), axis=1)  # [x, h0], shape (3, 9)
result = np.tanh(np.matmul(all_input, w))       # the bias is zero, so this is the full formula
print('result:')
print(result)
From the code above: input: [3, 5]; W: [5, 4]; h0: [3, 4]; U: [4, 4]; B: [4]; so the final output shape should be [3, 4].
Notes:
1. In its implementation, TensorFlow fuses W and U into a single kernel matrix and likewise concatenates input and h0 into a single matrix, which is why the weights have shape (9, 4);
2. The bias is initialized to zero here.
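Note 1 can be verified directly: multiplying the concatenated [x, h0] by W stacked on top of U gives exactly x * W + h0 * U (a block matrix identity). A small NumPy sketch with illustrative random values:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal((3, 5))  # input, dim = 5
h = rng.standard_normal((3, 4))  # previous state, num_units = 4
W = rng.standard_normal((5, 4))
U = rng.standard_normal((4, 4))

# One fused kernel, as TensorFlow stores it: W stacked on U
kernel = np.concatenate([W, U], axis=0)           # shape (9, 4)
fused  = np.concatenate([x, h], axis=1) @ kernel  # shape (3, 4)
split  = x @ W + h @ U                            # the unfused formula

print(np.allclose(fused, split))  # True
```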
Results:
weights:
[[ 0.590749 0.31745368 -0.27975678 0.33500886]
[-0.02256793 -0.34533614 -0.09014118 -0.5189797 ]
[-0.24466929 0.17519772 0.20553339 -0.25595042]
[-0.48579523 0.67465043 0.62406075 -0.32061592]
[-0.0713594 0.3825792 0.6132684 0.00536895]
[ 0.43795645 0.55633724 0.31295568 -0.37173718]
[ 0.6170727 0.14996111 -0.321027 -0.624057 ]
[ 0.42747557 0.4424585 -0.59979784 0.23592204]
[-0.0294565 0.3372593 -0.14695019 0.07108325]]
output:
[[-0.2507479 0.69584984 0.7542856 -0.8549179 ]
[ 0.5541449 0.9344188 0.5900975 0.3405997 ]
[ 0.5870382 0.74615407 -0.0255884 -0.16797088]]
h1:
[[-0.2507479 0.69584984 0.7542856 -0.8549179 ]
[ 0.5541449 0.9344188 0.5900975 0.3405997 ]
[ 0.5870382 0.74615407 -0.0255884 -0.16797088]]
result:
[[-0.25074791 0.69584978 0.75428552 -0.85491791]
[ 0.55414493 0.93441886 0.59009744 0.34059968]
[ 0.58703823 0.74615404 -0.02558841 -0.16797091]]