A simple Recurrent Neural Network example:
import numpy as np

X = [1, 2]                 # input sequence, one scalar per time step
state = [0.0, 0.0]         # initial hidden state h_0

# Cell parameters: recurrent weights, input weights, and bias
weight_cell_state = np.array([[0.1, 0.2], [0.3, 0.4]])
weight_cell_input = np.array([0.5, 0.6])
bias_cell = np.array([0.1, -0.1])

# Output-layer parameters
weight_output = np.array([[1.0], [2.0]])
bias_output = np.array([0.1])

for i in range(len(X)):
    # h_t = tanh(h_{t-1} . W_state + x_t * w_input + b_cell)
    before_activation = np.dot(state, weight_cell_state) + X[i] * weight_cell_input + bias_cell
    state = np.tanh(before_activation)
    # y_t = h_t . W_out + b_out
    final_output = np.dot(state, weight_output) + bias_output
    print("before_activation: " + str(before_activation))
    print("state: " + str(state))
    print("final_output: " + str(final_output))
Output:
before_activation: [ 0.6 0.5]
state: [ 0.53704957 0.46211716]
final_output: [ 1.56128388]
before_activation: [ 1.2923401 1.39225678]
state: [ 0.85973818 0.88366641]
final_output: [ 2.72707101]
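The per-step update above can also be factored into a small reusable function. Below is a minimal sketch (the helper name `rnn_step` and the lowercase parameter names are our own, not from the original code); it reproduces the same final outputs as the loop above:

```python
import numpy as np

def rnn_step(state, x, w_state, w_input, b_cell):
    # One RNN cell update: h_t = tanh(h_{t-1} . W_state + x_t * w_input + b_cell)
    return np.tanh(np.dot(state, w_state) + x * w_input + b_cell)

# Same parameters as the example above
w_state = np.array([[0.1, 0.2], [0.3, 0.4]])
w_input = np.array([0.5, 0.6])
b_cell = np.array([0.1, -0.1])
w_out = np.array([[1.0], [2.0]])
b_out = np.array([0.1])

state = np.array([0.0, 0.0])
for x in [1, 2]:
    state = rnn_step(state, x, w_state, w_input, b_cell)
    print(np.dot(state, w_out) + b_out)
```

Separating the cell update from the output projection makes it clear that only the hidden state is carried between time steps, while the output layer is applied independently at each step.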
Reference
- Zheng Zeyu, Liang Bowen, Gu Siyu. TensorFlow实战Google深度学习框架 (2nd ed.). Publishing House of Electronics Industry, 2018.