The LSTM Model

Other references:

  • An Exploration of Applying LSTM Networks to the Stock Market  *****
  • Applying LSTM Models in Question-Answering Systems  ***
  • A Complete Roundup of LSTM Applications in Quantitative Trading (code + papers)  ***
  • Share the LSTM/RNN Application Cases You Know  ***** (covers a variety of concrete application scenarios)

In a traditional recurrent neural network, during the gradient back-propagation phase, the gradient signal can end up being multiplied a large number of times (as many as the number of timesteps) by the weight matrix associated with the connections between the neurons of the recurrent hidden layer. This means that the magnitude of the weights in the transition matrix can have a strong impact on the learning process.

If the weights in this matrix are small (or, more formally, if the leading eigenvalue of the weight matrix is smaller than 1.0), it can lead to a situation called vanishing gradients, where the gradient signal gets so small that learning either becomes very slow or stops working altogether. It also makes it harder to learn long-term dependencies in the data. Conversely, if the weights in this matrix are large (or, again, more formally, if the leading eigenvalue of the weight matrix is larger than 1.0), it can lead to a situation where the gradient signal is so large that learning diverges. This is often referred to as exploding gradients.
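The effect of the leading eigenvalue can be seen directly with a small NumPy sketch (the function name `signal_norm_after` and the choice of dimensions are my own, for illustration only): a vector repeatedly multiplied by a matrix rescaled to a given leading-eigenvalue magnitude either shrinks toward zero or blows up.

```python
import numpy as np

def signal_norm_after(num_steps, leading_eig):
    """Multiply a gradient-like vector by the same transition matrix
    num_steps times and return the norm of the result. The matrix is
    rescaled so the magnitude of its leading eigenvalue is leading_eig."""
    rng = np.random.default_rng(0)  # fixed seed: same base matrix each call
    n = 8
    W = rng.standard_normal((n, n))
    W *= leading_eig / np.max(np.abs(np.linalg.eigvals(W)))
    g = np.ones(n)
    for _ in range(num_steps):
        g = W @ g
    return np.linalg.norm(g)

print("leading eig 0.9:", signal_norm_after(100, 0.9))  # shrinks (vanishing)
print("leading eig 1.1:", signal_norm_after(100, 1.1))  # grows (exploding)
```

The two calls differ only in the rescaling factor, so the ratio of the two norms is exactly (0.9/1.1)^100: the same 100-step recurrence is either a vanishing or an exploding one depending solely on that eigenvalue.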

These issues are the main motivation behind the LSTM model, which introduces a new structure called a memory cell (see Figure 1 below). A memory cell is composed of four main elements: an input gate, a neuron with a self-recurrent connection (a connection to itself), a forget gate, and an output gate. The self-recurrent connection has a weight of 1.0 and ensures that, barring any outside interference, the state of a memory cell can remain constant from one timestep to the next. The gates serve to modulate the interactions between the memory cell and its environment. The input gate can allow an incoming signal to alter the state of the memory cell or block it. The output gate, in turn, can allow the state of the memory cell to affect other neurons or prevent it from doing so. Finally, the forget gate modulates the memory cell’s self-recurrent connection, allowing the cell to remember or forget its previous state as needed.
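The role of the weight-1.0 self-recurrent connection can be illustrated with a minimal NumPy sketch (a toy illustration of mine, using the cell-state update rule derived later in this section): with the input gate closed (i_t = 0) and the forget gate fully open (f_t = 1), the cell state survives arbitrarily many timesteps unchanged, no matter what candidate input arrives.

```python
import numpy as np

rng = np.random.default_rng(0)
c = np.array([0.5, -2.0, 3.0])   # initial cell state
c0 = c.copy()

for _ in range(1000):
    i_t, f_t = 0.0, 1.0          # input gate closed, forget gate fully open
    c_tilde = np.tanh(rng.standard_normal(3))  # arbitrary candidate input
    # Cell-state update: the state is carried through the weight-1.0
    # self-recurrent connection, so it passes through unchanged.
    c = i_t * c_tilde + f_t * c

print(np.allclose(c, c0))  # True: state unchanged after 1000 steps
```

This is exactly the mechanism that lets the gradient flow through many timesteps without the repeated-multiplication shrinkage described above.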

Figure 1: Illustration of an LSTM memory cell.

 

The equations below describe how a layer of memory cells is updated at every timestep t. In these equations:

  • x_t is the input to the memory cell layer at time t
  • W_i, W_f, W_c, W_o, U_i, U_f, U_c, U_o, and V_o are weight matrices
  • b_i, b_f, b_c, and b_o are bias vectors

First, we compute the values of i_t, the input gate activation, and \widetilde{C_t}, the candidate value for the states of the memory cells at time t:

(1) i_t = \sigma(W_i x_t + U_i h_{t-1} + b_i)

(2) \widetilde{C_t} = \tanh(W_c x_t + U_c h_{t-1} + b_c)

Second, we compute the value for f_t, the activation of the memory cells’ forget gates at time t:

(3) f_t = \sigma(W_f x_t + U_f h_{t-1} + b_f)

Given the input gate activation i_t, the forget gate activation f_t, and the candidate state value \widetilde{C_t}, we can compute C_t, the memory cells’ new state at time t:

(4) C_t = i_t * \widetilde{C_t} + f_t * C_{t-1}

With the new state of the memory cells, we can compute the value of their output gates and, subsequently, their outputs:

(5) o_t = \sigma(W_o x_t + U_o h_{t-1} + V_o C_t + b_o)

(6) h_t = o_t * \tanh(C_t)
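Equations (1)–(6) can be collected into a single forward step. The NumPy sketch below is a minimal, unoptimized rendering of the update (the function name `lstm_step` and the `params` dictionary layout are my own conventions; note it includes the V_o C_t peephole term from equation (5)):

```python
import numpy as np

def lstm_step(x_t, h_prev, c_prev, params):
    """One LSTM timestep implementing equations (1)-(6) above."""
    W_i, U_i, b_i = params["i"]
    W_f, U_f, b_f = params["f"]
    W_c, U_c, b_c = params["c"]
    W_o, U_o, V_o, b_o = params["o"]   # output gate has the peephole V_o
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    i_t = sigmoid(W_i @ x_t + U_i @ h_prev + b_i)               # (1) input gate
    c_tilde = np.tanh(W_c @ x_t + U_c @ h_prev + b_c)           # (2) candidate state
    f_t = sigmoid(W_f @ x_t + U_f @ h_prev + b_f)               # (3) forget gate
    c_t = i_t * c_tilde + f_t * c_prev                          # (4) new cell state
    o_t = sigmoid(W_o @ x_t + U_o @ h_prev + V_o @ c_t + b_o)   # (5) output gate
    h_t = o_t * np.tanh(c_t)                                    # (6) output
    return h_t, c_t

# Usage with small random parameters (hidden size 4, input size 3).
rng = np.random.default_rng(1)
n_h, n_x = 4, 3
p = lambda *shape: rng.standard_normal(shape) * 0.1
params = {
    "i": (p(n_h, n_x), p(n_h, n_h), np.zeros(n_h)),
    "f": (p(n_h, n_x), p(n_h, n_h), np.zeros(n_h)),
    "c": (p(n_h, n_x), p(n_h, n_h), np.zeros(n_h)),
    "o": (p(n_h, n_x), p(n_h, n_h), p(n_h, n_h), np.zeros(n_h)),
}
h, c = np.zeros(n_h), np.zeros(n_h)
h, c = lstm_step(rng.standard_normal(n_x), h, c, params)
print(h.shape, c.shape)  # (4,) (4,)
```

Because h_t = o_t * tanh(C_t) with o_t in (0, 1), every component of the output is bounded in magnitude by 1, while the cell state C_t itself is unbounded; a full sequence is processed by feeding h and c back into `lstm_step` at each timestep.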
