RNN Learning Notes

Resource 1: source code and theory walkthrough for character-level prediction

Source code: https://github.com/weixsong/min-char-rnn

Theory walkthrough: https://blog.csdn.net/watkinsong/article/details/51773524

Question 1: is each unit in the hidden layer a vector, or just a single neuron?

Answer: each unit is a single real-valued neuron (a scalar, not a vector); it is all of the hidden-layer neurons together that form the hidden state vector.

Question 2: how should the number of hidden-layer neurons be chosen?

Answer: I have not found a principled way to set it, but a binary vector of length n can represent at most 2^n distinct states, so this gives a rough lower bound on the number of hidden neurons. For the 65-character vocabulary used below, that bound is log2(65) ≈ 6.02, i.e. at least 7 units; since the hidden units are actually real-valued this is only a loose heuristic (the code below uses hidden_size = 100).

Details:

For sequence input, define a window length size. Each training sample is built by taking size consecutive characters as the input and the same window shifted one position to the right as the target; this input/target pair is one training sample (see the sketch below).

Question: how should the sequence length be chosen? And for word-level prediction, how should sentences shorter than size be handled?
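A minimal sketch of the windowing just described (the text and size here are illustrative):

# build (input, target) pairs by sliding a window of length `size` over the text
text = "hello world"
size = 4
pairs = []
for p in range(len(text) - size):
    inputs = text[p : p + size]           # size consecutive characters
    targets = text[p + 1 : p + size + 1]  # the same window shifted right by one
    pairs.append((inputs, targets))
# pairs[0] == ('hell', 'ello'): each target char is the "next char" for its input char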

 

TensorFlow LSTM usage resources:

Stock-prediction blog posts:

https://blog.csdn.net/mylove0414/article/details/55805974

https://blog.csdn.net/mylove0414/article/details/56969181

GitHub code:

https://github.com/LouisScorpio/datamining/blob/master/tensorflow-program/rnn/stock_predict/stock_predict_2.py

———— The basic usage pattern of RNNs in TensorFlow is similar to that of CNNs, so the CNN workflow is a useful reference for learning it (see the sketch below).
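For orientation, a minimal sketch of the core TF 1.x calls those posts build on (a sketch assuming TensorFlow 1.x; the shapes and hyperparameters are illustrative, not the blogs' exact code):

import tensorflow as tf

# inputs: (batch, time_steps, features); the placeholder shape here is illustrative
inputs = tf.placeholder(tf.float32, [None, 25, 7])
cell = tf.nn.rnn_cell.BasicLSTMCell(num_units=100)  # one LSTM layer with 100 hidden units
outputs, state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
# outputs: (batch, 25, 100), one hidden vector per time step
# state: the final LSTMStateTuple (c, h), each of shape (batch, 100)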

 

------

Another example:

https://blog.csdn.net/flying_sfeng/article/details/78852816

The code is similar to the above; the author specifically points out that the data must be normalized first (a sketch follows).
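What that normalization typically looks like, as a min-max sketch (my own illustration, not the blog's exact code):

import numpy as np

def minmax_normalize(x):
    # scale each column of x into [0, 1]; keep lo/hi so predictions can be mapped back
    lo, hi = x.min(axis=0), x.max(axis=0)
    return (x - lo) / (hi - lo + 1e-8), lo, hi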

Posts that explain the TensorFlow RNN parameters and code in detail:

https://blog.csdn.net/jmh1996/article/details/78821216

https://blog.csdn.net/u014595019/article/details/52759104

Tutorials on the underlying theory:

Recurrent Neural Networks Tutorial, Part 1 – Introduction to RNNs

http://www.wildml.com/2015/09/recurrent-neural-networks-tutorial-part-1-introduction-to-rnns/

[Translated] Understanding LSTM Networks

https://www.jianshu.com/p/9dc9f41f0b29

The code below, with some comments added:

"""
Minimal character-level Vanilla RNN model. Written by Andrej Karpathy (@karpathy)
BSD License
"""

## comments added by weixsong
## reference page [The Unreasonable Effectiveness of Recurrent Neural Networks](http://karpathy.github.io/2015/05/21/rnn-effectiveness/)

## this is a 3-layer neural network:
## input layer: one-hot vector, dim: vocab_size * 1
## hidden layer: vanilla RNN (not an LSTM), hidden vector: hidden_size * 1
## output layer: softmax, vocab_size * 1, the probability distribution over the next character

import numpy as np

# data I/O
data = open('input.txt', 'r').read() # should be simple plain text file
# read the file in as one long string of characters

# use set() to count the vocab size
chars = list(set(data))
data_size, vocab_size = len(data), len(chars)
print ('data has %d characters, %d unique.' % (data_size, vocab_size))
print(chars)
#['F', 'e', 'C', 'x', 'Z', ':', 'y', 'K', 's', 'P', "'", 'Y', ';', 'g', 'Q', '$', 'a', 'k', 'H', 'r', 'B', 'u', 'f', 'm', 'S', 'q', '&', '3', 'l', ',', 'J', 'n', 'W', 't', 'c', 'z', 'v', '-', 'T', 'R', 'N', '!', 'h', 'M', '\n', 'o', 'w', 'p', 'U', 'i', 'A', 'd', 'I', 'b', 'D', 'O', 'V', 'G', 'X', '?', 'E', 'j', '.', ' ', 'L']
# 65 unique characters in this corpus

# dictionary to convert char to idx, idx to char
char_to_ix = { ch:i for i,ch in enumerate(chars) }
ix_to_char = { i:ch for i,ch in enumerate(chars) }
# build the char -> index and index -> char dictionaries

# hyperparameters
hidden_size = 100 # size of hidden layer of neurons
# number of neurons in the hidden layer

seq_length = 25 # number of steps to unroll the RNN for
# the unroll length: each training sample spans seq_length consecutive characters / time steps
learning_rate = 1e-1

# model parameters
## RNN/LSTM
## note: this is not an LSTM; it is the simple vanilla RNN:
## # update the hidden state
## self.h = np.tanh(np.dot(self.W_hh, self.h) + np.dot(self.W_xh, x))
## # compute the output vector
## y = np.dot(self.W_hy, self.h)
Wxh = np.random.randn(hidden_size, vocab_size)*0.01 # input to hidden
Whh = np.random.randn(hidden_size, hidden_size)*0.01 # hidden to hidden
Why = np.random.randn(vocab_size, hidden_size)*0.01 # hidden to output
bh = np.zeros((hidden_size, 1)) # hidden bias
by = np.zeros((vocab_size, 1)) # output bias
# ———— parameter definitions
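# parameter count for these shapes (my own arithmetic, vocab_size = 65, hidden_size = 100):
# Wxh 100*65 = 6,500; Whh 100*100 = 10,000; Why 65*100 = 6,500; bh 100; by 65
# total: 23,165 trainable parameters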


## compute loss and derivatives
## cross-entropy loss is used
## note: the backward pass starting from dy = p - t is NOT a switch to squared error;
## for a softmax output trained with cross-entropy, the derivative of the loss with
## respect to the logits y is exactly p - t, so no separate softmax derivative is needed
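## derivation sketch: with p = softmax(y) and E = -log p[target], the chain rule gives
## dE/dy[j] = p[j] - 1{j == target}; this is the `dy = np.copy(ps[t]); dy[targets[t]] -= 1`
## step in the backward pass below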
def lossFun(inputs, targets, hprev):
  """
  inputs,targets are both list of integers.
  hprev is Hx1 array of initial hidden state
  returns the loss, gradients on model parameters, and last hidden state
  """
  xs, hs, ys, ps = {}, {}, {}, {}
  ## record each hidden state of
  hs[-1] = np.copy(hprev)  # hprev carries the hidden state over from the previous chunk; it is all zeros only at the start of a sweep through the data
  loss = 0
  # forward pass for each training data point
  # ———— forward pass; len(inputs) == seq_length, and t indexes the time step
  for t in range(len(inputs)):
    xs[t] = np.zeros((vocab_size, 1)) # encode in 1-of-k representation
    xs[t][inputs[t]] = 1
    
    ## hidden state, using previous hidden state hs[t-1]
    hs[t] = np.tanh(np.dot(Wxh, xs[t]) + np.dot(Whh, hs[t-1]) + bh)
    ## unnormalized log probabilities for next chars
    ys[t] = np.dot(Why, hs[t]) + by
    ## probabilities for next chars, softmax
    ps[t] = np.exp(ys[t]) / np.sum(np.exp(ys[t]))  #softmax
    ## softmax (cross-entropy loss)
    loss += -np.log(ps[t][targets[t], 0])
    # ———— note: ps is a dict keyed by time step t; each value is a (vocab_size, 1)
    # ndarray, so ps[t][targets[t], 0] picks out the scalar probability assigned
    # to the correct next character
    # print(type(ps[t]), "--", ps[t][targets[t], 0])

  # backward pass: compute gradients going backwards
  dWxh, dWhh, dWhy = np.zeros_like(Wxh), np.zeros_like(Whh), np.zeros_like(Why)
  dbh, dby = np.zeros_like(bh), np.zeros_like(by)
  dhnext = np.zeros_like(hs[0])
  # --- initialize all gradients to zero
  for t in reversed(range(len(inputs))):
    # ---- reversed: t runs from the last time step back to the first, i.e. backpropagation through time
    ## compute derivative of error w.r.t. the output logits
    ## dE/dy[j] = p[j] - t[j]
    dy = np.copy(ps[t]) # dy starts as a full copy of the softmax output ps[t]
    dy[targets[t]] -= 1 # backprop into y: subtract 1 at the target index


    ## dy is already the gradient at the output logits (softmax + cross-entropy, see above),
    ## so we can directly compute the derivative of the error with respect to the
    ## weights between the hidden layer and the output layer:
    ## dE/dy[j]*dy[j]/dWhy[j,k] = dE/dy[j] * h[k]
    dWhy += np.dot(dy, hs[t].T)
    dby += dy
    
    ## backprop into h
    ## derivative of error with regard to the output of hidden layer
    ## derivative of H, come from output layer y and also come from H(t+1), the next time H
    dh = np.dot(Why.T, dy) + dhnext
    ## backprop through tanh nonlinearity
    ## derivative of error with regard to the input of hidden layer
    ## dtanh(x)/dx = 1 - tanh(x) * tanh(x)
    dhraw = (1 - hs[t] * hs[t]) * dh
    dbh += dhraw
    
    ## derivative of the error with regard to the weight between input layer and hidden layer
    dWxh += np.dot(dhraw, xs[t].T)
    dWhh += np.dot(dhraw, hs[t-1].T)
    ## derivative of the error with regard to H(t+1)
    ## or derivative of the error of H(t-1) with regard to H(t)
    dhnext = np.dot(Whh.T, dhraw)

  for dparam in [dWxh, dWhh, dWhy, dbh, dby]:
    np.clip(dparam, -5, 5, out=dparam) # clip to mitigate exploding gradients

  return loss, dWxh, dWhh, dWhy, dbh, dby, hs[len(inputs)-1]
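## usage, as in the main loop below:
##   hprev = np.zeros((hidden_size, 1))
##   loss, dWxh, dWhh, dWhy, dbh, dby, hprev = lossFun(inputs, targets, hprev)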



## given a hidden RNN state, and a input char id, predict the coming n chars
def sample(h, seed_ix, n):
  """ 
  sample a sequence of integers from the model
  h is memory state, seed_ix is seed letter for first time step
  """
  # h is the hidden state carried in from training
  # seed_ix is the index of the seed character; we then sample the following n characters

  ## a one-hot vector
  x = np.zeros((vocab_size, 1))
  x[seed_ix] = 1

  ixes = []
  for t in range(n):
    ## self.h = np.tanh(np.dot(self.W_hh, self.h) + np.dot(self.W_xh, x))
    h = np.tanh(np.dot(Wxh, x) + np.dot(Whh, h) + bh)
    ## y = np.dot(self.W_hy, self.h)
    y = np.dot(Why, h) + by
    ## softmax
    p = np.exp(y) / np.sum(np.exp(y))
    ## sample according to probability distribution
    ix = np.random.choice(range(vocab_size), p=p.ravel())
    # choose the next index according to the predicted distribution (ravel() flattens (vocab_size, 1) to (vocab_size,))

    ## update input x
    ## use the new sampled result as last input, then predict next char again.
    x = np.zeros((vocab_size, 1))
    x[ix] = 1

    ixes.append(ix)
    # record this step's sampled index

  return ixes
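## usage, as in the main loop below ('a' here is just an illustrative seed character):
##   sample_ix = sample(hprev, char_to_ix['a'], 200)
##   print(''.join(ix_to_char[ix] for ix in sample_ix))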


## iterator counter
n = 0
## data pointer: training predicts the next character, so p indexes the current position in the text and sweeps from the start to the end
p = 0

mWxh, mWhh, mWhy = np.zeros_like(Wxh), np.zeros_like(Whh), np.zeros_like(Why)
mbh, mby = np.zeros_like(bh), np.zeros_like(by) # memory variables for Adagrad
smooth_loss = -np.log(1.0/vocab_size)*seq_length # loss at iteration 0
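# with uniform predictions each character costs -log(1/65) ≈ 4.174, so the initial
# loss over a seq_length = 25 window is ≈ 104.4; smooth_loss is then maintained as an
# exponential moving average of the per-iteration loss (the 0.999/0.001 update below)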

## main loop
while True:
  # prepare inputs (we're sweeping from left to right in steps seq_length long)
  if p + seq_length + 1 >= len(data) or n == 0:
    # reset RNN memory
    ## hprev is the hiddden state of RNN
    hprev = np.zeros((hidden_size, 1))
    # so the hidden state is just a single vector of length hidden_size
    # go from start of data
    p = 0

  # each input is a window of seq_length characters;
  # the target window is the same length but shifted one character to the right,
  # and together they form one (input, target) training pair
  inputs = [char_to_ix[ch] for ch in data[p : p + seq_length]]
  targets = [char_to_ix[ch] for ch in data[p + 1 : p + seq_length + 1]]


  # sample from the model now and then
  if n % 100 == 0:
    sample_ix = sample(hprev, inputs[0], 200)
    txt = ''.join(ix_to_char[ix] for ix in sample_ix)
    print ('---- sample -----')
    print ('----\n %s \n----' % (txt, ))

  # forward seq_length characters through the net and fetch gradient
  loss, dWxh, dWhh, dWhy, dbh, dby, hprev = lossFun(inputs, targets, hprev)
  ## the author uses Adagrad (a variant of gradient descent); see the update loop below
  smooth_loss = smooth_loss * 0.999 + loss * 0.001
  if n % 100 == 0:
    print ('iter %d, loss: %f' % (n, smooth_loss)) # print progress
  
  # perform parameter update with Adagrad
  ## the Adagrad parameter update differs from the plain gradient-descent update:
  ## each parameter keeps a same-shaped 'memory' that accumulates its squared
  ## gradients every iteration: mem += dparam * dparam
  for param, dparam, mem in zip([Wxh, Whh, Why, bh, by],
                                [dWxh, dWhh, dWhy, dbh, dby],
                                [mWxh, mWhh, mWhy, mbh, mby]):
    mem += dparam * dparam
    ## the effective step size is learning_rate / sqrt(mem): parameters whose squared
    ## gradients have accumulated more receive proportionally smaller updates
    param += -learning_rate * dparam / np.sqrt(mem + 1e-8) # adagrad update
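    # e.g. once mem for a given weight has accumulated to 4.0, that weight's effective
    # step is learning_rate / sqrt(4.0 + 1e-8) ≈ 0.05 with learning_rate = 1e-1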

  p += seq_length # move data pointer
  # ??? the pointer advances by seq_length each time. Why not by just one position?
  # A window shifted by a single character would also be a new sample.
  # (One reason to advance by seq_length: the hidden state hprev returned by lossFun
  # then lines up exactly with the first character of the next window.)
  n += 1 # iteration counter 

 
