rnn
三枚目
[rnn] tools
http://karpathy.github.io/2015/05/21/rnn-effectiveness/
Recurrent Neural Networks Tutorial: http://www.wildml.com/2015/09/recurrent-neural-networks-tutorial-part-1-introduction-to-rnns/
Frameworks: Caffe, Torch (Lua)
Original post 2016-03-30 15:48:39 · 709 views · 0 comments
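The char-rnn post linked above is built on a plain (vanilla) RNN that reads one character at a time and predicts the next. A minimal numpy sketch of that forward pass, with all sizes and weights as toy assumptions rather than values from the linked code:

```python
import numpy as np

# Minimal vanilla RNN step in the spirit of the char-rnn post.
# Sizes are illustrative assumptions, not taken from the linked code.
H, V = 8, 5                                  # hidden size, vocabulary size
rng = np.random.default_rng(0)
Wxh = rng.normal(scale=0.01, size=(H, V))    # input -> hidden
Whh = rng.normal(scale=0.01, size=(H, H))    # hidden -> hidden
Why = rng.normal(scale=0.01, size=(V, H))    # hidden -> output
bh, by = np.zeros(H), np.zeros(V)

def step(h, x_index):
    """One timestep: h = tanh(Wxh x + Whh h + bh); softmax over next char."""
    x = np.zeros(V); x[x_index] = 1.0        # one-hot input character
    h = np.tanh(Wxh @ x + Whh @ h + bh)
    y = Why @ h + by
    p = np.exp(y - y.max()); p /= p.sum()    # softmax, numerically stable
    return h, p

h = np.zeros(H)
for idx in [0, 2, 1]:                        # a toy input sequence
    h, p = step(h, idx)
print(p.shape)                               # distribution over V characters
```

The same hidden state `h` is threaded through every step, which is what lets the network carry context across the sequence.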
[caffe] Long-term Recurrent Convolutional Networks
Paper: http://jeffdonahue.com/lrcn/
Code: the “lstm_video_deploy” branch of Lisa Anne Hendricks’s Caffe fork (uses a Python layer)
train_test_lstm_RGB.prototxt (excerpt):
name: "lstm_joints"
layer { name: "data" …
Original post 2016-03-04 13:44:42 · 4268 views · 4 comments
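The core LRCN idea is a CNN applied per video frame, with the resulting feature vectors fed through an LSTM that emits one prediction per timestep. A hedged numpy sketch of that data flow, where random vectors stand in for CNN features and all sizes are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

# Conceptual LRCN data flow: per-frame features -> LSTM -> per-frame scores.
# Random vectors stand in for the CNN; sizes are toy assumptions.
rng = np.random.default_rng(0)
T, F, H, C = 4, 16, 8, 3          # frames, feature dim, hidden dim, classes

def lstm_step(x, h, c, W):
    """One LSTM cell update; all four gates computed in a single matmul."""
    z = W @ np.concatenate([x, h, [1.0]])         # [input, hidden, bias]
    i, f, o, g = np.split(z, 4)
    sig = lambda a: 1.0 / (1.0 + np.exp(-a))
    i, f, o = sig(i), sig(f), sig(o)
    c = f * c + i * np.tanh(g)                    # new cell state
    return o * np.tanh(c), c

W = rng.normal(scale=0.1, size=(4 * H, F + H + 1))  # gate weights (+bias)
Wout = rng.normal(scale=0.1, size=(C, H))           # hidden -> class scores
frames = rng.normal(size=(T, F))                    # stand-in CNN features
h, c = np.zeros(H), np.zeros(H)
logits = []
for x in frames:                                    # unroll over the video
    h, c = lstm_step(x, h, c, W)
    logits.append(Wout @ h)
print(np.array(logits).shape)   # (T, C): one class-score vector per frame
```

In the actual Caffe setup this unrolling is expressed in the prototxt rather than a Python loop, but the per-frame feature → LSTM → prediction pipeline is the same.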
[rnn] BPTT and the vanishing/exploding gradient problem
http://www.wildml.com/2015/10/recurrent-neural-networks-tutorial-part-3-backpropagation-through-time-and-vanishing-gradients/
Chinese translation: https://zhuanlan.zhihu.com/p/22338087
Backpropagation Through Time (BPTT): let's first quickly recall the basic RNN equations. Note…
Reposted 2016-10-18 14:20:16 · 13148 views · 0 comments
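The vanishing/exploding behavior the linked tutorial derives comes from the chain rule: backpropagating through k timesteps multiplies k Jacobians of the form diag(1 − h²)·Whh, so the gradient norm scales roughly like the dominant singular value of Whh raised to the k-th power. A small demonstration, with all sizes and scales as toy assumptions:

```python
import numpy as np

# BPTT gradient through k steps is a product of per-step Jacobians
# diag(1 - h_t^2) @ Whh (the tanh derivative times the recurrent matrix),
# so its norm behaves roughly like sigma_max(Whh)^k. Toy setup below.
rng = np.random.default_rng(0)
H = 16
norms = {}
for scale in (0.05, 0.5):              # small vs larger recurrent weights
    Whh = rng.normal(scale=scale, size=(H, H))
    grad = np.eye(H)                   # accumulated Jacobian product
    h = np.zeros(H)
    x = rng.normal(size=H)
    for _ in range(30):                # backprop through 30 timesteps
        h = np.tanh(Whh @ h + x)
        grad = (np.diag(1 - h**2) @ Whh) @ grad   # chain rule through tanh
    norms[scale] = np.linalg.norm(grad)
    print(scale, norms[scale])         # tiny norm -> vanishing gradient
```

With small recurrent weights the product collapses toward zero after a few dozen steps, which is exactly why plain RNNs struggle with long-range dependencies and why LSTMs/gradient clipping are used.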
[torch] rnn training
http://christopher5106.github.io/deep/learning/2016/07/14/element-research-torch-rnn-tutorial.html
https://github.com/torch/optim/blob/master/doc/intro.md
http://rnduja.github.io/2015/10/26/deep_lear
Reposted 2017-03-24 14:17:36 · 292 views · 0 comments
[torch]parameters(clone/copy/initialize)
https://github.com/torch/nn/blob/master/doc/module.md

-- make an mlp
mlp1=nn.Sequential();
mlp1:add(nn.Linear(100,10));
-- make a copy that shares the weights and biases
mlp2=mlp1:clone('weight','bias');

Reposted 2017-03-24 15:14:23 · 875 views · 0 comments
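The key point of `clone('weight','bias')` above is that the clone references the same underlying storage for the named parameters, so a training update to one module is immediately visible in the other, unlike a deep copy. A hedged illustration of shared vs copied storage, using plain dicts of numpy arrays as stand-in "modules" (an assumption, not the torch/nn mechanism itself):

```python
import numpy as np

# Shared-storage clone vs independent deep copy, mirroring what
# mlp1:clone('weight','bias') does in torch/nn. Dicts of numpy arrays
# stand in for modules here (an illustrative assumption).
mlp1 = {"weight": np.ones((10, 100)), "bias": np.zeros(10)}

shared = {k: v for k, v in mlp1.items()}          # clone: same arrays
copied = {k: v.copy() for k, v in mlp1.items()}   # deep copy: new arrays

mlp1["weight"] += 1.0                             # in-place parameter update
print(shared["weight"][0, 0])   # 2.0 -> shared clone sees the update
print(copied["weight"][0, 0])   # 1.0 -> independent copy is unaffected
```

This sharing is what makes clone-with-sharing useful for tied weights (e.g. unrolled RNN timesteps that must use one set of parameters).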
[torch] optim.sgd learning parameters
https://stats.stackexchange.com/questions/29130/difference-between-neural-net-weight-decay-and-learning-rate
learning rate
The learning rate is a parameter that determines how much an updating step influ…
Reposted 2017-05-19 15:46:40 · 9426 views · 0 comments
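The linked answer separates two hyperparameters: the learning rate scales the whole update step, while weight decay adds an extra pull of the weights toward zero (an L2 penalty). A sketch of one such SGD step, w ← w − lr·(∇L + wd·w), with toy values that are assumptions, not anything from optim.sgd's source:

```python
import numpy as np

# One SGD step with L2 weight decay, the distinction the linked answer draws:
#   w <- w - lr * (grad + weight_decay * w)
# lr scales the step; weight_decay shrinks weights toward zero every step.
def sgd_step(w, grad, lr=0.1, weight_decay=0.01):
    return w - lr * (grad + weight_decay * w)

w = np.array([1.0, -2.0])          # toy weights
grad = np.array([0.5, 0.5])        # pretend gradient of the loss at w
w = sgd_step(w, grad)
print(w)                           # -> [ 0.949 -2.048]
```

Note the decay term is proportional to the weight itself, so larger weights are penalized harder, independent of the loss gradient.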