Practical Recommendations for Gradient-Based Training of Deep Architectures
Yoshua Bengio
Université de Montréal
Abstract: Learning algorithms related to artificial neural networks, and in particular Deep Learning, may seem to involve many bells and whistles, called hyper-parameters. This chapter is meant as a practical guide with recommendations for some of the most commonly used hyper-parameters, in particular in the context of learning algorithms based on back-propagated gradients and gradient-based optimization. It also discusses how to deal with the fact that more interesting results can be obtained when one is allowed to adjust many hyper-parameters. Overall, it describes elements of the practice used to successfully and efficiently train and debug large-scale and often deep multi-layer neural networks. It closes with open questions about the training difficulties observed with deeper architectures.
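As a minimal illustration of the setting the abstract describes (not code from the chapter itself), the sketch below runs plain gradient descent on a toy one-dimensional objective; the learning rate `lr` plays the role of a hyper-parameter chosen by the practitioner rather than learned from data:

```python
# Toy sketch, assuming the objective f(w) = (w - 3)^2 with minimum at w = 3.

def grad(w):
    # Analytic gradient of f(w) = (w - 3)^2.
    return 2.0 * (w - 3.0)

def gradient_descent(w0, lr, steps):
    # lr (the learning rate) is a hyper-parameter: it is set by hand,
    # not optimized by the learning algorithm itself.
    w = w0
    for _ in range(steps):
        w -= lr * grad(w)
    return w

w = gradient_descent(w0=0.0, lr=0.1, steps=100)
print(round(w, 4))  # converges near the minimum at w = 3
```

With `lr=0.1` the iterates contract toward the minimum geometrically; a learning rate above 1.0 would make this toy problem diverge, which is why the learning rate is one of the most important hyper-parameters to tune.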