Reading notes on "10 Common Misconceptions about Neural Networks"
- 1. Neural networks are not models of the human brain
Neural networks in computer science are very different from those studied in neuroscience. The neural networks we usually encounter (including recent deep learning) are closer to curve fitting and regression analysis in statistics: a neural network can be understood as a highly complex nonlinear function from inputs to outputs.
- 2. Neural networks are not a "weak form" of statistics
- 3. Neural networks come in many architectures
Examples include partially recurrent networks, Boltzmann networks, deep neural networks, adaptive neural networks, and radial basis function networks.
- 4. Size matters, but bigger is not always better
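To make the size/generalization trade-off concrete, here is a toy sketch in which polynomial degree stands in for model capacity (the data, noise level, and degrees are invented for illustration): the higher-capacity model chases the training noise and does worse on held-out points.

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy samples of a simple linear trend y = 2x + noise.
x_train = np.linspace(0, 1, 10)
y_train = 2 * x_train + rng.normal(0, 0.1, 10)
x_val = np.linspace(0.05, 0.95, 10)            # held-out points
y_val = 2 * x_val + rng.normal(0, 0.1, 10)

def val_error(degree):
    """Fit a polynomial of the given capacity, return held-out MSE."""
    model = np.polynomial.Polynomial.fit(x_train, y_train, degree)
    return float(np.mean((model(x_val) - y_val) ** 2))

# Degree 1 matches the true trend; degree 9 interpolates the noise
# and oscillates between the training points.
print(round(val_error(1), 3), round(val_error(9), 3))
```

The degree-9 fit passes through every noisy training point exactly, which is precisely why its validation error is larger: the same effect appears in networks with far more connections than the problem needs.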
The more connections a network has, the more complex the model, and the worse it usually generalizes. For a given problem, one usually needs to try several networks with different structures.
- 5. Many training algorithms exist for neural networks
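As a minimal sketch of the most common choice, here is plain SGD fitting a single weight to toy data (the data, seed, and learning rate are made up for illustration). PSO and GA would instead maintain a population of candidate weights and search without gradients.

```python
import random

random.seed(0)

# Toy data: y = 3x + small noise, and a single trainable weight w.
data = [(x / 10, 3 * x / 10 + random.gauss(0, 0.01)) for x in range(1, 21)]

w, lr = 0.0, 0.1
for epoch in range(100):
    random.shuffle(data)            # the "stochastic" part of SGD
    for x, y in data:
        grad = 2 * (w * x - y) * x  # gradient of the squared error
        w -= lr * grad

print(round(w, 2))  # ≈ 3
```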
Neural networks are usually trained with stochastic gradient descent (SGD), but SGD is not the only optimization method; alternatives include Particle Swarm Optimization (PSO) and Genetic Algorithms (GA).
- 6. Neural networks do not always require a lot of data
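A rough sketch of the pretrain-then-fine-tune idea, substituting a linear autoencoder for a Boltzmann machine to keep the code short (all data, layer sizes, and learning rates here are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Plenty of unlabeled data, but only 20 labelled points -- the setting
# that layer-wise pretraining targets. Everything here is synthetic.
X = rng.normal(size=(500, 8))
X_lab = X[:20]
y_lab = (X_lab[:, 0] > 0).astype(float)   # hypothetical label rule

# Step 1: unsupervised pretraining -- a linear autoencoder learns a
# 3-unit code for the inputs using no labels at all.
W_enc = rng.normal(0, 0.1, (8, 3))
W_dec = rng.normal(0, 0.1, (3, 8))
for _ in range(300):
    H = X @ W_enc
    G = (H @ W_dec - X) / len(X)          # reconstruction-error gradient
    W_enc -= 0.1 * (X.T @ (G @ W_dec.T))
    W_dec -= 0.1 * (H.T @ G)

# Step 2: supervised fine-tuning -- logistic regression on the learned
# code, trained on the 20 labelled examples only.
w = np.zeros(3)
code = X_lab @ W_enc
for _ in range(500):
    p = 1 / (1 + np.exp(-(code @ w)))
    w -= 0.5 * (code.T @ (p - y_lab)) / len(y_lab)

recon_mse = float(np.mean((X @ W_enc @ W_dec - X) ** 2))
print(round(recon_mse, 3))
```

The unsupervised step does most of the work on cheap unlabeled data, so the supervised step can get by with very few labels.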
For example, the DBM: it is pretrained layer by layer without supervision, and then fine-tuned with a small amount of labelled data.
- 7. Neural networks cannot be trained on any data
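A small sketch of one such preprocessing step, z-score normalization (the feature values are made up; note how the outlier dominates before standardization):

```python
from statistics import mean, stdev

def zscore(column):
    """Standardize a feature column to zero mean, unit variance."""
    m, s = mean(column), stdev(column)
    return [(v - m) / s for v in column]

raw = [120.0, 130.0, 125.0, 500.0]   # raw feature with an outlier at 500
print([round(v, 2) for v in zscore(raw)])
```

In practice outlier removal would happen before (or instead of) letting a value like 500 stretch the scale.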
Preprocessing steps such as data normalization, redundancy removal, and outlier removal are all very beneficial when training neural networks.
- 8. Neural networks may need to be retrained
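A toy sketch of the second case, warm-starting retraining from the old weights when the data drifts (the drift from y = 3x to y = 4x is invented for illustration):

```python
def sgd_fit(data, w0=0.0, epochs=50, lr=0.1):
    """One-weight linear regression trained by SGD, starting from w0."""
    w = w0
    for _ in range(epochs):
        for x, y in data:
            w -= lr * 2 * (w * x - y) * x
    return w

old = [(x / 10, 3 * x / 10) for x in range(1, 11)]   # relation y = 3x
new = [(x / 10, 4 * x / 10) for x in range(1, 11)]   # drifted to y = 4x

w_old = sgd_fit(old)             # converges to ≈ 3
w_new = sgd_fit(new, w0=w_old)   # warm-started retraining, ≈ 4
print(round(w_old, 2), round(w_new, 2))
```

The same warm-start pattern covers the first case too: weights learned on one dataset serve as the initialization when transferring the model to a related dataset.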
Two examples: (1) transferring a model across datasets; (2) when modelling data that changes over time, the model also needs to be updated dynamically.
- 9. Neural networks are not black boxes
- 10. Neural networks are not hard to implement
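As a sketch of how little code a working network needs, here is a tiny feedforward network with one hidden layer, trained by backpropagation on XOR in plain NumPy (layer sizes, seed, and learning rate are arbitrary choices). It also illustrates point 1: the whole model is just a nonlinear function fitted to data.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: the classic small problem a linear model cannot solve.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

W1, b1 = rng.normal(0, 1, (2, 4)), np.zeros(4)   # hidden layer, 4 units
W2, b2 = rng.normal(0, 1, (4, 1)), np.zeros(1)   # output layer

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

lr = 0.5
for _ in range(10000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass (squared-error loss, sigmoid derivatives)
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # full-batch gradient-descent update
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0)

print(np.round(out.ravel(), 2))
```

With this setup the outputs typically approach [0, 1, 1, 0], though training on XOR can stall from an unlucky initialization; a real implementation would add a learning-rate schedule and restarts.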
References
[1] http://www.stuartreid.co.za/misconceptions-about-neural-networks/