【读书1】【2017】MATLAB与深度学习——单层神经网络的局限性(4)(MATLAB and Deep Learning: Limitations of the Single-Layer Neural Network, Part 4)

图2-26 回顾有监督学习(Figure 2-26. Review of supervised learning)

根据训练数据调整权重的方法称为学习规则。

The method used to adjust the weight according to the training data is called the learning rule.

有三种主要类型的误差计算方法:随机梯度下降、批处理和小批量处理。

There are three major types of error calculation: stochastic gradient descent, batch, and mini-batch.
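
To make the distinction concrete, here is a minimal MATLAB sketch, not the book's own listing, of when each scheme applies the weight update. The names X, D, W, alpha, N and the mini-batch size are assumptions:

```matlab
% Minimal sketch (not the book's listing) of WHEN each scheme updates the
% weights. Assumed: X is N-by-M (N training points, M inputs), D is the
% N-by-1 target vector, W is 1-by-M, alpha is the learning rate; sigmoid
% activation throughout.

% Stochastic gradient descent: update immediately after every point.
for k = 1:N
    y = 1 ./ (1 + exp(-X(k,:) * W'));         % forward pass for point k
    e = D(k) - y;                             % per-point error
    W = W + alpha * (y*(1-y)*e) * X(k,:);     % delta-rule update, right away
end

% Batch: accumulate over ALL points, then update once per epoch.
dW = zeros(size(W));
for k = 1:N
    y  = 1 ./ (1 + exp(-X(k,:) * W'));
    e  = D(k) - y;
    dW = dW + (y*(1-y)*e) * X(k,:);
end
W = W + alpha * dW / N;                       % one averaged update

% Mini-batch: accumulate over small groups (size assumed here to be 4),
% updating once per group, a compromise between the two extremes.
batch = 4;
for s = 1:batch:N
    idx = s : min(s+batch-1, N);
    dW  = zeros(size(W));
    for k = idx
        y  = 1 ./ (1 + exp(-X(k,:) * W'));
        e  = D(k) - y;
        dW = dW + (y*(1-y)*e) * X(k,:);
    end
    W = W + alpha * dW / numel(idx);
end
```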

增量规则是神经网络的典型学习规则。

The delta rule is the representative learning rule of the neural network.

它的计算公式根据激活函数而变化。

Its formula varies depending on the activation function.

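The equation shown as an image in the original post is presumably the generalized delta rule; in its standard form (notation assumed) it reads:

```latex
% Generalized delta rule: alpha is the learning rate, x_j the input from
% node j, e_i = d_i - y_i the error at output node i, v_i its weighted sum.
w_{ij} \leftarrow w_{ij} + \alpha \, \delta_i \, x_j,
\qquad \delta_i = \varphi'(v_i) \, e_i
```

The dependence on the activation function enters through the derivative term: with a sigmoid, for example, the delta becomes y_i(1 - y_i)e_i, which is why the formula changes when the activation function does.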

增量规则是一种迭代方法,通过迭代逐渐达到最终解。

The delta rule is an iterative method that gradually reaches the solution.

因此,神经网络应反复对训练数据进行训练,直到误差降低到令人满意的水平。

Therefore, the network should be repeatedly trained with the training data until the error is reduced to a satisfactory level.
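
As a rough MATLAB sketch, reusing the assumed X, D, W, alpha from above (the epoch cap and tolerance are likewise assumed values), the repeated training looks like this:

```matlab
% Rough sketch of the outer training loop: keep re-presenting the same
% training data until the average error is small enough.
maxEpoch = 10000;
tol      = 1e-3;
for epoch = 1:maxEpoch
    sse = 0;                                  % sum of squared errors
    for k = 1:N                               % one SGD pass over the data
        y   = 1 ./ (1 + exp(-X(k,:) * W'));
        e   = D(k) - y;
        W   = W + alpha * (y*(1-y)*e) * X(k,:);
        sse = sse + e^2;
    end
    if sse / N < tol                          % error satisfactory: stop
        break
    end
end
```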

单层神经网络仅适用于特定类型的问题。

The single-layer neural network is applicable only to specific types of problems.

因此,单层神经网络的应用非常有限。

Therefore, the single-layer neural network has very limited applications.
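
The textbook illustration of this limit is the XOR problem: a single-layer network can only realize linearly separable mappings, and no straight line separates XOR's two output classes.

```matlab
% XOR: the classic mapping that no single-layer network can learn exactly,
% because the two output classes are not linearly separable.
X = [0 0; 0 1; 1 0; 1 1];   % inputs
D = [0; 1; 1; 0];           % XOR targets
```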

为了克服单层神经网络的本质局限性,开发了多层神经网络。

The multi-layer neural network has been developed to overcome the essential limitations of the single-layer neural network.

第三章 多层神经网络的训练(Chapter 3: Training of the Multi-Layer Neural Network)

为了克服单层神经网络的实际局限性,神经网络进化为多层结构。

In an effort to overcome the practical limitations of the single layer, the neural network evolved into a multi-layer architecture.

然而,花费了将近30年的时间,才将隐藏层添加到单层神经网络中。

However, it took approximately 30 years just to add the hidden layer to the single-layer neural network.

很难理解为什么花费了这么长时间,但其中的问题涉及学习规则。

It’s not easy to understand why this took so long, but the problem involved the learning rule.

由于训练过程是神经网络存储信息的唯一方法,不可训练的神经网络是无用的。

As the training process is the only method for the neural network to store information, untrainable neural networks are useless.

多层神经网络的正确学习规则需要相当长的时间来研究。

A proper learning rule for the multi-layer neural network took quite some time to develop.

上一章中介绍的增量规则对于多层神经网络的训练是无效的。

The previously introduced delta rule is ineffective for training the multi-layer neural network.

这是因为训练中应用增量规则的基本元素在隐藏层产生的误差并没有定义。

This is because the error, the essential element for applying the delta rule in training, is not defined in the hidden layers.

输出节点的误差被定义为正确输出与神经网络输出之间的差值。

The error of the output node is defined as the difference between the correct output and the output of the neural network.

然而,训练数据不能为隐藏层节点提供正确的输出,因此不能使用相同的方法为输出节点计算误差。

However, the training data does not provide correct outputs for the hidden layer nodes, and hence the error cannot be calculated using the same approach as for the output nodes.
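
In symbols (notation assumed from the surrounding text):

```latex
% Error is defined only where the training data supplies a target:
e_i = d_i - y_i   % output node i: the target d_i is given
% A hidden node has no target in the training data, so no such e exists
% for it, and the delta rule has nothing to work with there.
```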

那么,怎么办呢?

Then, what?

真正的问题不是如何定义隐藏节点上的误差吗?

Isn’t the real problem how to define the error at the hidden nodes?

你明白了。

You got it.

反向传播算法就是多层神经网络的代表性学习规则。

You just formulated the back-propagation algorithm, the representative learning rule of the multi-layer neural network.
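
A minimal MATLAB sketch of the idea, with assumed names (W1 the hidden weights, W2 the output weights, x an input column vector, d its target, alpha the learning rate): the output delta is pushed backward through W2 to manufacture an error, and hence a delta, for the hidden nodes.

```matlab
% Minimal sketch of one back-propagation training step for a single
% hidden layer (this shows the scheme, not the book's exact listing).
g = @(v) 1 ./ (1 + exp(-v));                  % sigmoid activation

y1 = g(W1 * x);                               % forward: hidden layer
y  = g(W2 * y1);                              % forward: output layer

e      = d - y;                               % output error: target known
delta2 = y .* (1 - y) .* e;                   % output delta (delta rule)

e1     = W2' * delta2;                        % the key step: propagate the
delta1 = y1 .* (1 - y1) .* e1;                % error BACKWARD to define a
                                              % delta for the hidden nodes
W2 = W2 + alpha * delta2 * y1';               % then update exactly as the
W1 = W1 + alpha * delta1 * x';                % delta rule did before
```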

——本文译自Phil Kim所著的《MATLAB Deep Learning》(Translated from MATLAB Deep Learning by Phil Kim)
