【读书1】【2017】MATLAB与深度学习——单层神经网络的局限性(3)

图2-24 采用增量规则训练的数据 (The delta rule training data)

在这种情况下,可以容易地找到划分0和1区域的直线边界线。

In this case, a straight border line that divides the regions of 0 and 1 can be found easily.

这是一个线性可分的问题(图2-25)。

This is a linearly separable problem (Figure 2-25).
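The point can be illustrated with a minimal sketch of delta-rule training for a single node (the book's own examples use MATLAB; this Python version, its dataset, and its learning rate are illustrative only). The node's output depends solely on which side of the line w1\*x1 + w2\*x2 + b = 0 the input falls, so it can learn the linearly separable AND gate but can never fit XOR.

```python
# Delta-rule training of a single node with a step activation.
# Sketch in Python; the book's examples are written in MATLAB.

def step(v):
    return 1 if v >= 0 else 0

def train(targets, epochs=100, alpha=0.1):
    X = [(0, 0), (0, 1), (1, 0), (1, 1)]
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), d in zip(X, targets):
            y = step(w1 * x1 + w2 * x2 + b)
            e = d - y                 # the error drives the weight update
            w1 += alpha * e * x1
            w2 += alpha * e * x2
            b  += alpha * e
    return [step(w1 * x1 + w2 * x2 + b) for x1, x2 in X]

and_out = train([0, 0, 0, 1])   # linearly separable: learned exactly
xor_out = train([0, 1, 1, 0])   # not linearly separable: can never be fit
```

After training, `and_out` reproduces the AND targets, while no choice of weights lets `xor_out` match the XOR targets, since no single straight line separates XOR's 0s from its 1s.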

图2-25 该数据是线性可分的 (This data presents a linearly separable problem)

简单地说,单层神经网络只能解决线性可分问题。

To put it simply, the single-layer neural network can only solve linearly separable problems.

这是因为单层神经网络是一种线性划分输入数据空间的模型。

This is because the single-layer neural network is a model that linearly divides the input data space.

为了克服单层神经网络的局限性,我们需要更多的网络层。

In order to overcome this limitation of the single-layer neural network, we need more layers in the network.

这种需求导致了多层神经网络的出现,它可以实现单层神经网络所不能实现的功能。

This need has led to the appearance of the multi-layer neural network, which can achieve what the single-layer neural network cannot.

因为这是一个非常数学的问题,如果你对此不熟悉,可以跳过这一部分。

As this is rather mathematical, it is fine to skip this portion if you are not familiar with it.

请记住,单层神经网络只适用于特定类型的问题求解。

Just keep in mind that the single-layer neural network is applicable for specific problem types.

多层神经网络没有这样的局限性。

The multi-layer neural network has no such limitations.

小结(Summary)

本章涵盖以下概念:

This chapter covered the following concepts:

神经网络是由节点构成的网络,这些节点模仿大脑的神经元。

The neural network is a network of nodes, which imitate the neurons of the brain.

神经节点计算输入信号的加权和,并用加权和输出激活函数的结果。

The nodes calculate the weighted sum of the input signals and output the result of the activation function applied to the weighted sum.
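This computation can be sketched for a single node (Python used for illustration; the book's code is MATLAB, and the input and weight values here are made up):

```python
import math

# A single node: weighted sum of the inputs, then an activation function.
# The sigmoid activation and all values below are illustrative only.

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

x = [1.0, 0.5, -1.0]        # input signals
w = [0.4, 0.7, 0.1]         # weights, one per input signal

v = sum(wi * xi for wi, xi in zip(w, x))   # weighted sum: v = w1*x1 + w2*x2 + w3*x3
y = sigmoid(v)                             # node output: activation of the weighted sum
```

Here v = 0.4·1.0 + 0.7·0.5 + 0.1·(−1.0) = 0.65, and the sigmoid squashes it into the interval (0, 1).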

大多数神经网络都是由分层节点构成的。

Most neural networks are constructed with layered nodes.

对于分层神经网络,信号进入输入层,通过隐藏层后从输出层输出。

For the layered neural network, the signal enters through the input layer, passes through the hidden layer, and exits through the output layer.
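The pass from input to hidden to output can be sketched as follows (Python for illustration; the network shape and all weight values are made-up examples):

```python
import math

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def forward(x, W1, W2):
    # hidden layer: each node takes the weighted sum of the input signals
    h = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in W1]
    # output layer: each node takes the weighted sum of the hidden outputs
    return [sigmoid(sum(w * hi for w, hi in zip(row, h))) for row in W2]

# Illustrative weights: 2 inputs -> 2 hidden nodes -> 1 output node
W1 = [[0.5, -0.3], [0.8, 0.2]]   # one row of weights per hidden node
W2 = [[1.0, -1.0]]               # one row of weights per output node
y = forward([1.0, 0.0], W1, W2)
```

The signal enters at `x`, is transformed by the hidden layer, and exits as the single sigmoid output `y[0]`.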

在实际应用中,线性函数不能用作隐藏层中的激活函数。

In practice, linear functions cannot be used as the activation functions in the hidden layer.

这是因为线性函数抵消了隐藏层的影响。

This is because the linear function negates the effects of the hidden layer.
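The collapse is easy to verify numerically: with linear activations, passing through two layers W2(W1·x) gives the same result as one layer with weights W2·W1, so the hidden layer adds nothing. A minimal sketch (Python for illustration, made-up weights):

```python
# With linear (identity) activations, two layers collapse into one:
# W2 (W1 x) = (W2 W1) x. Plain-Python matrix helpers, no nonlinearity anywhere.

def matvec(M, x):
    return [sum(m * xi for m, xi in zip(row, x)) for row in M]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

W1 = [[2.0, 1.0], [0.0, 3.0]]   # "hidden" layer weights, linear activation
W2 = [[1.0, -1.0]]              # output layer weights

x = [1.0, 2.0]
two_layers = matvec(W2, matvec(W1, x))    # signal through both layers
one_layer  = matvec(matmul(W2, W1), x)    # equivalent single-layer network
```

Both paths produce the identical output, which is why a nonlinear activation in the hidden layer is essential.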

然而,在一些问题(如回归)上输出层节点可以采用线性函数。

However, in some problems such as regression, the output layer nodes may employ linear functions.
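For instance, a regression output needs to take arbitrary real values, which a sigmoid output cannot. A sketch with a sigmoid hidden layer and an identity (linear) output node (Python for illustration, made-up weights):

```python
import math

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

# Hidden layer uses a nonlinear activation; the output node uses the
# identity (linear) function, as is common for regression outputs.
# All weight values below are illustrative only.

def forward_regression(x, W1, w2):
    h = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in W1]
    return sum(w * hi for w, hi in zip(w2, h))   # linear output: no squashing

W1 = [[0.5, -0.2], [0.3, 0.8]]
w2 = [2.0, -3.0]
y = forward_regression([1.0, 2.0], W1, w2)
```

With these weights the output is roughly −1.56, outside the (0, 1) range a sigmoid output node could ever produce.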

对于神经网络,有监督学习通过调整权重来减少正确输出与神经网络输出之间的差异(图2-26)。

For the neural network, supervised learning adjusts the weights so as to reduce the discrepancies between the correct output and the output of the neural network (Figure 2-26).

——本文译自Phil Kim所著的《Matlab Deep Learning》
