Understanding Artificial Neural Networks

This article, translated from a Towards Data Science post, explores the difference between artificial neural networks and traditional neural networks, helping readers understand the characteristics of both.

Since around November 2013, the term ‘deep learning’ has been gaining popularity, especially within the data science community. This trend came shortly after the ‘big data’ boom of 2010 and the ‘data science’ boom of 2011. The upticks in interest are not surprising, because companies had realized that they needed people capable of deciphering insights from the information tsunami now surrounding them.


With data science now referred to as “almost everything that has something to do with data”, the process of data utilization has evolved beyond data collection and analysis. It is now possible to use large sets of data to accurately model events and create data applications. This surge made it possible to train computers and machines to perform all sorts of tasks, some of which are taken as standard practice today.


Now, when a complex task needs to be modelled, deep learning is almost always the go-to method. It is a powerful process that enables a computer to mimic human behaviour by automatically extracting the most useful pieces of information needed to inform future decisions. A branch of Artificial Intelligence (AI) and Machine Learning (ML), deep learning aims to use computational methods that allow machines to understand information directly from data, without relying on a predetermined equation as a model.


Image by Trist’n Joseph

Both ML models and deep learning models are expected to learn something from data that can be used to help inform future decisions. However, deep learning takes ML just a bit further. Typical ML algorithms attempt to define a set of rules within data, and these rules are usually hand-engineered; as a result, ML models can underperform when placed outside of a development environment. Deep learning models, on the other hand, learn complex features directly from raw data and do not necessarily require a set of hand-engineered rules.


Deep learning models are built on the idea of ‘neural networks’, and this is what allows the models to learn from raw data. Recall that the goal of these types of models is to somehow get machines to mimic human behaviour. Therefore, think of the neural networks within deep learning models as analogous to the human brain. The brain consists of billions of cells called neurons. Each neuron has multiple connections carrying information towards it, and a single connection carrying information away from it.


With these neural connections established, the brain develops, and humans learn, through a process called neuroplasticity. This is the ability of the neural networks within the brain to change through growth and reorganization. When reorganization occurs, new pathways between neurons are developed; the brain can delete connections that are no longer necessary and strengthen connections that are found to be more important. Essentially, the brain assigns a weighting along each pathway, where connections found to be more important receive a higher weighting than those found to be unimportant. Thus, a set of inputs is fed to a neuron along a path that has a particular weight, the information is processed, and an output is produced to perform some task.


Image by Trist’n Joseph

Similarly, an artificial neural network (ANN) contains a set of inputs that are fed along weighted paths; these inputs are then processed, and an output is produced to perform some task. As with neuroplasticity, paths within an ANN can receive higher weightings if they are found to be more important within a model.


This idea builds on the foundation of neural networks: the perceptron (or single neuron). Information is propagated forward through this system by having a set of inputs, x, where each input has a corresponding weight, w. The input should also include a ‘bias term’ that is independent of x; the bias term is used to shift the function accordingly, given the problem at hand. Each input is multiplied by its corresponding weight, and the sum of the products is calculated. The sum then passes through a non-linear activation function, and an output, y, is generated.

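As a concrete sketch, this forward pass fits in a few lines of NumPy. The input values, weights, and bias below are made up purely for illustration:

```python
import numpy as np

def perceptron(x, w, b):
    """Forward pass of a single neuron: sum of weighted inputs plus bias,
    passed through a sigmoid (non-linear) activation."""
    z = np.dot(w, x) + b         # sum of products, shifted by the bias term
    return 1 / (1 + np.exp(-z))  # non-linear activation

# Hypothetical inputs, weights, and bias
x = np.array([0.5, -1.0, 2.0])
w = np.array([0.4, 0.6, -0.2])
b = 0.1
y = perceptron(x, w, b)
print(y)  # a single output between 0 and 1
```

Here the learning problem would be to find the values of w and b that make the output useful; the forward pass itself is just multiply, sum, and activate.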

The non-linear activation function can take a variety of forms; a common one is the sigmoid function. This is an S-shaped curve bounded between 0 and 1, where negative input values are assigned outputs below 0.5 and positive input values are assigned outputs above 0.5. The choice of activation function depends largely on the situation at hand, especially because different functions have different properties. What matters most about this function is its non-linearity: in ‘real-world’ problems, a large portion of data is non-linear, and applying a linear form to a non-linear problem will undoubtedly result in poor performance.

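A minimal check of these sigmoid properties:

```python
import numpy as np

def sigmoid(z):
    """S-shaped curve, bounded between 0 and 1."""
    return 1 / (1 + np.exp(-z))

print(sigmoid(-2.0))  # negative inputs map below 0.5
print(sigmoid(0.0))   # exactly 0.5 at zero
print(sigmoid(2.0))   # positive inputs map above 0.5
```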

Image by Trist’n Joseph

Now that we understand how a perceptron works: a deep neural network is essentially formed by connecting multiple perceptrons. If all the inputs are densely connected to all the outputs, the layers are referred to as dense layers. Unlike a single perceptron, however, a deep neural network can contain multiple hidden layers.

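This stacking can be sketched by composing dense layers in NumPy. The layer sizes are arbitrary and the weights are randomly initialized (untrained), so the output is meaningless; the point is only the shape of the computation:

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def dense(x, W, b):
    """A dense layer: every input is connected to every output neuron."""
    return sigmoid(W @ x + b)

rng = np.random.default_rng(0)
x = rng.normal(size=3)                         # 3 input features
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)  # hidden layer with 4 neurons
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)  # output layer with 1 neuron

hidden = dense(x, W1, b1)   # inputs -> hidden layer
y = dense(hidden, W2, b2)   # hidden layer -> output
print(y)
```

Adding more hidden layers is just more calls to `dense` between the input and the output.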

The hidden layer is the part of the neural network between its input and output, where the activation function transforms the information being fed in. It is called a hidden layer because it is not directly observable from the system’s inputs and outputs. A neural network with only one hidden layer is referred to as a single layer neural network, while networks with more hidden layers are known as deep neural networks. The deeper the neural network, the more it can recognize from data.


It must be noted that although the goal is to learn as much as possible from the data, deep learning models can also suffer from overfitting. This occurs when a model learns too much from the training data, including its random noise. The model can then pick out very intricate patterns within the data, but this hurts its performance on new data: the noise picked up in the training data does not apply to new or unseen data, and the model cannot generalize the patterns it found. As with the perceptron, non-linearity is highly important in deep learning models; although a model will learn a lot from having multiple hidden layers, applying linear forms to non-linear problems will still result in poor performance.

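Overfitting is easiest to see with a deliberately over-flexible model. The sketch below uses a high-degree polynomial rather than a neural network, but the failure mode is the same: near-zero error on noisy training points, much worse error on new points drawn from the same underlying signal. All values here are synthetic, for illustration only:

```python
import numpy as np

rng = np.random.default_rng(42)
x_train = np.linspace(0, 1, 20)
noise = rng.normal(scale=0.3, size=x_train.size)
y_train = np.sin(2 * np.pi * x_train) + noise  # true signal plus random noise

# An over-flexible model (degree-15 polynomial) chases the noise...
coeffs = np.polyfit(x_train, y_train, deg=15)
train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)

# ...so it fails to generalize to new points from the same signal
x_new = np.linspace(0.025, 0.975, 20)
y_new = np.sin(2 * np.pi * x_new)
test_err = np.mean((np.polyval(coeffs, x_new) - y_new) ** 2)

print(train_err, test_err)  # training error is far smaller than test error
```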

Image by Trist’n Joseph

The importance of deep learning (and, by extension, ANNs) is rooted in the idea that better data-driven decisions can be made by understanding more from data. Humans have the ability to spontaneously put information together by recognizing old patterns, developing new connections, and perceiving something they have learnt in a new light in order to develop new and effective processes. But humans are not good at handling highly complex situations, and most ‘real-world’ problems are highly complex. Wouldn’t it be great if computers were more like the human brain?


digitaltrends.com/cool-tech/what-is-an-artificial-neural-network/

deepai.org/machine-learning-glossary-and-terms/hidden-layer-machine-learning#:~:text=In%20neural%20networks%2C%20a%20hidden,inputs%20entered%20into%20the%20network.

ncbi.nlm.nih.gov/pmc/articles/PMC4960264/

towardsdatascience.com/introduction-to-artificial-neural-networks-ann-1aea15775ef9

explainthatstuff.com/introduction-to-neural-networks.html

neuralnetworksanddeeplearning.com/

Other Useful Materials:

deeplearning.mit.edu/

https://www.inertia7.com/tristn

youtube.com/watch?v=aircAruvnKk

youtube.com/watch?v=bfmFfD2RIcg

https://towardsdatascience.com/what-is-deep-learning-adf5d4de9afc

Translated from: https://towardsdatascience.com/understanding-artificial-neural-networks-3fc3cbcd397d
