
Fundamentals of Artificial Neural Networks

Artificial Neural Networks

The artificial neural network, or neural network for short, is not a new idea. It has been around for roughly 80 years.

It was not until 2011 that deep neural networks became popular, thanks to new training techniques, the availability of huge datasets, and powerful computers.

A neural network mimics a biological neuron, which has dendrites, a nucleus, an axon, and axon terminals.

[Figure: Terminal Axon]

For a network, we need at least two neurons. These neurons transfer information via a synapse, between the dendrites of one neuron and the axon terminals of another.

[Figure: Neurons Transfer Information]

A plausible model of an artificial neuron looks like this:

[Figure: Probable Model]

A full neural network looks as shown below:

[Figure: Neural Network]

The circles are neurons or nodes, each applying a function to its data, and the lines/edges connecting them are the weighted connections along which information is passed.

Each column is a layer. The first layer, which receives your data, is the input layer. All the layers between the input layer and the output layer are the hidden layers.

If you have one or a few hidden layers, you have a shallow neural network. If you have many hidden layers, you have a deep neural network.

In this model, you take input data, weight it, and pass it through a function in the neuron called the threshold function or activation function.

Basically, the neuron sums all of the weighted inputs and compares that sum with a certain value. If the sum crosses the threshold, the neuron fires and outputs 1; otherwise nothing is fired and it outputs 0. That output is then weighted and passed along to the next neuron, where the same sort of function is run.
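
The weighted sum and threshold described above can be sketched in a few lines of Python. The weights and the threshold of 0.5 here are made up purely for illustration:

```python
# Minimal sketch of a single artificial neuron with a step (threshold)
# activation. Weights and threshold are hypothetical example values.
def step_neuron(inputs, weights, threshold=0.5):
    """Fire (return 1) if the weighted sum of inputs exceeds the threshold."""
    weighted_sum = sum(x * w for x, w in zip(inputs, weights))
    return 1 if weighted_sum > threshold else 0

# Two inputs with illustrative weights:
print(step_neuron([1.0, 0.0], [0.8, 0.3]))  # weighted sum 0.8 > 0.5, so 1
print(step_neuron([0.0, 1.0], [0.8, 0.3]))  # weighted sum 0.3 <= 0.5, so 0
```

Note that the output is all-or-nothing: the neuron either fires (1) or it does not (0), regardless of how far the sum is from the threshold.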

We can also use a sigmoid (s-shaped) function as the activation function.
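
Unlike the hard 0/1 step, the sigmoid squashes any real input smoothly into the range (0, 1). A minimal sketch:

```python
import math

# Sigmoid activation: maps any weighted sum to a value between 0 and 1.
def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

print(sigmoid(0.0))             # 0.5: a zero weighted sum is exactly "undecided"
print(round(sigmoid(4.0), 3))   # 0.982: large positive sums approach 1
```

The smoothness matters later: a differentiable activation is what makes gradient-based weight adjustment possible.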

As for the weights, they just start out random, and they are unique to each input into the node/neuron.

In a typical "feed forward" pass, the most basic operation of a neural network, your information passes straight through the network you created, and you compare the output to what you hoped the output would be, using your sample data.

From here, you need to adjust the weights to help your output match the desired output.

A network in which data is sent straight through in this way is called a feed-forward neural network.

Our data goes from the input, through the layers in order, then to the output.
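
The input-to-layers-to-output flow can be sketched as a tiny feed-forward pass: two inputs, one hidden layer of two neurons, one output. All weights here are hypothetical, chosen only for illustration:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Minimal feed-forward sketch: data flows from the input layer, through
# one hidden layer, to a single output neuron.
def feed_forward(inputs, hidden_weights, output_weights):
    # Each hidden neuron activates on a weighted sum of all inputs.
    hidden = [sigmoid(sum(x * w for x, w in zip(inputs, ws)))
              for ws in hidden_weights]
    # The output neuron activates on a weighted sum of hidden activations.
    return sigmoid(sum(h * w for h, w in zip(hidden, output_weights)))

hidden_weights = [[0.2, -0.5], [0.7, 0.1]]   # 2 hidden neurons, 2 inputs each
output_weights = [0.4, -0.3]
print(feed_forward([1.0, 2.0], hidden_weights, output_weights))
```

In a real network the weights would start random and then be adjusted; here they are fixed so the pass itself is easy to follow.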

When we go backwards and begin adjusting the weights to minimize the loss/cost, this is called backpropagation.

This is an optimization problem. With neural networks, in real practice, we have to deal with hundreds of thousands of variables, or millions, or more.

The first solution was to use stochastic gradient descent as the optimization method. Now, there are options like AdaGrad, the Adam optimizer, and so on. Either way, this is a massive computational operation, which is why neural networks were mostly left on the shelf for over half a century. Only very recently have our machines had the power and architecture to even consider these operations, along with properly sized datasets to match.
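
The core move in stochastic gradient descent is small: nudge each weight against its loss gradient. A minimal sketch for a one-weight model (y = w * x, squared-error loss), with a made-up sample and learning rate:

```python
# One stochastic-gradient-descent step on a single weight, for the toy
# model prediction = w * x with squared-error loss. Sample values and the
# learning rate are hypothetical.
def sgd_step(w, x, target, learning_rate=0.1):
    prediction = w * x
    # d(loss)/dw for loss = (prediction - target)**2
    gradient = 2 * (prediction - target) * x
    return w - learning_rate * gradient

w = 0.0
for _ in range(50):               # repeatedly step on the same sample
    w = sgd_step(w, x=2.0, target=6.0)
print(round(w, 3))                # converges toward 3.0, since 3.0 * 2.0 == 6.0
```

Real optimizers such as AdaGrad and Adam refine this same update with per-weight adaptive learning rates, but the gradient step above is the common core.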

For simple classification tasks, the neural network is relatively close in performance to other simple algorithms like K-nearest neighbors. The real utility of neural networks is realized when we have much larger data and much more complex questions, on which they outperform other machine learning models.

Translated from: https://www.tutorialspoint.com/python_deep_learning/python_deep_learning_artificial_neural_networks.htm
