How to create a neural network in JavaScript in only 30 lines of code

In this article, I’ll show you how to create and train a neural network using Synaptic.js, which allows you to do deep learning in Node.js and the browser.

We’ll be creating the simplest neural network possible: one that manages to solve the XOR equation.
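
XOR outputs 1 when exactly one of its two inputs is 1. The four input/output pairs the network has to learn can be written out as plain data (the variable name here is just illustrative):

```javascript
// XOR truth table: the four input/output pairs our network must learn
var xorData = [
  { input: [0, 0], output: [0] },
  { input: [0, 1], output: [1] },
  { input: [1, 0], output: [1] },
  { input: [1, 1], output: [0] }
];

console.log(xorData.length); // 4
```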

I’ve also created an interactive Scrimba tutorial on this example, so check that out as well:

Or if you’re interested in a full course on neural networks in JavaScript, please check out our free course on Brain.js at Scrimba.

But before we look at the code, let’s go through the very basics of neural networks.

Neurons and synapses

The first building block of a neural network is, well, neurons.

A neuron is like a function: it takes a few inputs and returns an output.

There are many different types of neurons. Our network is going to use sigmoid neurons, which take any given number and squash it to a value between 0 and 1.

The circle below illustrates a sigmoid neuron. Its input is 5 and its output is 1. The arrows are called synapses, which connect the neuron to other layers in the network.

So why is the red number 5? Because it’s the sum of the three synapses that are connecting to the neuron as shown by the three arrows at the left. Let’s unpack that.

At the far left, we see two values plus a so-called bias value. The values are 1 and 0 which are the green numbers. The bias value is -2 which is the brown number.

First, the two inputs are multiplied by their weights, which are 7 and 3 as shown by the blue numbers.

Finally, we add the weighted inputs up with the bias and end up with 5, the red number: 1×7 + 0×3 + (−2) = 5. This is the input for our artificial neuron.

As this is a sigmoid neuron, which squashes any value to between 0 and 1, the output gets squeezed down to approximately 1 (sigmoid(5) ≈ 0.993).
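
The whole computation for this neuron can be sketched in a few lines of plain JavaScript (the sigmoid function here is just the math, not Synaptic code):

```javascript
// Sigmoid squashes any number into a value between 0 and 1
function sigmoid(x) {
  return 1 / (1 + Math.exp(-x));
}

var inputs = [1, 0];   // the green numbers
var weights = [7, 3];  // the blue numbers
var bias = -2;         // the brown number

// Weighted sum plus bias: 1*7 + 0*3 + (-2) = 5 (the red number)
var sum = inputs[0] * weights[0] + inputs[1] * weights[1] + bias;

console.log(sum);          // 5
console.log(sigmoid(sum)); // ~0.993, squashed close to 1
```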

If you connect a network of these neurons together, you have a neural network. This propagates forward from input to output, via neurons which are connected to each other through synapses. Like on the image below:
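
Forward propagation just repeats that single-neuron computation for every neuron in a layer. A minimal sketch of the idea (activateLayer is a hypothetical helper for illustration, not Synaptic's API):

```javascript
function sigmoid(x) {
  return 1 / (1 + Math.exp(-x));
}

// Compute one layer's outputs: each neuron sums its weighted inputs
// plus its bias, then applies the sigmoid.
// weights[i] holds the incoming weights for neuron i.
function activateLayer(inputs, weights, biases) {
  return weights.map(function (neuronWeights, i) {
    var sum = biases[i];
    for (var j = 0; j < inputs.length; j++) {
      sum += inputs[j] * neuronWeights[j];
    }
    return sigmoid(sum);
  });
}

// The single neuron from the example above, as a one-neuron layer
console.log(activateLayer([1, 0], [[7, 3]], [-2])); // [~0.993]
```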

The goal of a neural network is to train it to do generalizations, such as recognizing handwritten digits or email spam. And being good at generalizing is a matter of having the right weights and bias values across the network. Like with the blue and brown numbers in our example above.

When training the network, you’re simply showing it loads of examples such as handwritten digits, and getting the network to predict the right answer.

After each prediction, you’ll calculate how wrong the prediction was, and adjust the weights and bias values so that the network will guess a little bit more correctly the next time around. This learning process is called backpropagation. Do this thousands of times and your network will soon become good at generalizing.
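
To give a feel for what one such adjustment looks like, here is a deliberately simplified sketch of a single weight update for one sigmoid neuron using squared error. Real backpropagation (and Synaptic's implementation) chains this through every layer:

```javascript
function sigmoid(x) {
  return 1 / (1 + Math.exp(-x));
}

var weight = 0.5;
var bias = 0;
var learningRate = 0.3;
var input = 1;
var target = 1;

// Forward pass: the neuron's current prediction
var output = sigmoid(input * weight + bias);

// Gradient of the squared error, using sigmoid'(x) = output * (1 - output)
var delta = (output - target) * output * (1 - output);

// Nudge weight and bias a small step against the gradient
weight -= learningRate * delta * input;
bias -= learningRate * delta;

// The updated weight moves the next prediction closer to the target
console.log(weight > 0.5); // true
```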

How backpropagation works technically is outside the scope of this tutorial, but here are the three best sources I’ve found for understanding it:

The code

Now that you’ve gotten a basic intro, let’s jump into the code. The first thing we need to do is to create the layers. We do this with the new Layer() function in synaptic. The number passed to the function dictates how many neurons each layer should have.

If you’re confused about what a layer is, check out the screencast above.

const { Layer, Network } = window.synaptic;

var inputLayer = new Layer(2);
var hiddenLayer = new Layer(3);
var outputLayer = new Layer(1);

Next up we’ll connect these layers together and instantiate a new network, like this:

inputLayer.project(hiddenLayer);
hiddenLayer.project(outputLayer);

var myNetwork = new Network({
  input: inputLayer,
  hidden: [hiddenLayer],
  output: outputLayer
});

So this is a 2–3–1 network, which can be visualized like this:

Now let’s train the network:

// train the network - learn XOR

var learningRate = .3;

for (var i = 0; i < 20000; i++) {  
  // 0,0 => 0  
  myNetwork.activate([0,0]);  
  myNetwork.propagate(learningRate, [0]);

  // 0,1 => 1  
  myNetwork.activate([0,1]);  
  myNetwork.propagate(learningRate, [1]);

  // 1,0 => 1  
  myNetwork.activate([1,0]);  
  myNetwork.propagate(learningRate, [1]);

  // 1,1 => 0  
  myNetwork.activate([1,1]);  
  myNetwork.propagate(learningRate, [0]);  
}

Here we’re running the network 20,000 times. Each time we propagate forward and backwards four times, passing in the four possible inputs for this network: [0,0], [0,1], [1,0], [1,1].

We start by doing myNetwork.activate([0,0]), where [0,0] is the data point we’re sending into the network. This is the forward propagation, also called activating the network. After each forward propagation, we need to do a backpropagation, where the network updates its own weights and biases.

The backpropagation is done with this line of code: myNetwork.propagate(learningRate, [0]), where the learningRate is a constant that tells the network how much it should adjust its weights each time. The second parameter 0 represents the correct output given the input [0,0].

The network then compares its own prediction to the correct label. This tells it how right or wrong it was.

It uses the comparison as a basis for correcting its own weights and bias values so that it will guess a little bit more correct the next time.

After it has done this process 20,000 times, we can check how well our network has learned by activating the network with all four possible inputs:

console.log(myNetwork.activate([0,0]));   
// -> [0.015020775950893527]

console.log(myNetwork.activate([0,1]));  
// -> [0.9815816381088985]

console.log(myNetwork.activate([1,0]));  
// ->  [0.9871822457132193]

console.log(myNetwork.activate([1,1]));  
// -> [0.012950087641929467]

If we round these values to the closest integer, we’ll get the correct answers for the XOR equation. Hurray!
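
That rounding step can be done with Math.round, for example (the values below are the ones printed above, hard-coded for illustration):

```javascript
// Round each raw network output to the nearest integer
var rawOutputs = [0.0150, 0.9816, 0.9872, 0.0130];
var predictions = rawOutputs.map(function (o) {
  return Math.round(o);
});

console.log(predictions); // [0, 1, 1, 0], the XOR truth table outputs
```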

And that’s about it. Even though we’ve just scratched the surface of neural networks, this should give you enough to start playing with Synaptic for yourself and continue learning on your own. Their wiki contains a lot of good tutorials.

Finally, be sure to share your knowledge by creating a Scrimba screencast or writing an article when you learn something new! :)

PS: We have more free courses for you!

If you’re looking for your next challenge, we have several other free courses you can check out at Scrimba.com. Here are three that might be relevant for you:

Happy coding!

Translated from: https://www.freecodecamp.org/news/how-to-create-a-neural-network-in-javascript-in-only-30-lines-of-code-343dafc50d49/
