Simple Neural Network Training Datasets: Training Neural Networks, Explained Simply

This article gives an accessible introduction to the neural network training process, focusing on the simple datasets used for training. Through worked examples on datasets of moderate complexity, it helps readers understand how to train and optimize neural network models in a Python environment using deep learning libraries such as TensorFlow and Keras.

In this post we will explore the mechanism of neural network training, but I’ll do my best to avoid rigorous mathematical discussions and keep it intuitive.

Consider the following task: you receive an image, and want an algorithm that returns (predicts) the correct number of people in the image.

We start by assuming that there is, indeed, some mathematical function out there that relates the collection of all possible images with the collection of integer values describing the number of people in each image. We accept the fact that we will never know the actual function, but we hope to learn a model of finite complexity that approximates this function well enough.

Let’s assume that you’ve constructed some kind of neural network to perform this task. For the sake of this discussion, it’s not really important how many layers there are in the network, or the nature of the mathematical manipulations carried out in each layer. What is important, however, is that in the end there is one output neuron that predicts a (non-negative, hopefully integer) value.
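As a sketch of what such a network might compute, here is a tiny forward pass in plain NumPy rather than a full framework (the layer sizes and weights are illustrative assumptions, not the architecture the post has in mind); the point is simply that a single output neuron with a ReLU produces one non-negative prediction:

```python
import numpy as np

rng = np.random.default_rng(0)

# Flattened "image" of 4x4 = 16 pixel values (illustrative size).
x = rng.random(16)

# One hidden layer (8 units) and a single output neuron.
W1, b1 = rng.standard_normal((8, 16)), np.zeros(8)
W2, b2 = rng.standard_normal((1, 8)), np.zeros(1)

h = np.maximum(0.0, W1 @ x + b1)   # ReLU hidden layer
y = np.maximum(0.0, W2 @ h + b2)   # ReLU output keeps the predicted count non-negative

print(y.shape)   # a single scalar prediction
```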

The mathematical operation of the network can be expressed as a function:

f(x, w) = y

where x is the input image (we can think of it as a vector containing all the pixel values), y is the network’s prediction, and w is a vector containing all the internal parameters of the function (e.g. in f(x, w) = a + bx + exp(c*x) = y, the values of a, b and c are the parameters of the function).
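Using the example function from the paragraph above, the parameter vector w = (a, b, c) can be made explicit (the particular values here are made up for illustration):

```python
import numpy as np

def f(x, w):
    # w packs the function's internal parameters (a, b, c).
    a, b, c = w
    return a + b * x + np.exp(c * x)

w = np.array([1.0, 2.0, 0.0])   # with c = 0, exp(c*x) = 1 for every x
print(f(3.0, w))                # 1 + 2*3 + 1 = 8.0
```

Training is then the search for the values of w that make f(x, w) match the desired outputs.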

As we saw in the post on perceptrons, during training we want some kind of mechanism that:

  1. Evaluates the network’s prediction on a given input,
  2. Compares it to the desired answer (the ground truth),
  3. Produces feedback that corresponds to the magnitude of the error,
  4. And finally, modifies the network parameters in a way that improves its prediction (decreases the error magnitude).
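The four steps above can be sketched as one gradient-descent loop; a toy one-parameter linear model with a squared-error loss stands in for the real network here, so the numbers are assumptions made purely for illustration:

```python
# Toy model y = w * x, trained to match the ground truth y = 2 * x.
x, y_true = 3.0, 6.0
w = 0.0      # initial parameter value
lr = 0.05    # learning rate

for _ in range(100):
    y_pred = w * x             # 1. evaluate the prediction on the input
    error = y_pred - y_true    # 2. compare it to the ground truth
    loss = error ** 2          # 3. feedback: the magnitude of the error
    grad = 2 * error * x       # gradient of the loss w.r.t. w
    w -= lr * grad             # 4. modify the parameter to decrease the error

print(round(w, 4))  # converges toward 2.0
```

Real training works the same way, except that w holds millions of parameters and the gradient is computed by backpropagation.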

Thanks to some clever minds, we have such a mechanism. In order to understand it we need to cover two topics:

  1. Loss function

  2. Backpropagation

[Image: Uri Almog, Instagram]

Loss Function

Simply put, the loss function is the error magnitude. In more detail, a good loss function should be a metric, i.e. it defines a distance between points in the space of prediction values. You can read more about distance functions elsewhere.
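As a concrete example, the mean squared error behaves like a (squared) distance between the prediction vector and the ground truth: it is zero exactly when the two agree, and grows as they move apart. A minimal sketch, with made-up values:

```python
import numpy as np

def mse(y_pred, y_true):
    # Mean squared error: the average squared difference per component.
    return np.mean((y_pred - y_true) ** 2)

y_true = np.array([3.0, 1.0, 2.0])
print(mse(y_true, y_true))                     # 0.0: distance to itself is zero
print(mse(np.array([4.0, 1.0, 2.0]), y_true))  # grows when one component is off by 1
```

(Strictly speaking, the squared error does not satisfy the triangle inequality, but it preserves the property that matters for training: smaller loss means a closer prediction.)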
