Deep Neural Network Regression

Deep Neural Networks

A deep neural network (DNN) is an ANN with multiple hidden layers between the input and output layers. Similar to shallow ANNs, DNNs can model complex non-linear relationships.

The main purpose of a neural network is to receive a set of inputs, perform progressively complex calculations on them, and produce an output that solves real-world problems such as classification. Here we restrict ourselves to feed-forward neural networks.

We have an input, an output, and a flow of sequential data in a deep network.

Deep Network

Neural networks are widely used in supervised learning and reinforcement learning problems. These networks are based on a set of layers connected to each other.

In deep learning, the number of hidden layers, most of which are non-linear, can be large; say, about 1000 layers.

DL models generally produce much better results than conventional shallow ML models.

We mostly use the gradient descent method to optimize the network and minimize the loss function.
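
As a rough illustration of this idea, the sketch below (not from the original article; the data and learning rate are made up) fits a simple linear model y = w*x + b by repeatedly stepping against the gradient of a mean-squared-error loss:

    import numpy as np

    x = np.array([0.0, 1.0, 2.0, 3.0])
    y = np.array([1.0, 3.0, 5.0, 7.0])       # true relation: y = 2x + 1

    w, b = 0.0, 0.0                           # start from arbitrary parameters
    lr = 0.05                                 # learning rate

    for step in range(500):
        error = w * x + b - y
        loss = np.mean(error ** 2)            # MSE loss being minimized
        grad_w = 2 * np.mean(error * x)       # dL/dw
        grad_b = 2 * np.mean(error)           # dL/db
        w -= lr * grad_w                      # step against the gradient
        b -= lr * grad_b

    print(round(w, 2), round(b, 2))           # approaches 2.0 and 1.0

The same update rule, applied to all the weights and biases of every layer, is what trains the deep networks discussed below.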

We can use ImageNet, a repository of millions of digital images, to classify a dataset into categories like cats and dogs. DL nets are increasingly used for dynamic images (video) as well as static ones, and for time series and text analysis.

Training on data sets forms an important part of deep learning models. In addition, backpropagation is the main algorithm used to train DL models.

DL deals with training large neural networks with complex input-output transformations.

One example of DL is the mapping of a photo to the names of the people in it, as is done on social networks; describing a picture with a phrase is another recent application of DL.

DL Mapping

Neural networks are functions that take inputs such as x1, x2, x3, ... and transform them into outputs such as z1, z2, z3, ... through two (shallow networks) or several (deep networks) intermediate operations, also called layers.

The weights and biases change from layer to layer. 'w' and 'v' are the weights, or synapses, of the layers of the neural network.
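
A minimal sketch of such a transformation, with 'w' as the first-layer weights and 'v' as the second-layer weights (the shapes, values, and sigmoid activation here are assumptions chosen only for illustration), could look like this:

    import numpy as np

    def sigmoid(a):
        return 1.0 / (1.0 + np.exp(-a))

    x = np.array([0.5, -1.2, 3.0])        # inputs x1, x2, x3
    w = np.random.randn(3, 4)             # weights ('w') of the first layer
    b1 = np.zeros(4)                      # biases of the first layer
    v = np.random.randn(4, 2)             # weights ('v') of the second layer
    b2 = np.zeros(2)                      # biases of the second layer

    h = sigmoid(x @ w + b1)               # intermediate (hidden) layer
    z = sigmoid(h @ v + b2)               # outputs z1, z2

    print(z)

Each additional hidden layer simply repeats this weight-multiply-and-activate step with its own weights and biases.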

The best use case of deep learning is the supervised learning problem. Here, we have a large set of data inputs with a desired set of outputs.

Backpropagation Algorithm

Here we apply the backpropagation algorithm to obtain correct output predictions.
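
As a rough sketch of what backpropagation does (illustrative only: the toy data, squared-error loss, and sigmoid activations are assumptions, not code from the article), consider a network with one hidden layer:

    import numpy as np

    def sigmoid(a):
        return 1.0 / (1.0 + np.exp(-a))

    rng = np.random.default_rng(0)
    X = rng.standard_normal((8, 3))                          # 8 toy samples, 3 features
    y = (X.sum(axis=1, keepdims=True) > 0).astype(float)     # toy binary targets

    W1 = 0.1 * rng.standard_normal((3, 4))
    W2 = 0.1 * rng.standard_normal((4, 1))
    lr = 0.5

    for _ in range(1000):
        # forward propagation
        h = sigmoid(X @ W1)
        out = sigmoid(h @ W2)

        # backward propagation of the error (squared-error loss)
        d_out = (out - y) * out * (1 - out)      # error signal at the output layer
        d_h = (d_out @ W2.T) * h * (1 - h)       # error signal at the hidden layer

        W2 -= lr * (h.T @ d_out) / len(X)        # gradient step for the output weights
        W1 -= lr * (X.T @ d_h) / len(X)          # gradient step for the hidden weights

    print(np.round(out.ravel(), 2))              # predictions move toward the targets

The gradients are computed from the output layer backwards, which is where the algorithm gets its name.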

The most basic dataset in deep learning is MNIST, a dataset of handwritten digits.

We can train a deep Convolutional Neural Network with Keras to classify images of handwritten digits from this dataset.
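
A small Keras CNN for this dataset might look like the sketch below (the architecture and hyperparameters are reasonable defaults assumed for illustration, not values prescribed by the article; it requires TensorFlow with its bundled Keras API):

    import numpy as np
    from tensorflow import keras
    from tensorflow.keras import layers

    # load the handwritten-digit images and scale pixels to [0, 1]
    (x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
    x_train = x_train[..., np.newaxis] / 255.0
    x_test = x_test[..., np.newaxis] / 255.0

    model = keras.Sequential([
        layers.Conv2D(32, (3, 3), activation="relu", input_shape=(28, 28, 1)),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(10, activation="softmax"),      # one score per digit class
    ])

    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

    model.fit(x_train, y_train, epochs=3, batch_size=128, validation_split=0.1)
    print(model.evaluate(x_test, y_test))

A few epochs of training are typically enough to exceed 98% test accuracy on MNIST.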

The firing, or activation, of a neural net classifier produces a score. For example, to classify patients as sick or healthy, we consider parameters such as height, weight, body temperature, blood pressure, etc.

A high score means the patient is sick and a low score means the patient is healthy.
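
A single node's score can be pictured as a weighted sum of these parameters squashed into the range 0 to 1; the sketch below uses made-up weights and a hypothetical patient purely for illustration:

    import math

    def node_score(features, weights, bias):
        # weighted sum of the inputs passed through a sigmoid "activation"
        total = sum(f * w for f, w in zip(features, weights)) + bias
        return 1.0 / (1.0 + math.exp(-total))

    patient = [1.70, 85.0, 39.2, 150.0]    # height (m), weight (kg), temperature (C), blood pressure
    weights = [-0.1, 0.02, 0.8, 0.01]      # illustrative weights, not learned values
    bias = -32.0

    score = node_score(patient, weights, bias)
    print("sick" if score > 0.5 else "healthy", round(score, 2))

In a real network these weights are not chosen by hand but learned from labelled examples with backpropagation.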

Each node in the output and hidden layers has its own classifier. The input layer takes the inputs and passes its scores on to the next hidden layer for further activation, and this continues until the output is reached.

This progression from input to output, moving from left to right in the forward direction, is called forward propagation.

The credit assignment path (CAP) in a neural network is the chain of transformations from input to output, and the length of this chain is one measure of the network's depth.
