Python Deep Learning - Fundamentals

In this chapter, we will look into the fundamentals of Python Deep Learning.

Deep learning models/algorithms

Let us now learn about the different deep learning models/algorithms.

Some of the popular models within deep learning are as follows −

  • Convolutional neural networks
  • Recurrent neural networks
  • Deep belief networks
  • Generative adversarial networks
  • Auto-encoders and so on

The inputs and outputs are represented as vectors or tensors. For example, a neural network may take an image as input, with the individual pixel RGB values represented as a vector.
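
As a minimal sketch of this idea (the 32x32 image size and the NumPy representation below are illustrative assumptions), the RGB pixel values of an image can be flattened into a single input vector:

   import numpy as np

   # Hypothetical 32x32 RGB image with integer pixel values in [0, 255]
   image = np.random.randint(0, 256, size=(32, 32, 3), dtype=np.uint8)

   # Flatten the pixels into one input vector of length 32 * 32 * 3 = 3072,
   # scaling the values to [0, 1] as is commonly done before feeding a network
   input_vector = image.astype(np.float32).reshape(-1) / 255.0

   print(input_vector.shape)   # (3072,)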

The layers of neurons that lie between the input layer and the output layer are called hidden layers. This is where most of the work happens when the neural net tries to solve problems. Taking a closer look at the hidden layers can reveal a lot about the features the network has learned to extract from the data.

Different architectures of neural networks are formed by choosing which neurons to connect to the other neurons in the next layer.
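
As an illustrative sketch only (Keras is not introduced in this chapter, and the layer sizes here are arbitrary assumptions), a network with an input layer, two hidden layers, and an output layer might be declared as follows:

   from tensorflow.keras import layers, models

   model = models.Sequential([
       layers.Input(shape=(3072,)),             # flattened 32x32x3 pixel vector
       layers.Dense(64, activation='relu'),     # first hidden layer
       layers.Dense(32, activation='relu'),     # second hidden layer
       layers.Dense(10, activation='softmax'),  # output layer (e.g. 10 classes)
   ])
   model.summary()

Changing the number or width of the Dense layers, or the way the layers connect, yields a different architecture for the same input and output shapes.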

Pseudocode for calculating output

Following is the pseudocode for calculating the output of a forward-propagating neural network −

   # node[] := array of topologically sorted nodes
   # An edge from a to b means a is to the left of b
   # If the Neural Network has R inputs and S outputs,
   # then the first R nodes are input nodes and the last S nodes are output nodes.
   # incoming[x] := nodes connected to node x
   # weights[x] := weights of incoming edges to x

For each neuron x, from left to right −

   if x <= R:
       do nothing        # it's an input node; output[x] already holds the given input
   else:
       inputs[x] = [output[i] for i in incoming[x]]
       weighted_sum = dot_product(weights[x], inputs[x])
       output[x] = Activation_function(weighted_sum)
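
The sketch below is a minimal runnable interpretation of this pseudocode in Python (the dictionary-based incoming/weights layout, the sigmoid activation, and the tiny example network are assumptions made for illustration):

   import numpy as np

   def sigmoid(z):
       # One possible Activation_function; any nonlinearity could be substituted
       return 1.0 / (1.0 + np.exp(-z))

   def forward_pass(R, incoming, weights, input_values, activation=sigmoid):
       # Nodes 0 .. R-1 are the input nodes; the remaining nodes are assumed
       # topologically sorted and are evaluated left to right.
       num_nodes = len(incoming)
       output = [0.0] * num_nodes
       output[:R] = input_values                       # input nodes keep the given inputs
       for x in range(R, num_nodes):
           inputs_x = [output[i] for i in incoming[x]]
           weighted_sum = np.dot(weights[x], inputs_x)
           output[x] = activation(weighted_sum)
       return output

   # Tiny example: 2 inputs (nodes 0-1), one hidden node (2), one output node (3)
   incoming = {0: [], 1: [], 2: [0, 1], 3: [2]}
   weights = {0: [], 1: [], 2: [0.5, -0.3], 3: [1.2]}
   print(forward_pass(2, incoming, weights, [1.0, 0.0]))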

Translated from: https://www.tutorialspoint.com/python_deep_learning/python_deep_learning_fundamentals.htm
