CNTK - Neural Network (NN) Concepts

This chapter deals with the concepts of neural networks with regard to CNTK.

As we know, several layers of neurons are used to build a neural network. But the question arises: in CNTK, how can we model the layers of a NN? This can be done with the help of the layer functions defined in the layers module.

Layer function

Actually, in CNTK, working with the layers has a distinct functional programming feel to it. A layer function looks like a regular function, and it produces a mathematical function with a set of predefined parameters. Let's see how we can create the most basic layer type, Dense, with the help of a layer function.

Example

With the help of the following basic steps, we can create the most basic layer type −

Step 1 − First, we need to import the Dense layer function from the layers package of CNTK.


from cntk.layers import Dense

Step 2 − Next, from the CNTK root package, we need to import the input_variable function.


from cntk import input_variable

Step 3 − Now, we need to create a new input variable using the input_variable function. We also need to provide its size.


feature = input_variable(100)

Step 4 − At last, we will create a new layer using the Dense function, providing the number of neurons we want.


layer = Dense(40)(feature)

Now, we can invoke the configured Dense layer function to connect the Dense layer to the input.

Complete implementation example


from cntk.layers import Dense
from cntk import input_variable
feature = input_variable(100)
layer = Dense(40)(feature)
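
As a quick sanity check, we can print the output shape of the connected layer. This is a minimal sketch, relying on CNTK functions forwarding attribute lookups to their single output:

print(layer.shape)   # expected: (40,), matching the 40 neurons we asked for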

Customizing layers

As we have seen, CNTK provides us with a pretty good set of defaults for building NNs. Depending on the activation function and the other settings we choose, the behavior as well as the performance of the NN will differ. That is why it is good to understand what we can configure.

Steps to configure a Dense layer

Each layer in a NN has its unique configuration options, and when we talk about the Dense layer, we have the following important settings to define −

  • shape − As the name implies, it defines the output shape of the layer, which in turn determines the number of neurons in that layer.

  • activation − It defines the activation function of that layer, which it uses to transform the input data.

  • init − It defines the initialisation function of that layer. It will initialise the parameters of the layer when we start training the NN.

Let's see the steps with the help of which we can configure a Dense layer −

Step 1 − First, we need to import the Dense layer function from the layers package of CNTK.


from cntk.layers import Dense

Step 2 − Next, from the CNTK ops package, we need to import the sigmoid operator. It will be used as the activation function.


from cntk.ops import sigmoid

Step 3 − Now, from the initializer package, we need to import the glorot_uniform initializer.


from cntk.initializer import glorot_uniform

Step 4 − At last, we will create a new layer using the Dense function, providing the number of neurons as the first argument. Also, provide the sigmoid operator as the activation function and glorot_uniform as the init function for the layer.


layer = Dense(50, activation = sigmoid, init = glorot_uniform)

Complete implementation example


from cntk.layers import Dense
from cntk.ops import sigmoid
from cntk.initializer import glorot_uniform
layer = Dense(50, activation = sigmoid, init = glorot_uniform)
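
Note that, unlike the earlier example, this layer has only been configured so far; it is not yet connected to an input. A minimal sketch of binding it to an input variable (the variable name and the size of 100 are illustrative choices, not part of the original text):

from cntk import input_variable

feature = input_variable(100)   # hypothetical 100-dimensional input
z = layer(feature)              # connects the configured Dense layer to the input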

Optimizing the parameters

Till now, we have seen how to create the structure of a NN and how to configure its various settings. Here, we will see how we can optimise the parameters of a NN. This can be done with the combination of two components, namely learners and trainers.

Trainer component

The first component used to optimise the parameters of a NN is the trainer component. It basically implements the backpropagation process. In terms of how it works, it passes the data through the NN to obtain a prediction.

After that, it uses another component called the learner in order to obtain new values for the parameters of the NN. Once it obtains the new values, it applies them and repeats the process until an exit criterion is met, as sketched below.
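
To make this concrete, below is a minimal sketch of constructing a trainer with the cntk.Trainer class. The two-class model, the choice of loss and metric, and the plain-float learning rate (accepted by recent CNTK 2.x releases) are illustrative assumptions, not prescribed by the original text:

from cntk import input_variable, Trainer
from cntk.layers import Dense
from cntk.losses import cross_entropy_with_softmax
from cntk.metrics import classification_error
from cntk.learners import sgd

# Illustrative two-class model: 100 features in, 2 class scores out.
features = input_variable(100)
labels = input_variable(2)
z = Dense(2)(Dense(50)(features))

# Criterion: what to minimise and what to report during training.
loss = cross_entropy_with_softmax(z, labels)
error = classification_error(z, labels)

# The trainer ties together the model, the criterion, and a learner.
learner = sgd(z.parameters, lr=0.1)
trainer = Trainer(z, (loss, error), [learner])

Each call to trainer.train_minibatch with a batch of features and labels then performs one forward pass, backpropagation, and parameter update.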

Learner component

The second component used to optimise the parameters of a NN is the learner component, which is basically responsible for performing the gradient descent algorithm.
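
Conceptually, the learner applies the gradient descent update rule: every parameter moves a small step against its gradient. A minimal pure-Python illustration of that rule (this is just the idea, not the CNTK API):

def gradient_descent_step(weights, gradients, lr=0.1):
    # w <- w - lr * dL/dw for every parameter
    return [w - lr * g for w, g in zip(weights, gradients)]

print(gradient_descent_step([0.5, -0.3], [0.2, -0.1]))   # [0.48, -0.29]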

Learners included in the CNTK library

Following is a list of some of the interesting learners included in the CNTK library (a short instantiation sketch follows the list) −

  • Stochastic Gradient Descent (SGD) − This learner represents basic stochastic gradient descent, without any extras.

  • Momentum Stochastic Gradient Descent (MomentumSGD) − Building on SGD, this learner applies momentum to overcome the problem of local minima.

  • RMSProp − This learner uses decaying learning rates in order to control the rate of descent.

  • Adam − This learner uses decaying momentum in order to decrease the rate of descent over time.

  • Adagrad − This learner uses different learning rates for frequently and infrequently occurring features.
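
A minimal sketch of instantiating some of these learners, assuming the model z and the learning rate from the trainer sketch above (the momentum value of 0.9 is an illustrative choice, and passing it as a plain float assumes a recent CNTK 2.x release):

from cntk.learners import sgd, momentum_sgd, adam, adagrad

lr = 0.1
sgd_learner = sgd(z.parameters, lr)
momentum_learner = momentum_sgd(z.parameters, lr, momentum=0.9)
adam_learner = adam(z.parameters, lr, momentum=0.9)
adagrad_learner = adagrad(z.parameters, lr)
# rmsprop takes several additional required decay arguments and is omitted here.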

Translated from: https://www.tutorialspoint.com/microsoft_cognitive_toolkit/microsoft_cognitive_toolkit_neural_network_nn_concepts.htm
