
Neural Network on Microcontroller (NNoM)

![Build Status](https://travis-ci.org/majianjia/nnom.svg?branch=master) ![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)

NNoM is a high-level inference Neural Network library specifically designed for microcontrollers.

Highlights

- Deploy a Keras model as an NNoM model with one line of code.

- User-friendly interfaces.

- Support for complex structures: Inception [1], ResNet [2], DenseNet [3], Octave Convolution...

- High-performance backend selections.

- Onboard (MCU) evaluation tools: runtime analysis, Top-K accuracy, confusion matrix... (a minimal sketch follows this list)
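
For instance, the onboard evaluation tools can be called directly from MCU code. A minimal sketch, assuming the `weights.h` header produced by the converter (which emits the `nnom_model_create()` constructor) and the `model_stat()` runtime-analysis utility shipped with NNoM:

```c
#include "nnom.h"
#include "weights.h"  // generated by the Keras-to-NNoM converter

void evaluate_model(void)
{
    // nnom_model_create() is emitted into weights.h by the converter;
    // it builds the whole network and allocates its memory.
    nnom_model_t *model = nnom_model_create();

    model_run(model);   // run one inference on the current input buffer
    model_stat(model);  // print per-layer runtime statistics to the log
}
```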

The structure of NNoM is shown below (figure: nnom_structure.png).

More details are available in the Development Guide.

Discussions are welcome via issues. Pull requests are welcome. QQ/TIM group: 763089399.

Licenses

NNoM has been released under the Apache License 2.0 since nnom-V0.2.0. License and copyright information can be found within the code.

Why NNoM?

The aim of NNoM is to provide a lightweight, user-friendly, and flexible interface for fast deployment.

Nowadays, neural networks are becoming wider, deeper, and denser (figure: nnom_wdd.png).

[1] Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., ... & Rabinovich, A. (2015). Going deeper with convolutions. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 1-9).

[2] He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 770-778).

[3] Huang, G., Liu, Z., Van Der Maaten, L., & Weinberger, K. Q. (2017). Densely connected convolutional networks. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 4700-4708).

Since 2014, neural-network development has focused more on structure optimization to improve efficiency and performance, which matters even more on small-footprint platforms such as MCUs. However, the available NN libraries for MCUs are too low-level, which makes them very difficult to use with these complex structures.

Therefore, we built NNoM to help embedded developers deploy NN models to MCUs faster and more simply.

NNoM manages the structure, memory, and everything else for the developer. All you need to do is feed in your new measurements and read back the results.
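
In practice, the run loop reduces to a few calls. A minimal sketch, assuming `weights.h` was produced by the converter's `generate_model()` call, which also emits the `nnom_input_data`/`nnom_output_data` buffers; `fill_input()` and `use_result()` are hypothetical application functions:

```c
#include <stdint.h>
#include "nnom.h"
#include "weights.h"  // generated model: buffers + nnom_model_create()

// hypothetical application hooks, not part of NNoM
extern void fill_input(int8_t *buf, uint32_t size);
extern void use_result(const int8_t *out);

int main(void)
{
    // build the whole network and allocate its memory pool
    nnom_model_t *model = nnom_model_create();

    while (1)
    {
        // feed a new measurement into the generated input buffer...
        fill_input(nnom_input_data, sizeof(nnom_input_data));
        // ...run inference...
        model_run(model);
        // ...and read the quantized class scores back
        use_result(nnom_output_data);
    }
}
```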

NNoM works closely with Keras (you can learn Keras in 30 seconds!). There is no need to learn TensorFlow/Lite or other libraries.

Documentation

Guides

Examples

Documented examples

Please check the examples and choose one to start with.

Available Operations

Note: NNoM now supports both HWC and CHW formats, but some operations might not yet support both. Please check the tables below for the current status.

Core Layers

| Layers | HWC | CHW | Layer API | Comments |
| --- | --- | --- | --- | --- |
| Convolution | | | Conv2D() | Support 1/2D |
| Depthwise Conv | | | DW_Conv2D() | Support 1/2D |
| Fully-connected | | | Dense() | |
| Lambda | | | Lambda() | Single-input / single-output anonymous operation |
| Batch Normalization | | | N/A | This layer is merged into the last Conv by the script |
| Flatten | | | Flatten() | |
| SoftMax | | | SoftMax() | Softmax only has a layer API |
| Activation | | | Activation() | A layer instance for activation |
| Input/Output | | | Input()/Output() | |
| Up Sampling | | | UpSample() | |
| Zero Padding | | | ZeroPadding() | |
| Cropping | | | Cropping() | |

RNN Layers

| Layers | Status | Layer API | Comments |
| --- | --- | --- | --- |
| Recurrent NN | Under Dev. | RNN() | Under development |
| Simple RNN | Under Dev. | SimpleCell() | Under development |
| Gated Recurrent Network (GRU) | Under Dev. | GRUCell() | Under development |

Activations

An activation can be used as a standalone layer, or attached to the output of the previous layer as an "actail" to reduce memory cost.
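
For illustration, the two usages look like this in the C construction API. A minimal sketch, assuming the `model.hook()`/`model.active()` construction style from the repository docs; the conv layer and its weights `c1_w`/`c1_b` are hypothetical placeholders normally generated by the converter:

```c
#include <stdint.h>
#include "nnom.h"

// hypothetical placeholders: normally generated by the converter
extern int8_t nnom_input_data[28 * 28];
extern nnom_weight_t c1_w;
extern nnom_bias_t c1_b;

void build_model(void)
{
    static nnom_model_t model;
    nnom_layer_t *x;

    new_model(&model);
    x = Input(shape(28, 28, 1), nnom_input_data);
    x = model.hook(Conv2D(8, kernel(3, 3), stride(1, 1), PADDING_SAME, &c1_w, &c1_b), x);

    // Option 1: a standalone activation layer instance
    x = model.hook(ReLU(), x);

    // Option 2: the same nonlinearity attached to the previous layer's
    // output as an "actail", avoiding a separate layer's memory cost
    x = model.active(act_relu(), x);
}
```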

| Activation | HWC | CHW | Layer API | Activation API | Comments |
| --- | --- | --- | --- | --- | --- |
| ReLU | | | ReLU() | act_relu() | |
| TanH | | | TanH() | act_tanh() | |
| Sigmoid | | | Sigmoid() | act_sigmoid() | |

Pooling Layers

| Pooling | HWC | CHW | Layer API | Comments |
| --- | --- | --- | --- | --- |
| Max Pooling | | | MaxPool() | |
| Average Pooling | | | AvgPool() | |
| Sum Pooling | | | SumPool() | |
| Global Max Pooling | | | GlobalMaxPool() | |
| Global Average Pooling | | | GlobalAvgPool() | |
| Global Sum Pooling | | | GlobalSumPool() | A better alternative to global average pooling on MCUs before Softmax |

Matrix Operations Layers

| Matrix | HWC | CHW | Layer API | Comments |
| --- | --- | --- | --- | --- |
| Concatenate | | | Concat() | Concatenate through any axis |
| Multiply | | | Mult() | |
| Addition | | | Add() | |
| Subtraction | | | Sub() | |

Dependencies

NNoM now uses its local, pure-C backend implementation by default, so no special dependencies are needed.

Optimization

CMSIS-NN/DSP is an optimized backend for ARM Cortex-M4/7/33/35P. Selecting it can yield up to 5x the performance of the default C backend. NNoM calls the equivalent CMSIS-NN method whenever the conditions are met.
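
Selecting the backend is a compile-time switch. A sketch, assuming the `NNOM_USING_CMSIS_NN` macro as used in the repository's porting header (check `nnom_port.h` for your version), plus the CMSIS-NN/DSP sources added to the build:

```c
/* nnom_port.h (excerpt) */
#define NNOM_USING_CMSIS_NN  /* route supported ops to the CMSIS-NN kernels;
                                leave it undefined to keep the pure C backend */
```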

Known Issues

The converter does not support implicitly defined activations

The script currently does not support implicit activations:

```python
Dense(32, activation="relu")
```

Use an explicit activation layer instead:

```python
Dense(32)
ReLU()
```

Contacts

Jianjia Ma

Citation Required

Please contact us using the details above.
