Building tiny-cnn on Linux · wyh/xilinx-tiny-cnn

tiny-cnn: A header-only, dependency-free deep learning framework in C++11

Xilinx changes from the original tiny-cnn:

- added batchnorm layer (currently feedforward only, no training)
- support for offloaded layers
- interleave layer
- binarized layers


tiny-cnn is a C++11 implementation of deep learning. It is suitable for deep learning on limited computational resources, embedded systems, and IoT devices.

See the Wiki Pages for more info.

Features

- fast, without a GPU
  - with TBB threading and SSE/AVX vectorization
  - 98.8% accuracy on MNIST in 13 minutes of training (on a Core i7-3520M)
- header only
  - Just include tiny_cnn.h and write your model in C++. There is nothing to install (see the minimal sketch after this list).
- small dependency & simple implementation
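To illustrate the header-only claim, here is a minimal sketch of a complete program. This is a hedged sketch, not code from the original README: it assumes tiny_cnn.h is on the include path and uses the same `network<loss, optimizer>` and layer templates shown in the Examples section below.

```cpp
#include "tiny_cnn/tiny_cnn.h"
#include <iostream>

int main() {
    using namespace tiny_cnn;
    using namespace tiny_cnn::activation;

    // a toy 2-input, 1-output network; no build step, nothing to install
    network<mse, gradient_descent> net;
    net << fully_connected_layer<sigmoid>(2, 1);

    vec_t in = { 1.0, 0.5 };
    vec_t out = net.predict(in);  // forward pass with the initial random weights
    std::cout << out[0] << std::endl;
    return 0;
}
```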

Comparison with other libraries

|                   | tiny-cnn                        | caffe                                                                                   | Theano                                                  | TensorFlow                                   |
| ----------------- | ------------------------------- | --------------------------------------------------------------------------------------- | -------------------------------------------------------- | --------------------------------------------- |
| Prerequisites     | Nothing (optional: TBB, OpenMP) | BLAS, Boost, protobuf, glog, gflags, hdf5 (optional: CUDA, OpenCV, lmdb, leveldb, etc.) | Numpy, Scipy, BLAS (optional: nose, Sphinx, CUDA, etc.) | numpy, six, protobuf (optional: CUDA, Bazel) |
| Modeling by       | C++ code                        | config file                                                                             | Python code                                             | Python code                                  |
| GPU support       | No                              | Yes                                                                                     | Yes                                                     | Yes                                          |
| Installing        | Unnecessary                     | Necessary                                                                               | Necessary                                               | Necessary                                    |
| Windows support   | Yes                             | No*                                                                                     | Yes                                                     | No*                                          |
| Pre-trained model | Yes (via caffe-converter)       | Yes                                                                                     | No*                                                     | No*                                          |

*unofficial version is available

Supported networks

layer types

- fully-connected layer
- convolutional layer
- average pooling layer
- max-pooling layer
- contrast normalization layer
- dropout layer
- linear operation layer

activation functions

- tanh
- sigmoid
- softmax
- rectified linear (relu)
- leaky relu
- identity
- exponential linear units (elu)

loss functions

- cross-entropy
- mean squared error

optimization algorithms

- stochastic gradient descent (with/without L2 normalization and momentum)
- stochastic gradient Levenberg-Marquardt
- adagrad
- rmsprop
- adam
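These building blocks plug together via template parameters: the loss function and optimizer parameterize `network<>`, and the activation function parameterizes each layer. A hedged sketch of one such combination (class names follow the upstream tiny-cnn headers; the Xilinx fork may differ):

```cpp
#include "tiny_cnn/tiny_cnn.h"

void construct_relu_net() {
    using namespace tiny_cnn;
    using namespace tiny_cnn::activation;

    // cross-entropy loss + adam optimizer as the two network<> parameters
    network<cross_entropy, adam> net;

    // relu hidden layer, softmax output; activations are per-layer template args
    net << fully_connected_layer<relu>(28 * 28, 100)
        << fully_connected_layer<softmax>(100, 10);
}
```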

Dependencies

Minimum requirements

Nothing. All you need is a C++11 compiler.

Requirements to build sample/test programs

cmake

Build

tiny-cnn is header-only, so there is nothing to build for the library itself. If you want to run the sample programs or unit tests, install cmake and type the following command:

```
cmake .
```

Then open the generated .sln file in Visual Studio and build (on Windows/MSVC), or type `make` (on Linux/Mac/Windows-MinGW).

Some cmake options are available:

| option         | description                       | default | additional requirements to use |
| -------------- | --------------------------------- | ------- | ------------------------------ |
| USE_TBB        | use Intel TBB for parallelization | OFF*    | Intel TBB                      |
| USE_OMP        | use OpenMP for parallelization    | OFF*    | OpenMP-capable compiler        |
| USE_SSE        | use Intel SSE instruction set     | ON      | Intel CPU which supports SSE   |
| USE_AVX        | use Intel AVX instruction set     | ON      | Intel CPU which supports AVX   |
| BUILD_TESTS    | build unit tests                  | OFF     | -**                            |
| BUILD_EXAMPLES | build example projects            | ON      | -                              |

*tiny-cnn uses the C++11 standard library for parallelization by default

**to build tests, run `git submodule update --init` before building

For example, type the following command if you want to use Intel TBB and build the tests:

```
cmake -DUSE_TBB=ON -DBUILD_TESTS=ON .
```

Customize configurations

You can edit include/config.h to customize default behavior.
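For instance, in upstream tiny-cnn the library-wide scalar type is chosen in config.h. A sketch of the relevant fragment follows; this is hedged: the macro and typedef names are taken from the upstream project and may differ in this fork, so check your copy of include/config.h.

```cpp
// fragment in the spirit of include/config.h (names assumed from upstream tiny-cnn)
//#define CNN_USE_DOUBLE        // define to compute in double precision

#ifdef CNN_USE_DOUBLE
typedef double float_t;         // weights, activations and gradients all use float_t
#else
typedef float float_t;
#endif
```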

Examples

construct convolutional neural networks

```cpp
#include "tiny_cnn/tiny_cnn.h"
#include <cassert>
#include <fstream>

using namespace tiny_cnn;
using namespace tiny_cnn::activation;

void construct_cnn() {
    // specify loss function and optimization algorithm
    network<mse, adagrad> net;
    //network<cross_entropy, RMSprop> net;

    // add layers
    net << convolutional_layer<tan_h>(32, 32, 5, 1, 6)  // 32x32 input, 5x5 kernel, 1 in / 6 out feature maps
        << average_pooling_layer<tan_h>(28, 28, 6, 2)   // 28x28 input, 6 feature maps, 2x2 pooling
        << fully_connected_layer<tan_h>(14 * 14 * 6, 120)
        << fully_connected_layer<identity>(120, 10);

    assert(net.in_dim() == 32 * 32);
    assert(net.out_dim() == 10);

    // load MNIST dataset
    std::vector<label_t> train_labels;
    std::vector<vec_t> train_images;
    parse_mnist_labels("train-labels.idx1-ubyte", &train_labels);
    parse_mnist_images("train-images.idx3-ubyte", &train_images);

    // train (minibatch size 30, 50 epochs)
    net.train(train_images, train_labels, 30, 50);

    // save
    std::ofstream ofs("weights");
    ofs << net;

    // load
    // std::ifstream ifs("weights");
    // ifs >> net;
}
```
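After training, the network can be evaluated on the MNIST test set. A short fragment that would go at the end of construct_cnn(), using the `net.test()` and `result::print_detail()` reporting helpers from the upstream MNIST example:

```cpp
// evaluate on the held-out MNIST test files
std::vector<label_t> test_labels;
std::vector<vec_t> test_images;
parse_mnist_labels("t10k-labels.idx1-ubyte", &test_labels);
parse_mnist_images("t10k-images.idx3-ubyte", &test_images);

net.test(test_images, test_labels).print_detail(std::cout);  // accuracy + confusion matrix
```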

construct multi-layer perceptron (mlp)

```cpp
#include "tiny_cnn/tiny_cnn.h"
#include <cassert>

using namespace tiny_cnn;
using namespace tiny_cnn::activation;

void construct_mlp() {
    network<mse, gradient_descent> net;

    net << fully_connected_layer<sigmoid>(32 * 32, 300)
        << fully_connected_layer<identity>(300, 10);

    assert(net.in_dim() == 32 * 32);
    assert(net.out_dim() == 10);
}
```

another way to construct mlp

```cpp
#include "tiny_cnn/tiny_cnn.h"
#include <cassert>

using namespace tiny_cnn;
using namespace tiny_cnn::activation;

void construct_mlp() {
    auto mynet = make_mlp<mse, gradient_descent, tan_h>({ 32 * 32, 300, 10 });

    assert(mynet.in_dim() == 32 * 32);
    assert(mynet.out_dim() == 10);
}
```
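However the network is constructed, inference looks the same. A brief hedged sketch continuing from `mynet` above (the all-zero input is just a placeholder):

```cpp
vec_t input(32 * 32, 0.0);            // placeholder 32x32 input
vec_t scores = mynet.predict(input);  // 10 scores, one per output class
```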

For more samples, read examples/main.cpp or the MNIST example page.


License

The BSD 3-Clause License

Mailing list

Google group for questions and discussions:
