** Vision_layers.hpp (vision-related layers)
ConvolutionLayer, InnerProductLayer, PoolingLayer
EltwiseLayer, Im2colLayer, LRNLayer
EltwiseLayer:
takes several bottoms and computes their element-wise sum or product (op_ = 0 or 1)
SetUp first checks that all bottoms have the same size
Backward distributes the top gradient to each bottom proportionally, according to the operation
** Common_Layers.hpp
SoftmaxLayer
ArgmaxLayer, ConcatLayer, FlattenLayer, SplitLayer, SliceLayer
ArgmaxLayer (backward not implemented yet)
finds the top_k largest values within each num and outputs their indices
if (out_max_val_), outputs both the indices and the values
ConcatLayer
concatenates the bottoms along num (concat_dim = 0) or channels (concat_dim = 1)
FlattenLayer
flattens the data within each num into a single channels dimension
(num, channels, height, width) -> (num, channels*height*width, 1, 1)
SplitLayer
copies the bottom once for each top
SliceLayer
slices a blob; currently only along num or channels
if (slice_point_ not set), splits evenly by the number of tops
if (slice_point_ set), splits according to slice_point_
** Data_Layers.hpp
DataLayer,
DummyDataLayer, HDF5DataLayer, HDF5OutputLayer, ImageDataLayer, MemoryDataLayer, WindowDataLayer
** Filler.hpp
Filler, ConstantFiller, UniformFiller, GaussianFiller, PositiveUnitballFiller, XavierFiller, GetFiller
** Loss_Layers.hpp
AccuracyLayer,
LossLayer, EuclideanLossLayer, HingeLossLayer, InfogainLossLayer, MultinomialLogisticLossLayer, SigmoidCrossEntropyLossLayer, SoftmaxLayer,
SoftmaxWithLossLayer
** Neuron_Layers.hpp
NeuronLayer, BNLLLayer, DropoutLayer, PowerLayer, ReLULayer, SigmoidLayer, TanHLayer, ThresholdLayer