Caffe: getting a small CNN running

Time to run the LMDB built in the previous post. The network is just the simple lenet_lr (LeNet) network.

Network definition:

name: "LeNet"
layer {
  name: "mnist"
  type: "Data"
  top: "data"
  top: "label"
  include {
    phase: TRAIN
  }
  transform_param {
    scale: 0.00390625
  }
  data_param {
    source: "examples/my_imagenet/mnist_train_lmdb"
    batch_size: 64
    backend: LMDB
  }
}
layer {
  name: "mnist"
  type: "Data"
  top: "data"
  top: "label"
  include {
    phase: TEST
  }
  transform_param {
    scale: 0.00390625
  }
  data_param {
    source: "examples/my_imagenet/mnist_val_lmdb"
    batch_size: 100
    backend: LMDB
  }
}
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  convolution_param {
    num_output: 20
    kernel_size: 5
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "pool1"
  type: "Pooling"
  bottom: "conv1"
  top: "pool1"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layer {
  name: "conv2"
  type: "Convolution"
  bottom: "pool1"
  top: "conv2"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  convolution_param {
    num_output: 50
    kernel_size: 5
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "pool2"
  type: "Pooling"
  bottom: "conv2"
  top: "pool2"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layer {
  name: "ip1"
  type: "InnerProduct"
  bottom: "pool2"
  top: "ip1"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 500
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "relu1"
  type: "ReLU"
  bottom: "ip1"
  top: "ip1"
}
layer {
  name: "ip2"
  type: "InnerProduct"
  bottom: "ip1"
  top: "ip2"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 10
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "accuracy"
  type: "Accuracy"
  bottom: "ip2"
  bottom: "label"
  top: "accuracy"
  include {
    phase: TEST
  }
}
layer {
  name: "loss"
  type: "SoftmaxWithLoss"
  bottom: "ip2"
  bottom: "label"
  top: "loss"
}
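
Before training, it can be handy to sanity-check the blob shapes this prototxt produces. A small pycaffe sketch, assuming Caffe's Python bindings are built, the script is run from the caffe root, and the LMDBs from the previous post exist at the paths above:

import caffe

caffe.set_mode_cpu()
# Load the net above in TRAIN phase; this opens mnist_train_lmdb via the Data layer
net = caffe.Net('examples/my_imagenet/lenet_lr.prototxt', caffe.TRAIN)

# Print the shape of every blob: (batch, channels, height, width)
for name, blob in net.blobs.items():
    print(name, blob.data.shape)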
Solver parameters:

# network definition
net: "examples/my_imagenet/lenet_lr.prototxt"
# number of forward passes per test (samples tested = test_iter * test batch_size)
test_iter: 100
# run a test every 500 training iterations
test_interval: 500
# base learning rate
base_lr: 0.01
# momentum
momentum: 0.9
# weight decay (L2 regularization)
weight_decay: 0.0005
# learning rate policy
lr_policy: "inv"
gamma: 0.01
power: 0.75
# print training info every 100 iterations
display: 100
# maximum number of iterations
max_iter: 10000
# save a snapshot every 5000 iterations
snapshot: 5000
# snapshot file prefix
snapshot_prefix: "examples/my_imagenet/lenet"
# solver mode: CPU or GPU
solver_mode: CPU
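
With lr_policy: "inv", Caffe decays the learning rate as base_lr * (1 + gamma * iter)^(-power). A quick plain-Python sketch of that schedule, using the values from the solver above:

# Caffe's "inv" learning-rate policy: lr = base_lr * (1 + gamma * iter) ** (-power)
base_lr, gamma, power = 0.01, 0.01, 0.75

def inv_lr(iteration):
    return base_lr * (1.0 + gamma * iteration) ** (-power)

for it in (0, 500, 5000, 10000):
    print(it, inv_lr(it))  # decays from 0.01 at iter 0 to roughly 3e-4 at iter 10000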

Shell script to launch training:

#!/usr/bin/env sh
set -e
cd ~/caffe-master
build/tools/caffe train --solver=examples/my_imagenet/lenet_lr_solver.prototxt "$@"

Shell script for testing (prediction only):

./build/tools/caffe.bin test \
    -model examples/my_imagenet/lenet_lr.prototxt \
    -weights examples/my_imagenet/lenet_iter_10000.caffemodel \
    -iterations 1
    
# "test" runs forward passes only; no parameter updates
# -model: the network definition (prototxt)
# -weights: the trained weights (.caffemodel)
# -iterations: number of test iterations; samples tested = iterations * batch_size
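
The same check can be done from pycaffe if it is built; a sketch assuming the paths above (run from the caffe root) and that the TEST-phase LMDB exists:

import caffe

caffe.set_mode_cpu()
# Load the train/test prototxt in TEST phase together with the trained weights
net = caffe.Net('examples/my_imagenet/lenet_lr.prototxt',
                'examples/my_imagenet/lenet_iter_10000.caffemodel',
                caffe.TEST)

out = net.forward()  # one batch of 100 images from mnist_val_lmdb
print('accuracy:', float(out['accuracy']), 'loss:', float(out['loss']))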

Shell script for classifying a single image:

./build/examples/cpp_classification/classification.bin \
  examples/my_imagenet/lenet_lr.prototxt \
  examples/my_imagenet/lenet_iter_10000.caffemodel \
  data/mydata/imagenet_mean.binaryproto \
  data/mydata/train.txt \
  examples/my_imagenet/train14.JPEG
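
Note that classification.bin expects a deploy-style prototxt (an Input layer plus a Softmax output) and a labels file, so the Data-layer prototxt above will not work with it directly. Roughly the same thing can be sketched in pycaffe; "lenet_lr_deploy.prototxt" below is a hypothetical deploy version of the net, and the grayscale/scale handling is an assumption chosen to match the training transform_param:

import numpy as np
import caffe

caffe.set_mode_cpu()
# "lenet_lr_deploy.prototxt" is a hypothetical deploy-style prototxt
# (Input layer + Softmax output named "prob"), not one of the files above.
net = caffe.Net('examples/my_imagenet/lenet_lr_deploy.prototxt',
                'examples/my_imagenet/lenet_iter_10000.caffemodel',
                caffe.TEST)

# color=False assumes the LMDB was built as grayscale; drop it for color data.
# load_image returns an H x W x C float image in [0, 1].
img = caffe.io.load_image('examples/my_imagenet/train14.JPEG', color=False)
# Match the training transform_param (scale: 0.00390625, i.e. raw pixel / 256)
pixels = img.transpose(2, 0, 1) * 255.0 * 0.00390625

net.blobs['data'].reshape(1, *pixels.shape)
net.blobs['data'].data[...] = pixels
probs = net.forward()['prob'][0]
print('predicted class:', int(np.argmax(probs)))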

There is also a script that computes the image mean, which is normally used to normalize the inputs (mean subtraction):

#!/usr/bin/env sh
EXAMPLE=examples/my_imagenet
DATA=data/mydata
TOOLS=build/tools

$TOOLS/compute_image_mean \
    $EXAMPLE/mnist_train_lmdb \
    $DATA/imagenet_mean.binaryproto
    
echo "Done."

Heh.

Here is what it looks like while it runs:

Looks like my hand made it into one of the training images, lol.

I'll check the results in a bit; training is still running.


Network structure (blob shapes for a batch of n images, assuming 32 x 32 inputs; with no padding, output size = (input - kernel) / stride + 1):

data: 32 x 32 x n

conv1 (5 x 5, 20 outputs, stride 1) -> 28 x 28 x 20 x n

pool1 (2 x 2 MAX, stride 2) -> 14 x 14 x 20 x n

conv2 (5 x 5, 50 outputs, stride 1) -> 10 x 10 x 50 x n
(each conv2 filter spans all 20 input channels, so the output has 50 channels, not 20 x 50)

pool2 (2 x 2 MAX, stride 2) -> 5 x 5 x 50 x n

ip1 (fully connected, 500 outputs) -> 500 x n

relu1 (in-place on ip1) -> 500 x n

ip2 (fully connected, 10 outputs) -> 10 x n

accuracy / loss -> scalars
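
A quick plain-Python sketch to double-check those sizes (assuming the 32 x 32 input above):

# Output size of an unpadded conv/pool layer: (input - kernel) // stride + 1
# (Caffe pooling rounds up, but for these sizes it matches the floor result)
def out_size(in_size, kernel, stride=1):
    return (in_size - kernel) // stride + 1

h = 32                                       # assumed input height/width
h = out_size(h, 5, 1); print('conv1:', h)    # 28
h = out_size(h, 2, 2); print('pool1:', h)    # 14
h = out_size(h, 5, 1); print('conv2:', h)    # 10
h = out_size(h, 2, 2); print('pool2:', h)    # 5
print('ip1 input features:', h * h * 50)     # 5 * 5 * 50 = 1250 -> 500 -> 10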

