Running the MNIST example in Caffe

1. First, go to the Caffe installation directory:

CAFFE_ROOT='/home/lxc/caffe/'

2. Run the script that downloads the dataset:

cd $CAFFE_ROOT
./data/mnist/get_mnist.sh
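What the download script does can be sketched roughly as follows (an assumption based on the stock Caffe repository, not shown in this post): it fetches and unpacks the four MNIST IDX files. The actual download line is left commented out here.

```shell
# Sketch of data/mnist/get_mnist.sh (assumption: mirrors the stock script).
BASE_URL="http://yann.lecun.com/exdb/mnist"
MNIST_FILES="train-images-idx3-ubyte train-labels-idx1-ubyte t10k-images-idx3-ubyte t10k-labels-idx1-ubyte"
for fname in $MNIST_FILES; do
    echo "would fetch ${BASE_URL}/${fname}.gz"
    # wget "${BASE_URL}/${fname}.gz" && gunzip "${fname}.gz"  # the real download step
done
```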

3. Convert the dataset into a format Caffe can read:

./examples/mnist/create_mnist.sh
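Internally, this conversion step can be sketched as below (an assumption based on the stock Caffe script): it calls the `convert_mnist_data` tool once per split to build LMDB databases that the LeNet data layers read. The tool invocations are commented out since they need a built Caffe tree.

```shell
# Sketch of examples/mnist/create_mnist.sh (assumption: mirrors the stock script).
BACKEND="lmdb"
BUILD=build/examples/mnist
DATA=data/mnist
EXAMPLE=examples/mnist
echo "Creating ${BACKEND}..."
# $BUILD/convert_mnist_data.bin $DATA/train-images-idx3-ubyte \
#     $DATA/train-labels-idx1-ubyte $EXAMPLE/mnist_train_${BACKEND} --backend=${BACKEND}
# $BUILD/convert_mnist_data.bin $DATA/t10k-images-idx3-ubyte \
#     $DATA/t10k-labels-idx1-ubyte $EXAMPLE/mnist_test_${BACKEND} --backend=${BACKEND}
```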

4. Train on the dataset to obtain a trained model:

./examples/mnist/train_lenet.sh
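The training script is essentially a one-line wrapper (an assumption based on the stock Caffe script): it invokes the `caffe` binary in train mode with the LeNet solver definition, which in turn points at the LMDBs created in the previous step. The real invocation is left commented out.

```shell
# Sketch of examples/mnist/train_lenet.sh (assumption: mirrors the stock script).
SOLVER="examples/mnist/lenet_solver.prototxt"
echo "would run: ./build/tools/caffe train --solver=${SOLVER}"
# ./build/tools/caffe train --solver="${SOLVER}"
```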

Training output:

I0314 09:58:05.226999 25029 sgd_solver.cpp:106] Iteration 7400, lr = 0.00660067
I0314 09:58:06.644390 25029 solver.cpp:337] Iteration 7500, Testing net (#0)
I0314 09:58:07.555140 25029 solver.cpp:404]     Test net output #0: accuracy = 0.9909
I0314 09:58:07.555181 25029 solver.cpp:404]     Test net output #1: loss = 0.0300942 (* 1 = 0.0300942 loss)
I0314 09:58:07.563323 25029 solver.cpp:228] Iteration 7500, loss = 0.00139727
I0314 09:58:07.563347 25029 solver.cpp:244]     Train net output #0: loss = 0.00139717 (* 1 = 0.00139717 loss)
I0314 09:58:07.563361 25029 sgd_solver.cpp:106] Iteration 7500, lr = 0.00657236
I0314 09:58:09.002777 25029 solver.cpp:228] Iteration 7600, loss = 0.00737701
I0314 09:58:09.002853 25029 solver.cpp:244]     Train net output #0: loss = 0.0073769 (* 1 = 0.0073769 loss)
I0314 09:58:09.002867 25029 sgd_solver.cpp:106] Iteration 7600, lr = 0.00654433
I0314 09:58:10.444634 25029 solver.cpp:228] Iteration 7700, loss = 0.0348312
I0314 09:58:10.444681 25029 solver.cpp:244]     Train net output #0: loss = 0.0348311 (* 1 = 0.0348311 loss)
I0314 09:58:10.444694 25029 sgd_solver.cpp:106] Iteration 7700, lr = 0.00651658
I0314 09:58:11.927762 25029 solver.cpp:228] Iteration 7800, loss = 0.00242758
I0314 09:58:11.927819 25029 solver.cpp:244]     Train net output #0: loss = 0.00242747 (* 1 = 0.00242747 loss)
I0314 09:58:11.927831 25029 sgd_solver.cpp:106] Iteration 7800, lr = 0.00648911
I0314 09:58:13.402936 25029 solver.cpp:228] Iteration 7900, loss = 0.00951368
I0314 09:58:13.402987 25029 solver.cpp:244]     Train net output #0: loss = 0.00951358 (* 1 = 0.00951358 loss)
I0314 09:58:13.403002 25029 sgd_solver.cpp:106] Iteration 7900, lr = 0.0064619
I0314 09:58:14.867146 25029 solver.cpp:337] Iteration 8000, Testing net (#0)
I0314 09:58:15.800400 25029 solver.cpp:404]     Test net output #0: accuracy = 0.9904
I0314 09:58:15.800444 25029 solver.cpp:404]     Test net output #1: loss = 0.0276247 (* 1 = 0.0276247 loss)
I0314 09:58:15.808650 25029 solver.cpp:228] Iteration 8000, loss = 0.00521269
I0314 09:58:15.808675 25029 solver.cpp:244]     Train net output #0: loss = 0.00521257 (* 1 = 0.00521257 loss)
I0314 09:58:15.808689 25029 sgd_solver.cpp:106] Iteration 8000, lr = 0.00643496
I0314 09:58:17.274581 25029 solver.cpp:228] Iteration 8100, loss = 0.00926444
I0314 09:58:17.274636 25029 solver.cpp:244]     Train net output #0: loss = 0.00926433 (* 1 = 0.00926433 loss)
I0314 09:58:17.274651 25029 sgd_solver.cpp:106] Iteration 8100, lr = 0.00640827
I0314 09:58:18.732739 25029 solver.cpp:228] Iteration 8200, loss = 0.00703852
I0314 09:58:18.732786 25029 solver.cpp:244]     Train net output #0: loss = 0.00703842 (* 1 = 0.00703842 loss)
I0314 09:58:18.732800 25029 sgd_solver.cpp:106] Iteration 8200, lr = 0.00638185
I0314 09:58:20.189698 25029 solver.cpp:228] Iteration 8300, loss = 0.0678537
I0314 09:58:20.189746 25029 solver.cpp:244]     Train net output #0: loss = 0.0678536 (* 1 = 0.0678536 loss)
I0314 09:58:20.189759 25029 sgd_solver.cpp:106] Iteration 8300, lr = 0.00635567
I0314 09:58:21.628165 25029 solver.cpp:228] Iteration 8400, loss = 0.00610364
I0314 09:58:21.628206 25029 solver.cpp:244]     Train net output #0: loss = 0.00610354 (* 1 = 0.00610354 loss)
I0314 09:58:21.628218 25029 sgd_solver.cpp:106] Iteration 8400, lr = 0.00632975
I0314 09:58:23.054644 25029 solver.cpp:337] Iteration 8500, Testing net (#0)
I0314 09:58:23.975236 25029 solver.cpp:404]     Test net output #0: accuracy = 0.9913
I0314 09:58:23.975289 25029 solver.cpp:404]     Test net output #1: loss = 0.0276445 (* 1 = 0.0276445 loss)
I0314 09:58:23.983639 25029 solver.cpp:228] Iteration 8500, loss = 0.00722562
I0314 09:58:23.983686 25029 solver.cpp:244]     Train net output #0: loss = 0.00722551 (* 1 = 0.00722551 loss)
I0314 09:58:23.983702 25029 sgd_solver.cpp:106] Iteration 8500, lr = 0.00630407
I0314 09:58:25.423691 25029 solver.cpp:228] Iteration 8600, loss = 0.000844742
I0314 09:58:25.423733 25029 solver.cpp:244]     Train net output #0: loss = 0.000844626 (* 1 = 0.000844626 loss)
I0314 09:58:25.423746 25029 sgd_solver.cpp:106] Iteration 8600, lr = 0.00627864
I0314 09:58:26.858505 25029 solver.cpp:228] Iteration 8700, loss = 0.00262191
I0314 09:58:26.858546 25029 solver.cpp:244]     Train net output #0: loss = 0.00262179 (* 1 = 0.00262179 loss)
I0314 09:58:26.858559 25029 sgd_solver.cpp:106] Iteration 8700, lr = 0.00625344
I0314 09:58:28.296435 25029 solver.cpp:228] Iteration 8800, loss = 0.00161585
I0314 09:58:28.296476 25029 solver.cpp:244]     Train net output #0: loss = 0.00161573 (* 1 = 0.00161573 loss)
I0314 09:58:28.296489 25029 sgd_solver.cpp:106] Iteration 8800, lr = 0.00622847
I0314 09:58:29.741562 25029 solver.cpp:228] Iteration 8900, loss = 0.000348777
I0314 09:58:29.741605 25029 solver.cpp:244]     Train net output #0: loss = 0.000348661 (* 1 = 0.000348661 loss)
I0314 09:58:29.741631 25029 sgd_solver.cpp:106] Iteration 8900, lr = 0.00620374
I0314 09:58:31.165171 25029 solver.cpp:337] Iteration 9000, Testing net (#0)
I0314 09:58:32.077903 25029 solver.cpp:404]     Test net output #0: accuracy = 0.9909
I0314 09:58:32.077941 25029 solver.cpp:404]     Test net output #1: loss = 0.0273136 (* 1 = 0.0273136 loss)
I0314 09:58:32.086107 25029 solver.cpp:228] Iteration 9000, loss = 0.0154975
I0314 09:58:32.086132 25029 solver.cpp:244]     Train net output #0: loss = 0.0154974 (* 1 = 0.0154974 loss)
I0314 09:58:32.086145 25029 sgd_solver.cpp:106] Iteration 9000, lr = 0.00617924
I0314 09:58:33.524173 25029 solver.cpp:228] Iteration 9100, loss = 0.00757405
I0314 09:58:33.524216 25029 solver.cpp:244]     Train net output #0: loss = 0.00757394 (* 1 = 0.00757394 loss)
I0314 09:58:33.524230 25029 sgd_solver.cpp:106] Iteration 9100, lr = 0.00615496
I0314 09:58:34.966588 25029 solver.cpp:228] Iteration 9200, loss = 0.00248411
I0314 09:58:34.966630 25029 solver.cpp:244]     Train net output #0: loss = 0.00248399 (* 1 = 0.00248399 loss)
I0314 09:58:34.966644 25029 sgd_solver.cpp:106] Iteration 9200, lr = 0.0061309
I0314 09:58:36.405937 25029 solver.cpp:228] Iteration 9300, loss = 0.00742113
I0314 09:58:36.405982 25029 solver.cpp:244]     Train net output #0: loss = 0.007421 (* 1 = 0.007421 loss)
I0314 09:58:36.405995 25029 sgd_solver.cpp:106] Iteration 9300, lr = 0.00610706
I0314 09:58:37.850505 25029 solver.cpp:228] Iteration 9400, loss = 0.0306143
I0314 09:58:37.850548 25029 solver.cpp:244]     Train net output #0: loss = 0.0306142 (* 1 = 0.0306142 loss)
I0314 09:58:37.850560 25029 sgd_solver.cpp:106] Iteration 9400, lr = 0.00608343
I0314 09:58:39.285089 25029 solver.cpp:337] Iteration 9500, Testing net (#0)
I0314 09:58:40.197036 25029 solver.cpp:404]     Test net output #0: accuracy = 0.9902
I0314 09:58:40.197090 25029 solver.cpp:404]     Test net output #1: loss = 0.0315135 (* 1 = 0.0315135 loss)
I0314 09:58:40.205340 25029 solver.cpp:228] Iteration 9500, loss = 0.0047698
I0314 09:58:40.205364 25029 solver.cpp:244]     Train net output #0: loss = 0.00476967 (* 1 = 0.00476967 loss)
I0314 09:58:40.205379 25029 sgd_solver.cpp:106] Iteration 9500, lr = 0.00606002
I0314 09:58:41.670626 25029 solver.cpp:228] Iteration 9600, loss = 0.00133942
I0314 09:58:41.670670 25029 solver.cpp:244]     Train net output #0: loss = 0.00133929 (* 1 = 0.00133929 loss)
I0314 09:58:41.670685 25029 sgd_solver.cpp:106] Iteration 9600, lr = 0.00603682
I0314 09:58:43.134160 25029 solver.cpp:228] Iteration 9700, loss = 0.0014465
I0314 09:58:43.134203 25029 solver.cpp:244]     Train net output #0: loss = 0.00144637 (* 1 = 0.00144637 loss)
I0314 09:58:43.134217 25029 sgd_solver.cpp:106] Iteration 9700, lr = 0.00601382
I0314 09:58:44.587978 25029 solver.cpp:228] Iteration 9800, loss = 0.0134798
I0314 09:58:44.588019 25029 solver.cpp:244]     Train net output #0: loss = 0.0134797 (* 1 = 0.0134797 loss)
I0314 09:58:44.588033 25029 sgd_solver.cpp:106] Iteration 9800, lr = 0.00599102
I0314 09:58:46.046571 25029 solver.cpp:228] Iteration 9900, loss = 0.00367283
I0314 09:58:46.046613 25029 solver.cpp:244]     Train net output #0: loss = 0.0036727 (* 1 = 0.0036727 loss)
I0314 09:58:46.046627 25029 sgd_solver.cpp:106] Iteration 9900, lr = 0.00596843
I0314 09:58:47.491849 25029 solver.cpp:454] Snapshotting to binary proto file examples/mnist/lenet_iter_10000.caffemodel
I0314 09:58:47.502293 25029 sgd_solver.cpp:273] Snapshotting solver state to binary proto file examples/mnist/lenet_iter_10000.solverstate
I0314 09:58:47.510488 25029 solver.cpp:317] Iteration 10000, loss = 0.00222019
I0314 09:58:47.510517 25029 solver.cpp:337] Iteration 10000, Testing net (#0)
I0314 09:58:48.453008 25029 solver.cpp:404]     Test net output #0: accuracy = 0.9917
I0314 09:58:48.453052 25029 solver.cpp:404]     Test net output #1: loss = 0.0266635 (* 1 = 0.0266635 loss)
I0314 09:58:48.453065 25029 solver.cpp:322] Optimization Done.
I0314 09:58:48.453073 25029 caffe.cpp:222] Optimization Done.
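The test-accuracy values scattered through a log like the one above are easy to pull out with grep/awk once the output is saved to a file (e.g. `./examples/mnist/train_lenet.sh 2>&1 | tee train.log`). For a self-contained illustration, this sketch first writes two lines taken from the log above into a sample file:

```shell
# Write a two-line sample of the training log (lines copied from above).
printf '%s\n' \
  'I0314 09:58:07.555140 25029 solver.cpp:404]     Test net output #0: accuracy = 0.9909' \
  'I0314 09:58:07.555181 25029 solver.cpp:404]     Test net output #1: loss = 0.0300942 (* 1 = 0.0300942 loss)' \
  > train.log
# Keep only the test-accuracy lines and print the last field (the number).
ACCURACIES=$(grep 'Test net output #0: accuracy' train.log | awk '{print $NF}')
echo "$ACCURACIES"
```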


Training finally produces two snapshot files, as the log shows:

the .caffemodel file (the trained weights) and the .solverstate file (the solver state).
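Both snapshot files can be reused later (a sketch of standard `caffe` command-line usage, which is an assumption not covered in this post): the .caffemodel evaluates or fine-tunes the trained network, and the .solverstate resumes an interrupted run.

```shell
# File names taken from the snapshot lines in the log above.
WEIGHTS="examples/mnist/lenet_iter_10000.caffemodel"
STATE="examples/mnist/lenet_iter_10000.solverstate"
# Evaluate the trained weights on the test net for 100 batches:
echo "evaluate: ./build/tools/caffe test --model=examples/mnist/lenet_train_test.prototxt --weights=${WEIGHTS} --iterations=100"
# Resume training from the saved solver state:
echo "resume:   ./build/tools/caffe train --solver=examples/mnist/lenet_solver.prototxt --snapshot=${STATE}"
```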
