Notes on Installing Caffe

First, an overview of the installation flow. Installing Caffe requires a number of dependency packages (boost, protobuf, and so on), the CUDA driver and cuDNN, and, importantly, OpenCV (choose the version that suits your needs). Once those are in place, download the Caffe source and compile it. After the core components are built, use the bundled MNIST example to verify the installation. If you need pycaffe, build it as an additional step.
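As a concrete starting point, on Ubuntu the dependencies can be installed roughly as follows (package names follow the official Caffe Ubuntu prerequisites; exact names vary by distribution and release):

```shell
# General dependencies (per the official Caffe Ubuntu prerequisites)
sudo apt-get install libprotobuf-dev libleveldb-dev libsnappy-dev \
     libopencv-dev libhdf5-serial-dev protobuf-compiler
sudo apt-get install --no-install-recommends libboost-all-dev
# glog, gflags and LMDB
sudo apt-get install libgflags-dev libgoogle-glog-dev liblmdb-dev
# ATLAS for BLAS (OpenBLAS or MKL also work)
sudo apt-get install libatlas-base-dev
```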

Note that both CentOS and Ubuntu ship with the open-source nouveau graphics driver preinstalled (SUSE does not have this problem); unless nouveau is disabled, the NVIDIA CUDA driver cannot be installed correctly.
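A minimal sketch of disabling nouveau, assuming Ubuntu (on CentOS, rebuild the initramfs with `sudo dracut --force` instead of `update-initramfs`):

```shell
# Blacklist the nouveau kernel module
cat <<'EOF' | sudo tee /etc/modprobe.d/blacklist-nouveau.conf
blacklist nouveau
options nouveau modeset=0
EOF
# Rebuild the initramfs so the blacklist takes effect, then reboot
sudo update-initramfs -u
sudo reboot
# After rebooting, this should print nothing:
lsmod | grep nouveau
```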

References:

The right way to install the NVIDIA driver on Linux

Disabling the bundled nouveau driver and installing the NVIDIA driver on Ubuntu

Getting rid of Nouveau and installing the NVIDIA driver on Linux

The following posts cover the installation; in most cases, following them is enough:

Official Caffe installation guide

Caffe Installation notes

[Caffe]: Caffe Installation

"Ubuntu 16.04 Caffe installation steps (very detailed)" gives the most thorough walkthrough and is the recommended reference.

With the material above, Caffe can be installed successfully.
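The build itself comes down to the standard Make targets (a sketch; edit Makefile.config first to point at your CUDA, cuDNN, BLAS and OpenCV paths):

```shell
cd ~/caffe
cp Makefile.config.example Makefile.config
# edit Makefile.config: CUDA_DIR, cuDNN switch, BLAS choice, OpenCV version, ...
make all -j8
make test -j8
make runtest -j8   # runs the unit tests built in the previous step
make pycaffe       # optional: only if you need the Python interface
```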

Run make runtest -j8; if it completes with all tests passing, Caffe has been installed successfully.

Next, run the bundled MNIST example to check that training works:

~/caffe/examples/mnist$ ./train_lenet.sh

Note: before running the command above, the MNIST data must first be downloaded and converted; see the Caffe training example for details.
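For reference, data preparation plus training amounts to three scripts that ship with the Caffe source tree (they expect to be run from the Caffe root directory):

```shell
cd ~/caffe
./data/mnist/get_mnist.sh          # download the raw MNIST files
./examples/mnist/create_mnist.sh   # convert them into LMDB databases
./examples/mnist/train_lenet.sh    # train LeNet using examples/mnist/lenet_solver.prototxt
```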

Output like the following indicates that Caffe is working correctly:

I0916 11:13:58.632489 20069 caffe.cpp:204] Using GPUs 0
I0916 11:14:00.907739 20069 caffe.cpp:209] GPU 0: GeForce GTX TITAN X
I0916 11:14:01.308300 20069 solver.cpp:45] Initializing solver from parameters: 
test_iter: 100
test_interval: 500
base_lr: 0.01
display: 100
max_iter: 10000
lr_policy: "inv"
gamma: 0.0001
power: 0.75
momentum: 0.9
weight_decay: 0.0005
snapshot: 5000
snapshot_prefix: "../../examples/mnist/lenet"
solver_mode: GPU
device_id: 0
net: "../../examples/mnist/lenet_train_test.prototxt"
train_state {
  level: 0
  stage: ""
}
I0916 11:14:01.308569 20069 solver.cpp:102] Creating training net from net file: ../../examples/mnist/lenet_train_test.prototxt
I0916 11:14:01.308807 20069 net.cpp:296] The NetState phase (0) differed from the phase (1) specified by a rule in layer mnist
I0916 11:14:01.308825 20069 net.cpp:296] The NetState phase (0) differed from the phase (1) specified by a rule in layer accuracy
I0916 11:14:01.308902 20069 net.cpp:53] Initializing net from parameters: 
name: "LeNet"
state {
  phase: TRAIN
  level: 0
  stage: ""
}
layer {
  name: "mnist"
  type: "Data"
  top: "data"
  top: "label"
  include {
    phase: TRAIN
  }
  transform_param {
    scale: 0.00390625
  }
  data_param {
    source: "../../examples/mnist/mnist_train_lmdb"
    batch_size: 64
    backend: LMDB
  }
}
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  convolution_param {
    num_output: 20
    kernel_size: 5
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "pool1"
  type: "Pooling"
  bottom: "conv1"
  top: "pool1"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layer {
  name: "conv2"
  type: "Convolution"
  bottom: "pool1"
  top: "conv2"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  convolution_param {
    num_output: 50
    kernel_size: 5
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "pool2"
  type: "Pooling"
  bottom: "conv2"
  top: "pool2"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layer {
  name: "ip1"
  type: "InnerProduct"
  bottom: "pool2"
  top: "ip1"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 500
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "relu1"
  type: "ReLU"
  bottom: "ip1"
  top: "ip1"
}
layer {
  name: "ip2"
  type: "InnerProduct"
  bottom: "ip1"
  top: "ip2"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 10
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "loss"
  type: "SoftmaxWithLoss"
  bottom: "ip2"
  bottom: "label"
  top: "loss"
}
I0916 11:14:01.309090 20069 layer_factory.hpp:77] Creating layer mnist
I0916 11:14:01.309270 20069 db_lmdb.cpp:35] Opened lmdb ../../examples/mnist/mnist_train_lmdb
I0916 11:14:01.309311 20069 net.cpp:86] Creating Layer mnist
I0916 11:14:01.309322 20069 net.cpp:382] mnist -> data
I0916 11:14:01.309350 20069 net.cpp:382] mnist -> label
I0916 11:14:01.310145 20069 data_layer.cpp:45] output data size: 64,1,28,28
I0916 11:14:01.311700 20069 net.cpp:124] Setting up mnist
I0916 11:14:01.311723 20069 net.cpp:131] Top shape: 64 1 28 28 (50176)
I0916 11:14:01.311729 20069 net.cpp:131] Top shape: 64 (64)
I0916 11:14:01.311733 20069 net.cpp:139] Memory required for data: 200960
I0916 11:14:01.311744 20069 layer_factory.hpp:77] Creating layer conv1
I0916 11:14:01.311764 20069 net.cpp:86] Creating Layer conv1
I0916 11:14:01.311772 20069 net.cpp:408] conv1 <- data
I0916 11:14:01.311794 20069 net.cpp:382] conv1 -> conv1
I0916 11:14:01.975757 20069 net.cpp:124] Setting up conv1
I0916 11:14:01.975805 20069 net.cpp:131] Top shape: 64 20 24 24 (737280)
I0916 11:14:01.975811 20069 net.cpp:139] Memory required for data: 3150080
I0916 11:14:01.975844 20069 layer_factory.hpp:77] Creating layer pool1
I0916 11:14:01.975899 20069 net.cpp:86] Creating Layer pool1
I0916 11:14:01.975908 20069 net.cpp:408] pool1 <- conv1
I0916 11:14:01.975917 20069 net.cpp:382] pool1 -> pool1
I0916 11:14:01.975984 20069 net.cpp:124] Setting up pool1
I0916 11:14:01.975996 20069 net.cpp:131] Top shape: 64 20 12 12 (184320)
I0916 11:14:01.976001 20069 net.cpp:139] Memory required for data: 3887360
I0916 11:14:01.976008 20069 layer_factory.hpp:77] Creating layer conv2
I0916 11:14:01.976025 20069 net.cpp:86] Creating Layer conv2
I0916 11:14:01.976032 20069 net.cpp:408] conv2 <- pool1
I0916 11:14:01.976043 20069 net.cpp:382] conv2 -> conv2
I0916 11:14:01.978507 20069 net.cpp:124] Setting up conv2
I0916 11:14:01.978526 20069 net.cpp:131] Top shape: 64 50 8 8 (204800)
I0916 11:14:01.978531 20069 net.cpp:139] Memory required for data: 4706560
I0916 11:14:01.978543 20069 layer_factory.hpp:77] Creating layer pool2
I0916 11:14:01.978550 20069 net.cpp:86] Creating Layer pool2
I0916 11:14:01.978557 20069 net.cpp:408] pool2 <- conv2
I0916 11:14:01.978565 20069 net.cpp:382] pool2 -> pool2
I0916 11:14:01.978610 20069 net.cpp:124] Setting up pool2
I0916 11:14:01.978617 20069 net.cpp:131] Top shape: 64 50 4 4 (51200)
I0916 11:14:01.978626 20069 net.cpp:139] Memory required for data: 4911360
I0916 11:14:01.978632 20069 layer_factory.hpp:77] Creating layer ip1
I0916 11:14:01.978644 20069 net.cpp:86] Creating Layer ip1
I0916 11:14:01.978651 20069 net.cpp:408] ip1 <- pool2
I0916 11:14:01.978665 20069 net.cpp:382] ip1 -> ip1
I0916 11:14:01.981726 20069 net.cpp:124] Setting up ip1
I0916 11:14:01.981741 20069 net.cpp:131] Top shape: 64 500 (32000)
I0916 11:14:01.981745 20069 net.cpp:139] Memory required for data: 5039360
I0916 11:14:01.981756 20069 layer_factory.hpp:77] Creating layer relu1
I0916 11:14:01.981766 20069 net.cpp:86] Creating Layer relu1
I0916 11:14:01.981773 20069 net.cpp:408] relu1 <- ip1
I0916 11:14:01.981779 20069 net.cpp:369] relu1 -> ip1 (in-place)
I0916 11:14:01.982556 20069 net.cpp:124] Setting up relu1
I0916 11:14:01.982570 20069 net.cpp:131] Top shape: 64 500 (32000)
I0916 11:14:01.982574 20069 net.cpp:139] Memory required for data: 5167360
I0916 11:14:01.982579 20069 layer_factory.hpp:77] Creating layer ip2
I0916 11:14:01.982589 20069 net.cpp:86] Creating Layer ip2
I0916 11:14:01.982594 20069 net.cpp:408] ip2 <- ip1
I0916 11:14:01.982605 20069 net.cpp:382] ip2 -> ip2
I0916 11:14:01.983521 20069 net.cpp:124] Setting up ip2
I0916 11:14:01.983534 20069 net.cpp:131] Top shape: 64 10 (640)
I0916 11:14:01.983539 20069 net.cpp:139] Memory required for data: 5169920
I0916 11:14:01.983547 20069 layer_factory.hpp:77] Creating layer loss
I0916 11:14:01.983558 20069 net.cpp:86] Creating Layer loss
I0916 11:14:01.983564 20069 net.cpp:408] loss <- ip2
I0916 11:14:01.983572 20069 net.cpp:408] loss <- label
I0916 11:14:01.983583 20069 net.cpp:382] loss -> loss
I0916 11:14:01.983604 20069 layer_factory.hpp:77] Creating layer loss
I0916 11:14:01.984472 20069 net.cpp:124] Setting up loss
I0916 11:14:01.984488 20069 net.cpp:131] Top shape: (1)
I0916 11:14:01.984493 20069 net.cpp:134]     with loss weight 1
I0916 11:14:01.984534 20069 net.cpp:139] Memory required for data: 5169924
I0916 11:14:01.984540 20069 net.cpp:200] loss needs backward computation.
I0916 11:14:01.984556 20069 net.cpp:200] ip2 needs backward computation.
I0916 11:14:01.984562 20069 net.cpp:200] relu1 needs backward computation.
I0916 11:14:01.984572 20069 net.cpp:200] ip1 needs backward computation.
I0916 11:14:01.984580 20069 net.cpp:200] pool2 needs backward computation.
I0916 11:14:01.984586 20069 net.cpp:200] conv2 needs backward computation.
I0916 11:14:01.984591 20069 net.cpp:200] pool1 needs backward computation.
I0916 11:14:01.984598 20069 net.cpp:200] conv1 needs backward computation.
I0916 11:14:01.984604 20069 net.cpp:202] mnist does not need backward computation.
I0916 11:14:01.984611 20069 net.cpp:244] This network produces output loss
I0916 11:14:01.984624 20069 net.cpp:257] Network initialization done.
I0916 11:14:01.984799 20069 solver.cpp:190] Creating test net (#0) specified by net file: ../../examples/mnist/lenet_train_test.prototxt
I0916 11:14:01.984839 20069 net.cpp:296] The NetState phase (1) differed from the phase (0) specified by a rule in layer mnist
I0916 11:14:01.984923 20069 net.cpp:53] Initializing net from parameters: 
name: "LeNet"
state {
  phase: TEST
}
layer {
  name: "mnist"
  type: "Data"
  top: "data"
  top: "label"
  include {
    phase: TEST
  }
  transform_param {
    scale: 0.00390625
  }
  data_param {
    source: "../../examples/mnist/mnist_test_lmdb"
    batch_size: 100
    backend: LMDB
  }
}
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  convolution_param {
    num_output: 20
    kernel_size: 5
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "pool1"
  type: "Pooling"
  bottom: "conv1"
  top: "pool1"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layer {
  name: "conv2"
  type: "Convolution"
  bottom: "pool1"
  top: "conv2"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  convolution_param {
    num_output: 50
    kernel_size: 5
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "pool2"
  type: "Pooling"
  bottom: "conv2"
  top: "pool2"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layer {
  name: "ip1"
  type: "InnerProduct"
  bottom: "pool2"
  top: "ip1"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 500
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "relu1"
  type: "ReLU"
  bottom: "ip1"
  top: "ip1"
}
layer {
  name: "ip2"
  type: "InnerProduct"
  bottom: "ip1"
  top: "ip2"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 10
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "accuracy"
  type: "Accuracy"
  bottom: "ip2"
  bottom: "label"
  top: "accuracy"
  include {
    phase: TEST
  }
}
layer {
  name: "loss"
  type: "SoftmaxWithLoss"
  bottom: "ip2"
  bottom: "label"
  top: "loss"
}
I0916 11:14:01.985045 20069 layer_factory.hpp:77] Creating layer mnist
I0916 11:14:01.985116 20069 db_lmdb.cpp:35] Opened lmdb ../../examples/mnist/mnist_test_lmdb
I0916 11:14:01.985136 20069 net.cpp:86] Creating Layer mnist
I0916 11:14:01.985146 20069 net.cpp:382] mnist -> data
I0916 11:14:01.985159 20069 net.cpp:382] mnist -> label
I0916 11:14:01.985260 20069 data_layer.cpp:45] output data size: 100,1,28,28
I0916 11:14:01.987718 20069 net.cpp:124] Setting up mnist
I0916 11:14:01.987736 20069 net.cpp:131] Top shape: 100 1 28 28 (78400)
I0916 11:14:01.987742 20069 net.cpp:131] Top shape: 100 (100)
I0916 11:14:01.987747 20069 net.cpp:139] Memory required for data: 314000
I0916 11:14:01.987752 20069 layer_factory.hpp:77] Creating layer label_mnist_1_split
I0916 11:14:01.987761 20069 net.cpp:86] Creating Layer label_mnist_1_split
I0916 11:14:01.987767 20069 net.cpp:408] label_mnist_1_split <- label
I0916 11:14:01.987850 20069 net.cpp:382] label_mnist_1_split -> label_mnist_1_split_0
I0916 11:14:01.987900 20069 net.cpp:382] label_mnist_1_split -> label_mnist_1_split_1
I0916 11:14:01.988090 20069 net.cpp:124] Setting up label_mnist_1_split
I0916 11:14:01.988113 20069 net.cpp:131] Top shape: 100 (100)
I0916 11:14:01.988124 20069 net.cpp:131] Top shape: 100 (100)
I0916 11:14:01.988132 20069 net.cpp:139] Memory required for data: 314800
I0916 11:14:01.988147 20069 layer_factory.hpp:77] Creating layer conv1
I0916 11:14:01.988183 20069 net.cpp:86] Creating Layer conv1
I0916 11:14:01.988195 20069 net.cpp:408] conv1 <- data
I0916 11:14:01.988214 20069 net.cpp:382] conv1 -> conv1
I0916 11:14:01.994087 20069 net.cpp:124] Setting up conv1
I0916 11:14:01.994125 20069 net.cpp:131] Top shape: 100 20 24 24 (1152000)
I0916 11:14:01.994135 20069 net.cpp:139] Memory required for data: 4922800
I0916 11:14:01.994195 20069 layer_factory.hpp:77] Creating layer pool1
I0916 11:14:01.994221 20069 net.cpp:86] Creating Layer pool1
I0916 11:14:01.994235 20069 net.cpp:408] pool1 <- conv1
I0916 11:14:01.994251 20069 net.cpp:382] pool1 -> pool1
I0916 11:14:01.994359 20069 net.cpp:124] Setting up pool1
I0916 11:14:01.994379 20069 net.cpp:131] Top shape: 100 20 12 12 (288000)
I0916 11:14:01.994390 20069 net.cpp:139] Memory required for data: 6074800
I0916 11:14:01.994401 20069 layer_factory.hpp:77] Creating layer conv2
I0916 11:14:01.994428 20069 net.cpp:86] Creating Layer conv2
I0916 11:14:01.994442 20069 net.cpp:408] conv2 <- pool1
I0916 11:14:01.994464 20069 net.cpp:382] conv2 -> conv2
I0916 11:14:02.000331 20069 net.cpp:124] Setting up conv2
I0916 11:14:02.000365 20069 net.cpp:131] Top shape: 100 50 8 8 (320000)
I0916 11:14:02.000377 20069 net.cpp:139] Memory required for data: 7354800
I0916 11:14:02.000408 20069 layer_factory.hpp:77] Creating layer pool2
I0916 11:14:02.000452 20069 net.cpp:86] Creating Layer pool2
I0916 11:14:02.000470 20069 net.cpp:408] pool2 <- conv2
I0916 11:14:02.000495 20069 net.cpp:382] pool2 -> pool2
I0916 11:14:02.000617 20069 net.cpp:124] Setting up pool2
I0916 11:14:02.000636 20069 net.cpp:131] Top shape: 100 50 4 4 (80000)
I0916 11:14:02.000648 20069 net.cpp:139] Memory required for data: 7674800
I0916 11:14:02.000659 20069 layer_factory.hpp:77] Creating layer ip1
I0916 11:14:02.000679 20069 net.cpp:86] Creating Layer ip1
I0916 11:14:02.000690 20069 net.cpp:408] ip1 <- pool2
I0916 11:14:02.000707 20069 net.cpp:382] ip1 -> ip1
I0916 11:14:02.006860 20069 net.cpp:124] Setting up ip1
I0916 11:14:02.006891 20069 net.cpp:131] Top shape: 100 500 (50000)
I0916 11:14:02.006901 20069 net.cpp:139] Memory required for data: 7874800
I0916 11:14:02.006923 20069 layer_factory.hpp:77] Creating layer relu1
I0916 11:14:02.006943 20069 net.cpp:86] Creating Layer relu1
I0916 11:14:02.006956 20069 net.cpp:408] relu1 <- ip1
I0916 11:14:02.006971 20069 net.cpp:369] relu1 -> ip1 (in-place)
I0916 11:14:02.008585 20069 net.cpp:124] Setting up relu1
I0916 11:14:02.008615 20069 net.cpp:131] Top shape: 100 500 (50000)
I0916 11:14:02.008625 20069 net.cpp:139] Memory required for data: 8074800
I0916 11:14:02.008635 20069 layer_factory.hpp:77] Creating layer ip2
I0916 11:14:02.008658 20069 net.cpp:86] Creating Layer ip2
I0916 11:14:02.008678 20069 net.cpp:408] ip2 <- ip1
I0916 11:14:02.008698 20069 net.cpp:382] ip2 -> ip2
I0916 11:14:02.009007 20069 net.cpp:124] Setting up ip2
I0916 11:14:02.009022 20069 net.cpp:131] Top shape: 100 10 (1000)
I0916 11:14:02.009032 20069 net.cpp:139] Memory required for data: 8078800
I0916 11:14:02.009048 20069 layer_factory.hpp:77] Creating layer ip2_ip2_0_split
I0916 11:14:02.009063 20069 net.cpp:86] Creating Layer ip2_ip2_0_split
I0916 11:14:02.009073 20069 net.cpp:408] ip2_ip2_0_split <- ip2
I0916 11:14:02.009089 20069 net.cpp:382] ip2_ip2_0_split -> ip2_ip2_0_split_0
I0916 11:14:02.009109 20069 net.cpp:382] ip2_ip2_0_split -> ip2_ip2_0_split_1
I0916 11:14:02.009187 20069 net.cpp:124] Setting up ip2_ip2_0_split
I0916 11:14:02.009202 20069 net.cpp:131] Top shape: 100 10 (1000)
I0916 11:14:02.009214 20069 net.cpp:131] Top shape: 100 10 (1000)
I0916 11:14:02.009225 20069 net.cpp:139] Memory required for data: 8086800
I0916 11:14:02.009238 20069 layer_factory.hpp:77] Creating layer accuracy
I0916 11:14:02.009256 20069 net.cpp:86] Creating Layer accuracy
I0916 11:14:02.009268 20069 net.cpp:408] accuracy <- ip2_ip2_0_split_0
I0916 11:14:02.009281 20069 net.cpp:408] accuracy <- label_mnist_1_split_0
I0916 11:14:02.009297 20069 net.cpp:382] accuracy -> accuracy
I0916 11:14:02.009315 20069 net.cpp:124] Setting up accuracy
I0916 11:14:02.009328 20069 net.cpp:131] Top shape: (1)
I0916 11:14:02.009340 20069 net.cpp:139] Memory required for data: 8086804
I0916 11:14:02.009351 20069 layer_factory.hpp:77] Creating layer loss
I0916 11:14:02.009363 20069 net.cpp:86] Creating Layer loss
I0916 11:14:02.009375 20069 net.cpp:408] loss <- ip2_ip2_0_split_1
I0916 11:14:02.009388 20069 net.cpp:408] loss <- label_mnist_1_split_1
I0916 11:14:02.009423 20069 net.cpp:382] loss -> loss
I0916 11:14:02.009440 20069 layer_factory.hpp:77] Creating layer loss
I0916 11:14:02.011113 20069 net.cpp:124] Setting up loss
I0916 11:14:02.011144 20069 net.cpp:131] Top shape: (1)
I0916 11:14:02.011154 20069 net.cpp:134]     with loss weight 1
I0916 11:14:02.011171 20069 net.cpp:139] Memory required for data: 8086808
I0916 11:14:02.011185 20069 net.cpp:200] loss needs backward computation.
I0916 11:14:02.011199 20069 net.cpp:202] accuracy does not need backward computation.
I0916 11:14:02.011211 20069 net.cpp:200] ip2_ip2_0_split needs backward computation.
I0916 11:14:02.011224 20069 net.cpp:200] ip2 needs backward computation.
I0916 11:14:02.011234 20069 net.cpp:200] relu1 needs backward computation.
I0916 11:14:02.011245 20069 net.cpp:200] ip1 needs backward computation.
I0916 11:14:02.011256 20069 net.cpp:200] pool2 needs backward computation.
I0916 11:14:02.011267 20069 net.cpp:200] conv2 needs backward computation.
I0916 11:14:02.011278 20069 net.cpp:200] pool1 needs backward computation.
I0916 11:14:02.011291 20069 net.cpp:200] conv1 needs backward computation.
I0916 11:14:02.011302 20069 net.cpp:202] label_mnist_1_split does not need backward computation.
I0916 11:14:02.011313 20069 net.cpp:202] mnist does not need backward computation.
I0916 11:14:02.011324 20069 net.cpp:244] This network produces output accuracy
I0916 11:14:02.011337 20069 net.cpp:244] This network produces output loss
I0916 11:14:02.011365 20069 net.cpp:257] Network initialization done.
I0916 11:14:02.011448 20069 solver.cpp:57] Solver scaffolding done.
I0916 11:14:02.012079 20069 caffe.cpp:239] Starting Optimization
I0916 11:14:02.012092 20069 solver.cpp:289] Solving LeNet
I0916 11:14:02.012104 20069 solver.cpp:290] Learning Rate Policy: inv
I0916 11:14:02.013131 20069 solver.cpp:347] Iteration 0, Testing net (#0)
I0916 11:14:02.022018 20069 blocking_queue.cpp:49] Waiting for data
I0916 11:14:02.118471 20094 data_layer.cpp:73] Restarting data prefetching from start.
I0916 11:14:02.119292 20069 solver.cpp:414]     Test net output #0: accuracy = 0.1025
I0916 11:14:02.119328 20069 solver.cpp:414]     Test net output #1: loss = 2.33779 (* 1 = 2.33779 loss)
I0916 11:14:02.123703 20069 solver.cpp:239] Iteration 0 (310.167 iter/s, 0.111551s/100 iters), loss = 2.3073
I0916 11:14:02.123744 20069 solver.cpp:258]     Train net output #0: loss = 2.3073 (* 1 = 2.3073 loss)
I0916 11:14:02.123773 20069 sgd_solver.cpp:112] Iteration 0, lr = 0.01
I0916 11:14:02.340458 20069 solver.cpp:239] Iteration 100 (461.489 iter/s, 0.21669s/100 iters), loss = 0.229789
I0916 11:14:02.340520 20069 solver.cpp:258]     Train net output #0: loss = 0.229789 (* 1 = 0.229789 loss)
I0916 11:14:02.340536 20069 sgd_solver.cpp:112] Iteration 100, lr = 0.00992565
I0916 11:14:02.548533 20069 solver.cpp:239] Iteration 200 (480.769 iter/s, 0.208s/100 iters), loss = 0.165238
I0916 11:14:02.548593 20069 solver.cpp:258]     Train net output #0: loss = 0.165238 (* 1 = 0.165238 loss)
I0916 11:14:02.548609 20069 sgd_solver.cpp:112] Iteration 200, lr = 0.00985258
I0916 11:14:02.757786 20069 solver.cpp:239] Iteration 300 (478.054 iter/s, 0.209182s/100 iters), loss = 0.201672
I0916 11:14:02.757844 20069 solver.cpp:258]     Train net output #0: loss = 0.201672 (* 1 = 0.201672 loss)
I0916 11:14:02.757861 20069 sgd_solver.cpp:112] Iteration 300, lr = 0.00978075
I0916 11:14:02.965809 20069 solver.cpp:239] Iteration 400 (480.877 iter/s, 0.207953s/100 iters), loss = 0.0808177
I0916 11:14:02.965870 20069 solver.cpp:258]     Train net output #0: loss = 0.0808176 (* 1 = 0.0808176 loss)
I0916 11:14:02.965886 20069 sgd_solver.cpp:112] Iteration 400, lr = 0.00971013
I0916 11:14:03.171439 20069 solver.cpp:347] Iteration 500, Testing net (#0)
I0916 11:14:03.279948 20094 data_layer.cpp:73] Restarting data prefetching from start.
I0916 11:14:03.280737 20069 solver.cpp:414]     Test net output #0: accuracy = 0.9735
I0916 11:14:03.280781 20069 solver.cpp:414]     Test net output #1: loss = 0.0870308 (* 1 = 0.0870308 loss)
I0916 11:14:03.282971 20069 solver.cpp:239] Iteration 500 (315.372 iter/s, 0.317086s/100 iters), loss = 0.10291
I0916 11:14:03.283036 20069 solver.cpp:258]     Train net output #0: loss = 0.10291 (* 1 = 0.10291 loss)
I0916 11:14:03.283059 20069 sgd_solver.cpp:112] Iteration 500, lr = 0.00964069
I0916 11:14:03.491451 20069 solver.cpp:239] Iteration 600 (479.84 iter/s, 0.208403s/100 iters), loss = 0.0846899
I0916 11:14:03.491509 20069 solver.cpp:258]     Train net output #0: loss = 0.0846898 (* 1 = 0.0846898 loss)
I0916 11:14:03.491526 20069 sgd_solver.cpp:112] Iteration 600, lr = 0.0095724
I0916 11:14:03.697022 20069 solver.cpp:239] Iteration 700 (486.62 iter/s, 0.205499s/100 iters), loss = 0.110679
I0916 11:14:03.697095 20069 solver.cpp:258]     Train net output #0: loss = 0.110678 (* 1 = 0.110678 loss)
I0916 11:14:03.697118 20069 sgd_solver.cpp:112] Iteration 700, lr = 0.00950522
I0916 11:14:03.905139 20069 solver.cpp:239] Iteration 800 (480.681 iter/s, 0.208038s/100 iters), loss = 0.17802
I0916 11:14:03.905205 20069 solver.cpp:258]     Train net output #0: loss = 0.17802 (* 1 = 0.17802 loss)
I0916 11:14:03.905230 20069 sgd_solver.cpp:112] Iteration 800, lr = 0.00943913
I0916 11:14:04.111789 20069 solver.cpp:239] Iteration 900 (484.077 iter/s, 0.206578s/100 iters), loss = 0.134004
I0916 11:14:04.111855 20069 solver.cpp:258]     Train net output #0: loss = 0.134004 (* 1 = 0.134004 loss)
I0916 11:14:04.111874 20069 sgd_solver.cpp:112] Iteration 900, lr = 0.00937411
I0916 11:14:04.180398 20093 data_layer.cpp:73] Restarting data prefetching from start.
I0916 11:14:04.316640 20069 solver.cpp:347] Iteration 1000, Testing net (#0)
I0916 11:14:04.428764 20094 data_layer.cpp:73] Restarting data prefetching from start.
I0916 11:14:04.429632 20069 solver.cpp:414]     Test net output #0: accuracy = 0.9806
I0916 11:14:04.429682 20069 solver.cpp:414]     Test net output #1: loss = 0.0611224 (* 1 = 0.0611224 loss)
I0916 11:14:04.431766 20069 solver.cpp:239] Iteration 1000 (312.595 iter/s, 0.319903s/100 iters), loss = 0.0933033
I0916 11:14:04.431826 20069 solver.cpp:258]     Train net output #0: loss = 0.0933033 (* 1 = 0.0933033 loss)
I0916 11:14:04.431847 20069 sgd_solver.cpp:112] Iteration 1000, lr = 0.00931012
I0916 11:14:04.640311 20069 solver.cpp:239] Iteration 1100 (479.67 iter/s, 0.208477s/100 iters), loss = 0.00609653
I0916 11:14:04.640377 20069 solver.cpp:258]     Train net output #0: loss = 0.0060965 (* 1 = 0.0060965 loss)
I0916 11:14:04.640403 20069 sgd_solver.cpp:112] Iteration 1100, lr = 0.00924715
I0916 11:14:04.845319 20069 solver.cpp:239] Iteration 1200 (487.96 iter/s, 0.204935s/100 iters), loss = 0.0118229
I0916 11:14:04.845381 20069 solver.cpp:258]     Train net output #0: loss = 0.0118228 (* 1 = 0.0118228 loss)
I0916 11:14:04.845398 20069 sgd_solver.cpp:112] Iteration 1200, lr = 0.00918515
I0916 11:14:05.054190 20069 solver.cpp:239] Iteration 1300 (478.93 iter/s, 0.208799s/100 iters), loss = 0.017764
I0916 11:14:05.054246 20069 solver.cpp:258]     Train net output #0: loss = 0.0177639 (* 1 = 0.0177639 loss)
I0916 11:14:05.054261 20069 sgd_solver.cpp:112] Iteration 1300, lr = 0.00912412
I0916 11:14:05.261400 20069 solver.cpp:239] Iteration 1400 (482.763 iter/s, 0.207141s/100 iters), loss = 0.00820902
I0916 11:14:05.261466 20069 solver.cpp:258]     Train net output #0: loss = 0.00820898 (* 1 = 0.00820898 loss)
I0916 11:14:05.261484 20069 sgd_solver.cpp:112] Iteration 1400, lr = 0.00906403
I0916 11:14:05.465870 20069 solver.cpp:347] Iteration 1500, Testing net (#0)
I0916 11:14:05.575207 20094 data_layer.cpp:73] Restarting data prefetching from start.
I0916 11:14:05.576028 20069 solver.cpp:414]     Test net output #0: accuracy = 0.9848
I0916 11:14:05.576074 20069 solver.cpp:414]     Test net output #1: loss = 0.0479937 (* 1 = 0.0479937 loss)
I0916 11:14:05.578016 20069 solver.cpp:239] Iteration 1500 (315.917 iter/s, 0.316539s/100 iters), loss = 0.0852713
I0916 11:14:05.578075 20069 solver.cpp:258]     Train net output #0: loss = 0.0852713 (* 1 = 0.0852713 loss)
I0916 11:14:05.578116 20069 sgd_solver.cpp:112] Iteration 1500, lr = 0.00900485
I0916 11:14:05.785501 20069 solver.cpp:239] Iteration 1600 (482.122 iter/s, 0.207416s/100 iters), loss = 0.163501
I0916 11:14:05.785560 20069 solver.cpp:258]     Train net output #0: loss = 0.163501 (* 1 = 0.163501 loss)
I0916 11:14:05.785578 20069 sgd_solver.cpp:112] Iteration 1600, lr = 0.00894657
I0916 11:14:05.992123 20069 solver.cpp:239] Iteration 1700 (484.143 iter/s, 0.206551s/100 iters), loss = 0.0221153
I0916 11:14:05.992182 20069 solver.cpp:258]     Train net output #0: loss = 0.0221153 (* 1 = 0.0221153 loss)
I0916 11:14:05.992199 20069 sgd_solver.cpp:112] Iteration 1700, lr = 0.00888916
I0916 11:14:06.197464 20069 solver.cpp:239] Iteration 1800 (487.164 iter/s, 0.20527s/100 iters), loss = 0.0198398
I0916 11:14:06.197525 20069 solver.cpp:258]     Train net output #0: loss = 0.0198398 (* 1 = 0.0198398 loss)
I0916 11:14:06.197542 20069 sgd_solver.cpp:112] Iteration 1800, lr = 0.0088326
I0916 11:14:06.343518 20093 data_layer.cpp:73] Restarting data prefetching from start.
I0916 11:14:06.405211 20069 solver.cpp:239] Iteration 1900 (481.521 iter/s, 0.207675s/100 iters), loss = 0.104915
I0916 11:14:06.405272 20069 solver.cpp:258]     Train net output #0: loss = 0.104915 (* 1 = 0.104915 loss)
I0916 11:14:06.405288 20069 sgd_solver.cpp:112] Iteration 1900, lr = 0.00877687
I0916 11:14:06.611179 20069 solver.cpp:347] Iteration 2000, Testing net (#0)
I0916 11:14:06.725069 20094 data_layer.cpp:73] Restarting data prefetching from start.
I0916 11:14:06.725961 20069 solver.cpp:414]     Test net output #0: accuracy = 0.9854
I0916 11:14:06.726007 20069 solver.cpp:414]     Test net output #1: loss = 0.0435378 (* 1 = 0.0435378 loss)
I0916 11:14:06.728015 20069 solver.cpp:239] Iteration 2000 (309.858 iter/s, 0.322728s/100 iters), loss = 0.0274193
I0916 11:14:06.728088 20069 solver.cpp:258]     Train net output #0: loss = 0.0274194 (* 1 = 0.0274194 loss)
I0916 11:14:06.728112 20069 sgd_solver.cpp:112] Iteration 2000, lr = 0.00872196
I0916 11:14:06.936975 20069 solver.cpp:239] Iteration 2100 (478.753 iter/s, 0.208876s/100 iters), loss = 0.0191959
I0916 11:14:06.937033 20069 solver.cpp:258]     Train net output #0: loss = 0.019196 (* 1 = 0.019196 loss)
I0916 11:14:06.937049 20069 sgd_solver.cpp:112] Iteration 2100, lr = 0.00866784
I0916 11:14:07.146100 20069 solver.cpp:239] Iteration 2200 (478.347 iter/s, 0.209053s/100 iters), loss = 0.0127037
I0916 11:14:07.146160 20069 solver.cpp:258]     Train net output #0: loss = 0.0127038 (* 1 = 0.0127038 loss)
I0916 11:14:07.146178 20069 sgd_solver.cpp:112] Iteration 2200, lr = 0.0086145
I0916 11:14:07.353894 20069 solver.cpp:239] Iteration 2300 (481.411 iter/s, 0.207723s/100 iters), loss = 0.0990722
I0916 11:14:07.353953 20069 solver.cpp:258]     Train net output #0: loss = 0.0990722 (* 1 = 0.0990722 loss)
I0916 11:14:07.353969 20069 sgd_solver.cpp:112] Iteration 2300, lr = 0.00856192
I0916 11:14:07.559118 20069 solver.cpp:239] Iteration 2400 (487.444 iter/s, 0.205152s/100 iters), loss = 0.0108868
I0916 11:14:07.559180 20069 solver.cpp:258]     Train net output #0: loss = 0.0108868 (* 1 = 0.0108868 loss)
I0916 11:14:07.559198 20069 sgd_solver.cpp:112] Iteration 2400, lr = 0.00851008
I0916 11:14:07.764180 20069 solver.cpp:347] Iteration 2500, Testing net (#0)
I0916 11:14:07.869740 20094 data_layer.cpp:73] Restarting data prefetching from start.
I0916 11:14:07.870539 20069 solver.cpp:414]     Test net output #0: accuracy = 0.9877
I0916 11:14:07.870582 20069 solver.cpp:414]     Test net output #1: loss = 0.0416115 (* 1 = 0.0416115 loss)
I0916 11:14:07.872674 20069 solver.cpp:239] Iteration 2500 (319.002 iter/s, 0.313478s/100 iters), loss = 0.0191136
I0916 11:14:07.872741 20069 solver.cpp:258]     Train net output #0: loss = 0.0191137 (* 1 = 0.0191137 loss)
I0916 11:14:07.872768 20069 sgd_solver.cpp:112] Iteration 2500, lr = 0.00845897
I0916 11:14:08.078209 20069 solver.cpp:239] Iteration 2600 (486.717 iter/s, 0.205458s/100 iters), loss = 0.0565111
I0916 11:14:08.078269 20069 solver.cpp:258]     Train net output #0: loss = 0.0565112 (* 1 = 0.0565112 loss)
I0916 11:14:08.078285 20069 sgd_solver.cpp:112] Iteration 2600, lr = 0.00840857
I0916 11:14:08.286028 20069 solver.cpp:239] Iteration 2700 (481.353 iter/s, 0.207748s/100 iters), loss = 0.0529907
I0916 11:14:08.286088 20069 solver.cpp:258]     Train net output #0: loss = 0.0529908 (* 1 = 0.0529908 loss)
I0916 11:14:08.286105 20069 sgd_solver.cpp:112] Iteration 2700, lr = 0.00835886
I0916 11:14:08.494213 20069 solver.cpp:239] Iteration 2800 (480.507 iter/s, 0.208114s/100 iters), loss = 0.000682779
I0916 11:14:08.494271 20069 solver.cpp:258]     Train net output #0: loss = 0.000682886 (* 1 = 0.000682886 loss)
I0916 11:14:08.494289 20069 sgd_solver.cpp:112] Iteration 2800, lr = 0.00830984
I0916 11:14:08.511106 20093 data_layer.cpp:73] Restarting data prefetching from start.
I0916 11:14:08.703241 20069 solver.cpp:239] Iteration 2900 (478.565 iter/s, 0.208958s/100 iters), loss = 0.0226226
I0916 11:14:08.703299 20069 solver.cpp:258]     Train net output #0: loss = 0.0226227 (* 1 = 0.0226227 loss)
I0916 11:14:08.703317 20069 sgd_solver.cpp:112] Iteration 2900, lr = 0.00826148
I0916 11:14:08.909173 20069 solver.cpp:347] Iteration 3000, Testing net (#0)
I0916 11:14:09.017496 20094 data_layer.cpp:73] Restarting data prefetching from start.
I0916 11:14:09.018358 20069 solver.cpp:414]     Test net output #0: accuracy = 0.9878
I0916 11:14:09.018409 20069 solver.cpp:414]     Test net output #1: loss = 0.0379487 (* 1 = 0.0379487 loss)
I0916 11:14:09.020241 20069 solver.cpp:239] Iteration 3000 (315.527 iter/s, 0.31693s/100 iters), loss = 0.00583271
I0916 11:14:09.020284 20069 solver.cpp:258]     Train net output #0: loss = 0.00583282 (* 1 = 0.00583282 loss)
I0916 11:14:09.020296 20069 sgd_solver.cpp:112] Iteration 3000, lr = 0.00821377
I0916 11:14:09.208230 20069 solver.cpp:239] Iteration 3100 (532.108 iter/s, 0.187932s/100 iters), loss = 0.0119398
I0916 11:14:09.208290 20069 solver.cpp:258]     Train net output #0: loss = 0.0119399 (* 1 = 0.0119399 loss)
I0916 11:14:09.208307 20069 sgd_solver.cpp:112] Iteration 3100, lr = 0.0081667
I0916 11:14:09.416234 20069 solver.cpp:239] Iteration 3200 (480.926 iter/s, 0.207932s/100 iters), loss = 0.0124318
I0916 11:14:09.416294 20069 solver.cpp:258]     Train net output #0: loss = 0.0124319 (* 1 = 0.0124319 loss)
I0916 11:14:09.416311 20069 sgd_solver.cpp:112] Iteration 3200, lr = 0.00812025
I0916 11:14:09.623724 20069 solver.cpp:239] Iteration 3300 (482.119 iter/s, 0.207418s/100 iters), loss = 0.0372033
I0916 11:14:09.623782 20069 solver.cpp:258]     Train net output #0: loss = 0.0372034 (* 1 = 0.0372034 loss)
I0916 11:14:09.623800 20069 sgd_solver.cpp:112] Iteration 3300, lr = 0.00807442
I0916 11:14:09.828052 20069 solver.cpp:239] Iteration 3400 (489.578 iter/s, 0.204258s/100 iters), loss = 0.0173514
I0916 11:14:09.828110 20069 solver.cpp:258]     Train net output #0: loss = 0.0173515 (* 1 = 0.0173515 loss)
I0916 11:14:09.828127 20069 sgd_solver.cpp:112] Iteration 3400, lr = 0.00802918
I0916 11:14:10.030884 20069 solver.cpp:347] Iteration 3500, Testing net (#0)
I0916 11:14:10.134272 20094 data_layer.cpp:73] Restarting data prefetching from start.
I0916 11:14:10.135042 20069 solver.cpp:414]     Test net output #0: accuracy = 0.9845
I0916 11:14:10.135082 20069 solver.cpp:414]     Test net output #1: loss = 0.0434356 (* 1 = 0.0434356 loss)
I0916 11:14:10.136993 20069 solver.cpp:239] Iteration 3500 (323.761 iter/s, 0.30887s/100 iters), loss = 0.00600256
I0916 11:14:10.137035 20069 solver.cpp:258]     Train net output #0: loss = 0.00600261 (* 1 = 0.00600261 loss)
I0916 11:14:10.137048 20069 sgd_solver.cpp:112] Iteration 3500, lr = 0.00798454
I0916 11:14:10.338027 20069 solver.cpp:239] Iteration 3600 (497.559 iter/s, 0.200981s/100 iters), loss = 0.0328909
I0916 11:14:10.338088 20069 solver.cpp:258]     Train net output #0: loss = 0.0328909 (* 1 = 0.0328909 loss)
I0916 11:14:10.338104 20069 sgd_solver.cpp:112] Iteration 3600, lr = 0.00794046
I0916 11:14:10.544780 20069 solver.cpp:239] Iteration 3700 (483.841 iter/s, 0.206679s/100 iters), loss = 0.0217668
I0916 11:14:10.544834 20069 solver.cpp:258]     Train net output #0: loss = 0.0217669 (* 1 = 0.0217669 loss)
I0916 11:14:10.544908 20069 sgd_solver.cpp:112] Iteration 3700, lr = 0.00789695
I0916 11:14:10.638439 20093 data_layer.cpp:73] Restarting data prefetching from start.
I0916 11:14:10.751991 20069 solver.cpp:239] Iteration 3800 (482.761 iter/s, 0.207142s/100 iters), loss = 0.0136239
I0916 11:14:10.752048 20069 solver.cpp:258]     Train net output #0: loss = 0.013624 (* 1 = 0.013624 loss)
I0916 11:14:10.752066 20069 sgd_solver.cpp:112] Iteration 3800, lr = 0.007854
I0916 11:14:10.959348 20069 solver.cpp:239] Iteration 3900 (482.421 iter/s, 0.207288s/100 iters), loss = 0.0339397
I0916 11:14:10.959406 20069 solver.cpp:258]     Train net output #0: loss = 0.0339397 (* 1 = 0.0339397 loss)
I0916 11:14:10.959422 20069 sgd_solver.cpp:112] Iteration 3900, lr = 0.00781158
I0916 11:14:11.163570 20069 solver.cpp:347] Iteration 4000, Testing net (#0)
I0916 11:14:11.273806 20094 data_layer.cpp:73] Restarting data prefetching from start.
I0916 11:14:11.274765 20069 solver.cpp:414]     Test net output #0: accuracy = 0.9899
I0916 11:14:11.274821 20069 solver.cpp:414]     Test net output #1: loss = 0.0315311 (* 1 = 0.0315311 loss)
I0916 11:14:11.276940 20069 solver.cpp:239] Iteration 4000 (314.946 iter/s, 0.317515s/100 iters), loss = 0.0226457
I0916 11:14:11.277004 20069 solver.cpp:258]     Train net output #0: loss = 0.0226457 (* 1 = 0.0226457 loss)
I0916 11:14:11.277034 20069 sgd_solver.cpp:112] Iteration 4000, lr = 0.0077697
I0916 11:14:11.484562 20069 solver.cpp:239] Iteration 4100 (481.821 iter/s, 0.207546s/100 iters), loss = 0.0218009
I0916 11:14:11.484622 20069 solver.cpp:258]     Train net output #0: loss = 0.0218009 (* 1 = 0.0218009 loss)
I0916 11:14:11.484639 20069 sgd_solver.cpp:112] Iteration 4100, lr = 0.00772833
I0916 11:14:11.690666 20069 solver.cpp:239] Iteration 4200 (485.362 iter/s, 0.206032s/100 iters), loss = 0.00811676
I0916 11:14:11.690726 20069 solver.cpp:258]     Train net output #0: loss = 0.00811681 (* 1 = 0.00811681 loss)
I0916 11:14:11.690742 20069 sgd_solver.cpp:112] Iteration 4200, lr = 0.00768748
I0916 11:14:11.899252 20069 solver.cpp:239] Iteration 4300 (479.582 iter/s, 0.208515s/100 iters), loss = 0.0383323
I0916 11:14:11.899314 20069 solver.cpp:258]     Train net output #0: loss = 0.0383324 (* 1 = 0.0383324 loss)
I0916 11:14:11.899332 20069 sgd_solver.cpp:112] Iteration 4300, lr = 0.00764712
I0916 11:14:12.107410 20069 solver.cpp:239] Iteration 4400 (480.573 iter/s, 0.208085s/100 iters), loss = 0.0232511
I0916 11:14:12.107470 20069 solver.cpp:258]     Train net output #0: loss = 0.0232512 (* 1 = 0.0232512 loss)
I0916 11:14:12.107487 20069 sgd_solver.cpp:112] Iteration 4400, lr = 0.00760726
I0916 11:14:12.314344 20069 solver.cpp:347] Iteration 4500, Testing net (#0)
I0916 11:14:12.420365 20094 data_layer.cpp:73] Restarting data prefetching from start.
I0916 11:14:12.421187 20069 solver.cpp:414]     Test net output #0: accuracy = 0.9891
I0916 11:14:12.421231 20069 solver.cpp:414]     Test net output #1: loss = 0.0346691 (* 1 = 0.0346691 loss)
I0916 11:14:12.423203 20069 solver.cpp:239] Iteration 4500 (316.737 iter/s, 0.315719s/100 iters), loss = 0.00533392
I0916 11:14:12.423269 20069 solver.cpp:258]     Train net output #0: loss = 0.00533396 (* 1 = 0.00533396 loss)
I0916 11:14:12.423291 20069 sgd_solver.cpp:112] Iteration 4500, lr = 0.00756788
I0916 11:14:12.631721 20069 solver.cpp:239] Iteration 4600 (479.76 iter/s, 0.208438s/100 iters), loss = 0.0121763
I0916 11:14:12.631785 20069 solver.cpp:258]     Train net output #0: loss = 0.0121763 (* 1 = 0.0121763 loss)
I0916 11:14:12.631803 20069 sgd_solver.cpp:112] Iteration 4600, lr = 0.00752897
I0916 11:14:12.806645 20093 data_layer.cpp:73] Restarting data prefetching from start.
I0916 11:14:12.841397 20069 solver.cpp:239] Iteration 4700 (477.099 iter/s, 0.2096s/100 iters), loss = 0.0058343
I0916 11:14:12.841460 20069 solver.cpp:258]     Train net output #0: loss = 0.00583432 (* 1 = 0.00583432 loss)
I0916 11:14:12.841480 20069 sgd_solver.cpp:112] Iteration 4700, lr = 0.00749052
I0916 11:14:13.049765 20069 solver.cpp:239] Iteration 4800 (480.09 iter/s, 0.208294s/100 iters), loss = 0.0114301
I0916 11:14:13.049880 20069 solver.cpp:258]     Train net output #0: loss = 0.0114301 (* 1 = 0.0114301 loss)
I0916 11:14:13.049899 20069 sgd_solver.cpp:112] Iteration 4800, lr = 0.00745253
I0916 11:14:13.259860 20069 solver.cpp:239] Iteration 4900 (476.257 iter/s, 0.209971s/100 iters), loss = 0.00433916
I0916 11:14:13.259929 20069 solver.cpp:258]     Train net output #0: loss = 0.00433919 (* 1 = 0.00433919 loss)
I0916 11:14:13.259948 20069 sgd_solver.cpp:112] Iteration 4900, lr = 0.00741498
I0916 11:14:13.467123 20069 solver.cpp:464] Snapshotting to binary proto file ../../examples/mnist/lenet_iter_5000.caffemodel
I0916 11:14:13.485219 20069 sgd_solver.cpp:284] Snapshotting solver state to binary proto file ../../examples/mnist/lenet_iter_5000.solverstate
I0916 11:14:13.492003 20069 solver.cpp:347] Iteration 5000, Testing net (#0)
I0916 11:14:13.556593 20069 blocking_queue.cpp:49] Waiting for data
I0916 11:14:13.599431 20094 data_layer.cpp:73] Restarting data prefetching from start.
I0916 11:14:13.600282 20069 solver.cpp:414]     Test net output #0: accuracy = 0.9897
I0916 11:14:13.600333 20069 solver.cpp:414]     Test net output #1: loss = 0.0299979 (* 1 = 0.0299979 loss)
I0916 11:14:13.602566 20069 solver.cpp:239] Iteration 5000 (291.868 iter/s, 0.34262s/100 iters), loss = 0.0317117
I0916 11:14:13.602630 20069 solver.cpp:258]     Train net output #0: loss = 0.0317117 (* 1 = 0.0317117 loss)
I0916 11:14:13.602653 20069 sgd_solver.cpp:112] Iteration 5000, lr = 0.00737788
I0916 11:14:13.809875 20069 solver.cpp:239] Iteration 5100 (482.545 iter/s, 0.207235s/100 iters), loss = 0.0161712
I0916 11:14:13.809936 20069 solver.cpp:258]     Train net output #0: loss = 0.0161713 (* 1 = 0.0161713 loss)
I0916 11:14:13.809953 20069 sgd_solver.cpp:112] Iteration 5100, lr = 0.0073412
I0916 11:14:14.016712 20069 solver.cpp:239] Iteration 5200 (483.644 iter/s, 0.206764s/100 iters), loss = 0.0135071
I0916 11:14:14.016772 20069 solver.cpp:258]     Train net output #0: loss = 0.0135071 (* 1 = 0.0135071 loss)
I0916 11:14:14.016790 20069 sgd_solver.cpp:112] Iteration 5200, lr = 0.00730495
I0916 11:14:14.223058 20069 solver.cpp:239] Iteration 5300 (484.792 iter/s, 0.206274s/100 iters), loss = 0.00167324
I0916 11:14:14.223119 20069 solver.cpp:258]     Train net output #0: loss = 0.00167326 (* 1 = 0.00167326 loss)
I0916 11:14:14.223136 20069 sgd_solver.cpp:112] Iteration 5300, lr = 0.00726911
I0916 11:14:14.430176 20069 solver.cpp:239] Iteration 5400 (482.983 iter/s, 0.207047s/100 iters), loss = 0.00704626
I0916 11:14:14.430236 20069 solver.cpp:258]     Train net output #0: loss = 0.00704628 (* 1 = 0.00704628 loss)
I0916 11:14:14.430255 20069 sgd_solver.cpp:112] Iteration 5400, lr = 0.00723368
I0916 11:14:14.634405 20069 solver.cpp:347] Iteration 5500, Testing net (#0)
I0916 11:14:14.741753 20094 data_layer.cpp:73] Restarting data prefetching from start.
I0916 11:14:14.742591 20069 solver.cpp:414]     Test net output #0: accuracy = 0.9889
I0916 11:14:14.742641 20069 solver.cpp:414]     Test net output #1: loss = 0.0339525 (* 1 = 0.0339525 loss)
I0916 11:14:14.744617 20069 solver.cpp:239] Iteration 5500 (318.105 iter/s, 0.314362s/100 iters), loss = 0.0132875
I0916 11:14:14.744675 20069 solver.cpp:258]     Train net output #0: loss = 0.0132875 (* 1 = 0.0132875 loss)
I0916 11:14:14.744693 20069 sgd_solver.cpp:112] Iteration 5500, lr = 0.00719865
I0916 11:14:14.950417 20069 solver.cpp:239] Iteration 5600 (486.071 iter/s, 0.205731s/100 iters), loss = 0.000960406
I0916 11:14:14.950475 20069 solver.cpp:258]     Train net output #0: loss = 0.000960437 (* 1 = 0.000960437 loss)
I0916 11:14:14.950492 20069 sgd_solver.cpp:112] Iteration 5600, lr = 0.00716402
I0916 11:14:14.992729 20093 data_layer.cpp:73] Restarting data prefetching from start.
I0916 11:14:15.157232 20069 solver.cpp:239] Iteration 5700 (483.694 iter/s, 0.206742s/100 iters), loss = 0.00181894
I0916 11:14:15.157289 20069 solver.cpp:258]     Train net output #0: loss = 0.00181896 (* 1 = 0.00181896 loss)
I0916 11:14:15.157307 20069 sgd_solver.cpp:112] Iteration 5700, lr = 0.00712977
I0916 11:14:15.365227 20069 solver.cpp:239] Iteration 5800 (480.945 iter/s, 0.207924s/100 iters), loss = 0.0379721
I0916 11:14:15.365285 20069 solver.cpp:258]     Train net output #0: loss = 0.0379721 (* 1 = 0.0379721 loss)
I0916 11:14:15.365301 20069 sgd_solver.cpp:112] Iteration 5800, lr = 0.0070959
I0916 11:14:15.572705 20069 solver.cpp:239] Iteration 5900 (482.148 iter/s, 0.207405s/100 iters), loss = 0.00599759
I0916 11:14:15.572758 20069 solver.cpp:258]     Train net output #0: loss = 0.0059976 (* 1 = 0.0059976 loss)
I0916 11:14:15.572774 20069 sgd_solver.cpp:112] Iteration 5900, lr = 0.0070624
I0916 11:14:15.773730 20069 solver.cpp:347] Iteration 6000, Testing net (#0)
I0916 11:14:15.880473 20094 data_layer.cpp:73] Restarting data prefetching from start.
I0916 11:14:15.881258 20069 solver.cpp:414]     Test net output #0: accuracy = 0.9907
I0916 11:14:15.881299 20069 solver.cpp:414]     Test net output #1: loss = 0.0271727 (* 1 = 0.0271727 loss)
I0916 11:14:15.883219 20069 solver.cpp:239] Iteration 6000 (322.119 iter/s, 0.310444s/100 iters), loss = 0.00507444
I0916 11:14:15.883283 20069 solver.cpp:258]     Train net output #0: loss = 0.00507445 (* 1 = 0.00507445 loss)
I0916 11:14:15.883302 20069 sgd_solver.cpp:112] Iteration 6000, lr = 0.00702927
I0916 11:14:16.082283 20069 solver.cpp:239] Iteration 6100 (502.544 iter/s, 0.198988s/100 iters), loss = 0.00156937
I0916 11:14:16.082334 20069 solver.cpp:258]     Train net output #0: loss = 0.00156939 (* 1 = 0.00156939 loss)
I0916 11:14:16.082348 20069 sgd_solver.cpp:112] Iteration 6100, lr = 0.0069965
I0916 11:14:16.288625 20069 solver.cpp:239] Iteration 6200 (484.79 iter/s, 0.206275s/100 iters), loss = 0.00635677
I0916 11:14:16.288682 20069 solver.cpp:258]     Train net output #0: loss = 0.00635679 (* 1 = 0.00635679 loss)
I0916 11:14:16.288698 20069 sgd_solver.cpp:112] Iteration 6200, lr = 0.00696408
I0916 11:14:16.497016 20069 solver.cpp:239] Iteration 6300 (480.029 iter/s, 0.208321s/100 iters), loss = 0.00802546
I0916 11:14:16.497076 20069 solver.cpp:258]     Train net output #0: loss = 0.0080255 (* 1 = 0.0080255 loss)
I0916 11:14:16.497093 20069 sgd_solver.cpp:112] Iteration 6300, lr = 0.00693201
I0916 11:14:16.704742 20069 solver.cpp:239] Iteration 6400 (481.571 iter/s, 0.207654s/100 iters), loss = 0.00511562
I0916 11:14:16.704802 20069 solver.cpp:258]     Train net output #0: loss = 0.00511564 (* 1 = 0.00511564 loss)
I0916 11:14:16.704818 20069 sgd_solver.cpp:112] Iteration 6400, lr = 0.00690029
I0916 11:14:16.909473 20069 solver.cpp:347] Iteration 6500, Testing net (#0)
I0916 11:14:17.017139 20094 data_layer.cpp:73] Restarting data prefetching from start.
I0916 11:14:17.017932 20069 solver.cpp:414]     Test net output #0: accuracy = 0.99
I0916 11:14:17.017971 20069 solver.cpp:414]     Test net output #1: loss = 0.0308742 (* 1 = 0.0308742 loss)
I0916 11:14:17.019846 20069 solver.cpp:239] Iteration 6500 (317.428 iter/s, 0.315032s/100 iters), loss = 0.013028
I0916 11:14:17.019904 20069 solver.cpp:258]     Train net output #0: loss = 0.0130281 (* 1 = 0.0130281 loss)
I0916 11:14:17.019922 20069 sgd_solver.cpp:112] Iteration 6500, lr = 0.0068689
I0916 11:14:17.134550 20093 data_layer.cpp:73] Restarting data prefetching from start.
I0916 11:14:17.221037 20069 solver.cpp:239] Iteration 6600 (497.225 iter/s, 0.201116s/100 iters), loss = 0.0292603
I0916 11:14:17.221097 20069 solver.cpp:258]     Train net output #0: loss = 0.0292603 (* 1 = 0.0292603 loss)
I0916 11:14:17.221117 20069 sgd_solver.cpp:112] Iteration 6600, lr = 0.00683784
I0916 11:14:17.428619 20069 solver.cpp:239] Iteration 6700 (481.906 iter/s, 0.207509s/100 iters), loss = 0.0078646
I0916 11:14:17.428678 20069 solver.cpp:258]     Train net output #0: loss = 0.00786463 (* 1 = 0.00786463 loss)
I0916 11:14:17.428694 20069 sgd_solver.cpp:112] Iteration 6700, lr = 0.00680711
I0916 11:14:17.633713 20069 solver.cpp:239] Iteration 6800 (487.757 iter/s, 0.20502s/100 iters), loss = 0.00361207
I0916 11:14:17.633774 20069 solver.cpp:258]     Train net output #0: loss = 0.00361209 (* 1 = 0.00361209 loss)
I0916 11:14:17.633838 20069 sgd_solver.cpp:112] Iteration 6800, lr = 0.0067767
I0916 11:14:17.838778 20069 solver.cpp:239] Iteration 6900 (487.824 iter/s, 0.204992s/100 iters), loss = 0.00324868
I0916 11:14:17.838835 20069 solver.cpp:258]     Train net output #0: loss = 0.0032487 (* 1 = 0.0032487 loss)
I0916 11:14:17.838853 20069 sgd_solver.cpp:112] Iteration 6900, lr = 0.0067466
I0916 11:14:18.040395 20069 solver.cpp:347] Iteration 7000, Testing net (#0)
I0916 11:14:18.148777 20094 data_layer.cpp:73] Restarting data prefetching from start.
I0916 11:14:18.149569 20069 solver.cpp:414]     Test net output #0: accuracy = 0.9907
I0916 11:14:18.149610 20069 solver.cpp:414]     Test net output #1: loss = 0.0280258 (* 1 = 0.0280258 loss)
I0916 11:14:18.151465 20069 solver.cpp:239] Iteration 7000 (319.882 iter/s, 0.312615s/100 iters), loss = 0.00603334
I0916 11:14:18.151525 20069 solver.cpp:258]     Train net output #0: loss = 0.00603336 (* 1 = 0.00603336 loss)
I0916 11:14:18.151545 20069 sgd_solver.cpp:112] Iteration 7000, lr = 0.00671681
I0916 11:14:18.349222 20069 solver.cpp:239] Iteration 7100 (505.85 iter/s, 0.197687s/100 iters), loss = 0.0105174
I0916 11:14:18.349274 20069 solver.cpp:258]     Train net output #0: loss = 0.0105175 (* 1 = 0.0105175 loss)
I0916 11:14:18.349288 20069 sgd_solver.cpp:112] Iteration 7100, lr = 0.00668733
I0916 11:14:18.550263 20069 solver.cpp:239] Iteration 7200 (497.573 iter/s, 0.200976s/100 iters), loss = 0.00709869
I0916 11:14:18.550318 20069 solver.cpp:258]     Train net output #0: loss = 0.0070987 (* 1 = 0.0070987 loss)
I0916 11:14:18.550333 20069 sgd_solver.cpp:112] Iteration 7200, lr = 0.00665815
I0916 11:14:18.756577 20069 solver.cpp:239] Iteration 7300 (484.861 iter/s, 0.206245s/100 iters), loss = 0.0175614
I0916 11:14:18.756637 20069 solver.cpp:258]     Train net output #0: loss = 0.0175614 (* 1 = 0.0175614 loss)
I0916 11:14:18.756654 20069 sgd_solver.cpp:112] Iteration 7300, lr = 0.00662927
I0916 11:14:18.962849 20069 solver.cpp:239] Iteration 7400 (484.969 iter/s, 0.206199s/100 iters), loss = 0.00308368
I0916 11:14:18.962906 20069 solver.cpp:258]     Train net output #0: loss = 0.00308369 (* 1 = 0.00308369 loss)
I0916 11:14:18.962924 20069 sgd_solver.cpp:112] Iteration 7400, lr = 0.00660067
I0916 11:14:19.161937 20093 data_layer.cpp:73] Restarting data prefetching from start.
I0916 11:14:19.169123 20069 solver.cpp:347] Iteration 7500, Testing net (#0)
I0916 11:14:19.278120 20094 data_layer.cpp:73] Restarting data prefetching from start.
I0916 11:14:19.278905 20069 solver.cpp:414]     Test net output #0: accuracy = 0.9903
I0916 11:14:19.278944 20069 solver.cpp:414]     Test net output #1: loss = 0.03102 (* 1 = 0.03102 loss)
I0916 11:14:19.280875 20069 solver.cpp:239] Iteration 7500 (314.51 iter/s, 0.317955s/100 iters), loss = 0.00102793
I0916 11:14:19.280918 20069 solver.cpp:258]     Train net output #0: loss = 0.00102794 (* 1 = 0.00102794 loss)
I0916 11:14:19.280930 20069 sgd_solver.cpp:112] Iteration 7500, lr = 0.00657236
I0916 11:14:19.480131 20069 solver.cpp:239] Iteration 7600 (502.011 iter/s, 0.199199s/100 iters), loss = 0.00761624
I0916 11:14:19.480201 20069 solver.cpp:258]     Train net output #0: loss = 0.00761626 (* 1 = 0.00761626 loss)
I0916 11:14:19.480221 20069 sgd_solver.cpp:112] Iteration 7600, lr = 0.00654433
I0916 11:14:19.693158 20069 solver.cpp:239] Iteration 7700 (469.6 iter/s, 0.212947s/100 iters), loss = 0.0239126
I0916 11:14:19.693223 20069 solver.cpp:258]     Train net output #0: loss = 0.0239126 (* 1 = 0.0239126 loss)
I0916 11:14:19.693245 20069 sgd_solver.cpp:112] Iteration 7700, lr = 0.00651658
I0916 11:14:19.901561 20069 solver.cpp:239] Iteration 7800 (480.011 iter/s, 0.208328s/100 iters), loss = 0.00245418
I0916 11:14:19.901623 20069 solver.cpp:258]     Train net output #0: loss = 0.00245419 (* 1 = 0.00245419 loss)
I0916 11:14:19.901638 20069 sgd_solver.cpp:112] Iteration 7800, lr = 0.00648911
I0916 11:14:20.109514 20069 solver.cpp:239] Iteration 7900 (481.042 iter/s, 0.207882s/100 iters), loss = 0.00337548
I0916 11:14:20.109617 20069 solver.cpp:258]     Train net output #0: loss = 0.0033755 (* 1 = 0.0033755 loss)
I0916 11:14:20.109639 20069 sgd_solver.cpp:112] Iteration 7900, lr = 0.0064619
I0916 11:14:20.315078 20069 solver.cpp:347] Iteration 8000, Testing net (#0)
I0916 11:14:20.422358 20094 data_layer.cpp:73] Restarting data prefetching from start.
I0916 11:14:20.423147 20069 solver.cpp:414]     Test net output #0: accuracy = 0.991
I0916 11:14:20.423189 20069 solver.cpp:414]     Test net output #1: loss = 0.0289546 (* 1 = 0.0289546 loss)
I0916 11:14:20.425290 20069 solver.cpp:239] Iteration 8000 (316.8 iter/s, 0.315657s/100 iters), loss = 0.00319943
I0916 11:14:20.425356 20069 solver.cpp:258]     Train net output #0: loss = 0.00319945 (* 1 = 0.00319945 loss)
I0916 11:14:20.425374 20069 sgd_solver.cpp:112] Iteration 8000, lr = 0.00643496
I0916 11:14:20.632995 20069 solver.cpp:239] Iteration 8100 (481.629 iter/s, 0.207629s/100 iters), loss = 0.0140026
I0916 11:14:20.633051 20069 solver.cpp:258]     Train net output #0: loss = 0.0140026 (* 1 = 0.0140026 loss)
I0916 11:14:20.633067 20069 sgd_solver.cpp:112] Iteration 8100, lr = 0.00640827
I0916 11:14:20.837589 20069 solver.cpp:239] Iteration 8200 (488.939 iter/s, 0.204524s/100 iters), loss = 0.00604289
I0916 11:14:20.837649 20069 solver.cpp:258]     Train net output #0: loss = 0.00604291 (* 1 = 0.00604291 loss)
I0916 11:14:20.837666 20069 sgd_solver.cpp:112] Iteration 8200, lr = 0.00638185
I0916 11:14:21.046576 20069 solver.cpp:239] Iteration 8300 (478.667 iter/s, 0.208914s/100 iters), loss = 0.0214689
I0916 11:14:21.046634 20069 solver.cpp:258]     Train net output #0: loss = 0.0214689 (* 1 = 0.0214689 loss)
I0916 11:14:21.046651 20069 sgd_solver.cpp:112] Iteration 8300, lr = 0.00635567
I0916 11:14:21.253724 20069 solver.cpp:239] Iteration 8400 (482.908 iter/s, 0.207079s/100 iters), loss = 0.00671176
I0916 11:14:21.253782 20069 solver.cpp:258]     Train net output #0: loss = 0.00671179 (* 1 = 0.00671179 loss)
I0916 11:14:21.253798 20069 sgd_solver.cpp:112] Iteration 8400, lr = 0.00632975
I0916 11:14:21.322952 20093 data_layer.cpp:73] Restarting data prefetching from start.
I0916 11:14:21.458909 20069 solver.cpp:347] Iteration 8500, Testing net (#0)
I0916 11:14:21.568796 20094 data_layer.cpp:73] Restarting data prefetching from start.
I0916 11:14:21.569550 20069 solver.cpp:414]     Test net output #0: accuracy = 0.9912
I0916 11:14:21.569581 20069 solver.cpp:414]     Test net output #1: loss = 0.0294384 (* 1 = 0.0294384 loss)
I0916 11:14:21.571470 20069 solver.cpp:239] Iteration 8500 (314.787 iter/s, 0.317675s/100 iters), loss = 0.00746528
I0916 11:14:21.571508 20069 solver.cpp:258]     Train net output #0: loss = 0.0074653 (* 1 = 0.0074653 loss)
I0916 11:14:21.571521 20069 sgd_solver.cpp:112] Iteration 8500, lr = 0.00630407
I0916 11:14:21.768465 20069 solver.cpp:239] Iteration 8600 (507.762 iter/s, 0.196943s/100 iters), loss = 0.00149618
I0916 11:14:21.768525 20069 solver.cpp:258]     Train net output #0: loss = 0.0014962 (* 1 = 0.0014962 loss)
I0916 11:14:21.768546 20069 sgd_solver.cpp:112] Iteration 8600, lr = 0.00627864
I0916 11:14:21.972366 20069 solver.cpp:239] Iteration 8700 (490.606 iter/s, 0.20383s/100 iters), loss = 0.00364517
I0916 11:14:21.972450 20069 solver.cpp:258]     Train net output #0: loss = 0.00364519 (* 1 = 0.00364519 loss)
I0916 11:14:21.972470 20069 sgd_solver.cpp:112] Iteration 8700, lr = 0.00625344
I0916 11:14:22.178491 20069 solver.cpp:239] Iteration 8800 (485.363 iter/s, 0.206032s/100 iters), loss = 0.0012195
I0916 11:14:22.178553 20069 solver.cpp:258]     Train net output #0: loss = 0.00121951 (* 1 = 0.00121951 loss)
I0916 11:14:22.178570 20069 sgd_solver.cpp:112] Iteration 8800, lr = 0.00622847
I0916 11:14:22.386806 20069 solver.cpp:239] Iteration 8900 (480.22 iter/s, 0.208238s/100 iters), loss = 0.0004327
I0916 11:14:22.386868 20069 solver.cpp:258]     Train net output #0: loss = 0.000432724 (* 1 = 0.000432724 loss)
I0916 11:14:22.386885 20069 sgd_solver.cpp:112] Iteration 8900, lr = 0.00620374
I0916 11:14:22.592888 20069 solver.cpp:347] Iteration 9000, Testing net (#0)
I0916 11:14:22.701798 20094 data_layer.cpp:73] Restarting data prefetching from start.
I0916 11:14:22.702610 20069 solver.cpp:414]     Test net output #0: accuracy = 0.9907
I0916 11:14:22.702656 20069 solver.cpp:414]     Test net output #1: loss = 0.0297244 (* 1 = 0.0297244 loss)
I0916 11:14:22.704602 20069 solver.cpp:239] Iteration 9000 (314.745 iter/s, 0.317718s/100 iters), loss = 0.01822
I0916 11:14:22.704663 20069 solver.cpp:258]     Train net output #0: loss = 0.01822 (* 1 = 0.01822 loss)
I0916 11:14:22.704685 20069 sgd_solver.cpp:112] Iteration 9000, lr = 0.00617924
I0916 11:14:22.911779 20069 solver.cpp:239] Iteration 9100 (482.849 iter/s, 0.207104s/100 iters), loss = 0.00928011
I0916 11:14:22.911840 20069 solver.cpp:258]     Train net output #0: loss = 0.00928012 (* 1 = 0.00928012 loss)
I0916 11:14:22.911856 20069 sgd_solver.cpp:112] Iteration 9100, lr = 0.00615496
I0916 11:14:23.122339 20069 solver.cpp:239] Iteration 9200 (475.091 iter/s, 0.210486s/100 iters), loss = 0.00208687
I0916 11:14:23.122397 20069 solver.cpp:258]     Train net output #0: loss = 0.0020869 (* 1 = 0.0020869 loss)
I0916 11:14:23.122413 20069 sgd_solver.cpp:112] Iteration 9200, lr = 0.0061309
I0916 11:14:23.329584 20069 solver.cpp:239] Iteration 9300 (482.687 iter/s, 0.207174s/100 iters), loss = 0.00691591
I0916 11:14:23.329643 20069 solver.cpp:258]     Train net output #0: loss = 0.00691594 (* 1 = 0.00691594 loss)
I0916 11:14:23.329658 20069 sgd_solver.cpp:112] Iteration 9300, lr = 0.00610706
I0916 11:14:23.475065 20093 data_layer.cpp:73] Restarting data prefetching from start.
I0916 11:14:23.536273 20069 solver.cpp:239] Iteration 9400 (483.987 iter/s, 0.206617s/100 iters), loss = 0.0161231
I0916 11:14:23.536334 20069 solver.cpp:258]     Train net output #0: loss = 0.0161231 (* 1 = 0.0161231 loss)
I0916 11:14:23.536350 20069 sgd_solver.cpp:112] Iteration 9400, lr = 0.00608343
I0916 11:14:23.741714 20069 solver.cpp:347] Iteration 9500, Testing net (#0)
I0916 11:14:23.853230 20094 data_layer.cpp:73] Restarting data prefetching from start.
I0916 11:14:23.854034 20069 solver.cpp:414]     Test net output #0: accuracy = 0.9891
I0916 11:14:23.854080 20069 solver.cpp:414]     Test net output #1: loss = 0.0330695 (* 1 = 0.0330695 loss)
I0916 11:14:23.855959 20069 solver.cpp:239] Iteration 9500 (312.882 iter/s, 0.319609s/100 iters), loss = 0.0051555
I0916 11:14:23.856004 20069 solver.cpp:258]     Train net output #0: loss = 0.00515554 (* 1 = 0.00515554 loss)
I0916 11:14:23.856020 20069 sgd_solver.cpp:112] Iteration 9500, lr = 0.00606002
I0916 11:14:24.059878 20069 solver.cpp:239] Iteration 9600 (490.531 iter/s, 0.203861s/100 iters), loss = 0.00166781
I0916 11:14:24.059936 20069 solver.cpp:258]     Train net output #0: loss = 0.00166785 (* 1 = 0.00166785 loss)
I0916 11:14:24.059949 20069 sgd_solver.cpp:112] Iteration 9600, lr = 0.00603682
I0916 11:14:24.267793 20069 solver.cpp:239] Iteration 9700 (481.128 iter/s, 0.207845s/100 iters), loss = 0.0020899
I0916 11:14:24.267850 20069 solver.cpp:258]     Train net output #0: loss = 0.00208994 (* 1 = 0.00208994 loss)
I0916 11:14:24.267865 20069 sgd_solver.cpp:112] Iteration 9700, lr = 0.00601382
I0916 11:14:24.475281 20069 solver.cpp:239] Iteration 9800 (482.12 iter/s, 0.207417s/100 iters), loss = 0.010022
I0916 11:14:24.475337 20069 solver.cpp:258]     Train net output #0: loss = 0.010022 (* 1 = 0.010022 loss)
I0916 11:14:24.475353 20069 sgd_solver.cpp:112] Iteration 9800, lr = 0.00599102
I0916 11:14:24.683315 20069 solver.cpp:239] Iteration 9900 (480.852 iter/s, 0.207964s/100 iters), loss = 0.0041218
I0916 11:14:24.683379 20069 solver.cpp:258]     Train net output #0: loss = 0.00412184 (* 1 = 0.00412184 loss)
I0916 11:14:24.683399 20069 sgd_solver.cpp:112] Iteration 9900, lr = 0.00596843
I0916 11:14:24.888979 20069 solver.cpp:464] Snapshotting to binary proto file ../../examples/mnist/lenet_iter_10000.caffemodel
I0916 11:14:24.901504 20069 sgd_solver.cpp:284] Snapshotting solver state to binary proto file ../../examples/mnist/lenet_iter_10000.solverstate
I0916 11:14:24.908828 20069 solver.cpp:327] Iteration 10000, loss = 0.0037788
I0916 11:14:24.908890 20069 solver.cpp:347] Iteration 10000, Testing net (#0)
I0916 11:14:25.016538 20094 data_layer.cpp:73] Restarting data prefetching from start.
I0916 11:14:25.017334 20069 solver.cpp:414]     Test net output #0: accuracy = 0.9915
I0916 11:14:25.017370 20069 solver.cpp:414]     Test net output #1: loss = 0.0275895 (* 1 = 0.0275895 loss)
I0916 11:14:25.017380 20069 solver.cpp:332] Optimization Done.
I0916 11:14:25.017388 20069 caffe.cpp:250] Optimization Done.
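The log above reports a test accuracy every 500 iterations (lines matching `Test net output #0: accuracy = …`). To pull those numbers out of a saved log for plotting or comparison, a small parser along these lines can help — a sketch, not part of the original post, matching the `solver.cpp` log format shown above:

```python
import re

# Pattern for the line announcing a test pass, e.g.
#   "... solver.cpp:347] Iteration 3000, Testing net (#0)"
TEST_START = re.compile(r"Iteration (\d+), Testing")
# Pattern for the accuracy line that follows, e.g.
#   "... solver.cpp:414]     Test net output #0: accuracy = 0.9878"
TEST_ACC = re.compile(r"Test net output #0: accuracy = ([\d.]+)")

def parse_accuracies(log_text):
    """Return a list of (iteration, accuracy) pairs from a Caffe log."""
    results = []
    current_iter = None
    for line in log_text.splitlines():
        m = TEST_START.search(line)
        if m:
            current_iter = int(m.group(1))
            continue
        m = TEST_ACC.search(line)
        if m and current_iter is not None:
            results.append((current_iter, float(m.group(1))))
            current_iter = None
    return results

sample = """\
I0916 11:14:08.909173 20069 solver.cpp:347] Iteration 3000, Testing net (#0)
I0916 11:14:09.018358 20069 solver.cpp:414]     Test net output #0: accuracy = 0.9878
"""
print(parse_accuracies(sample))  # -> [(3000, 0.9878)]
```

Feeding the whole training log into `parse_accuracies` gives the accuracy curve from 0.9878 at iteration 3000 up to 0.9915 at iteration 10000.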

At this point, the Caffe learning environment is fully installed — good luck with your studies!

Appendix: other useful blog posts for reference:

Tutorial on recompiling Caffe after modifying part of its configuration

Recompiling Caffe after changes

Building Caffe on Linux

Building and installing Caffe with Anaconda on Ubuntu 16.04
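The recompilation workflow those posts describe boils down to a few `make` targets. A minimal sketch, assuming a standard Makefile-based build — `CAFFE_ROOT` below is an assumption, adjust it to your own checkout:

```shell
#!/bin/sh
# Rebuild Caffe after editing Makefile.config.
# CAFFE_ROOT is a hypothetical path; point it at your source tree.
CAFFE_ROOT="$HOME/caffe"
if [ -d "$CAFFE_ROOT" ]; then
    cd "$CAFFE_ROOT" || exit 1
    make clean           # discard objects built against the old config
    make all -j8         # rebuild libcaffe and the tools
    make test -j8        # rebuild the unit tests
    make runtest -j8     # re-run the tests to confirm the new build
else
    echo "Caffe source tree not found at $CAFFE_ROOT"
fi
```

`make clean` is the step people most often skip; without it, objects compiled against the old `Makefile.config` can linger and cause confusing link errors.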

Pitfalls encountered during installation:

/usr/lib/x86_64-linux-gnu/libopencv_highgui.so.2.4.9: undefined reference to `TIFFIsTiled@LIBTIFF_4.0'

On resolving: libopencv_core.so.2.4, needed by /../libcv_bridge.so, may conflict with libopencv_core.so.3.1

A hard-won summary of Caffe installation: OpenCV issues and Ubuntu version issues

Ubuntu 16.04 LTS + OpenCV 3.3.0: fixing the missing OpenCV VideoCapture interface when building Caffe

Summary of Caffe build errors and their fixes
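Several of the pitfalls above stem from mixing OpenCV 2.4 and 3.x shared libraries at link time. One way to check which versions a binary actually pulls in is `ldd` — a sketch, where `BIN` is a hypothetical path to adjust for your build output:

```shell
#!/bin/sh
# List the OpenCV shared libraries a binary links against, to spot a
# 2.4/3.x mix. BIN is a hypothetical path; adjust to your build output.
BIN="./build/tools/caffe"
if [ -x "$BIN" ]; then
    ldd "$BIN" | grep -i opencv
else
    echo "binary not found: $BIN"
fi
```

If the output lists both `libopencv_core.so.2.4` and `libopencv_core.so.3.1`, two OpenCV installations are being mixed, and the build configuration (or `LD_LIBRARY_PATH`) needs to be pinned to one of them.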
