【Caffe】What can the default training log tell us?

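Below is the full console log from training the LeNet example on MNIST with the caffe command-line tool (one TITAN X GPU, 10,000 iterations). The // comments mark the five kinds of information the default log gives us: (1) the execution mode, (2) the parsed solver hyperparameters, (3) the construction of the training and test networks, (4) the training progress, and (5) the snapshots.
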
I0328 19:29:51.803539  2532 caffe.cpp:217] Using GPUs 0  // 1. Execution mode: CPU or GPU
I0328 19:29:51.833237  2532 caffe.cpp:222] GPU 0: TITAN X (Pascal)
I0328 19:29:54.729840  2532 solver.cpp:48] Initializing solver from parameters:  // 2. The solver hyperparameters after parsing
test_iter: 100
test_interval: 500
base_lr: 0.01
display: 100
max_iter: 10000
lr_policy: "inv"
gamma: 0.0001
power: 0.75
momentum: 0.9
weight_decay: 0.0005
snapshot: 5000
snapshot_prefix: "examples/mnist/lenet"
solver_mode: GPU
device_id: 0
net: "examples/mnist/lenet_train_test.prototxt"
train_state {
  level: 0
  stage: ""
}
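
A note on these hyperparameters: test_iter (100) times the test batch size (100, see the test data layer below) covers the full 10,000-image MNIST test set, and a test pass runs every test_interval (500) iterations. With lr_policy: "inv", Caffe computes the learning rate as lr = base_lr * (1 + gamma * iter)^(-power). A minimal sketch (plain Python, no Caffe needed) that reproduces the lr values printed by sgd_solver.cpp later in this log:

# "inv" learning-rate policy: lr = base_lr * (1 + gamma * iter) ** (-power)
base_lr, gamma, power = 0.01, 0.0001, 0.75

def inv_lr(iteration):
    return base_lr * (1.0 + gamma * iteration) ** (-power)

for it in (0, 100, 200, 500):
    print(it, round(inv_lr(it), 8))
# 0 -> 0.01, 100 -> 0.00992565, 200 -> 0.00985258, 500 -> 0.00964069,
# matching the "Iteration N, lr = ..." lines below
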
I0328 19:29:54.729990  2532 solver.cpp:91] Creating training net from net file: examples/mnist/lenet_train_test.prototxt  // 3. Parse the network prototxt and create the training network
I0328 19:29:54.730259  2532 net.cpp:322] The NetState phase (0) differed from the phase (1) specified by a rule in layer mnist
I0328 19:29:54.730273  2532 net.cpp:322] The NetState phase (0) differed from the phase (1) specified by a rule in layer accuracy // 3.1 Where the training and test networks differ: layers whose include phase does not match the current phase (here the TEST-only data and accuracy layers) are dropped
I0328 19:29:54.730342  2532 net.cpp:58] Initializing net from parameters: 
name: "LeNet"
state {
  phase: TRAIN
  level: 0
  stage: ""
}
layer {
  name: "mnist"
  type: "Data"
  top: "data"
  top: "label"
  include {
    phase: TRAIN
  }
  transform_param {
    scale: 0.00390625
  }
  data_param {
    source: "examples/mnist/mnist_train_lmdb"
    batch_size: 64
    backend: LMDB
  }
}
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  convolution_param {
    num_output: 20
    kernel_size: 5
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "pool1"
  type: "Pooling"
  bottom: "conv1"
  top: "pool1"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layer {
  name: "conv2"
  type: "Convolution"
  bottom: "pool1"
  top: "conv2"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  convolution_param {
    num_output: 50
    kernel_size: 5
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "pool2"
  type: "Pooling"
  bottom: "conv2"
  top: "pool2"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layer {
  name: "ip1"
  type: "InnerProduct"
  bottom: "pool2"
  top: "ip1"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 500
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "relu1"
  type: "ReLU"
  bottom: "ip1"
  top: "ip1"
}
layer {
  name: "ip2"
  type: "InnerProduct"
  bottom: "ip1"
  top: "ip2"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 10
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "loss"
  type: "SoftmaxWithLoss"
  bottom: "ip2"
  bottom: "label"
  top: "loss"
}
I0328 19:29:54.730403  2532 layer_factory.hpp:77] Creating layer mnist // Layers are created one by one, in prototxt order
I0328 19:29:54.730799  2532 net.cpp:100] Creating Layer mnist 
I0328 19:29:54.730810  2532 net.cpp:408] mnist -> data  
I0328 19:29:54.730834  2532 net.cpp:408] mnist -> label
I0328 19:29:54.797408  2538 db_lmdb.cpp:35] Opened lmdb examples/mnist/mnist_train_lmdb  // Open the training LMDB
I0328 19:29:54.826562  2532 data_layer.cpp:41] output data size: 64,1,28,28   
I0328 19:29:54.831125  2532 net.cpp:150] Setting up mnist
I0328 19:29:54.831168  2532 net.cpp:157] Top shape: 64 1 28 28 (50176) // Shape of the data top blob: N C H W (the number in parentheses is N*C*H*W)
I0328 19:29:54.831176  2532 net.cpp:157] Top shape: 64 (64)    // Shape of the label top blob
I0328 19:29:54.831181  2532 net.cpp:165] Memory required for data: 200960 // Running memory total for top blobs, accumulated layer by layer (how is it computed? Element count × 4 bytes per float; see the sketch after network initialization below)
I0328 19:29:54.831193  2532 layer_factory.hpp:77] Creating layer conv1
I0328 19:29:54.831220  2532 net.cpp:100] Creating Layer conv1
I0328 19:29:54.831228  2532 net.cpp:434] conv1 <- data  // bottom (input) blob
I0328 19:29:54.831243  2532 net.cpp:408] conv1 -> conv1 // top (output) blob
I0328 19:29:56.820606  2532 net.cpp:150] Setting up conv1
I0328 19:29:56.820641  2532 net.cpp:157] Top shape: 64 20 24 24 (737280)  // Output shape
I0328 19:29:56.820646  2532 net.cpp:165] Memory required for data: 3150080
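
Every "Top shape" line can be derived from the layer parameters above. For a convolution without padding the output spatial size is (input - kernel_size) / stride + 1, and the 2x2/stride-2 MAX pooling halves it. A quick sketch tracing the spatial sizes through the net (plain Python; Caffe's pooling actually rounds up when the stride does not divide evenly, which makes no difference here):

def out_size(size, kernel, stride=1, pad=0):
    return (size + 2 * pad - kernel) // stride + 1

h = out_size(28, 5)      # conv1: 28 -> 24  => Top shape 64 20 24 24
h = out_size(h, 2, 2)    # pool1: 24 -> 12
h = out_size(h, 5)       # conv2: 12 -> 8
h = out_size(h, 2, 2)    # pool2:  8 -> 4   => ip1 sees 50*4*4 = 800 inputs per image
print(h)                 # 4
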
I0328 19:29:56.820662  2532 layer_factory.hpp:77] Creating layer pool1
I0328 19:29:56.820677  2532 net.cpp:100] Creating Layer pool1
I0328 19:29:56.820700  2532 net.cpp:434] pool1 <- conv1
I0328 19:29:56.820708  2532 net.cpp:408] pool1 -> pool1
I0328 19:29:56.820751  2532 net.cpp:150] Setting up pool1
I0328 19:29:56.820760  2532 net.cpp:157] Top shape: 64 20 12 12 (184320)
I0328 19:29:56.820762  2532 net.cpp:165] Memory required for data: 3887360
I0328 19:29:56.820765  2532 layer_factory.hpp:77] Creating layer conv2
I0328 19:29:56.820776  2532 net.cpp:100] Creating Layer conv2
I0328 19:29:56.820780  2532 net.cpp:434] conv2 <- pool1
I0328 19:29:56.820786  2532 net.cpp:408] conv2 -> conv2
I0328 19:29:56.822167  2532 net.cpp:150] Setting up conv2
I0328 19:29:56.822191  2532 net.cpp:157] Top shape: 64 50 8 8 (204800)
I0328 19:29:56.822197  2532 net.cpp:165] Memory required for data: 4706560
I0328 19:29:56.822207  2532 layer_factory.hpp:77] Creating layer pool2
I0328 19:29:56.822223  2532 net.cpp:100] Creating Layer pool2
I0328 19:29:56.822227  2532 net.cpp:434] pool2 <- conv2
I0328 19:29:56.822232  2532 net.cpp:408] pool2 -> pool2
I0328 19:29:56.822294  2532 net.cpp:150] Setting up pool2
I0328 19:29:56.822302  2532 net.cpp:157] Top shape: 64 50 4 4 (51200)
I0328 19:29:56.822305  2532 net.cpp:165] Memory required for data: 4911360
I0328 19:29:56.822307  2532 layer_factory.hpp:77] Creating layer ip1
I0328 19:29:56.822320  2532 net.cpp:100] Creating Layer ip1
I0328 19:29:56.822325  2532 net.cpp:434] ip1 <- pool2
I0328 19:29:56.822335  2532 net.cpp:408] ip1 -> ip1
I0328 19:29:56.825884  2532 net.cpp:150] Setting up ip1
I0328 19:29:56.825932  2532 net.cpp:157] Top shape: 64 500 (32000)
I0328 19:29:56.825935  2532 net.cpp:165] Memory required for data: 5039360
I0328 19:29:56.825947  2532 layer_factory.hpp:77] Creating layer relu1
I0328 19:29:56.825956  2532 net.cpp:100] Creating Layer relu1
I0328 19:29:56.825960  2532 net.cpp:434] relu1 <- ip1
I0328 19:29:56.825968  2532 net.cpp:395] relu1 -> ip1 (in-place)
I0328 19:29:56.826165  2532 net.cpp:150] Setting up relu1
I0328 19:29:56.826175  2532 net.cpp:157] Top shape: 64 500 (32000)
I0328 19:29:56.826179  2532 net.cpp:165] Memory required for data: 5167360
I0328 19:29:56.826181  2532 layer_factory.hpp:77] Creating layer ip2
I0328 19:29:56.826189  2532 net.cpp:100] Creating Layer ip2
I0328 19:29:56.826195  2532 net.cpp:434] ip2 <- ip1
I0328 19:29:56.826201  2532 net.cpp:408] ip2 -> ip2
I0328 19:29:56.827221  2532 net.cpp:150] Setting up ip2
I0328 19:29:56.827252  2532 net.cpp:157] Top shape: 64 10 (640)
I0328 19:29:56.827256  2532 net.cpp:165] Memory required for data: 5169920
I0328 19:29:56.827262  2532 layer_factory.hpp:77] Creating layer loss
I0328 19:29:56.827271  2532 net.cpp:100] Creating Layer loss
I0328 19:29:56.827275  2532 net.cpp:434] loss <- ip2
I0328 19:29:56.827278  2532 net.cpp:434] loss <- label
I0328 19:29:56.827283  2532 net.cpp:408] loss -> loss
I0328 19:29:56.827303  2532 layer_factory.hpp:77] Creating layer loss
I0328 19:29:56.827914  2532 net.cpp:150] Setting up loss
I0328 19:29:56.827927  2532 net.cpp:157] Top shape: (1)
I0328 19:29:56.827942  2532 net.cpp:160]     with loss weight 1
I0328 19:29:56.827957  2532 net.cpp:165] Memory required for data: 5169924 // Total top-blob memory for the training net (see the sketch below)
I0328 19:29:56.827960  2532 net.cpp:226] loss needs backward computation.  // Lists which layers need backward computation
I0328 19:29:56.827965  2532 net.cpp:226] ip2 needs backward computation.
I0328 19:29:56.827966  2532 net.cpp:226] relu1 needs backward computation.
I0328 19:29:56.827970  2532 net.cpp:226] ip1 needs backward computation.
I0328 19:29:56.827972  2532 net.cpp:226] pool2 needs backward computation.
I0328 19:29:56.827975  2532 net.cpp:226] conv2 needs backward computation.
I0328 19:29:56.827977  2532 net.cpp:226] pool1 needs backward computation.
I0328 19:29:56.827980  2532 net.cpp:226] conv1 needs backward computation.
I0328 19:29:56.827983  2532 net.cpp:228] mnist does not need backward computation.
I0328 19:29:56.827986  2532 net.cpp:270] This network produces output loss
I0328 19:29:56.827994  2532 net.cpp:283] Network initialization done.
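
The memory counter above is easy to reproduce: Caffe sums the element count of every top blob (including in-place tops such as relu1's) and multiplies by sizeof(float) = 4 bytes; parameter blobs are not counted. A sketch that reproduces the training net's running totals:

# Element counts from the "Top shape" lines of the training net, in order
top_counts = [
    50176, 64,   # mnist: data, label -> 50240 * 4 = 200960 bytes
    737280,      # conv1
    184320,      # pool1
    204800,      # conv2
    51200,       # pool2
    32000,       # ip1
    32000,       # relu1 (in-place, but its top is still counted)
    640,         # ip2
    1,           # loss
]
total = 0
for count in top_counts:
    total += count * 4       # sizeof(float) == 4
    print(total)             # last line prints 5169924, matching the log
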
I0328 19:29:56.828243  2532 solver.cpp:181] Creating test net (#0) specified by net file: examples/mnist/lenet_train_test.prototxt // Create the test network
I0328 19:29:56.828287  2532 net.cpp:322] The NetState phase (1) differed from the phase (0) specified by a rule in layer mnist
I0328 19:29:56.828361  2532 net.cpp:58] Initializing net from parameters: 
name: "LeNet"
state {
  phase: TEST
}
layer {
  name: "mnist"
  type: "Data"
  top: "data"
  top: "label"
  include {
    phase: TEST
  }
  transform_param {
    scale: 0.00390625
  }
  data_param {
    source: "examples/mnist/mnist_test_lmdb"
    batch_size: 100
    backend: LMDB
  }
}
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  convolution_param {
    num_output: 20
    kernel_size: 5
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "pool1"
  type: "Pooling"
  bottom: "conv1"
  top: "pool1"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layer {
  name: "conv2"
  type: "Convolution"
  bottom: "pool1"
  top: "conv2"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  convolution_param {
    num_output: 50
    kernel_size: 5
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "pool2"
  type: "Pooling"
  bottom: "conv2"
  top: "pool2"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layer {
  name: "ip1"
  type: "InnerProduct"
  bottom: "pool2"
  top: "ip1"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 500
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "relu1"
  type: "ReLU"
  bottom: "ip1"
  top: "ip1"
}
layer {
  name: "ip2"
  type: "InnerProduct"
  bottom: "ip1"
  top: "ip2"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 10
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "accuracy"
  type: "Accuracy"
  bottom: "ip2"
  bottom: "label"
  top: "accuracy"
  include {
    phase: TEST
  }
}
layer {
  name: "loss"
  type: "SoftmaxWithLoss"
  bottom: "ip2"
  bottom: "label"
  top: "loss"
}
I0328 19:29:56.828421  2532 layer_factory.hpp:77] Creating layer mnist
I0328 19:29:56.828516  2532 net.cpp:100] Creating Layer mnist
I0328 19:29:56.828526  2532 net.cpp:408] mnist -> data
I0328 19:29:56.828533  2532 net.cpp:408] mnist -> label
I0328 19:29:56.892670  2541 db_lmdb.cpp:35] Opened lmdb examples/mnist/mnist_test_lmdb
I0328 19:29:56.892979  2532 data_layer.cpp:41] output data size: 100,1,28,28
I0328 19:29:56.940191  2532 net.cpp:150] Setting up mnist
I0328 19:29:56.940234  2532 net.cpp:157] Top shape: 100 1 28 28 (78400)
I0328 19:29:56.940244  2532 net.cpp:157] Top shape: 100 (100)
I0328 19:29:56.940249  2532 net.cpp:165] Memory required for data: 314000
I0328 19:29:56.940258  2532 layer_factory.hpp:77] Creating layer label_mnist_1_split
I0328 19:29:56.940282  2532 net.cpp:100] Creating Layer label_mnist_1_split
I0328 19:29:56.940292  2532 net.cpp:434] label_mnist_1_split <- label
I0328 19:29:56.940301  2532 net.cpp:408] label_mnist_1_split -> label_mnist_1_split_0
I0328 19:29:56.940315  2532 net.cpp:408] label_mnist_1_split -> label_mnist_1_split_1
I0328 19:29:56.940464  2532 net.cpp:150] Setting up label_mnist_1_split
I0328 19:29:56.940479  2532 net.cpp:157] Top shape: 100 (100)
I0328 19:29:56.940484  2532 net.cpp:157] Top shape: 100 (100)
I0328 19:29:56.940487  2532 net.cpp:165] Memory required for data: 314800
I0328 19:29:56.940491  2532 layer_factory.hpp:77] Creating layer conv1
I0328 19:29:56.940510  2532 net.cpp:100] Creating Layer conv1
I0328 19:29:56.940517  2532 net.cpp:434] conv1 <- data
I0328 19:29:56.940526  2532 net.cpp:408] conv1 -> conv1
I0328 19:29:56.941525  2532 net.cpp:150] Setting up conv1
I0328 19:29:56.941543  2532 net.cpp:157] Top shape: 100 20 24 24 (1152000)
I0328 19:29:56.941550  2532 net.cpp:165] Memory required for data: 4922800
I0328 19:29:56.941563  2532 layer_factory.hpp:77] Creating layer pool1
I0328 19:29:56.941596  2532 net.cpp:100] Creating Layer pool1
I0328 19:29:56.941601  2532 net.cpp:434] pool1 <- conv1
I0328 19:29:56.941606  2532 net.cpp:408] pool1 -> pool1
I0328 19:29:56.941656  2532 net.cpp:150] Setting up pool1
I0328 19:29:56.941668  2532 net.cpp:157] Top shape: 100 20 12 12 (288000)
I0328 19:29:56.941671  2532 net.cpp:165] Memory required for data: 6074800
I0328 19:29:56.941675  2532 layer_factory.hpp:77] Creating layer conv2
I0328 19:29:56.941689  2532 net.cpp:100] Creating Layer conv2
I0328 19:29:56.941695  2532 net.cpp:434] conv2 <- pool1
I0328 19:29:56.941704  2532 net.cpp:408] conv2 -> conv2
I0328 19:29:56.943431  2532 net.cpp:150] Setting up conv2
I0328 19:29:56.943454  2532 net.cpp:157] Top shape: 100 50 8 8 (320000)
I0328 19:29:56.943472  2532 net.cpp:165] Memory required for data: 7354800
I0328 19:29:56.943490  2532 layer_factory.hpp:77] Creating layer pool2
I0328 19:29:56.943503  2532 net.cpp:100] Creating Layer pool2
I0328 19:29:56.943511  2532 net.cpp:434] pool2 <- conv2
I0328 19:29:56.943518  2532 net.cpp:408] pool2 -> pool2
I0328 19:29:56.943570  2532 net.cpp:150] Setting up pool2
I0328 19:29:56.943581  2532 net.cpp:157] Top shape: 100 50 4 4 (80000)
I0328 19:29:56.943585  2532 net.cpp:165] Memory required for data: 7674800
I0328 19:29:56.943608  2532 layer_factory.hpp:77] Creating layer ip1
I0328 19:29:56.943629  2532 net.cpp:100] Creating Layer ip1
I0328 19:29:56.943634  2532 net.cpp:434] ip1 <- pool2
I0328 19:29:56.943646  2532 net.cpp:408] ip1 -> ip1
I0328 19:29:56.947994  2532 net.cpp:150] Setting up ip1
I0328 19:29:56.948014  2532 net.cpp:157] Top shape: 100 500 (50000)
I0328 19:29:56.948021  2532 net.cpp:165] Memory required for data: 7874800
I0328 19:29:56.948032  2532 layer_factory.hpp:77] Creating layer relu1
I0328 19:29:56.948041  2532 net.cpp:100] Creating Layer relu1
I0328 19:29:56.948045  2532 net.cpp:434] relu1 <- ip1
I0328 19:29:56.948053  2532 net.cpp:395] relu1 -> ip1 (in-place)
I0328 19:29:56.948648  2532 net.cpp:150] Setting up relu1
I0328 19:29:56.948706  2532 net.cpp:157] Top shape: 100 500 (50000)
I0328 19:29:56.948719  2532 net.cpp:165] Memory required for data: 8074800
I0328 19:29:56.948724  2532 layer_factory.hpp:77] Creating layer ip2
I0328 19:29:56.948736  2532 net.cpp:100] Creating Layer ip2
I0328 19:29:56.948741  2532 net.cpp:434] ip2 <- ip1
I0328 19:29:56.948748  2532 net.cpp:408] ip2 -> ip2
I0328 19:29:56.948923  2532 net.cpp:150] Setting up ip2
I0328 19:29:56.948935  2532 net.cpp:157] Top shape: 100 10 (1000)
I0328 19:29:56.948938  2532 net.cpp:165] Memory required for data: 8078800
I0328 19:29:56.948945  2532 layer_factory.hpp:77] Creating layer ip2_ip2_0_split // An internal layer Caffe inserts automatically; it does not appear in the prototxt. Because ip2 feeds two consumers (accuracy and loss), a Split layer makes two copies of it. The same was done for label above (label_mnist_1_split).
I0328 19:29:56.948952  2532 net.cpp:100] Creating Layer ip2_ip2_0_split
I0328 19:29:56.948956  2532 net.cpp:434] ip2_ip2_0_split <- ip2
I0328 19:29:56.948963  2532 net.cpp:408] ip2_ip2_0_split -> ip2_ip2_0_split_0
I0328 19:29:56.948971  2532 net.cpp:408] ip2_ip2_0_split -> ip2_ip2_0_split_1
I0328 19:29:56.949012  2532 net.cpp:150] Setting up ip2_ip2_0_split
I0328 19:29:56.949020  2532 net.cpp:157] Top shape: 100 10 (1000)
I0328 19:29:56.949025  2532 net.cpp:157] Top shape: 100 10 (1000)
I0328 19:29:56.949029  2532 net.cpp:165] Memory required for data: 8086800
I0328 19:29:56.949033  2532 layer_factory.hpp:77] Creating layer accuracy
I0328 19:29:56.949040  2532 net.cpp:100] Creating Layer accuracy
I0328 19:29:56.949043  2532 net.cpp:434] accuracy <- ip2_ip2_0_split_0
I0328 19:29:56.949049  2532 net.cpp:434] accuracy <- label_mnist_1_split_0
I0328 19:29:56.949056  2532 net.cpp:408] accuracy -> accuracy
I0328 19:29:56.949064  2532 net.cpp:150] Setting up accuracy
I0328 19:29:56.949070  2532 net.cpp:157] Top shape: (1)
I0328 19:29:56.949074  2532 net.cpp:165] Memory required for data: 8086804
I0328 19:29:56.949077  2532 layer_factory.hpp:77] Creating layer loss
I0328 19:29:56.949084  2532 net.cpp:100] Creating Layer loss
I0328 19:29:56.949087  2532 net.cpp:434] loss <- ip2_ip2_0_split_1
I0328 19:29:56.949092  2532 net.cpp:434] loss <- label_mnist_1_split_1
I0328 19:29:56.949113  2532 net.cpp:408] loss -> loss
I0328 19:29:56.949123  2532 layer_factory.hpp:77] Creating layer loss
I0328 19:29:56.949414  2532 net.cpp:150] Setting up loss
I0328 19:29:56.949427  2532 net.cpp:157] Top shape: (1)
I0328 19:29:56.949431  2532 net.cpp:160]     with loss weight 1
I0328 19:29:56.949442  2532 net.cpp:165] Memory required for data: 8086808
I0328 19:29:56.949446  2532 net.cpp:226] loss needs backward computation.
I0328 19:29:56.949451  2532 net.cpp:228] accuracy does not need backward computation.
I0328 19:29:56.949456  2532 net.cpp:226] ip2_ip2_0_split needs backward computation.
I0328 19:29:56.949460  2532 net.cpp:226] ip2 needs backward computation.
I0328 19:29:56.949463  2532 net.cpp:226] relu1 needs backward computation.
I0328 19:29:56.949466  2532 net.cpp:226] ip1 needs backward computation.
I0328 19:29:56.949470  2532 net.cpp:226] pool2 needs backward computation.
I0328 19:29:56.949475  2532 net.cpp:226] conv2 needs backward computation.
I0328 19:29:56.949477  2532 net.cpp:226] pool1 needs backward computation.
I0328 19:29:56.949481  2532 net.cpp:226] conv1 needs backward computation.
I0328 19:29:56.949486  2532 net.cpp:228] label_mnist_1_split does not need backward computation.
I0328 19:29:56.949489  2532 net.cpp:228] mnist does not need backward computation.
I0328 19:29:56.949493  2532 net.cpp:270] This network produces output accuracy    // The outputs this network produces
I0328 19:29:56.949497  2532 net.cpp:270] This network produces output loss
I0328 19:29:56.949509  2532 net.cpp:283] Network initialization done.
I0328 19:29:56.949560  2532 solver.cpp:60] Solver scaffolding done.
I0328 19:29:56.949872  2532 caffe.cpp:251] Starting Optimization
I0328 19:29:56.949880  2532 solver.cpp:279] Solving LeNet
I0328 19:29:56.949884  2532 solver.cpp:280] Learning Rate Policy: inv
I0328 19:29:56.959205  2532 solver.cpp:337] Iteration 0, Testing net (#0)
I0328 19:29:56.988761  2532 blocking_queue.cpp:50] Data layer prefetch queue empty   // 4. Iterative training: the training loss is printed every display (100) iterations and test accuracy every test_interval (500) iterations. When training a model, watch how the loss evolves and adjust the optimization strategy accordingly. (The "prefetch queue empty" message itself means data loading briefly fell behind the GPU.)
I0328 19:29:57.231995  2532 solver.cpp:404]     Test net output #0: accuracy = 0.1154
I0328 19:29:57.232029  2532 solver.cpp:404]     Test net output #1: loss = 2.37092 (* 1 = 2.37092 loss)
I0328 19:29:57.240805  2532 solver.cpp:228] Iteration 0, loss = 2.39021
I0328 19:29:57.240828  2532 solver.cpp:244]     Train net output #0: loss = 2.39021 (* 1 = 2.39021 loss)
I0328 19:29:57.240839  2532 sgd_solver.cpp:106] Iteration 0, lr = 0.01
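
From here on the log settles into the training loop: a train loss every display (100) iterations and a test pass every test_interval (500). To analyze these curves it is easier to parse the log than to read it; Caffe ships tools/extra/parse_log.py for this, and a minimal regex sketch along the same lines (train.log is a placeholder for wherever you redirected caffe's stderr):

import re

train_iter, train_loss = [], []
test_iter, test_acc = [], []
cur_test_iter = None
with open('train.log') as f:          # placeholder path
    for line in f:
        m = re.search(r'Iteration (\d+), loss = ([\d.e+-]+)', line)
        if m:
            train_iter.append(int(m.group(1)))
            train_loss.append(float(m.group(2)))
        m = re.search(r'Iteration (\d+), Testing net', line)
        if m:
            cur_test_iter = int(m.group(1))
        m = re.search(r'Test net output #0: accuracy = ([\d.]+)', line)
        if m and cur_test_iter is not None:
            test_iter.append(cur_test_iter)
            test_acc.append(float(m.group(1)))

print(max(test_acc))  # best test accuracy seen: 0.991, at iteration 6000 in this run
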
I0328 19:29:57.475751  2532 solver.cpp:228] Iteration 100, loss = 0.206771
I0328 19:29:57.475790  2532 solver.cpp:244]     Train net output #0: loss = 0.206771 (* 1 = 0.206771 loss)
I0328 19:29:57.475797  2532 sgd_solver.cpp:106] Iteration 100, lr = 0.00992565
I0328 19:29:57.745872  2532 solver.cpp:228] Iteration 200, loss = 0.153808
I0328 19:29:57.745905  2532 solver.cpp:244]     Train net output #0: loss = 0.153808 (* 1 = 0.153808 loss)
I0328 19:29:57.745911  2532 sgd_solver.cpp:106] Iteration 200, lr = 0.00985258
I0328 19:29:58.021584  2532 solver.cpp:228] Iteration 300, loss = 0.154549
I0328 19:29:58.021610  2532 solver.cpp:244]     Train net output #0: loss = 0.154549 (* 1 = 0.154549 loss)
I0328 19:29:58.021615  2532 sgd_solver.cpp:106] Iteration 300, lr = 0.00978075
I0328 19:29:58.360554  2532 solver.cpp:228] Iteration 400, loss = 0.101197
I0328 19:29:58.360581  2532 solver.cpp:244]     Train net output #0: loss = 0.101197 (* 1 = 0.101197 loss)
I0328 19:29:58.360587  2532 sgd_solver.cpp:106] Iteration 400, lr = 0.00971013
I0328 19:29:58.680503  2532 solver.cpp:337] Iteration 500, Testing net (#0)
I0328 19:29:58.895654  2532 solver.cpp:404]     Test net output #0: accuracy = 0.9729
I0328 19:29:58.895684  2532 solver.cpp:404]     Test net output #1: loss = 0.082479 (* 1 = 0.082479 loss)
I0328 19:29:58.897418  2532 solver.cpp:228] Iteration 500, loss = 0.0777072
I0328 19:29:58.897434  2532 solver.cpp:244]     Train net output #0: loss = 0.0777071 (* 1 = 0.0777071 loss)
I0328 19:29:58.897440  2532 sgd_solver.cpp:106] Iteration 500, lr = 0.00964069
I0328 19:29:59.173388  2532 solver.cpp:228] Iteration 600, loss = 0.0892218
I0328 19:29:59.173415  2532 solver.cpp:244]     Train net output #0: loss = 0.0892217 (* 1 = 0.0892217 loss)
I0328 19:29:59.173439  2532 sgd_solver.cpp:106] Iteration 600, lr = 0.0095724
I0328 19:29:59.461107  2532 solver.cpp:228] Iteration 700, loss = 0.1191
I0328 19:29:59.461135  2532 solver.cpp:244]     Train net output #0: loss = 0.1191 (* 1 = 0.1191 loss)
I0328 19:29:59.461141  2532 sgd_solver.cpp:106] Iteration 700, lr = 0.00950522
I0328 19:29:59.829005  2532 solver.cpp:228] Iteration 800, loss = 0.19133
I0328 19:29:59.829032  2532 solver.cpp:244]     Train net output #0: loss = 0.19133 (* 1 = 0.19133 loss)
I0328 19:29:59.829038  2532 sgd_solver.cpp:106] Iteration 800, lr = 0.00943913
I0328 19:30:00.101406  2532 solver.cpp:228] Iteration 900, loss = 0.114087
I0328 19:30:00.101433  2532 solver.cpp:244]     Train net output #0: loss = 0.114086 (* 1 = 0.114086 loss)
I0328 19:30:00.101438  2532 sgd_solver.cpp:106] Iteration 900, lr = 0.00937411
I0328 19:30:00.382973  2532 solver.cpp:337] Iteration 1000, Testing net (#0)
I0328 19:30:00.687139  2532 solver.cpp:404]     Test net output #0: accuracy = 0.9808
I0328 19:30:00.687172  2532 solver.cpp:404]     Test net output #1: loss = 0.0591782 (* 1 = 0.0591782 loss)
I0328 19:30:00.687955  2532 solver.cpp:228] Iteration 1000, loss = 0.0818747
I0328 19:30:00.687976  2532 solver.cpp:244]     Train net output #0: loss = 0.0818747 (* 1 = 0.0818747 loss)
I0328 19:30:00.687988  2532 sgd_solver.cpp:106] Iteration 1000, lr = 0.00931012
I0328 19:30:00.936879  2532 solver.cpp:228] Iteration 1100, loss = 0.00642876
I0328 19:30:00.936911  2532 solver.cpp:244]     Train net output #0: loss = 0.00642875 (* 1 = 0.00642875 loss)
I0328 19:30:00.936918  2532 sgd_solver.cpp:106] Iteration 1100, lr = 0.00924715
I0328 19:30:01.289203  2532 solver.cpp:228] Iteration 1200, loss = 0.0171346
I0328 19:30:01.289229  2532 solver.cpp:244]     Train net output #0: loss = 0.0171346 (* 1 = 0.0171346 loss)
I0328 19:30:01.289237  2532 sgd_solver.cpp:106] Iteration 1200, lr = 0.00918515
I0328 19:30:01.469744  2532 solver.cpp:228] Iteration 1300, loss = 0.0239783
I0328 19:30:01.469771  2532 solver.cpp:244]     Train net output #0: loss = 0.0239783 (* 1 = 0.0239783 loss)
I0328 19:30:01.469777  2532 sgd_solver.cpp:106] Iteration 1300, lr = 0.00912412
I0328 19:30:01.880422  2532 solver.cpp:228] Iteration 1400, loss = 0.00648484
I0328 19:30:01.880453  2532 solver.cpp:244]     Train net output #0: loss = 0.00648486 (* 1 = 0.00648486 loss)
I0328 19:30:01.880460  2532 sgd_solver.cpp:106] Iteration 1400, lr = 0.00906403
I0328 19:30:02.138612  2532 solver.cpp:337] Iteration 1500, Testing net (#0)
I0328 19:30:02.252620  2532 solver.cpp:404]     Test net output #0: accuracy = 0.9849
I0328 19:30:02.252650  2532 solver.cpp:404]     Test net output #1: loss = 0.0498796 (* 1 = 0.0498796 loss)
I0328 19:30:02.253329  2532 solver.cpp:228] Iteration 1500, loss = 0.0982655
I0328 19:30:02.253347  2532 solver.cpp:244]     Train net output #0: loss = 0.0982655 (* 1 = 0.0982655 loss)
I0328 19:30:02.253357  2532 sgd_solver.cpp:106] Iteration 1500, lr = 0.00900485
I0328 19:30:02.655040  2532 solver.cpp:228] Iteration 1600, loss = 0.0899513
I0328 19:30:02.655071  2532 solver.cpp:244]     Train net output #0: loss = 0.0899513 (* 1 = 0.0899513 loss)
I0328 19:30:02.655076  2532 sgd_solver.cpp:106] Iteration 1600, lr = 0.00894657
I0328 19:30:02.952224  2532 solver.cpp:228] Iteration 1700, loss = 0.054807
I0328 19:30:02.952253  2532 solver.cpp:244]     Train net output #0: loss = 0.054807 (* 1 = 0.054807 loss)
I0328 19:30:02.952260  2532 sgd_solver.cpp:106] Iteration 1700, lr = 0.00888916
I0328 19:30:03.240932  2532 solver.cpp:228] Iteration 1800, loss = 0.0204677
I0328 19:30:03.240959  2532 solver.cpp:244]     Train net output #0: loss = 0.0204677 (* 1 = 0.0204677 loss)
I0328 19:30:03.240965  2532 sgd_solver.cpp:106] Iteration 1800, lr = 0.0088326
I0328 19:30:03.541123  2532 solver.cpp:228] Iteration 1900, loss = 0.103504
I0328 19:30:03.541153  2532 solver.cpp:244]     Train net output #0: loss = 0.103504 (* 1 = 0.103504 loss)
I0328 19:30:03.541159  2532 sgd_solver.cpp:106] Iteration 1900, lr = 0.00877687
I0328 19:30:03.880522  2532 solver.cpp:337] Iteration 2000, Testing net (#0)
I0328 19:30:04.070771  2532 solver.cpp:404]     Test net output #0: accuracy = 0.9852
I0328 19:30:04.070812  2532 solver.cpp:404]     Test net output #1: loss = 0.045745 (* 1 = 0.045745 loss)
I0328 19:30:04.071465  2532 solver.cpp:228] Iteration 2000, loss = 0.00839915
I0328 19:30:04.071482  2532 solver.cpp:244]     Train net output #0: loss = 0.0083991 (* 1 = 0.0083991 loss)
I0328 19:30:04.071492  2532 sgd_solver.cpp:106] Iteration 2000, lr = 0.00872196
I0328 19:30:04.465675  2532 solver.cpp:228] Iteration 2100, loss = 0.0168364
I0328 19:30:04.465709  2532 solver.cpp:244]     Train net output #0: loss = 0.0168364 (* 1 = 0.0168364 loss)
I0328 19:30:04.465715  2532 sgd_solver.cpp:106] Iteration 2100, lr = 0.00866784
I0328 19:30:04.743353  2532 solver.cpp:228] Iteration 2200, loss = 0.0197616
I0328 19:30:04.743382  2532 solver.cpp:244]     Train net output #0: loss = 0.0197616 (* 1 = 0.0197616 loss)
I0328 19:30:04.743388  2532 sgd_solver.cpp:106] Iteration 2200, lr = 0.0086145
I0328 19:30:05.105345  2532 solver.cpp:228] Iteration 2300, loss = 0.120333
I0328 19:30:05.105376  2532 solver.cpp:244]     Train net output #0: loss = 0.120333 (* 1 = 0.120333 loss)
I0328 19:30:05.105381  2532 sgd_solver.cpp:106] Iteration 2300, lr = 0.00856192
I0328 19:30:05.432586  2532 solver.cpp:228] Iteration 2400, loss = 0.00913988
I0328 19:30:05.432612  2532 solver.cpp:244]     Train net output #0: loss = 0.00913991 (* 1 = 0.00913991 loss)
I0328 19:30:05.432618  2532 sgd_solver.cpp:106] Iteration 2400, lr = 0.00851008
I0328 19:30:05.698125  2532 solver.cpp:337] Iteration 2500, Testing net (#0)
I0328 19:30:05.967391  2532 solver.cpp:404]     Test net output #0: accuracy = 0.9853
I0328 19:30:05.967422  2532 solver.cpp:404]     Test net output #1: loss = 0.0449315 (* 1 = 0.0449315 loss)
I0328 19:30:05.968067  2532 solver.cpp:228] Iteration 2500, loss = 0.0278854
I0328 19:30:05.968124  2532 solver.cpp:244]     Train net output #0: loss = 0.0278855 (* 1 = 0.0278855 loss)
I0328 19:30:05.968142  2532 sgd_solver.cpp:106] Iteration 2500, lr = 0.00845897
I0328 19:30:06.310573  2532 solver.cpp:228] Iteration 2600, loss = 0.0675696
I0328 19:30:06.310596  2532 solver.cpp:244]     Train net output #0: loss = 0.0675697 (* 1 = 0.0675697 loss)
I0328 19:30:06.310601  2532 sgd_solver.cpp:106] Iteration 2600, lr = 0.00840857
I0328 19:30:06.621161  2532 solver.cpp:228] Iteration 2700, loss = 0.0571786
I0328 19:30:06.621192  2532 solver.cpp:244]     Train net output #0: loss = 0.0571786 (* 1 = 0.0571786 loss)
I0328 19:30:06.621198  2532 sgd_solver.cpp:106] Iteration 2700, lr = 0.00835886
I0328 19:30:06.932826  2532 solver.cpp:228] Iteration 2800, loss = 0.00218287
I0328 19:30:06.932857  2532 solver.cpp:244]     Train net output #0: loss = 0.00218293 (* 1 = 0.00218293 loss)
I0328 19:30:06.932862  2532 sgd_solver.cpp:106] Iteration 2800, lr = 0.00830984
I0328 19:30:07.302281  2532 solver.cpp:228] Iteration 2900, loss = 0.0162561
I0328 19:30:07.302306  2532 solver.cpp:244]     Train net output #0: loss = 0.0162562 (* 1 = 0.0162562 loss)
I0328 19:30:07.302312  2532 sgd_solver.cpp:106] Iteration 2900, lr = 0.00826148
I0328 19:30:07.637953  2532 solver.cpp:337] Iteration 3000, Testing net (#0)
I0328 19:30:07.763232  2532 solver.cpp:404]     Test net output #0: accuracy = 0.9873
I0328 19:30:07.763263  2532 solver.cpp:404]     Test net output #1: loss = 0.0394564 (* 1 = 0.0394564 loss)
I0328 19:30:07.763948  2532 solver.cpp:228] Iteration 3000, loss = 0.0267792
I0328 19:30:07.763967  2532 solver.cpp:244]     Train net output #0: loss = 0.0267792 (* 1 = 0.0267792 loss)
I0328 19:30:07.763974  2532 sgd_solver.cpp:106] Iteration 3000, lr = 0.00821377
I0328 19:30:08.113940  2532 solver.cpp:228] Iteration 3100, loss = 0.0351026
I0328 19:30:08.113967  2532 solver.cpp:244]     Train net output #0: loss = 0.0351026 (* 1 = 0.0351026 loss)
I0328 19:30:08.113973  2532 sgd_solver.cpp:106] Iteration 3100, lr = 0.0081667
I0328 19:30:08.438076  2532 solver.cpp:228] Iteration 3200, loss = 0.005196
I0328 19:30:08.438118  2532 solver.cpp:244]     Train net output #0: loss = 0.00519604 (* 1 = 0.00519604 loss)
I0328 19:30:08.438125  2532 sgd_solver.cpp:106] Iteration 3200, lr = 0.00812025
I0328 19:30:08.784456  2532 solver.cpp:228] Iteration 3300, loss = 0.0158769
I0328 19:30:08.784492  2532 solver.cpp:244]     Train net output #0: loss = 0.015877 (* 1 = 0.015877 loss)
I0328 19:30:08.784498  2532 sgd_solver.cpp:106] Iteration 3300, lr = 0.00807442
I0328 19:30:09.133947  2532 solver.cpp:228] Iteration 3400, loss = 0.00985796
I0328 19:30:09.133976  2532 solver.cpp:244]     Train net output #0: loss = 0.00985805 (* 1 = 0.00985805 loss)
I0328 19:30:09.133982  2532 sgd_solver.cpp:106] Iteration 3400, lr = 0.00802918
I0328 19:30:09.482678  2532 solver.cpp:337] Iteration 3500, Testing net (#0)
I0328 19:30:09.572296  2532 solver.cpp:404]     Test net output #0: accuracy = 0.986
I0328 19:30:09.572324  2532 solver.cpp:404]     Test net output #1: loss = 0.0410168 (* 1 = 0.0410168 loss)
I0328 19:30:09.572965  2532 solver.cpp:228] Iteration 3500, loss = 0.00439527
I0328 19:30:09.572983  2532 solver.cpp:244]     Train net output #0: loss = 0.00439534 (* 1 = 0.00439534 loss)
I0328 19:30:09.572993  2532 sgd_solver.cpp:106] Iteration 3500, lr = 0.00798454
I0328 19:30:09.887393  2532 solver.cpp:228] Iteration 3600, loss = 0.0321529
I0328 19:30:09.887418  2532 solver.cpp:244]     Train net output #0: loss = 0.032153 (* 1 = 0.032153 loss)
I0328 19:30:09.887423  2532 sgd_solver.cpp:106] Iteration 3600, lr = 0.00794046
I0328 19:30:10.260455  2532 solver.cpp:228] Iteration 3700, loss = 0.0248466
I0328 19:30:10.260483  2532 solver.cpp:244]     Train net output #0: loss = 0.0248467 (* 1 = 0.0248467 loss)
I0328 19:30:10.260488  2532 sgd_solver.cpp:106] Iteration 3700, lr = 0.00789695
I0328 19:30:10.555737  2532 solver.cpp:228] Iteration 3800, loss = 0.0137009
I0328 19:30:10.555765  2532 solver.cpp:244]     Train net output #0: loss = 0.013701 (* 1 = 0.013701 loss)
I0328 19:30:10.555771  2532 sgd_solver.cpp:106] Iteration 3800, lr = 0.007854
I0328 19:30:10.835341  2532 solver.cpp:228] Iteration 3900, loss = 0.0318391
I0328 19:30:10.835364  2532 solver.cpp:244]     Train net output #0: loss = 0.0318392 (* 1 = 0.0318392 loss)
I0328 19:30:10.835371  2532 sgd_solver.cpp:106] Iteration 3900, lr = 0.00781158
I0328 19:30:11.183924  2532 solver.cpp:337] Iteration 4000, Testing net (#0)
I0328 19:30:11.441587  2532 solver.cpp:404]     Test net output #0: accuracy = 0.9894
I0328 19:30:11.441617  2532 solver.cpp:404]     Test net output #1: loss = 0.0301995 (* 1 = 0.0301995 loss)
I0328 19:30:11.443146  2532 solver.cpp:228] Iteration 4000, loss = 0.0141343
I0328 19:30:11.443162  2532 solver.cpp:244]     Train net output #0: loss = 0.0141344 (* 1 = 0.0141344 loss)
I0328 19:30:11.443169  2532 sgd_solver.cpp:106] Iteration 4000, lr = 0.0077697
I0328 19:30:11.760010  2532 solver.cpp:228] Iteration 4100, loss = 0.028197
I0328 19:30:11.760037  2532 solver.cpp:244]     Train net output #0: loss = 0.0281971 (* 1 = 0.0281971 loss)
I0328 19:30:11.760042  2532 sgd_solver.cpp:106] Iteration 4100, lr = 0.00772833
I0328 19:30:12.089447  2532 solver.cpp:228] Iteration 4200, loss = 0.0129877
I0328 19:30:12.089474  2532 solver.cpp:244]     Train net output #0: loss = 0.0129878 (* 1 = 0.0129878 loss)
I0328 19:30:12.089480  2532 sgd_solver.cpp:106] Iteration 4200, lr = 0.00768748
I0328 19:30:12.419196  2532 solver.cpp:228] Iteration 4300, loss = 0.0536625
I0328 19:30:12.419226  2532 solver.cpp:244]     Train net output #0: loss = 0.0536626 (* 1 = 0.0536626 loss)
I0328 19:30:12.419234  2532 sgd_solver.cpp:106] Iteration 4300, lr = 0.00764712
I0328 19:30:12.741686  2532 solver.cpp:228] Iteration 4400, loss = 0.0169241
I0328 19:30:12.741713  2532 solver.cpp:244]     Train net output #0: loss = 0.0169241 (* 1 = 0.0169241 loss)
I0328 19:30:12.741719  2532 sgd_solver.cpp:106] Iteration 4400, lr = 0.00760726
I0328 19:30:13.040231  2532 solver.cpp:337] Iteration 4500, Testing net (#0)
I0328 19:30:13.313794  2532 solver.cpp:404]     Test net output #0: accuracy = 0.9887
I0328 19:30:13.313845  2532 solver.cpp:404]     Test net output #1: loss = 0.0350937 (* 1 = 0.0350937 loss)
I0328 19:30:13.320554  2532 solver.cpp:228] Iteration 4500, loss = 0.00435573
I0328 19:30:13.320571  2532 solver.cpp:244]     Train net output #0: loss = 0.00435581 (* 1 = 0.00435581 loss)
I0328 19:30:13.320578  2532 sgd_solver.cpp:106] Iteration 4500, lr = 0.00756788
I0328 19:30:13.559696  2532 solver.cpp:228] Iteration 4600, loss = 0.0101214
I0328 19:30:13.559728  2532 solver.cpp:244]     Train net output #0: loss = 0.0101214 (* 1 = 0.0101214 loss)
I0328 19:30:13.559734  2532 sgd_solver.cpp:106] Iteration 4600, lr = 0.00752897
I0328 19:30:13.822892  2532 solver.cpp:228] Iteration 4700, loss = 0.00430396
I0328 19:30:13.822914  2532 solver.cpp:244]     Train net output #0: loss = 0.00430407 (* 1 = 0.00430407 loss)
I0328 19:30:13.822921  2532 sgd_solver.cpp:106] Iteration 4700, lr = 0.00749052
I0328 19:30:14.203470  2532 solver.cpp:228] Iteration 4800, loss = 0.0106281
I0328 19:30:14.203496  2532 solver.cpp:244]     Train net output #0: loss = 0.0106282 (* 1 = 0.0106282 loss)
I0328 19:30:14.203502  2532 sgd_solver.cpp:106] Iteration 4800, lr = 0.00745253
I0328 19:30:14.508708  2532 solver.cpp:228] Iteration 4900, loss = 0.00375433
I0328 19:30:14.508735  2532 solver.cpp:244]     Train net output #0: loss = 0.00375442 (* 1 = 0.00375442 loss)
I0328 19:30:14.508741  2532 sgd_solver.cpp:106] Iteration 4900, lr = 0.00741498
I0328 19:30:14.822150  2532 solver.cpp:454] Snapshotting to binary proto file examples/mnist/lenet_iter_5000.caffemodel
I0328 19:30:14.837585  2532 sgd_solver.cpp:273] Snapshotting solver state to binary proto file examples/mnist/lenet_iter_5000.solverstate
I0328 19:30:14.841120  2532 solver.cpp:337] Iteration 5000, Testing net (#0)
I0328 19:30:15.091706  2532 solver.cpp:404]     Test net output #0: accuracy = 0.9903
I0328 19:30:15.091737  2532 solver.cpp:404]     Test net output #1: loss = 0.0307095 (* 1 = 0.0307095 loss)
I0328 19:30:15.093008  2532 solver.cpp:228] Iteration 5000, loss = 0.0260099
I0328 19:30:15.093024  2532 solver.cpp:244]     Train net output #0: loss = 0.02601 (* 1 = 0.02601 loss)
I0328 19:30:15.093031  2532 sgd_solver.cpp:106] Iteration 5000, lr = 0.00737788
I0328 19:30:15.422441  2532 solver.cpp:228] Iteration 5100, loss = 0.0213667
I0328 19:30:15.422472  2532 solver.cpp:244]     Train net output #0: loss = 0.0213668 (* 1 = 0.0213668 loss)
I0328 19:30:15.422478  2532 sgd_solver.cpp:106] Iteration 5100, lr = 0.0073412
I0328 19:30:15.701787  2532 solver.cpp:228] Iteration 5200, loss = 0.00483817
I0328 19:30:15.701822  2532 solver.cpp:244]     Train net output #0: loss = 0.00483824 (* 1 = 0.00483824 loss)
I0328 19:30:15.701829  2532 sgd_solver.cpp:106] Iteration 5200, lr = 0.00730495
I0328 19:30:16.080499  2532 solver.cpp:228] Iteration 5300, loss = 0.0020156
I0328 19:30:16.080528  2532 solver.cpp:244]     Train net output #0: loss = 0.00201567 (* 1 = 0.00201567 loss)
I0328 19:30:16.080533  2532 sgd_solver.cpp:106] Iteration 5300, lr = 0.00726911
I0328 19:30:16.414523  2532 solver.cpp:228] Iteration 5400, loss = 0.00957809
I0328 19:30:16.414553  2532 solver.cpp:244]     Train net output #0: loss = 0.00957815 (* 1 = 0.00957815 loss)
I0328 19:30:16.414564  2532 sgd_solver.cpp:106] Iteration 5400, lr = 0.00723368
I0328 19:30:16.679096  2532 solver.cpp:337] Iteration 5500, Testing net (#0)
I0328 19:30:16.878810  2532 solver.cpp:404]     Test net output #0: accuracy = 0.9893
I0328 19:30:16.878842  2532 solver.cpp:404]     Test net output #1: loss = 0.0330626 (* 1 = 0.0330626 loss)
I0328 19:30:16.879482  2532 solver.cpp:228] Iteration 5500, loss = 0.0127852
I0328 19:30:16.879499  2532 solver.cpp:244]     Train net output #0: loss = 0.0127853 (* 1 = 0.0127853 loss)
I0328 19:30:16.879514  2532 sgd_solver.cpp:106] Iteration 5500, lr = 0.00719865
I0328 19:30:17.240479  2532 solver.cpp:228] Iteration 5600, loss = 0.000462517
I0328 19:30:17.240510  2532 solver.cpp:244]     Train net output #0: loss = 0.000462563 (* 1 = 0.000462563 loss)
I0328 19:30:17.240519  2532 sgd_solver.cpp:106] Iteration 5600, lr = 0.00716402
I0328 19:30:17.475414  2532 solver.cpp:228] Iteration 5700, loss = 0.00424558
I0328 19:30:17.475450  2532 solver.cpp:244]     Train net output #0: loss = 0.00424562 (* 1 = 0.00424562 loss)
I0328 19:30:17.475456  2532 sgd_solver.cpp:106] Iteration 5700, lr = 0.00712977
I0328 19:30:17.748229  2532 solver.cpp:228] Iteration 5800, loss = 0.0215939
I0328 19:30:17.748252  2532 solver.cpp:244]     Train net output #0: loss = 0.0215939 (* 1 = 0.0215939 loss)
I0328 19:30:17.748257  2532 sgd_solver.cpp:106] Iteration 5800, lr = 0.0070959
I0328 19:30:17.975960  2532 solver.cpp:228] Iteration 5900, loss = 0.00649087
I0328 19:30:17.975985  2532 solver.cpp:244]     Train net output #0: loss = 0.00649091 (* 1 = 0.00649091 loss)
I0328 19:30:17.975989  2532 sgd_solver.cpp:106] Iteration 5900, lr = 0.0070624
I0328 19:30:18.324182  2532 solver.cpp:337] Iteration 6000, Testing net (#0)
I0328 19:30:18.554812  2532 solver.cpp:404]     Test net output #0: accuracy = 0.991
I0328 19:30:18.554846  2532 solver.cpp:404]     Test net output #1: loss = 0.0282955 (* 1 = 0.0282955 loss)
I0328 19:30:18.559497  2532 solver.cpp:228] Iteration 6000, loss = 0.00446534
I0328 19:30:18.559514  2532 solver.cpp:244]     Train net output #0: loss = 0.00446538 (* 1 = 0.00446538 loss)
I0328 19:30:18.559521  2532 sgd_solver.cpp:106] Iteration 6000, lr = 0.00702927
I0328 19:30:18.812295  2532 solver.cpp:228] Iteration 6100, loss = 0.00131822
I0328 19:30:18.812326  2532 solver.cpp:244]     Train net output #0: loss = 0.00131824 (* 1 = 0.00131824 loss)
I0328 19:30:18.812331  2532 sgd_solver.cpp:106] Iteration 6100, lr = 0.0069965
I0328 19:30:19.161257  2532 solver.cpp:228] Iteration 6200, loss = 0.00702568
I0328 19:30:19.161283  2532 solver.cpp:244]     Train net output #0: loss = 0.00702571 (* 1 = 0.00702571 loss)
I0328 19:30:19.161288  2532 sgd_solver.cpp:106] Iteration 6200, lr = 0.00696408
I0328 19:30:19.425575  2532 solver.cpp:228] Iteration 6300, loss = 0.00910935
I0328 19:30:19.425602  2532 solver.cpp:244]     Train net output #0: loss = 0.00910939 (* 1 = 0.00910939 loss)
I0328 19:30:19.425608  2532 sgd_solver.cpp:106] Iteration 6300, lr = 0.00693201
I0328 19:30:19.752600  2532 solver.cpp:228] Iteration 6400, loss = 0.00992414
I0328 19:30:19.752626  2532 solver.cpp:244]     Train net output #0: loss = 0.00992417 (* 1 = 0.00992417 loss)
I0328 19:30:19.752634  2532 sgd_solver.cpp:106] Iteration 6400, lr = 0.00690029
I0328 19:30:20.022999  2532 solver.cpp:337] Iteration 6500, Testing net (#0)
I0328 19:30:20.311820  2532 solver.cpp:404]     Test net output #0: accuracy = 0.9903
I0328 19:30:20.311849  2532 solver.cpp:404]     Test net output #1: loss = 0.0293131 (* 1 = 0.0293131 loss)
I0328 19:30:20.312511  2532 solver.cpp:228] Iteration 6500, loss = 0.00896712
I0328 19:30:20.312536  2532 solver.cpp:244]     Train net output #0: loss = 0.00896714 (* 1 = 0.00896714 loss)
I0328 19:30:20.312548  2532 sgd_solver.cpp:106] Iteration 6500, lr = 0.0068689
I0328 19:30:20.579493  2532 solver.cpp:228] Iteration 6600, loss = 0.0157276
I0328 19:30:20.579521  2532 solver.cpp:244]     Train net output #0: loss = 0.0157276 (* 1 = 0.0157276 loss)
I0328 19:30:20.579527  2532 sgd_solver.cpp:106] Iteration 6600, lr = 0.00683784
I0328 19:30:20.855541  2532 solver.cpp:228] Iteration 6700, loss = 0.0125102
I0328 19:30:20.855573  2532 solver.cpp:244]     Train net output #0: loss = 0.0125102 (* 1 = 0.0125102 loss)
I0328 19:30:20.855584  2532 sgd_solver.cpp:106] Iteration 6700, lr = 0.00680711
I0328 19:30:21.220584  2532 solver.cpp:228] Iteration 6800, loss = 0.00361601
I0328 19:30:21.220610  2532 solver.cpp:244]     Train net output #0: loss = 0.00361603 (* 1 = 0.00361603 loss)
I0328 19:30:21.220615  2532 sgd_solver.cpp:106] Iteration 6800, lr = 0.0067767
I0328 19:30:21.520323  2532 solver.cpp:228] Iteration 6900, loss = 0.00433482
I0328 19:30:21.520347  2532 solver.cpp:244]     Train net output #0: loss = 0.00433484 (* 1 = 0.00433484 loss)
I0328 19:30:21.520352  2532 sgd_solver.cpp:106] Iteration 6900, lr = 0.0067466
I0328 19:30:21.834478  2532 solver.cpp:337] Iteration 7000, Testing net (#0)
I0328 19:30:22.067176  2532 solver.cpp:404]     Test net output #0: accuracy = 0.9902
I0328 19:30:22.067207  2532 solver.cpp:404]     Test net output #1: loss = 0.030529 (* 1 = 0.030529 loss)
I0328 19:30:22.068749  2532 solver.cpp:228] Iteration 7000, loss = 0.00761897
I0328 19:30:22.068766  2532 solver.cpp:244]     Train net output #0: loss = 0.00761899 (* 1 = 0.00761899 loss)
I0328 19:30:22.068774  2532 sgd_solver.cpp:106] Iteration 7000, lr = 0.00671681
I0328 19:30:22.335480  2532 solver.cpp:228] Iteration 7100, loss = 0.0149167
I0328 19:30:22.335510  2532 solver.cpp:244]     Train net output #0: loss = 0.0149167 (* 1 = 0.0149167 loss)
I0328 19:30:22.335517  2532 sgd_solver.cpp:106] Iteration 7100, lr = 0.00668733
I0328 19:30:22.615579  2532 solver.cpp:228] Iteration 7200, loss = 0.00389583
I0328 19:30:22.615602  2532 solver.cpp:244]     Train net output #0: loss = 0.00389585 (* 1 = 0.00389585 loss)
I0328 19:30:22.615608  2532 sgd_solver.cpp:106] Iteration 7200, lr = 0.00665815
I0328 19:30:22.965278  2532 solver.cpp:228] Iteration 7300, loss = 0.017772
I0328 19:30:22.965302  2532 solver.cpp:244]     Train net output #0: loss = 0.0177721 (* 1 = 0.0177721 loss)
I0328 19:30:22.965307  2532 sgd_solver.cpp:106] Iteration 7300, lr = 0.00662927
I0328 19:30:23.275080  2532 solver.cpp:228] Iteration 7400, loss = 0.00495408
I0328 19:30:23.275108  2532 solver.cpp:244]     Train net output #0: loss = 0.00495411 (* 1 = 0.00495411 loss)
I0328 19:30:23.275115  2532 sgd_solver.cpp:106] Iteration 7400, lr = 0.00660067
I0328 19:30:23.553067  2532 solver.cpp:337] Iteration 7500, Testing net (#0)
I0328 19:30:23.777024  2532 solver.cpp:404]     Test net output #0: accuracy = 0.9906
I0328 19:30:23.777057  2532 solver.cpp:404]     Test net output #1: loss = 0.03123 (* 1 = 0.03123 loss)
I0328 19:30:23.777698  2532 solver.cpp:228] Iteration 7500, loss = 0.00159795
I0328 19:30:23.777717  2532 solver.cpp:244]     Train net output #0: loss = 0.00159798 (* 1 = 0.00159798 loss)
I0328 19:30:23.777742  2532 sgd_solver.cpp:106] Iteration 7500, lr = 0.00657236
I0328 19:30:24.157346  2532 solver.cpp:228] Iteration 7600, loss = 0.00589341
I0328 19:30:24.157387  2532 solver.cpp:244]     Train net output #0: loss = 0.00589344 (* 1 = 0.00589344 loss)
I0328 19:30:24.157397  2532 sgd_solver.cpp:106] Iteration 7600, lr = 0.00654433
I0328 19:30:24.466545  2532 solver.cpp:228] Iteration 7700, loss = 0.0223221
I0328 19:30:24.466569  2532 solver.cpp:244]     Train net output #0: loss = 0.0223222 (* 1 = 0.0223222 loss)
I0328 19:30:24.466578  2532 sgd_solver.cpp:106] Iteration 7700, lr = 0.00651658
I0328 19:30:24.835237  2532 solver.cpp:228] Iteration 7800, loss = 0.00277635
I0328 19:30:24.835269  2532 solver.cpp:244]     Train net output #0: loss = 0.00277638 (* 1 = 0.00277638 loss)
I0328 19:30:24.835278  2532 sgd_solver.cpp:106] Iteration 7800, lr = 0.00648911
I0328 19:30:25.089210  2532 solver.cpp:228] Iteration 7900, loss = 0.00482228
I0328 19:30:25.089246  2532 solver.cpp:244]     Train net output #0: loss = 0.00482231 (* 1 = 0.00482231 loss)
I0328 19:30:25.089257  2532 sgd_solver.cpp:106] Iteration 7900, lr = 0.0064619
I0328 19:30:25.417707  2532 solver.cpp:337] Iteration 8000, Testing net (#0)
I0328 19:30:25.569162  2532 solver.cpp:404]     Test net output #0: accuracy = 0.9904
I0328 19:30:25.569203  2532 solver.cpp:404]     Test net output #1: loss = 0.0293426 (* 1 = 0.0293426 loss)
I0328 19:30:25.569869  2532 solver.cpp:228] Iteration 8000, loss = 0.00810755
I0328 19:30:25.569890  2532 solver.cpp:244]     Train net output #0: loss = 0.00810757 (* 1 = 0.00810757 loss)
I0328 19:30:25.569900  2532 sgd_solver.cpp:106] Iteration 8000, lr = 0.00643496
I0328 19:30:25.927444  2532 solver.cpp:228] Iteration 8100, loss = 0.0165582
I0328 19:30:25.927472  2532 solver.cpp:244]     Train net output #0: loss = 0.0165582 (* 1 = 0.0165582 loss)
I0328 19:30:25.927479  2532 sgd_solver.cpp:106] Iteration 8100, lr = 0.00640827
I0328 19:30:26.248455  2532 solver.cpp:228] Iteration 8200, loss = 0.00975051
I0328 19:30:26.248497  2532 solver.cpp:244]     Train net output #0: loss = 0.00975053 (* 1 = 0.00975053 loss)
I0328 19:30:26.248540  2532 sgd_solver.cpp:106] Iteration 8200, lr = 0.00638185
I0328 19:30:26.511210  2532 solver.cpp:228] Iteration 8300, loss = 0.029401
I0328 19:30:26.511242  2532 solver.cpp:244]     Train net output #0: loss = 0.0294011 (* 1 = 0.0294011 loss)
I0328 19:30:26.511248  2532 sgd_solver.cpp:106] Iteration 8300, lr = 0.00635567
I0328 19:30:26.809790  2532 solver.cpp:228] Iteration 8400, loss = 0.00563668
I0328 19:30:26.809816  2532 solver.cpp:244]     Train net output #0: loss = 0.0056367 (* 1 = 0.0056367 loss)
I0328 19:30:26.809823  2532 sgd_solver.cpp:106] Iteration 8400, lr = 0.00632975
I0328 19:30:27.154844  2532 solver.cpp:337] Iteration 8500, Testing net (#0)
I0328 19:30:27.456598  2532 solver.cpp:404]     Test net output #0: accuracy = 0.9905
I0328 19:30:27.456629  2532 solver.cpp:404]     Test net output #1: loss = 0.0287546 (* 1 = 0.0287546 loss)
I0328 19:30:27.461474  2532 solver.cpp:228] Iteration 8500, loss = 0.00753963
I0328 19:30:27.461493  2532 solver.cpp:244]     Train net output #0: loss = 0.00753967 (* 1 = 0.00753967 loss)
I0328 19:30:27.461499  2532 sgd_solver.cpp:106] Iteration 8500, lr = 0.00630407
I0328 19:30:27.727741  2532 solver.cpp:228] Iteration 8600, loss = 0.000695787
I0328 19:30:27.727769  2532 solver.cpp:244]     Train net output #0: loss = 0.000695824 (* 1 = 0.000695824 loss)
I0328 19:30:27.727775  2532 sgd_solver.cpp:106] Iteration 8600, lr = 0.00627864
I0328 19:30:28.052096  2532 solver.cpp:228] Iteration 8700, loss = 0.00265296
I0328 19:30:28.052125  2532 solver.cpp:244]     Train net output #0: loss = 0.00265299 (* 1 = 0.00265299 loss)
I0328 19:30:28.052131  2532 sgd_solver.cpp:106] Iteration 8700, lr = 0.00625344
I0328 19:30:28.409672  2532 solver.cpp:228] Iteration 8800, loss = 0.00110993
I0328 19:30:28.409700  2532 solver.cpp:244]     Train net output #0: loss = 0.00110996 (* 1 = 0.00110996 loss)
I0328 19:30:28.409706  2532 sgd_solver.cpp:106] Iteration 8800, lr = 0.00622847
I0328 19:30:28.797593  2532 solver.cpp:228] Iteration 8900, loss = 0.000595419
I0328 19:30:28.797628  2532 solver.cpp:244]     Train net output #0: loss = 0.000595448 (* 1 = 0.000595448 loss)
I0328 19:30:28.797634  2532 sgd_solver.cpp:106] Iteration 8900, lr = 0.00620374
I0328 19:30:29.140872  2532 solver.cpp:337] Iteration 9000, Testing net (#0)
I0328 19:30:29.408411  2532 solver.cpp:404]     Test net output #0: accuracy = 0.9905
I0328 19:30:29.408444  2532 solver.cpp:404]     Test net output #1: loss = 0.0281204 (* 1 = 0.0281204 loss)
I0328 19:30:29.409649  2532 solver.cpp:228] Iteration 9000, loss = 0.0174571
I0328 19:30:29.409668  2532 solver.cpp:244]     Train net output #0: loss = 0.0174571 (* 1 = 0.0174571 loss)
I0328 19:30:29.409678  2532 sgd_solver.cpp:106] Iteration 9000, lr = 0.00617924
I0328 19:30:29.710494  2532 solver.cpp:228] Iteration 9100, loss = 0.00726821
I0328 19:30:29.710525  2532 solver.cpp:244]     Train net output #0: loss = 0.00726824 (* 1 = 0.00726824 loss)
I0328 19:30:29.710531  2532 sgd_solver.cpp:106] Iteration 9100, lr = 0.00615496
I0328 19:30:30.002382  2532 solver.cpp:228] Iteration 9200, loss = 0.00393138
I0328 19:30:30.002410  2532 solver.cpp:244]     Train net output #0: loss = 0.00393141 (* 1 = 0.00393141 loss)
I0328 19:30:30.002416  2532 sgd_solver.cpp:106] Iteration 9200, lr = 0.0061309
I0328 19:30:30.388334  2532 solver.cpp:228] Iteration 9300, loss = 0.00772397
I0328 19:30:30.388365  2532 solver.cpp:244]     Train net output #0: loss = 0.007724 (* 1 = 0.007724 loss)
I0328 19:30:30.388375  2532 sgd_solver.cpp:106] Iteration 9300, lr = 0.00610706
I0328 19:30:30.760013  2532 solver.cpp:228] Iteration 9400, loss = 0.0255712
I0328 19:30:30.760044  2532 solver.cpp:244]     Train net output #0: loss = 0.0255713 (* 1 = 0.0255713 loss)
I0328 19:30:30.760052  2532 sgd_solver.cpp:106] Iteration 9400, lr = 0.00608343
I0328 19:30:31.048245  2532 solver.cpp:337] Iteration 9500, Testing net (#0)
I0328 19:30:31.287348  2532 solver.cpp:404]     Test net output #0: accuracy = 0.9889
I0328 19:30:31.287376  2532 solver.cpp:404]     Test net output #1: loss = 0.0341572 (* 1 = 0.0341572 loss)
I0328 19:30:31.288060  2532 solver.cpp:228] Iteration 9500, loss = 0.00328411
I0328 19:30:31.288081  2532 solver.cpp:244]     Train net output #0: loss = 0.00328414 (* 1 = 0.00328414 loss)
I0328 19:30:31.288089  2532 sgd_solver.cpp:106] Iteration 9500, lr = 0.00606002
I0328 19:30:31.623594  2532 solver.cpp:228] Iteration 9600, loss = 0.00208279
I0328 19:30:31.623620  2532 solver.cpp:244]     Train net output #0: loss = 0.00208282 (* 1 = 0.00208282 loss)
I0328 19:30:31.623626  2532 sgd_solver.cpp:106] Iteration 9600, lr = 0.00603682
I0328 19:30:31.921439  2532 solver.cpp:228] Iteration 9700, loss = 0.00417724
I0328 19:30:31.921464  2532 solver.cpp:244]     Train net output #0: loss = 0.00417727 (* 1 = 0.00417727 loss)
I0328 19:30:31.921469  2532 sgd_solver.cpp:106] Iteration 9700, lr = 0.00601382
I0328 19:30:32.150326  2532 solver.cpp:228] Iteration 9800, loss = 0.0155305
I0328 19:30:32.150352  2532 solver.cpp:244]     Train net output #0: loss = 0.0155305 (* 1 = 0.0155305 loss)
I0328 19:30:32.150362  2532 sgd_solver.cpp:106] Iteration 9800, lr = 0.00599102
I0328 19:30:32.498646  2532 solver.cpp:228] Iteration 9900, loss = 0.00393814
I0328 19:30:32.498678  2532 solver.cpp:244]     Train net output #0: loss = 0.00393817 (* 1 = 0.00393817 loss)
I0328 19:30:32.498684  2532 sgd_solver.cpp:106] Iteration 9900, lr = 0.00596843
I0328 19:30:32.794668  2532 solver.cpp:454] Snapshotting to binary proto file examples/mnist/lenet_iter_10000.caffemodel
I0328 19:30:32.799289  2532 sgd_solver.cpp:273] Snapshotting solver state to binary proto file examples/mnist/lenet_iter_10000.solverstate  // 5. Snapshots: every snapshot (5000) iterations the weights (.caffemodel) and the solver state (.solverstate) are written; the solver state is what allows training to be resumed.
I0328 19:30:32.804375  2532 solver.cpp:317] Iteration 10000, loss = 0.00215722
I0328 19:30:32.804394  2532 solver.cpp:337] Iteration 10000, Testing net (#0)
I0328 19:30:33.008592  2532 solver.cpp:404]     Test net output #0: accuracy = 0.9904
I0328 19:30:33.008620  2532 solver.cpp:404]     Test net output #1: loss = 0.0294442 (* 1 = 0.0294442 loss)
I0328 19:30:33.008625  2532 solver.cpp:322] Optimization Done.
I0328 19:30:33.008630  2532 caffe.cpp:254] Optimization Done.
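
A final note on the two snapshot files: the .caffemodel holds only the learned weights (enough for deployment or fine-tuning), while the .solverstate also stores the solver's momentum history and iteration counter, so training can resume exactly where it stopped. A hedged pycaffe sketch, assuming a standard pycaffe install; the solver file name examples/mnist/lenet_solver.prototxt is assumed, since only the net file appears in this log:

import caffe

caffe.set_mode_gpu()

# Resume training from the iteration-5000 snapshot; equivalent to
#   caffe train --solver=... --snapshot=examples/mnist/lenet_iter_5000.solverstate
solver = caffe.SGDSolver('examples/mnist/lenet_solver.prototxt')   # assumed path
solver.restore('examples/mnist/lenet_iter_5000.solverstate')       # weights + momentum + iter
solver.solve()                                                     # continues to max_iter

# Or load the final weights for inference only
net = caffe.Net('examples/mnist/lenet_train_test.prototxt',
                'examples/mnist/lenet_iter_10000.caffemodel', caffe.TEST)
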
