[C++ Caffe] MNIST Training Results on Ubuntu

The training procedure is described in my earlier post:
https://blog.csdn.net/Feeryman_Lee/article/details/104523858

The full terminal output of the training run follows.

lichunlin@ThinkPad-T420:~/caffe/data/mnist$ ./get_mnist.sh 
Downloading...
--2020-02-26 15:29:13--  http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz
Resolving yann.lecun.com (yann.lecun.com)... 216.165.22.6
Connecting to yann.lecun.com (yann.lecun.com)|216.165.22.6|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 9912422 (9.5M) [application/x-gzip]
Saving to: ‘train-images-idx3-ubyte.gz’

train-images-idx3-u  12%[=>                  ]   1.15M  --.-KB/s    in 24m 17s 

2020-02-26 15:53:33 (826 B/s) - Read error at byte 1203539/9912422 (Connection timed out). Retrying.

--2020-02-26 15:53:34--  (try: 2)  http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz
Connecting to yann.lecun.com (yann.lecun.com)|216.165.22.6|:80... connected.
HTTP request sent, awaiting response... 206 Partial Content
Length: 9912422 (9.5M), 8708883 (8.3M) remaining [application/x-gzip]
Saving to: ‘train-images-idx3-ubyte.gz’

train-images-idx3-u 100%[++=================>]   9.45M  2.80KB/s    in 51m 58s 

2020-02-26 16:45:34 (2.73 KB/s) - ‘train-images-idx3-ubyte.gz’ saved [9912422/9912422]

--2020-02-26 16:45:34--  http://yann.lecun.com/exdb/mnist/train-labels-idx1-ubyte.gz
Resolving yann.lecun.com (yann.lecun.com)... 216.165.22.6
Connecting to yann.lecun.com (yann.lecun.com)|216.165.22.6|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 28881 (28K) [application/x-gzip]
Saving to: ‘train-labels-idx1-ubyte.gz’

train-labels-idx1-u 100%[===================>]  28.20K  5.24KB/s    in 5.4s    

2020-02-26 16:45:49 (5.24 KB/s) - ‘train-labels-idx1-ubyte.gz’ saved [28881/28881]

--2020-02-26 16:45:49--  http://yann.lecun.com/exdb/mnist/t10k-images-idx3-ubyte.gz
Resolving yann.lecun.com (yann.lecun.com)... 216.165.22.6
Connecting to yann.lecun.com (yann.lecun.com)|216.165.22.6|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1648877 (1.6M) [application/x-gzip]
Saving to: ‘t10k-images-idx3-ubyte.gz’

t10k-images-idx3-ub 100%[===================>]   1.57M  3.61KB/s    in 7m 39s  

2020-02-26 16:53:29 (3.51 KB/s) - ‘t10k-images-idx3-ubyte.gz’ saved [1648877/1648877]

--2020-02-26 16:53:29--  http://yann.lecun.com/exdb/mnist/t10k-labels-idx1-ubyte.gz
Resolving yann.lecun.com (yann.lecun.com)... 216.165.22.6
Connecting to yann.lecun.com (yann.lecun.com)|216.165.22.6|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 4542 (4.4K) [application/x-gzip]
Saving to: ‘t10k-labels-idx1-ubyte.gz’

t10k-labels-idx1-ub 100%[===================>]   4.44K  6.88KB/s    in 0.6s    

2020-02-26 16:53:31 (6.88 KB/s) - ‘t10k-labels-idx1-ubyte.gz’ saved [4542/4542]
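For reference, the four `.gz` archives above are the standard MNIST IDX files. Assuming the standard IDX layout (a 16-byte header for image files, an 8-byte header for label files, one byte per pixel/label), their uncompressed sizes can be sanity-checked directly:

```python
# Uncompressed sizes of the MNIST IDX files, assuming the standard
# IDX layout: 16-byte header for images, 8-byte header for labels.
IMG_HDR, LBL_HDR = 16, 8
ROWS = COLS = 28

train_images = IMG_HDR + 60000 * ROWS * COLS   # train-images-idx3-ubyte
train_labels = LBL_HDR + 60000                 # train-labels-idx1-ubyte
test_images  = IMG_HDR + 10000 * ROWS * COLS   # t10k-images-idx3-ubyte
test_labels  = LBL_HDR + 10000                 # t10k-labels-idx1-ubyte

print(train_images, train_labels, test_images, test_labels)
```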

lichunlin@ThinkPad-T420:~/caffe/data/mnist$ cd ..
lichunlin@ThinkPad-T420:~/caffe/data$ cd ..
lichunlin@ThinkPad-T420:~/caffe$ ./examples/mnist/create_mnist.sh 
Creating lmdb...
I0226 18:49:42.627514 18492 db_lmdb.cpp:35] Opened lmdb examples/mnist/mnist_train_lmdb
I0226 18:49:42.627812 18492 convert_mnist_data.cpp:88] A total of 60000 items.
I0226 18:49:42.627828 18492 convert_mnist_data.cpp:89] Rows: 28 Cols: 28
I0226 18:49:43.350973 18492 convert_mnist_data.cpp:108] Processed 60000 files.
I0226 18:49:43.377131 18496 db_lmdb.cpp:35] Opened lmdb examples/mnist/mnist_test_lmdb
I0226 18:49:43.377393 18496 convert_mnist_data.cpp:88] A total of 10000 items.
I0226 18:49:43.377408 18496 convert_mnist_data.cpp:89] Rows: 28 Cols: 28
I0226 18:49:43.497009 18496 convert_mnist_data.cpp:108] Processed 10000 files.
Done.
lichunlin@ThinkPad-T420:~/caffe$ ./examples/mnist/train_lenet.sh
I0226 22:37:11.614959  2442 caffe.cpp:197] Use CPU.
I0226 22:37:11.615259  2442 solver.cpp:45] Initializing solver from parameters: 
test_iter: 100
test_interval: 500
base_lr: 0.01
display: 100
max_iter: 10000
lr_policy: "inv"
gamma: 0.0001
power: 0.75
momentum: 0.9
weight_decay: 0.0005
snapshot: 5000
snapshot_prefix: "examples/mnist/lenet"
solver_mode: CPU
net: "examples/mnist/lenet_train_test.prototxt"
train_state {
  level: 0
  stage: ""
}
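The solver uses Caffe's "inv" learning-rate policy, which decays the rate as lr = base_lr * (1 + gamma * iter)^(-power). A quick sketch with the parameters above reproduces the lr values printed later in the log:

```python
# Caffe's "inv" learning-rate policy:
#   lr = base_lr * (1 + gamma * iter) ** (-power)
base_lr, gamma, power = 0.01, 0.0001, 0.75

def inv_lr(it):
    return base_lr * (1.0 + gamma * it) ** (-power)

for it in (0, 100, 500, 1000):
    # matches the log, e.g. "Iteration 100, lr = 0.00992565"
    print(it, round(inv_lr(it), 8))
```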
I0226 22:37:11.615427  2442 solver.cpp:102] Creating training net from net file: examples/mnist/lenet_train_test.prototxt
I0226 22:37:11.615628  2442 net.cpp:296] The NetState phase (0) differed from the phase (1) specified by a rule in layer mnist
I0226 22:37:11.615650  2442 net.cpp:296] The NetState phase (0) differed from the phase (1) specified by a rule in layer accuracy
I0226 22:37:11.615666  2442 net.cpp:53] Initializing net from parameters: 
name: "LeNet"
state {
  phase: TRAIN
  level: 0
  stage: ""
}
layer {
  name: "mnist"
  type: "Data"
  top: "data"
  top: "label"
  include {
    phase: TRAIN
  }
  transform_param {
    scale: 0.00390625
  }
  data_param {
    source: "examples/mnist/mnist_train_lmdb"
    batch_size: 64
    backend: LMDB
  }
}
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  convolution_param {
    num_output: 20
    kernel_size: 5
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "pool1"
  type: "Pooling"
  bottom: "conv1"
  top: "pool1"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layer {
  name: "conv2"
  type: "Convolution"
  bottom: "pool1"
  top: "conv2"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  convolution_param {
    num_output: 50
    kernel_size: 5
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "pool2"
  type: "Pooling"
  bottom: "conv2"
  top: "pool2"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layer {
  name: "ip1"
  type: "InnerProduct"
  bottom: "pool2"
  top: "ip1"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 500
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "relu1"
  type: "ReLU"
  bottom: "ip1"
  top: "ip1"
}
layer {
  name: "ip2"
  type: "InnerProduct"
  bottom: "ip1"
  top: "ip2"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 10
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "loss"
  type: "SoftmaxWithLoss"
  bottom: "ip2"
  bottom: "label"
  top: "loss"
}
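Two details of this prototxt are worth decoding. The `scale: 0.00390625` in `transform_param` is 1/256, mapping raw uint8 pixels into [0, 1). And the "Top shape" lines that follow come from standard convolution/pooling arithmetic with no padding. A small sketch:

```python
# scale = 0.00390625 is exactly 1/256: raw uint8 pixels -> [0, 1).
assert 0.00390625 == 1 / 256

# Spatial sizes in the log follow conv/pool arithmetic (no padding):
#   out = (in - kernel) // stride + 1
def conv_out(size, kernel, stride=1):
    return (size - kernel) // stride + 1

def pool_out(size, kernel=2, stride=2):
    return (size - kernel) // stride + 1

s = 28
s = conv_out(s, 5)   # conv1: 24 -> "Top shape: 64 20 24 24"
s = pool_out(s)      # pool1: 12 -> "Top shape: 64 20 12 12"
s = conv_out(s, 5)   # conv2: 8  -> "Top shape: 64 50 8 8"
s = pool_out(s)      # pool2: 4  -> "Top shape: 64 50 4 4"
print(s)  # 4
```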
I0226 22:37:11.615937  2442 layer_factory.hpp:77] Creating layer mnist
I0226 22:37:11.616030  2442 db_lmdb.cpp:35] Opened lmdb examples/mnist/mnist_train_lmdb
I0226 22:37:11.616065  2442 net.cpp:86] Creating Layer mnist
I0226 22:37:11.616081  2442 net.cpp:382] mnist -> data
I0226 22:37:11.616111  2442 net.cpp:382] mnist -> label
I0226 22:37:11.619233  2442 data_layer.cpp:45] output data size: 64,1,28,28
I0226 22:37:11.619426  2442 net.cpp:124] Setting up mnist
I0226 22:37:11.619446  2442 net.cpp:131] Top shape: 64 1 28 28 (50176)
I0226 22:37:11.619469  2442 net.cpp:131] Top shape: 64 (64)
I0226 22:37:11.619483  2442 net.cpp:139] Memory required for data: 200960
I0226 22:37:11.619516  2442 layer_factory.hpp:77] Creating layer conv1
I0226 22:37:11.619544  2442 net.cpp:86] Creating Layer conv1
I0226 22:37:11.619558  2442 net.cpp:408] conv1 <- data
I0226 22:37:11.619580  2442 net.cpp:382] conv1 -> conv1
I0226 22:37:11.619652  2442 net.cpp:124] Setting up conv1
I0226 22:37:11.619668  2442 net.cpp:131] Top shape: 64 20 24 24 (737280)
I0226 22:37:11.619689  2442 net.cpp:139] Memory required for data: 3150080
I0226 22:37:11.619994  2442 layer_factory.hpp:77] Creating layer pool1
I0226 22:37:11.620031  2442 net.cpp:86] Creating Layer pool1
I0226 22:37:11.620055  2442 net.cpp:408] pool1 <- conv1
I0226 22:37:11.620113  2442 net.cpp:382] pool1 -> pool1
I0226 22:37:11.620144  2442 net.cpp:124] Setting up pool1
I0226 22:37:11.620158  2442 net.cpp:131] Top shape: 64 20 12 12 (184320)
I0226 22:37:11.620173  2442 net.cpp:139] Memory required for data: 3887360
I0226 22:37:11.620185  2442 layer_factory.hpp:77] Creating layer conv2
I0226 22:37:11.620203  2442 net.cpp:86] Creating Layer conv2
I0226 22:37:11.620220  2442 net.cpp:408] conv2 <- pool1
I0226 22:37:11.620239  2442 net.cpp:382] conv2 -> conv2
I0226 22:37:11.620643  2442 net.cpp:124] Setting up conv2
I0226 22:37:11.620668  2442 net.cpp:131] Top shape: 64 50 8 8 (204800)
I0226 22:37:11.620684  2442 net.cpp:139] Memory required for data: 4706560
I0226 22:37:11.620703  2442 layer_factory.hpp:77] Creating layer pool2
I0226 22:37:11.620720  2442 net.cpp:86] Creating Layer pool2
I0226 22:37:11.620971  2442 net.cpp:408] pool2 <- conv2
I0226 22:37:11.621006  2442 net.cpp:382] pool2 -> pool2
I0226 22:37:11.621078  2442 net.cpp:124] Setting up pool2
I0226 22:37:11.621101  2442 net.cpp:131] Top shape: 64 50 4 4 (51200)
I0226 22:37:11.621125  2442 net.cpp:139] Memory required for data: 4911360
I0226 22:37:11.621209  2442 layer_factory.hpp:77] Creating layer ip1
I0226 22:37:11.621284  2442 net.cpp:86] Creating Layer ip1
I0226 22:37:11.621311  2442 net.cpp:408] ip1 <- pool2
I0226 22:37:11.621346  2442 net.cpp:382] ip1 -> ip1
I0226 22:37:11.625741  2442 net.cpp:124] Setting up ip1
I0226 22:37:11.625774  2442 net.cpp:131] Top shape: 64 500 (32000)
I0226 22:37:11.625785  2442 net.cpp:139] Memory required for data: 5039360
I0226 22:37:11.625804  2442 layer_factory.hpp:77] Creating layer relu1
I0226 22:37:11.625818  2442 net.cpp:86] Creating Layer relu1
I0226 22:37:11.625828  2442 net.cpp:408] relu1 <- ip1
I0226 22:37:11.625840  2442 net.cpp:369] relu1 -> ip1 (in-place)
I0226 22:37:11.625862  2442 net.cpp:124] Setting up relu1
I0226 22:37:11.625874  2442 net.cpp:131] Top shape: 64 500 (32000)
I0226 22:37:11.625885  2442 net.cpp:139] Memory required for data: 5167360
I0226 22:37:11.625896  2442 layer_factory.hpp:77] Creating layer ip2
I0226 22:37:11.625914  2442 net.cpp:86] Creating Layer ip2
I0226 22:37:11.625924  2442 net.cpp:408] ip2 <- ip1
I0226 22:37:11.625941  2442 net.cpp:382] ip2 -> ip2
I0226 22:37:11.626019  2442 net.cpp:124] Setting up ip2
I0226 22:37:11.626030  2442 net.cpp:131] Top shape: 64 10 (640)
I0226 22:37:11.626044  2442 net.cpp:139] Memory required for data: 5169920
I0226 22:37:11.626060  2442 layer_factory.hpp:77] Creating layer loss
I0226 22:37:11.626085  2442 net.cpp:86] Creating Layer loss
I0226 22:37:11.626096  2442 net.cpp:408] loss <- ip2
I0226 22:37:11.626109  2442 net.cpp:408] loss <- label
I0226 22:37:11.626124  2442 net.cpp:382] loss -> loss
I0226 22:37:11.626142  2442 layer_factory.hpp:77] Creating layer loss
I0226 22:37:11.626168  2442 net.cpp:124] Setting up loss
I0226 22:37:11.626178  2442 net.cpp:131] Top shape: (1)
I0226 22:37:11.626191  2442 net.cpp:134]     with loss weight 1
I0226 22:37:11.626219  2442 net.cpp:139] Memory required for data: 5169924
I0226 22:37:11.626231  2442 net.cpp:200] loss needs backward computation.
I0226 22:37:11.626246  2442 net.cpp:200] ip2 needs backward computation.
I0226 22:37:11.626258  2442 net.cpp:200] relu1 needs backward computation.
I0226 22:37:11.626268  2442 net.cpp:200] ip1 needs backward computation.
I0226 22:37:11.626279  2442 net.cpp:200] pool2 needs backward computation.
I0226 22:37:11.626291  2442 net.cpp:200] conv2 needs backward computation.
I0226 22:37:11.626302  2442 net.cpp:200] pool1 needs backward computation.
I0226 22:37:11.626313  2442 net.cpp:200] conv1 needs backward computation.
I0226 22:37:11.626325  2442 net.cpp:202] mnist does not need backward computation.
I0226 22:37:11.626335  2442 net.cpp:244] This network produces output loss
I0226 22:37:11.626353  2442 net.cpp:257] Network initialization done.
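The "Memory required for data" counter above is simply the running sum of all top-blob element counts at 4 bytes per float32. Summing the blob sizes from the log reproduces the final figure:

```python
# "Memory required for data" = sum of top-blob elements * 4 bytes (float32).
# Element counts are taken from the "Top shape" lines in the log above.
blobs = [
    50176, 64,   # mnist: data (64x1x28x28), label (64)
    737280,      # conv1: 64x20x24x24
    184320,      # pool1: 64x20x12x12
    204800,      # conv2: 64x50x8x8
    51200,       # pool2: 64x50x4x4
    32000,       # ip1:   64x500
    32000,       # relu1: in-place, but still counted
    640,         # ip2:   64x10
    1,           # loss
]
print(sum(blobs) * 4)  # 5169924, the last "Memory required" value
```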
I0226 22:37:11.626588  2442 solver.cpp:190] Creating test net (#0) specified by net file: examples/mnist/lenet_train_test.prototxt
I0226 22:37:11.626632  2442 net.cpp:296] The NetState phase (1) differed from the phase (0) specified by a rule in layer mnist
I0226 22:37:11.626688  2442 net.cpp:53] Initializing net from parameters: 
name: "LeNet"
state {
  phase: TEST
}
layer {
  name: "mnist"
  type: "Data"
  top: "data"
  top: "label"
  include {
    phase: TEST
  }
  transform_param {
    scale: 0.00390625
  }
  data_param {
    source: "examples/mnist/mnist_test_lmdb"
    batch_size: 100
    backend: LMDB
  }
}
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  convolution_param {
    num_output: 20
    kernel_size: 5
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "pool1"
  type: "Pooling"
  bottom: "conv1"
  top: "pool1"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layer {
  name: "conv2"
  type: "Convolution"
  bottom: "pool1"
  top: "conv2"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  convolution_param {
    num_output: 50
    kernel_size: 5
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "pool2"
  type: "Pooling"
  bottom: "conv2"
  top: "pool2"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layer {
  name: "ip1"
  type: "InnerProduct"
  bottom: "pool2"
  top: "ip1"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 500
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "relu1"
  type: "ReLU"
  bottom: "ip1"
  top: "ip1"
}
layer {
  name: "ip2"
  type: "InnerProduct"
  bottom: "ip1"
  top: "ip2"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 10
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "accuracy"
  type: "Accuracy"
  bottom: "ip2"
  bottom: "label"
  top: "accuracy"
  include {
    phase: TEST
  }
}
layer {
  name: "loss"
  type: "SoftmaxWithLoss"
  bottom: "ip2"
  bottom: "label"
  top: "loss"
}
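Unlike the training net, the test net includes an `Accuracy` layer. A minimal sketch of what it computes (not Caffe's actual C++ implementation): the fraction of samples whose arg-max score equals the ground-truth label.

```python
# Minimal sketch of the Accuracy layer's top-1 metric: the fraction of
# samples whose highest-scoring class index equals the label.
def accuracy(scores, labels):
    correct = sum(
        1 for row, y in zip(scores, labels)
        if max(range(len(row)), key=row.__getitem__) == y
    )
    return correct / len(labels)

print(accuracy([[0.1, 0.9], [0.8, 0.2], [0.3, 0.7]], [1, 1, 1]))  # 2/3
```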
I0226 22:37:11.626976  2442 layer_factory.hpp:77] Creating layer mnist
I0226 22:37:11.627063  2442 db_lmdb.cpp:35] Opened lmdb examples/mnist/mnist_test_lmdb
I0226 22:37:11.627090  2442 net.cpp:86] Creating Layer mnist
I0226 22:37:11.627108  2442 net.cpp:382] mnist -> data
I0226 22:37:11.627128  2442 net.cpp:382] mnist -> label
I0226 22:37:11.627156  2442 data_layer.cpp:45] output data size: 100,1,28,28
I0226 22:37:11.627238  2442 net.cpp:124] Setting up mnist
I0226 22:37:11.627250  2442 net.cpp:131] Top shape: 100 1 28 28 (78400)
I0226 22:37:11.627265  2442 net.cpp:131] Top shape: 100 (100)
I0226 22:37:11.627276  2442 net.cpp:139] Memory required for data: 314000
I0226 22:37:11.627323  2442 layer_factory.hpp:77] Creating layer label_mnist_1_split
I0226 22:37:11.627349  2442 net.cpp:86] Creating Layer label_mnist_1_split
I0226 22:37:11.627360  2442 net.cpp:408] label_mnist_1_split <- label
I0226 22:37:11.627377  2442 net.cpp:382] label_mnist_1_split -> label_mnist_1_split_0
I0226 22:37:11.627393  2442 net.cpp:382] label_mnist_1_split -> label_mnist_1_split_1
I0226 22:37:11.627410  2442 net.cpp:124] Setting up label_mnist_1_split
I0226 22:37:11.627421  2442 net.cpp:131] Top shape: 100 (100)
I0226 22:37:11.627435  2442 net.cpp:131] Top shape: 100 (100)
I0226 22:37:11.627449  2442 net.cpp:139] Memory required for data: 314800
I0226 22:37:11.627460  2442 layer_factory.hpp:77] Creating layer conv1
I0226 22:37:11.627487  2442 net.cpp:86] Creating Layer conv1
I0226 22:37:11.627501  2442 net.cpp:408] conv1 <- data
I0226 22:37:11.627524  2442 net.cpp:382] conv1 -> conv1
I0226 22:37:11.627596  2442 net.cpp:124] Setting up conv1
I0226 22:37:11.627625  2442 net.cpp:131] Top shape: 100 20 24 24 (1152000)
I0226 22:37:11.627648  2442 net.cpp:139] Memory required for data: 4922800
I0226 22:37:11.627669  2442 layer_factory.hpp:77] Creating layer pool1
I0226 22:37:11.627686  2442 net.cpp:86] Creating Layer pool1
I0226 22:37:11.627724  2442 net.cpp:408] pool1 <- conv1
I0226 22:37:11.627744  2442 net.cpp:382] pool1 -> pool1
I0226 22:37:11.627763  2442 net.cpp:124] Setting up pool1
I0226 22:37:11.627779  2442 net.cpp:131] Top shape: 100 20 12 12 (288000)
I0226 22:37:11.627791  2442 net.cpp:139] Memory required for data: 6074800
I0226 22:37:11.627802  2442 layer_factory.hpp:77] Creating layer conv2
I0226 22:37:11.627822  2442 net.cpp:86] Creating Layer conv2
I0226 22:37:11.627837  2442 net.cpp:408] conv2 <- pool1
I0226 22:37:11.627851  2442 net.cpp:382] conv2 -> conv2
I0226 22:37:11.628115  2442 net.cpp:124] Setting up conv2
I0226 22:37:11.628129  2442 net.cpp:131] Top shape: 100 50 8 8 (320000)
I0226 22:37:11.628141  2442 net.cpp:139] Memory required for data: 7354800
I0226 22:37:11.628159  2442 layer_factory.hpp:77] Creating layer pool2
I0226 22:37:11.628172  2442 net.cpp:86] Creating Layer pool2
I0226 22:37:11.628182  2442 net.cpp:408] pool2 <- conv2
I0226 22:37:11.628197  2442 net.cpp:382] pool2 -> pool2
I0226 22:37:11.628216  2442 net.cpp:124] Setting up pool2
I0226 22:37:11.628226  2442 net.cpp:131] Top shape: 100 50 4 4 (80000)
I0226 22:37:11.628239  2442 net.cpp:139] Memory required for data: 7674800
I0226 22:37:11.628254  2442 layer_factory.hpp:77] Creating layer ip1
I0226 22:37:11.628270  2442 net.cpp:86] Creating Layer ip1
I0226 22:37:11.628281  2442 net.cpp:408] ip1 <- pool2
I0226 22:37:11.628295  2442 net.cpp:382] ip1 -> ip1
I0226 22:37:11.632292  2442 net.cpp:124] Setting up ip1
I0226 22:37:11.632319  2442 net.cpp:131] Top shape: 100 500 (50000)
I0226 22:37:11.632335  2442 net.cpp:139] Memory required for data: 7874800
I0226 22:37:11.632359  2442 layer_factory.hpp:77] Creating layer relu1
I0226 22:37:11.632377  2442 net.cpp:86] Creating Layer relu1
I0226 22:37:11.632388  2442 net.cpp:408] relu1 <- ip1
I0226 22:37:11.632402  2442 net.cpp:369] relu1 -> ip1 (in-place)
I0226 22:37:11.632417  2442 net.cpp:124] Setting up relu1
I0226 22:37:11.632426  2442 net.cpp:131] Top shape: 100 500 (50000)
I0226 22:37:11.632438  2442 net.cpp:139] Memory required for data: 8074800
I0226 22:37:11.632448  2442 layer_factory.hpp:77] Creating layer ip2
I0226 22:37:11.632467  2442 net.cpp:86] Creating Layer ip2
I0226 22:37:11.632478  2442 net.cpp:408] ip2 <- ip1
I0226 22:37:11.632493  2442 net.cpp:382] ip2 -> ip2
I0226 22:37:11.632560  2442 net.cpp:124] Setting up ip2
I0226 22:37:11.632571  2442 net.cpp:131] Top shape: 100 10 (1000)
I0226 22:37:11.632583  2442 net.cpp:139] Memory required for data: 8078800
I0226 22:37:11.632597  2442 layer_factory.hpp:77] Creating layer ip2_ip2_0_split
I0226 22:37:11.632611  2442 net.cpp:86] Creating Layer ip2_ip2_0_split
I0226 22:37:11.632622  2442 net.cpp:408] ip2_ip2_0_split <- ip2
I0226 22:37:11.632634  2442 net.cpp:382] ip2_ip2_0_split -> ip2_ip2_0_split_0
I0226 22:37:11.632648  2442 net.cpp:382] ip2_ip2_0_split -> ip2_ip2_0_split_1
I0226 22:37:11.632663  2442 net.cpp:124] Setting up ip2_ip2_0_split
I0226 22:37:11.632673  2442 net.cpp:131] Top shape: 100 10 (1000)
I0226 22:37:11.632684  2442 net.cpp:131] Top shape: 100 10 (1000)
I0226 22:37:11.632694  2442 net.cpp:139] Memory required for data: 8086800
I0226 22:37:11.632704  2442 layer_factory.hpp:77] Creating layer accuracy
I0226 22:37:11.632726  2442 net.cpp:86] Creating Layer accuracy
I0226 22:37:11.632737  2442 net.cpp:408] accuracy <- ip2_ip2_0_split_0
I0226 22:37:11.632750  2442 net.cpp:408] accuracy <- label_mnist_1_split_0
I0226 22:37:11.632762  2442 net.cpp:382] accuracy -> accuracy
I0226 22:37:11.632777  2442 net.cpp:124] Setting up accuracy
I0226 22:37:11.632786  2442 net.cpp:131] Top shape: (1)
I0226 22:37:11.632797  2442 net.cpp:139] Memory required for data: 8086804
I0226 22:37:11.632807  2442 layer_factory.hpp:77] Creating layer loss
I0226 22:37:11.632822  2442 net.cpp:86] Creating Layer loss
I0226 22:37:11.632830  2442 net.cpp:408] loss <- ip2_ip2_0_split_1
I0226 22:37:11.632843  2442 net.cpp:408] loss <- label_mnist_1_split_1
I0226 22:37:11.632855  2442 net.cpp:382] loss -> loss
I0226 22:37:11.632870  2442 layer_factory.hpp:77] Creating layer loss
I0226 22:37:11.632928  2442 net.cpp:124] Setting up loss
I0226 22:37:11.632939  2442 net.cpp:131] Top shape: (1)
I0226 22:37:11.632951  2442 net.cpp:134]     with loss weight 1
I0226 22:37:11.632970  2442 net.cpp:139] Memory required for data: 8086808
I0226 22:37:11.632982  2442 net.cpp:200] loss needs backward computation.
I0226 22:37:11.632993  2442 net.cpp:202] accuracy does not need backward computation.
I0226 22:37:11.633005  2442 net.cpp:200] ip2_ip2_0_split needs backward computation.
I0226 22:37:11.633015  2442 net.cpp:200] ip2 needs backward computation.
I0226 22:37:11.633025  2442 net.cpp:200] relu1 needs backward computation.
I0226 22:37:11.633036  2442 net.cpp:200] ip1 needs backward computation.
I0226 22:37:11.633047  2442 net.cpp:200] pool2 needs backward computation.
I0226 22:37:11.633057  2442 net.cpp:200] conv2 needs backward computation.
I0226 22:37:11.633067  2442 net.cpp:200] pool1 needs backward computation.
I0226 22:37:11.633077  2442 net.cpp:200] conv1 needs backward computation.
I0226 22:37:11.633090  2442 net.cpp:202] label_mnist_1_split does not need backward computation.
I0226 22:37:11.633100  2442 net.cpp:202] mnist does not need backward computation.
I0226 22:37:11.633108  2442 net.cpp:244] This network produces output accuracy
I0226 22:37:11.633121  2442 net.cpp:244] This network produces output loss
I0226 22:37:11.633142  2442 net.cpp:257] Network initialization done.
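The batch sizes explain two things seen in the training log below. With `test_iter: 100` and a test batch of 100, each test pass covers exactly the 10,000 test images. With a train batch of 64, one epoch is 60000 / 64 = 937.5 iterations, which is why "Restarting data prefetching" appears near iterations 900-1000, 1800-1900, and so on:

```python
# test_iter (100) * test batch_size (100) = full 10,000-image test set.
assert 100 * 100 == 10000

# Train batch_size is 64, so one epoch is 937.5 iterations and
# max_iter = 10000 is roughly 10.7 epochs.
iters_per_epoch = 60000 / 64
print(iters_per_epoch, 10000 / iters_per_epoch)
```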
I0226 22:37:11.633199  2442 solver.cpp:57] Solver scaffolding done.
I0226 22:37:11.633236  2442 caffe.cpp:239] Starting Optimization
I0226 22:37:11.633246  2442 solver.cpp:289] Solving LeNet
I0226 22:37:11.633256  2442 solver.cpp:290] Learning Rate Policy: inv
I0226 22:37:11.633992  2442 solver.cpp:347] Iteration 0, Testing net (#0)
I0226 22:37:17.371232  2444 data_layer.cpp:73] Restarting data prefetching from start.
I0226 22:37:17.609834  2442 solver.cpp:414]     Test net output #0: accuracy = 0.0899
I0226 22:37:17.609889  2442 solver.cpp:414]     Test net output #1: loss = 2.40369 (* 1 = 2.40369 loss)
I0226 22:37:17.707661  2442 solver.cpp:239] Iteration 0 (0 iter/s, 6.074s/100 iters), loss = 2.46128
I0226 22:37:17.707710  2442 solver.cpp:258]     Train net output #0: loss = 2.46128 (* 1 = 2.46128 loss)
I0226 22:37:17.707731  2442 sgd_solver.cpp:112] Iteration 0, lr = 0.01
I0226 22:37:27.450498  2442 solver.cpp:239] Iteration 100 (10.2648 iter/s, 9.742s/100 iters), loss = 0.203452
I0226 22:37:27.450556  2442 solver.cpp:258]     Train net output #0: loss = 0.203452 (* 1 = 0.203452 loss)
I0226 22:37:27.450573  2442 sgd_solver.cpp:112] Iteration 100, lr = 0.00992565
I0226 22:37:37.300575  2442 solver.cpp:239] Iteration 200 (10.1523 iter/s, 9.85s/100 iters), loss = 0.127584
I0226 22:37:37.300628  2442 solver.cpp:258]     Train net output #0: loss = 0.127584 (* 1 = 0.127584 loss)
I0226 22:37:37.300644  2442 sgd_solver.cpp:112] Iteration 200, lr = 0.00985258
I0226 22:37:47.227037  2442 solver.cpp:239] Iteration 300 (10.0746 iter/s, 9.926s/100 iters), loss = 0.152882
I0226 22:37:47.227156  2442 solver.cpp:258]     Train net output #0: loss = 0.152882 (* 1 = 0.152882 loss)
I0226 22:37:47.227174  2442 sgd_solver.cpp:112] Iteration 300, lr = 0.00978075
I0226 22:37:57.398344  2442 solver.cpp:239] Iteration 400 (9.83188 iter/s, 10.171s/100 iters), loss = 0.0781792
I0226 22:37:57.398398  2442 solver.cpp:258]     Train net output #0: loss = 0.078179 (* 1 = 0.078179 loss)
I0226 22:37:57.398416  2442 sgd_solver.cpp:112] Iteration 400, lr = 0.00971013
I0226 22:38:07.812530  2442 solver.cpp:347] Iteration 500, Testing net (#0)
I0226 22:38:14.445535  2444 data_layer.cpp:73] Restarting data prefetching from start.
I0226 22:38:14.759711  2442 solver.cpp:414]     Test net output #0: accuracy = 0.9719
I0226 22:38:14.759773  2442 solver.cpp:414]     Test net output #1: loss = 0.0836109 (* 1 = 0.0836109 loss)
I0226 22:38:14.888417  2442 solver.cpp:239] Iteration 500 (5.71755 iter/s, 17.49s/100 iters), loss = 0.0671692
I0226 22:38:14.888476  2442 solver.cpp:258]     Train net output #0: loss = 0.0671691 (* 1 = 0.0671691 loss)
I0226 22:38:14.888497  2442 sgd_solver.cpp:112] Iteration 500, lr = 0.00964069
I0226 22:38:25.550348  2442 solver.cpp:239] Iteration 600 (9.37998 iter/s, 10.661s/100 iters), loss = 0.0683537
I0226 22:38:25.550529  2442 solver.cpp:258]     Train net output #0: loss = 0.0683536 (* 1 = 0.0683536 loss)
I0226 22:38:25.550545  2442 sgd_solver.cpp:112] Iteration 600, lr = 0.0095724
I0226 22:38:35.250106  2442 solver.cpp:239] Iteration 700 (10.3103 iter/s, 9.699s/100 iters), loss = 0.119176
I0226 22:38:35.250159  2442 solver.cpp:258]     Train net output #0: loss = 0.119176 (* 1 = 0.119176 loss)
I0226 22:38:35.250174  2442 sgd_solver.cpp:112] Iteration 700, lr = 0.00950522
I0226 22:38:44.834887  2442 solver.cpp:239] Iteration 800 (10.4341 iter/s, 9.584s/100 iters), loss = 0.191517
I0226 22:38:44.834939  2442 solver.cpp:258]     Train net output #0: loss = 0.191517 (* 1 = 0.191517 loss)
I0226 22:38:44.834955  2442 sgd_solver.cpp:112] Iteration 800, lr = 0.00943913
I0226 22:38:54.965384  2442 solver.cpp:239] Iteration 900 (9.87167 iter/s, 10.13s/100 iters), loss = 0.174257
I0226 22:38:54.965435  2442 solver.cpp:258]     Train net output #0: loss = 0.174257 (* 1 = 0.174257 loss)
I0226 22:38:54.965449  2442 sgd_solver.cpp:112] Iteration 900, lr = 0.00937411
I0226 22:38:58.649479  2443 data_layer.cpp:73] Restarting data prefetching from start.
I0226 22:39:05.269567  2442 solver.cpp:347] Iteration 1000, Testing net (#0)
I0226 22:39:11.354693  2444 data_layer.cpp:73] Restarting data prefetching from start.
I0226 22:39:11.597376  2442 solver.cpp:414]     Test net output #0: accuracy = 0.9808
I0226 22:39:11.597430  2442 solver.cpp:414]     Test net output #1: loss = 0.0600035 (* 1 = 0.0600035 loss)
I0226 22:39:11.690896  2442 solver.cpp:239] Iteration 1000 (5.97907 iter/s, 16.725s/100 iters), loss = 0.120871
I0226 22:39:11.690948  2442 solver.cpp:258]     Train net output #0: loss = 0.120871 (* 1 = 0.120871 loss)
I0226 22:39:11.690968  2442 sgd_solver.cpp:112] Iteration 1000, lr = 0.00931012
I0226 22:39:21.372987  2442 solver.cpp:239] Iteration 1100 (10.3284 iter/s, 9.682s/100 iters), loss = 0.00811403
I0226 22:39:21.373046  2442 solver.cpp:258]     Train net output #0: loss = 0.00811392 (* 1 = 0.00811392 loss)
I0226 22:39:21.373065  2442 sgd_solver.cpp:112] Iteration 1100, lr = 0.00924715
I0226 22:39:31.063477  2442 solver.cpp:239] Iteration 1200 (10.3199 iter/s, 9.69s/100 iters), loss = 0.0164351
I0226 22:39:31.063616  2442 solver.cpp:258]     Train net output #0: loss = 0.0164349 (* 1 = 0.0164349 loss)
I0226 22:39:31.063635  2442 sgd_solver.cpp:112] Iteration 1200, lr = 0.00918515
I0226 22:39:40.919028  2442 solver.cpp:239] Iteration 1300 (10.1471 iter/s, 9.855s/100 iters), loss = 0.0228002
I0226 22:39:40.919081  2442 solver.cpp:258]     Train net output #0: loss = 0.0228001 (* 1 = 0.0228001 loss)
I0226 22:39:40.919096  2442 sgd_solver.cpp:112] Iteration 1300, lr = 0.00912412
I0226 22:39:50.860584  2442 solver.cpp:239] Iteration 1400 (10.0594 iter/s, 9.941s/100 iters), loss = 0.00871326
I0226 22:39:50.860630  2442 solver.cpp:258]     Train net output #0: loss = 0.00871314 (* 1 = 0.00871314 loss)
I0226 22:39:50.860648  2442 sgd_solver.cpp:112] Iteration 1400, lr = 0.00906403
I0226 22:40:00.505358  2442 solver.cpp:347] Iteration 1500, Testing net (#0)
I0226 22:40:06.325935  2444 data_layer.cpp:73] Restarting data prefetching from start.
I0226 22:40:06.561522  2442 solver.cpp:414]     Test net output #0: accuracy = 0.9833
I0226 22:40:06.561571  2442 solver.cpp:414]     Test net output #1: loss = 0.0514972 (* 1 = 0.0514972 loss)
I0226 22:40:06.656510  2442 solver.cpp:239] Iteration 1500 (6.33112 iter/s, 15.795s/100 iters), loss = 0.0838645
I0226 22:40:06.656564  2442 solver.cpp:258]     Train net output #0: loss = 0.0838644 (* 1 = 0.0838644 loss)
I0226 22:40:06.656580  2442 sgd_solver.cpp:112] Iteration 1500, lr = 0.00900485
I0226 22:40:16.378629  2442 solver.cpp:239] Iteration 1600 (10.2859 iter/s, 9.722s/100 iters), loss = 0.141069
I0226 22:40:16.378680  2442 solver.cpp:258]     Train net output #0: loss = 0.141069 (* 1 = 0.141069 loss)
I0226 22:40:16.378697  2442 sgd_solver.cpp:112] Iteration 1600, lr = 0.00894657
I0226 22:40:26.211732  2442 solver.cpp:239] Iteration 1700 (10.1698 iter/s, 9.833s/100 iters), loss = 0.0447211
I0226 22:40:26.211783  2442 solver.cpp:258]     Train net output #0: loss = 0.044721 (* 1 = 0.044721 loss)
I0226 22:40:26.211796  2442 sgd_solver.cpp:112] Iteration 1700, lr = 0.00888916
I0226 22:40:35.953567  2442 solver.cpp:239] Iteration 1800 (10.2659 iter/s, 9.741s/100 iters), loss = 0.0174017
I0226 22:40:35.953621  2442 solver.cpp:258]     Train net output #0: loss = 0.0174016 (* 1 = 0.0174016 loss)
I0226 22:40:35.953637  2442 sgd_solver.cpp:112] Iteration 1800, lr = 0.0088326
I0226 22:40:42.954711  2443 data_layer.cpp:73] Restarting data prefetching from start.
I0226 22:40:45.889737  2442 solver.cpp:239] Iteration 1900 (10.0644 iter/s, 9.936s/100 iters), loss = 0.105484
I0226 22:40:45.889784  2442 solver.cpp:258]     Train net output #0: loss = 0.105484 (* 1 = 0.105484 loss)
I0226 22:40:45.889801  2442 sgd_solver.cpp:112] Iteration 1900, lr = 0.00877687
I0226 22:40:55.702467  2442 solver.cpp:347] Iteration 2000, Testing net (#0)
I0226 22:41:01.541447  2444 data_layer.cpp:73] Restarting data prefetching from start.
I0226 22:41:01.797310  2442 solver.cpp:414]     Test net output #0: accuracy = 0.9847
I0226 22:41:01.797366  2442 solver.cpp:414]     Test net output #1: loss = 0.0454274 (* 1 = 0.0454274 loss)
I0226 22:41:01.895714  2442 solver.cpp:239] Iteration 2000 (6.24805 iter/s, 16.005s/100 iters), loss = 0.00678412
I0226 22:41:01.895766  2442 solver.cpp:258]     Train net output #0: loss = 0.00678401 (* 1 = 0.00678401 loss)
I0226 22:41:01.895781  2442 sgd_solver.cpp:112] Iteration 2000, lr = 0.00872196
I0226 22:41:12.291903  2442 solver.cpp:239] Iteration 2100 (9.61908 iter/s, 10.396s/100 iters), loss = 0.0128036
I0226 22:41:12.291952  2442 solver.cpp:258]     Train net output #0: loss = 0.0128035 (* 1 = 0.0128035 loss)
I0226 22:41:12.291972  2442 sgd_solver.cpp:112] Iteration 2100, lr = 0.00866784
I0226 22:41:22.466570  2442 solver.cpp:239] Iteration 2200 (9.82898 iter/s, 10.174s/100 iters), loss = 0.0142575
I0226 22:41:22.466740  2442 solver.cpp:258]     Train net output #0: loss = 0.0142573 (* 1 = 0.0142573 loss)
I0226 22:41:22.466753  2442 sgd_solver.cpp:112] Iteration 2200, lr = 0.0086145
I0226 22:41:32.224442  2442 solver.cpp:239] Iteration 2300 (10.2491 iter/s, 9.757s/100 iters), loss = 0.0805506
I0226 22:41:32.224493  2442 solver.cpp:258]     Train net output #0: loss = 0.0805505 (* 1 = 0.0805505 loss)
I0226 22:41:32.224509  2442 sgd_solver.cpp:112] Iteration 2300, lr = 0.00856192
I0226 22:41:42.291199  2442 solver.cpp:239] Iteration 2400 (9.93443 iter/s, 10.066s/100 iters), loss = 0.00802351
I0226 22:41:42.291260  2442 solver.cpp:258]     Train net output #0: loss = 0.00802335 (* 1 = 0.00802335 loss)
I0226 22:41:42.291280  2442 sgd_solver.cpp:112] Iteration 2400, lr = 0.00851008
I0226 22:41:51.945683  2442 solver.cpp:347] Iteration 2500, Testing net (#0)
I0226 22:41:57.714076  2444 data_layer.cpp:73] Restarting data prefetching from start.
I0226 22:41:57.967310  2442 solver.cpp:414]     Test net output #0: accuracy = 0.984
I0226 22:41:57.967372  2442 solver.cpp:414]     Test net output #1: loss = 0.0476773 (* 1 = 0.0476773 loss)
I0226 22:41:58.071966  2442 solver.cpp:239] Iteration 2500 (6.33714 iter/s, 15.78s/100 iters), loss = 0.0273458
I0226 22:41:58.072018  2442 solver.cpp:258]     Train net output #0: loss = 0.0273456 (* 1 = 0.0273456 loss)
I0226 22:41:58.072034  2442 sgd_solver.cpp:112] Iteration 2500, lr = 0.00845897
I0226 22:42:07.869709  2442 solver.cpp:239] Iteration 2600 (10.2072 iter/s, 9.797s/100 iters), loss = 0.103635
I0226 22:42:07.869753  2442 solver.cpp:258]     Train net output #0: loss = 0.103635 (* 1 = 0.103635 loss)
I0226 22:42:07.869774  2442 sgd_solver.cpp:112] Iteration 2600, lr = 0.00840857
I0226 22:42:17.519234  2442 solver.cpp:239] Iteration 2700 (10.3638 iter/s, 9.649s/100 iters), loss = 0.0917561
I0226 22:42:17.519301  2442 solver.cpp:258]     Train net output #0: loss = 0.091756 (* 1 = 0.091756 loss)
I0226 22:42:17.519318  2442 sgd_solver.cpp:112] Iteration 2700, lr = 0.00835886
I0226 22:42:27.523949  2442 solver.cpp:239] Iteration 2800 (9.996 iter/s, 10.004s/100 iters), loss = 0.00693497
I0226 22:42:27.524004  2442 solver.cpp:258]     Train net output #0: loss = 0.00693486 (* 1 = 0.00693486 loss)
I0226 22:42:27.524020  2442 sgd_solver.cpp:112] Iteration 2800, lr = 0.00830984
I0226 22:42:28.321317  2443 data_layer.cpp:73] Restarting data prefetching from start.
I0226 22:42:37.224385  2442 solver.cpp:239] Iteration 2900 (10.3093 iter/s, 9.7s/100 iters), loss = 0.0207115
I0226 22:42:37.224438  2442 solver.cpp:258]     Train net output #0: loss = 0.0207114 (* 1 = 0.0207114 loss)
I0226 22:42:37.224454  2442 sgd_solver.cpp:112] Iteration 2900, lr = 0.00826148
I0226 22:42:46.819875  2442 solver.cpp:347] Iteration 3000, Testing net (#0)
I0226 22:42:52.522219  2444 data_layer.cpp:73] Restarting data prefetching from start.
I0226 22:42:52.777567  2442 solver.cpp:414]     Test net output #0: accuracy = 0.987
I0226 22:42:52.777616  2442 solver.cpp:414]     Test net output #1: loss = 0.0408713 (* 1 = 0.0408713 loss)
I0226 22:42:52.876366  2442 solver.cpp:239] Iteration 3000 (6.38937 iter/s, 15.651s/100 iters), loss = 0.0119719
I0226 22:42:52.876420  2442 solver.cpp:258]     Train net output #0: loss = 0.0119718 (* 1 = 0.0119718 loss)
I0226 22:42:52.876435  2442 sgd_solver.cpp:112] Iteration 3000, lr = 0.00821377
I0226 22:43:02.562208  2442 solver.cpp:239] Iteration 3100 (10.3252 iter/s, 9.685s/100 iters), loss = 0.00936197
I0226 22:43:02.562374  2442 solver.cpp:258]     Train net output #0: loss = 0.00936181 (* 1 = 0.00936181 loss)
I0226 22:43:02.562392  2442 sgd_solver.cpp:112] Iteration 3100, lr = 0.0081667
I0226 22:43:12.228516  2442 solver.cpp:239] Iteration 3200 (10.3455 iter/s, 9.666s/100 iters), loss = 0.0061
I0226 22:43:12.228567  2442 solver.cpp:258]     Train net output #0: loss = 0.00609983 (* 1 = 0.00609983 loss)
I0226 22:43:12.228585  2442 sgd_solver.cpp:112] Iteration 3200, lr = 0.00812025
I0226 22:43:21.852133  2442 solver.cpp:239] Iteration 3300 (10.3918 iter/s, 9.623s/100 iters), loss = 0.0449563
I0226 22:43:21.852195  2442 solver.cpp:258]     Train net output #0: loss = 0.0449562 (* 1 = 0.0449562 loss)
I0226 22:43:21.852208  2442 sgd_solver.cpp:112] Iteration 3300, lr = 0.00807442
I0226 22:43:31.578043  2442 solver.cpp:239] Iteration 3400 (10.2828 iter/s, 9.725s/100 iters), loss = 0.0159143
I0226 22:43:31.578091  2442 solver.cpp:258]     Train net output #0: loss = 0.0159141 (* 1 = 0.0159141 loss)
I0226 22:43:31.578109  2442 sgd_solver.cpp:112] Iteration 3400, lr = 0.00802918
I0226 22:43:41.093394  2442 solver.cpp:347] Iteration 3500, Testing net (#0)
I0226 22:43:46.909528  2444 data_layer.cpp:73] Restarting data prefetching from start.
I0226 22:43:47.146631  2442 solver.cpp:414]     Test net output #0: accuracy = 0.9827
I0226 22:43:47.146687  2442 solver.cpp:414]     Test net output #1: loss = 0.0511475 (* 1 = 0.0511475 loss)
I0226 22:43:47.242857  2442 solver.cpp:239] Iteration 3500 (6.38407 iter/s, 15.664s/100 iters), loss = 0.00554056
I0226 22:43:47.242905  2442 solver.cpp:258]     Train net output #0: loss = 0.0055404 (* 1 = 0.0055404 loss)
I0226 22:43:47.242923  2442 sgd_solver.cpp:112] Iteration 3500, lr = 0.00798454
I0226 22:43:56.946308  2442 solver.cpp:239] Iteration 3600 (10.3061 iter/s, 9.703s/100 iters), loss = 0.0293207
I0226 22:43:56.946359  2442 solver.cpp:258]     Train net output #0: loss = 0.0293205 (* 1 = 0.0293205 loss)
I0226 22:43:56.946374  2442 sgd_solver.cpp:112] Iteration 3600, lr = 0.00794046
I0226 22:44:06.620342  2442 solver.cpp:239] Iteration 3700 (10.3381 iter/s, 9.673s/100 iters), loss = 0.0197716
I0226 22:44:06.620400  2442 solver.cpp:258]     Train net output #0: loss = 0.0197715 (* 1 = 0.0197715 loss)
I0226 22:44:06.620417  2442 sgd_solver.cpp:112] Iteration 3700, lr = 0.00789695
I0226 22:44:10.976276  2443 data_layer.cpp:73] Restarting data prefetching from start.
I0226 22:44:16.277290  2442 solver.cpp:239] Iteration 3800 (10.3563 iter/s, 9.656s/100 iters), loss = 0.00992929
I0226 22:44:16.277463  2442 solver.cpp:258]     Train net output #0: loss = 0.00992913 (* 1 = 0.00992913 loss)
I0226 22:44:16.277480  2442 sgd_solver.cpp:112] Iteration 3800, lr = 0.007854
I0226 22:44:25.952764  2442 solver.cpp:239] Iteration 3900 (10.3359 iter/s, 9.675s/100 iters), loss = 0.0400946
I0226 22:44:25.952816  2442 solver.cpp:258]     Train net output #0: loss = 0.0400944 (* 1 = 0.0400944 loss)
I0226 22:44:25.952831  2442 sgd_solver.cpp:112] Iteration 3900, lr = 0.00781158
I0226 22:44:35.516759  2442 solver.cpp:347] Iteration 4000, Testing net (#0)
I0226 22:44:41.269843  2444 data_layer.cpp:73] Restarting data prefetching from start.
I0226 22:44:41.504411  2442 solver.cpp:414]     Test net output #0: accuracy = 0.9894
I0226 22:44:41.504463  2442 solver.cpp:414]     Test net output #1: loss = 0.0313221 (* 1 = 0.0313221 loss)
I0226 22:44:41.597961  2442 solver.cpp:239] Iteration 4000 (6.39182 iter/s, 15.645s/100 iters), loss = 0.0144327
I0226 22:44:41.598017  2442 solver.cpp:258]     Train net output #0: loss = 0.0144325 (* 1 = 0.0144325 loss)
I0226 22:44:41.598042  2442 sgd_solver.cpp:112] Iteration 4000, lr = 0.00776969
I0226 22:44:51.331691  2442 solver.cpp:239] Iteration 4100 (10.2743 iter/s, 9.733s/100 iters), loss = 0.0163888
I0226 22:44:51.331867  2442 solver.cpp:258]     Train net output #0: loss = 0.0163887 (* 1 = 0.0163887 loss)
I0226 22:44:51.331887  2442 sgd_solver.cpp:112] Iteration 4100, lr = 0.00772833
I0226 22:45:00.979555  2442 solver.cpp:239] Iteration 4200 (10.3659 iter/s, 9.647s/100 iters), loss = 0.0113799
I0226 22:45:00.979607  2442 solver.cpp:258]     Train net output #0: loss = 0.0113797 (* 1 = 0.0113797 loss)
I0226 22:45:00.979621  2442 sgd_solver.cpp:112] Iteration 4200, lr = 0.00768748
I0226 22:45:10.706171  2442 solver.cpp:239] Iteration 4300 (10.2817 iter/s, 9.726s/100 iters), loss = 0.0473539
I0226 22:45:10.706226  2442 solver.cpp:258]     Train net output #0: loss = 0.0473537 (* 1 = 0.0473537 loss)
I0226 22:45:10.706243  2442 sgd_solver.cpp:112] Iteration 4300, lr = 0.00764712
I0226 22:45:20.392432  2442 solver.cpp:239] Iteration 4400 (10.3242 iter/s, 9.686s/100 iters), loss = 0.0242695
I0226 22:45:20.392488  2442 solver.cpp:258]     Train net output #0: loss = 0.0242693 (* 1 = 0.0242693 loss)
I0226 22:45:20.392509  2442 sgd_solver.cpp:112] Iteration 4400, lr = 0.00760726
I0226 22:45:30.040961  2442 solver.cpp:347] Iteration 4500, Testing net (#0)
I0226 22:45:35.948333  2444 data_layer.cpp:73] Restarting data prefetching from start.
I0226 22:45:36.191329  2442 solver.cpp:414]     Test net output #0: accuracy = 0.9873
I0226 22:45:36.191385  2442 solver.cpp:414]     Test net output #1: loss = 0.0384393 (* 1 = 0.0384393 loss)
I0226 22:45:36.287822  2442 solver.cpp:239] Iteration 4500 (6.29129 iter/s, 15.895s/100 iters), loss = 0.00526828
I0226 22:45:36.287874  2442 solver.cpp:258]     Train net output #0: loss = 0.0052681 (* 1 = 0.0052681 loss)
I0226 22:45:36.287909  2442 sgd_solver.cpp:112] Iteration 4500, lr = 0.00756788
I0226 22:45:46.326690  2442 solver.cpp:239] Iteration 4600 (9.96214 iter/s, 10.038s/100 iters), loss = 0.0157644
I0226 22:45:46.326746  2442 solver.cpp:258]     Train net output #0: loss = 0.0157642 (* 1 = 0.0157642 loss)
I0226 22:45:46.326761  2442 sgd_solver.cpp:112] Iteration 4600, lr = 0.00752897
I0226 22:45:54.865051  2443 data_layer.cpp:73] Restarting data prefetching from start.
I0226 22:45:56.655055  2442 solver.cpp:239] Iteration 4700 (9.68242 iter/s, 10.328s/100 iters), loss = 0.00579816
I0226 22:45:56.655112  2442 solver.cpp:258]     Train net output #0: loss = 0.00579801 (* 1 = 0.00579801 loss)
I0226 22:45:56.655128  2442 sgd_solver.cpp:112] Iteration 4700, lr = 0.00749052
I0226 22:46:07.484783  2442 solver.cpp:239] Iteration 4800 (9.23446 iter/s, 10.829s/100 iters), loss = 0.0136092
I0226 22:46:07.484933  2442 solver.cpp:258]     Train net output #0: loss = 0.013609 (* 1 = 0.013609 loss)
I0226 22:46:07.484949  2442 sgd_solver.cpp:112] Iteration 4800, lr = 0.00745253
I0226 22:46:17.634359  2442 solver.cpp:239] Iteration 4900 (9.85319 iter/s, 10.149s/100 iters), loss = 0.00842682
I0226 22:46:17.634413  2442 solver.cpp:258]     Train net output #0: loss = 0.00842665 (* 1 = 0.00842665 loss)
I0226 22:46:17.634428  2442 sgd_solver.cpp:112] Iteration 4900, lr = 0.00741498
I0226 22:46:27.167671  2442 solver.cpp:464] Snapshotting to binary proto file examples/mnist/lenet_iter_5000.caffemodel
I0226 22:46:27.177868  2442 sgd_solver.cpp:284] Snapshotting solver state to binary proto file examples/mnist/lenet_iter_5000.solverstate
I0226 22:46:27.182534  2442 solver.cpp:347] Iteration 5000, Testing net (#0)
I0226 22:46:32.830696  2444 data_layer.cpp:73] Restarting data prefetching from start.
I0226 22:46:33.063828  2442 solver.cpp:414]     Test net output #0: accuracy = 0.9899
I0226 22:46:33.063880  2442 solver.cpp:414]     Test net output #1: loss = 0.0316008 (* 1 = 0.0316008 loss)
I0226 22:46:33.157487  2442 solver.cpp:239] Iteration 5000 (6.44205 iter/s, 15.523s/100 iters), loss = 0.0346689
I0226 22:46:33.157538  2442 solver.cpp:258]     Train net output #0: loss = 0.0346687 (* 1 = 0.0346687 loss)
I0226 22:46:33.157557  2442 sgd_solver.cpp:112] Iteration 5000, lr = 0.00737788
I0226 22:46:42.803390  2442 solver.cpp:239] Iteration 5100 (10.3681 iter/s, 9.645s/100 iters), loss = 0.0176471
I0226 22:46:42.803572  2442 solver.cpp:258]     Train net output #0: loss = 0.0176469 (* 1 = 0.0176469 loss)
I0226 22:46:42.803607  2442 sgd_solver.cpp:112] Iteration 5100, lr = 0.0073412
I0226 22:46:52.343144  2442 solver.cpp:239] Iteration 5200 (10.4833 iter/s, 9.539s/100 iters), loss = 0.00705752
I0226 22:46:52.343199  2442 solver.cpp:258]     Train net output #0: loss = 0.00705736 (* 1 = 0.00705736 loss)
I0226 22:46:52.343215  2442 sgd_solver.cpp:112] Iteration 5200, lr = 0.00730495
I0226 22:47:01.882566  2442 solver.cpp:239] Iteration 5300 (10.4833 iter/s, 9.539s/100 iters), loss = 0.00257235
I0226 22:47:01.882619  2442 solver.cpp:258]     Train net output #0: loss = 0.0025722 (* 1 = 0.0025722 loss)
I0226 22:47:01.882635  2442 sgd_solver.cpp:112] Iteration 5300, lr = 0.00726911
I0226 22:47:11.436738  2442 solver.cpp:239] Iteration 5400 (10.4668 iter/s, 9.554s/100 iters), loss = 0.00728917
I0226 22:47:11.436789  2442 solver.cpp:258]     Train net output #0: loss = 0.00728904 (* 1 = 0.00728904 loss)
I0226 22:47:11.436803  2442 sgd_solver.cpp:112] Iteration 5400, lr = 0.00723368
I0226 22:47:20.922358  2442 solver.cpp:347] Iteration 5500, Testing net (#0)
I0226 22:47:26.806264  2444 data_layer.cpp:73] Restarting data prefetching from start.
I0226 22:47:27.066781  2442 solver.cpp:414]     Test net output #0: accuracy = 0.9897
I0226 22:47:27.066843  2442 solver.cpp:414]     Test net output #1: loss = 0.0339334 (* 1 = 0.0339334 loss)
I0226 22:47:27.164624  2442 solver.cpp:239] Iteration 5500 (6.35849 iter/s, 15.727s/100 iters), loss = 0.0067735
I0226 22:47:27.164736  2442 solver.cpp:258]     Train net output #0: loss = 0.00677338 (* 1 = 0.00677338 loss)
I0226 22:47:27.164757  2442 sgd_solver.cpp:112] Iteration 5500, lr = 0.00719865
I0226 22:47:36.799551  2442 solver.cpp:239] Iteration 5600 (10.3799 iter/s, 9.634s/100 iters), loss = 0.000724167
I0226 22:47:36.799612  2442 solver.cpp:258]     Train net output #0: loss = 0.000724045 (* 1 = 0.000724045 loss)
I0226 22:47:36.799628  2442 sgd_solver.cpp:112] Iteration 5600, lr = 0.00716402
I0226 22:47:38.824903  2443 data_layer.cpp:73] Restarting data prefetching from start.
I0226 22:47:46.896533  2442 solver.cpp:239] Iteration 5700 (9.90491 iter/s, 10.096s/100 iters), loss = 0.0032787
I0226 22:47:46.896577  2442 solver.cpp:258]     Train net output #0: loss = 0.00327857 (* 1 = 0.00327857 loss)
I0226 22:47:46.896590  2442 sgd_solver.cpp:112] Iteration 5700, lr = 0.00712977
I0226 22:47:57.388976  2442 solver.cpp:239] Iteration 5800 (9.53107 iter/s, 10.492s/100 iters), loss = 0.0372791
I0226 22:47:57.389109  2442 solver.cpp:258]     Train net output #0: loss = 0.037279 (* 1 = 0.037279 loss)
I0226 22:47:57.389125  2442 sgd_solver.cpp:112] Iteration 5800, lr = 0.0070959
I0226 22:48:06.790045  2442 solver.cpp:239] Iteration 5900 (10.6383 iter/s, 9.4s/100 iters), loss = 0.00496813
I0226 22:48:06.790096  2442 solver.cpp:258]     Train net output #0: loss = 0.00496801 (* 1 = 0.00496801 loss)
I0226 22:48:06.790110  2442 sgd_solver.cpp:112] Iteration 5900, lr = 0.0070624
I0226 22:48:16.121098  2442 solver.cpp:347] Iteration 6000, Testing net (#0)
I0226 22:48:21.620126  2444 data_layer.cpp:73] Restarting data prefetching from start.
I0226 22:48:21.844125  2442 solver.cpp:414]     Test net output #0: accuracy = 0.9909
I0226 22:48:21.844178  2442 solver.cpp:414]     Test net output #1: loss = 0.0288429 (* 1 = 0.0288429 loss)
I0226 22:48:21.934967  2442 solver.cpp:239] Iteration 6000 (6.60328 iter/s, 15.144s/100 iters), loss = 0.00401383
I0226 22:48:21.935021  2442 solver.cpp:258]     Train net output #0: loss = 0.00401372 (* 1 = 0.00401372 loss)
I0226 22:48:21.935039  2442 sgd_solver.cpp:112] Iteration 6000, lr = 0.00702927
I0226 22:48:31.014570  2442 solver.cpp:239] Iteration 6100 (11.0144 iter/s, 9.079s/100 iters), loss = 0.00256507
I0226 22:48:31.014737  2442 solver.cpp:258]     Train net output #0: loss = 0.00256497 (* 1 = 0.00256497 loss)
I0226 22:48:31.014755  2442 sgd_solver.cpp:112] Iteration 6100, lr = 0.0069965
I0226 22:48:40.115618  2442 solver.cpp:239] Iteration 6200 (10.989 iter/s, 9.1s/100 iters), loss = 0.00779089
I0226 22:48:40.115670  2442 solver.cpp:258]     Train net output #0: loss = 0.00779079 (* 1 = 0.00779079 loss)
I0226 22:48:40.115684  2442 sgd_solver.cpp:112] Iteration 6200, lr = 0.00696408
I0226 22:48:49.190863  2442 solver.cpp:239] Iteration 6300 (11.0193 iter/s, 9.075s/100 iters), loss = 0.0148993
I0226 22:48:49.190919  2442 solver.cpp:258]     Train net output #0: loss = 0.0148992 (* 1 = 0.0148992 loss)
I0226 22:48:49.190935  2442 sgd_solver.cpp:112] Iteration 6300, lr = 0.00693201
I0226 22:48:58.256402  2442 solver.cpp:239] Iteration 6400 (11.0314 iter/s, 9.065s/100 iters), loss = 0.00686445
I0226 22:48:58.256455  2442 solver.cpp:258]     Train net output #0: loss = 0.00686436 (* 1 = 0.00686436 loss)
I0226 22:48:58.256474  2442 sgd_solver.cpp:112] Iteration 6400, lr = 0.00690029
I0226 22:49:07.221040  2442 solver.cpp:347] Iteration 6500, Testing net (#0)
I0226 22:49:12.603580  2444 data_layer.cpp:73] Restarting data prefetching from start.
I0226 22:49:12.828657  2442 solver.cpp:414]     Test net output #0: accuracy = 0.9896
I0226 22:49:12.828713  2442 solver.cpp:414]     Test net output #1: loss = 0.0310469 (* 1 = 0.0310469 loss)
I0226 22:49:12.918390  2442 solver.cpp:239] Iteration 6500 (6.82082 iter/s, 14.661s/100 iters), loss = 0.00574645
I0226 22:49:12.918442  2442 solver.cpp:258]     Train net output #0: loss = 0.00574635 (* 1 = 0.00574635 loss)
I0226 22:49:12.918457  2442 sgd_solver.cpp:112] Iteration 6500, lr = 0.0068689
I0226 22:49:18.189663  2443 data_layer.cpp:73] Restarting data prefetching from start.
I0226 22:49:21.990790  2442 solver.cpp:239] Iteration 6600 (11.0229 iter/s, 9.072s/100 iters), loss = 0.0330389
I0226 22:49:21.990844  2442 solver.cpp:258]     Train net output #0: loss = 0.0330388 (* 1 = 0.0330388 loss)
I0226 22:49:21.990856  2442 sgd_solver.cpp:112] Iteration 6600, lr = 0.00683784
I0226 22:49:31.068985  2442 solver.cpp:239] Iteration 6700 (11.0156 iter/s, 9.078s/100 iters), loss = 0.0119388
I0226 22:49:31.069037  2442 solver.cpp:258]     Train net output #0: loss = 0.0119387 (* 1 = 0.0119387 loss)
I0226 22:49:31.069051  2442 sgd_solver.cpp:112] Iteration 6700, lr = 0.00680711
I0226 22:49:40.151279  2442 solver.cpp:239] Iteration 6800 (11.0108 iter/s, 9.082s/100 iters), loss = 0.00151411
I0226 22:49:40.151388  2442 solver.cpp:258]     Train net output #0: loss = 0.00151402 (* 1 = 0.00151402 loss)
I0226 22:49:40.151407  2442 sgd_solver.cpp:112] Iteration 6800, lr = 0.0067767
I0226 22:49:49.222928  2442 solver.cpp:239] Iteration 6900 (11.0241 iter/s, 9.071s/100 iters), loss = 0.00391463
I0226 22:49:49.222980  2442 solver.cpp:258]     Train net output #0: loss = 0.00391453 (* 1 = 0.00391453 loss)
I0226 22:49:49.222995  2442 sgd_solver.cpp:112] Iteration 6900, lr = 0.0067466
I0226 22:49:58.208765  2442 solver.cpp:347] Iteration 7000, Testing net (#0)
I0226 22:50:03.612239  2444 data_layer.cpp:73] Restarting data prefetching from start.
I0226 22:50:03.838024  2442 solver.cpp:414]     Test net output #0: accuracy = 0.9904
I0226 22:50:03.838075  2442 solver.cpp:414]     Test net output #1: loss = 0.0300491 (* 1 = 0.0300491 loss)
I0226 22:50:03.927436  2442 solver.cpp:239] Iteration 7000 (6.80087 iter/s, 14.704s/100 iters), loss = 0.00687729
I0226 22:50:03.927487  2442 solver.cpp:258]     Train net output #0: loss = 0.0068772 (* 1 = 0.0068772 loss)
I0226 22:50:03.927505  2442 sgd_solver.cpp:112] Iteration 7000, lr = 0.00671681
I0226 22:50:12.997606  2442 solver.cpp:239] Iteration 7100 (11.0254 iter/s, 9.07s/100 iters), loss = 0.0138477
I0226 22:50:12.997923  2442 solver.cpp:258]     Train net output #0: loss = 0.0138476 (* 1 = 0.0138476 loss)
I0226 22:50:12.997941  2442 sgd_solver.cpp:112] Iteration 7100, lr = 0.00668733
I0226 22:50:22.059113  2442 solver.cpp:239] Iteration 7200 (11.0363 iter/s, 9.061s/100 iters), loss = 0.0080553
I0226 22:50:22.059166  2442 solver.cpp:258]     Train net output #0: loss = 0.0080552 (* 1 = 0.0080552 loss)
I0226 22:50:22.059180  2442 sgd_solver.cpp:112] Iteration 7200, lr = 0.00665815
I0226 22:50:31.139458  2442 solver.cpp:239] Iteration 7300 (11.0132 iter/s, 9.08s/100 iters), loss = 0.0308229
I0226 22:50:31.139509  2442 solver.cpp:258]     Train net output #0: loss = 0.0308228 (* 1 = 0.0308228 loss)
I0226 22:50:31.139528  2442 sgd_solver.cpp:112] Iteration 7300, lr = 0.00662927
I0226 22:50:40.195005  2442 solver.cpp:239] Iteration 7400 (11.0436 iter/s, 9.055s/100 iters), loss = 0.00373491
I0226 22:50:40.195057  2442 solver.cpp:258]     Train net output #0: loss = 0.00373482 (* 1 = 0.00373482 loss)
I0226 22:50:40.195071  2442 sgd_solver.cpp:112] Iteration 7400, lr = 0.00660067
I0226 22:50:48.808516  2443 data_layer.cpp:73] Restarting data prefetching from start.
I0226 22:50:49.173398  2442 solver.cpp:347] Iteration 7500, Testing net (#0)
I0226 22:50:54.556639  2444 data_layer.cpp:73] Restarting data prefetching from start.
I0226 22:50:54.781818  2442 solver.cpp:414]     Test net output #0: accuracy = 0.99
I0226 22:50:54.781873  2442 solver.cpp:414]     Test net output #1: loss = 0.0321095 (* 1 = 0.0321095 loss)
I0226 22:50:54.871702  2442 solver.cpp:239] Iteration 7500 (6.81385 iter/s, 14.676s/100 iters), loss = 0.00325872
I0226 22:50:54.871752  2442 solver.cpp:258]     Train net output #0: loss = 0.00325864 (* 1 = 0.00325864 loss)
I0226 22:50:54.871767  2442 sgd_solver.cpp:112] Iteration 7500, lr = 0.00657236
I0226 22:51:03.959707  2442 solver.cpp:239] Iteration 7600 (11.0047 iter/s, 9.087s/100 iters), loss = 0.0034388
I0226 22:51:03.959767  2442 solver.cpp:258]     Train net output #0: loss = 0.00343871 (* 1 = 0.00343871 loss)
I0226 22:51:03.959781  2442 sgd_solver.cpp:112] Iteration 7600, lr = 0.00654433
I0226 22:51:13.010762  2442 solver.cpp:239] Iteration 7700 (11.0497 iter/s, 9.05s/100 iters), loss = 0.0260679
I0226 22:51:13.010815  2442 solver.cpp:258]     Train net output #0: loss = 0.0260678 (* 1 = 0.0260678 loss)
I0226 22:51:13.010829  2442 sgd_solver.cpp:112] Iteration 7700, lr = 0.00651658
I0226 22:51:22.080538  2442 solver.cpp:239] Iteration 7800 (11.0266 iter/s, 9.069s/100 iters), loss = 0.00266021
I0226 22:51:22.080646  2442 solver.cpp:258]     Train net output #0: loss = 0.00266013 (* 1 = 0.00266013 loss)
I0226 22:51:22.080663  2442 sgd_solver.cpp:112] Iteration 7800, lr = 0.00648911
I0226 22:51:31.160694  2442 solver.cpp:239] Iteration 7900 (11.0132 iter/s, 9.08s/100 iters), loss = 0.00704976
I0226 22:51:31.160763  2442 solver.cpp:258]     Train net output #0: loss = 0.00704968 (* 1 = 0.00704968 loss)
I0226 22:51:31.160782  2442 sgd_solver.cpp:112] Iteration 7900, lr = 0.0064619
I0226 22:51:40.139366  2442 solver.cpp:347] Iteration 8000, Testing net (#0)
I0226 22:51:45.531428  2444 data_layer.cpp:73] Restarting data prefetching from start.
I0226 22:51:45.757876  2442 solver.cpp:414]     Test net output #0: accuracy = 0.9901
I0226 22:51:45.757930  2442 solver.cpp:414]     Test net output #1: loss = 0.030962 (* 1 = 0.030962 loss)
I0226 22:51:45.847748  2442 solver.cpp:239] Iteration 8000 (6.80921 iter/s, 14.686s/100 iters), loss = 0.00426692
I0226 22:51:45.847800  2442 solver.cpp:258]     Train net output #0: loss = 0.00426683 (* 1 = 0.00426683 loss)
I0226 22:51:45.847815  2442 sgd_solver.cpp:112] Iteration 8000, lr = 0.00643496
I0226 22:51:54.916476  2442 solver.cpp:239] Iteration 8100 (11.0278 iter/s, 9.068s/100 iters), loss = 0.00814418
I0226 22:51:54.916622  2442 solver.cpp:258]     Train net output #0: loss = 0.00814409 (* 1 = 0.00814409 loss)
I0226 22:51:54.916640  2442 sgd_solver.cpp:112] Iteration 8100, lr = 0.00640827
I0226 22:52:04.000636  2442 solver.cpp:239] Iteration 8200 (11.0084 iter/s, 9.084s/100 iters), loss = 0.0103396
I0226 22:52:04.000689  2442 solver.cpp:258]     Train net output #0: loss = 0.0103395 (* 1 = 0.0103395 loss)
I0226 22:52:04.000702  2442 sgd_solver.cpp:112] Iteration 8200, lr = 0.00638185
I0226 22:52:13.055968  2442 solver.cpp:239] Iteration 8300 (11.0436 iter/s, 9.055s/100 iters), loss = 0.0209491
I0226 22:52:13.056023  2442 solver.cpp:258]     Train net output #0: loss = 0.020949 (* 1 = 0.020949 loss)
I0226 22:52:13.056041  2442 sgd_solver.cpp:112] Iteration 8300, lr = 0.00635568
I0226 22:52:22.116808  2442 solver.cpp:239] Iteration 8400 (11.0375 iter/s, 9.06s/100 iters), loss = 0.00696749
I0226 22:52:22.116861  2442 solver.cpp:258]     Train net output #0: loss = 0.00696739 (* 1 = 0.00696739 loss)
I0226 22:52:22.116875  2442 sgd_solver.cpp:112] Iteration 8400, lr = 0.00632975
I0226 22:52:25.107352  2443 data_layer.cpp:73] Restarting data prefetching from start.
I0226 22:52:31.087731  2442 solver.cpp:347] Iteration 8500, Testing net (#0)
I0226 22:52:36.476758  2444 data_layer.cpp:73] Restarting data prefetching from start.
I0226 22:52:36.702246  2442 solver.cpp:414]     Test net output #0: accuracy = 0.9909
I0226 22:52:36.702301  2442 solver.cpp:414]     Test net output #1: loss = 0.0295619 (* 1 = 0.0295619 loss)
I0226 22:52:36.792059  2442 solver.cpp:239] Iteration 8500 (6.81431 iter/s, 14.675s/100 iters), loss = 0.00802257
I0226 22:52:36.792111  2442 solver.cpp:258]     Train net output #0: loss = 0.00802247 (* 1 = 0.00802247 loss)
I0226 22:52:36.792124  2442 sgd_solver.cpp:112] Iteration 8500, lr = 0.00630407
I0226 22:52:45.857764  2442 solver.cpp:239] Iteration 8600 (11.0314 iter/s, 9.065s/100 iters), loss = 0.000618256
I0226 22:52:45.857817  2442 solver.cpp:258]     Train net output #0: loss = 0.000618144 (* 1 = 0.000618144 loss)
I0226 22:52:45.857832  2442 sgd_solver.cpp:112] Iteration 8600, lr = 0.00627864
I0226 22:52:54.917385  2442 solver.cpp:239] Iteration 8700 (11.0387 iter/s, 9.059s/100 iters), loss = 0.00374745
I0226 22:52:54.917438  2442 solver.cpp:258]     Train net output #0: loss = 0.00374734 (* 1 = 0.00374734 loss)
I0226 22:52:54.917454  2442 sgd_solver.cpp:112] Iteration 8700, lr = 0.00625344
I0226 22:53:03.966257  2442 solver.cpp:239] Iteration 8800 (11.0522 iter/s, 9.048s/100 iters), loss = 0.00128679
I0226 22:53:03.966358  2442 solver.cpp:258]     Train net output #0: loss = 0.00128669 (* 1 = 0.00128669 loss)
I0226 22:53:03.966377  2442 sgd_solver.cpp:112] Iteration 8800, lr = 0.00622847
I0226 22:53:13.013511  2442 solver.cpp:239] Iteration 8900 (11.0534 iter/s, 9.047s/100 iters), loss = 0.000865528
I0226 22:53:13.013563  2442 solver.cpp:258]     Train net output #0: loss = 0.000865408 (* 1 = 0.000865408 loss)
I0226 22:53:13.013576  2442 sgd_solver.cpp:112] Iteration 8900, lr = 0.00620374
I0226 22:53:21.981493  2442 solver.cpp:347] Iteration 9000, Testing net (#0)
I0226 22:53:27.345178  2444 data_layer.cpp:73] Restarting data prefetching from start.
I0226 22:53:27.570178  2442 solver.cpp:414]     Test net output #0: accuracy = 0.9905
I0226 22:53:27.570235  2442 solver.cpp:414]     Test net output #1: loss = 0.0283523 (* 1 = 0.0283523 loss)
I0226 22:53:27.659744  2442 solver.cpp:239] Iteration 9000 (6.8278 iter/s, 14.646s/100 iters), loss = 0.0144745
I0226 22:53:27.659796  2442 solver.cpp:258]     Train net output #0: loss = 0.0144743 (* 1 = 0.0144743 loss)
I0226 22:53:27.659812  2442 sgd_solver.cpp:112] Iteration 9000, lr = 0.00617924
I0226 22:53:36.718863  2442 solver.cpp:239] Iteration 9100 (11.0387 iter/s, 9.059s/100 iters), loss = 0.00866055
I0226 22:53:36.719024  2442 solver.cpp:258]     Train net output #0: loss = 0.00866042 (* 1 = 0.00866042 loss)
I0226 22:53:36.719039  2442 sgd_solver.cpp:112] Iteration 9100, lr = 0.00615496
I0226 22:53:45.783699  2442 solver.cpp:239] Iteration 9200 (11.0327 iter/s, 9.064s/100 iters), loss = 0.00280615
I0226 22:53:45.783751  2442 solver.cpp:258]     Train net output #0: loss = 0.00280601 (* 1 = 0.00280601 loss)
I0226 22:53:45.783769  2442 sgd_solver.cpp:112] Iteration 9200, lr = 0.0061309
I0226 22:53:54.853555  2442 solver.cpp:239] Iteration 9300 (11.0266 iter/s, 9.069s/100 iters), loss = 0.00804057
I0226 22:53:54.853613  2442 solver.cpp:258]     Train net output #0: loss = 0.00804044 (* 1 = 0.00804044 loss)
I0226 22:53:54.853633  2442 sgd_solver.cpp:112] Iteration 9300, lr = 0.00610706
I0226 22:54:01.190043  2443 data_layer.cpp:73] Restarting data prefetching from start.
I0226 22:54:03.901945  2442 solver.cpp:239] Iteration 9400 (11.0522 iter/s, 9.048s/100 iters), loss = 0.0300722
I0226 22:54:03.902004  2442 solver.cpp:258]     Train net output #0: loss = 0.0300721 (* 1 = 0.0300721 loss)
I0226 22:54:03.902019  2442 sgd_solver.cpp:112] Iteration 9400, lr = 0.00608343
I0226 22:54:12.897828  2442 solver.cpp:347] Iteration 9500, Testing net (#0)
I0226 22:54:18.294420  2444 data_layer.cpp:73] Restarting data prefetching from start.
I0226 22:54:18.524183  2442 solver.cpp:414]     Test net output #0: accuracy = 0.9886
I0226 22:54:18.524237  2442 solver.cpp:414]     Test net output #1: loss = 0.0350763 (* 1 = 0.0350763 loss)
I0226 22:54:18.614092  2442 solver.cpp:239] Iteration 9500 (6.79717 iter/s, 14.712s/100 iters), loss = 0.00340371
I0226 22:54:18.614145  2442 solver.cpp:258]     Train net output #0: loss = 0.00340357 (* 1 = 0.00340357 loss)
I0226 22:54:18.614158  2442 sgd_solver.cpp:112] Iteration 9500, lr = 0.00606002
I0226 22:54:27.679172  2442 solver.cpp:239] Iteration 9600 (11.0314 iter/s, 9.065s/100 iters), loss = 0.00235499
I0226 22:54:27.679229  2442 solver.cpp:258]     Train net output #0: loss = 0.00235484 (* 1 = 0.00235484 loss)
I0226 22:54:27.679244  2442 sgd_solver.cpp:112] Iteration 9600, lr = 0.00603682
I0226 22:54:36.747495  2442 solver.cpp:239] Iteration 9700 (11.0278 iter/s, 9.068s/100 iters), loss = 0.00282941
I0226 22:54:36.747551  2442 solver.cpp:258]     Train net output #0: loss = 0.00282926 (* 1 = 0.00282926 loss)
I0226 22:54:36.747570  2442 sgd_solver.cpp:112] Iteration 9700, lr = 0.00601382
I0226 22:54:45.798738  2442 solver.cpp:239] Iteration 9800 (11.0485 iter/s, 9.051s/100 iters), loss = 0.00863724
I0226 22:54:45.798919  2442 solver.cpp:258]     Train net output #0: loss = 0.0086371 (* 1 = 0.0086371 loss)
I0226 22:54:45.798943  2442 sgd_solver.cpp:112] Iteration 9800, lr = 0.00599102
I0226 22:54:54.860203  2442 solver.cpp:239] Iteration 9900 (11.0363 iter/s, 9.061s/100 iters), loss = 0.00483852
I0226 22:54:54.860255  2442 solver.cpp:258]     Train net output #0: loss = 0.00483838 (* 1 = 0.00483838 loss)
I0226 22:54:54.860270  2442 sgd_solver.cpp:112] Iteration 9900, lr = 0.00596843
I0226 22:55:03.856971  2442 solver.cpp:464] Snapshotting to binary proto file examples/mnist/lenet_iter_10000.caffemodel
I0226 22:55:03.867449  2442 sgd_solver.cpp:284] Snapshotting solver state to binary proto file examples/mnist/lenet_iter_10000.solverstate
I0226 22:55:03.909601  2442 solver.cpp:327] Iteration 10000, loss = 0.00331884
I0226 22:55:03.909649  2442 solver.cpp:347] Iteration 10000, Testing net (#0)
I0226 22:55:09.291905  2444 data_layer.cpp:73] Restarting data prefetching from start.
I0226 22:55:09.518225  2442 solver.cpp:414]     Test net output #0: accuracy = 0.9916
I0226 22:55:09.518285  2442 solver.cpp:414]     Test net output #1: loss = 0.0280047 (* 1 = 0.0280047 loss)
I0226 22:55:09.518299  2442 solver.cpp:332] Optimization Done.
I0226 22:55:09.518311  2442 caffe.cpp:250] Optimization Done.
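Training finishes with a final test accuracy of 0.9916 at iteration 10000, and the model is snapshotted to `examples/mnist/lenet_iter_10000.caffemodel`. To see how test accuracy evolved over the run, the glog output above can be parsed with a small script. This is a minimal sketch, assuming the log format shown above (a "Testing net" line carrying the iteration number, followed by a "Test net output #0: accuracy = ..." line); the function name `parse_test_accuracy` is just illustrative:

```python
import re

def parse_test_accuracy(log_text):
    """Collect (iteration, accuracy) pairs from Caffe glog output.

    Assumes the format seen in the log above: an
    "Iteration N, Testing net" line is followed (possibly after a
    prefetch-restart line) by a "Test net output #0: accuracy = X" line.
    """
    pairs = []
    current_iter = None
    for line in log_text.splitlines():
        m = re.search(r"Iteration (\d+), Testing net", line)
        if m:
            current_iter = int(m.group(1))
            continue
        m = re.search(r"Test net output #0: accuracy = ([\d.]+)", line)
        if m and current_iter is not None:
            pairs.append((current_iter, float(m.group(1))))
            current_iter = None
    return pairs
```

Fed the full log, this yields the accuracy at each 500-iteration test interval (0.984 at 2500, rising to 0.9916 at 10000), which is convenient for plotting the learning curve.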
