Additional material
The following, excerpted from reference [2], writes Caffe's data and label in two separate layers:
layer {
  name: "data"
  type: "Data"
  top: "data"
  include {
    phase: TRAIN
  }
  transform_param {
    mean_value: 104.00699
    mean_value: 116.66877
    mean_value: 122.67892
  }
  data_param {
    source: "skin/data/traindata_lmdb"
    batch_size: 1
    backend: LMDB
  }
}
layer {
  name: "label"
  type: "Data"
  top: "label"
  include {
    phase: TRAIN
  }
  data_param {
    source: "skin/data/trainlabel_lmdb"
    batch_size: 1
    backend: LMDB
  }
  transform_param {
    # feature scaling coefficient: 1/256 maps the [0, 255] label values into [0, 1)
    scale: 0.00390625
  }
}
layer {
  name: "data"
  type: "Data"
  top: "data"
  include {
    phase: TEST
  }
  transform_param {
    mean_value: 104.00699
    mean_value: 116.66877
    mean_value: 122.67892
  }
  data_param {
    source: "skin/data/valdata_lmdb"
    batch_size: 1
    backend: LMDB
  }
}
layer {
  name: "label"
  type: "Data"
  top: "label"
  include {
    phase: TEST
  }
  data_param {
    source: "skin/data/vallabel_lmdb"
    batch_size: 1
    backend: LMDB
  }
  transform_param {
    # feature scaling coefficient: 1/256 maps the [0, 255] label values into [0, 1)
    scale: 0.00390625
  }
}
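The transform_param blocks above do two things: subtract the per-channel BGR mean from the data blob, and multiply the label pixels by scale: 0.00390625, which is exactly 1/256, so the [0, 255] range maps into [0, 1). A minimal numpy sketch of the same preprocessing (the mean values are the ones from the prototxt; the image is a made-up example):

```python
import numpy as np

# Per-channel means from the prototxt (Caffe stores images in BGR order).
MEAN_BGR = np.array([104.00699, 116.66877, 122.67892])

def transform_data(img_bgr):
    """Mimic transform_param { mean_value: ... }: subtract the channel means."""
    return img_bgr.astype(np.float64) - MEAN_BGR  # broadcasts over H x W x 3

def transform_label(label_img):
    """Mimic transform_param { scale: 0.00390625 }: 1/256 maps [0, 255] into [0, 1)."""
    return label_img.astype(np.float64) * 0.00390625

img = np.full((4, 4, 3), 128, dtype=np.uint8)
print(transform_data(img)[0, 0])       # per-channel 128 - mean
print(transform_label(np.uint8(255)))  # 0.99609375
```

Note that 255 * 1/256 = 0.99609375, so the scaled labels never quite reach 1.0; that is harmless when the labels are class indices stored as image pixels, but worth remembering if you threshold them later.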
Let us first visualize it with a visualization tool.
Then, train a LeNet network to check whether the scheme above actually works.
Skipped for now.
Multi-label
The following is excerpted from reference [4]; please support the original authors, and thanks to them for sharing.
train_val.prototxt
name: "LeNet"
### for data and labels
layer {
  name: "data"
  type: "HDF5Data"
  top: "data"
  top: "labels"
  include {
    phase: TRAIN
  }
  hdf5_data_param {
    source: "list_train.txt"
    batch_size: 100
  }
}
layer {
  name: "data"
  type: "HDF5Data"
  top: "data"
  top: "labels"
  include {
    phase: TEST
  }
  hdf5_data_param {
    source: "list_test.txt"
    batch_size: 100
  }
}
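The HDF5Data layer's source is not the HDF5 file itself but a plain-text list file with one HDF5 file path per line; each HDF5 file must contain datasets named after the layer's top blobs (data and labels here). A sketch of how such a file could be produced with h5py (the shapes and file names are illustrative assumptions, not from the original post):

```python
import h5py
import numpy as np

# Illustrative shapes: 100 samples of 1x28x28 images, with two labels per
# sample. First label column in {0, 1} (ip1 has num_output: 2), second in
# {0, 1, 2} (ip2 has num_output: 3).
data = np.random.rand(100, 1, 28, 28).astype(np.float32)
labels = np.stack([np.random.randint(0, 2, 100),
                   np.random.randint(0, 3, 100)], axis=1).astype(np.float32)

with h5py.File("train_0.h5", "w") as f:
    f.create_dataset("data", data=data)      # name must match top: "data"
    f.create_dataset("labels", data=labels)  # name must match top: "labels"

# The list file named by hdf5_data_param { source: ... }:
with open("list_train.txt", "w") as f:
    f.write("train_0.h5\n")
```

Note HDF5Data applies no transform_param, so any mean subtraction or scaling must be baked into the HDF5 file itself.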
layer {
  name: "slicers"
  type: "Slice"
  bottom: "labels"
  top: "label_1"
  top: "label_2"
  slice_param {
    axis: 1
    slice_point: 1
  }
}
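This Slice layer cuts the N x 2 labels blob along axis 1 at slice_point 1, producing two N x 1 blobs, one per task. In numpy terms (a sketch of the slicing semantics, not Caffe code):

```python
import numpy as np

labels = np.array([[0, 2],
                   [1, 0],
                   [1, 1]])  # batch of 3 samples, two labels each

# Equivalent of Slice with axis: 1, slice_point: 1.
label_1 = labels[:, :1]  # first task's labels,  shape (3, 1)
label_2 = labels[:, 1:]  # second task's labels, shape (3, 1)
print(label_1.ravel())  # [0 1 1]
print(label_2.ravel())  # [2 0 1]
```

One slice_point splits the blob into two tops; in general, k slice points yield k + 1 tops, which is how the five-classifier setup mentioned in reference [3] would be wired.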
### for all
layer {
  name: "conv_all"
  type: "Convolution"
  bottom: "data"
  top: "conv_all"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  convolution_param {
    num_output: 50
    kernel_size: 5
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "relu_all"
  type: "ReLU"
  bottom: "conv_all"
  top: "conv_all"
}
layer {
  name: "pool_all"
  type: "Pooling"
  bottom: "conv_all"
  top: "pool_all"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
### for kind_1
layer {
  name: "ip1"
  type: "InnerProduct"
  bottom: "pool_all"
  top: "ip1"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 2
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "accuracy1"
  type: "Accuracy"
  bottom: "ip1"
  bottom: "label_1"
  top: "accuracy1"
  include {
    phase: TEST
  }
}
layer {
  name: "loss_1"
  type: "SoftmaxWithLoss"
  bottom: "ip1"
  bottom: "label_1"
  top: "loss_1"
}
### for kind_2
layer {
  name: "ip2"
  type: "InnerProduct"
  bottom: "pool_all"
  top: "ip2"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 3
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "accuracy2"
  type: "Accuracy"
  bottom: "ip2"
  bottom: "label_2"
  top: "accuracy2"
  include {
    phase: TEST
  }
}
layer {
  name: "loss_2"
  type: "SoftmaxWithLoss"
  bottom: "ip2"
  bottom: "label_2"
  top: "loss_2"
}
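Since neither SoftmaxWithLoss layer sets an explicit loss_weight, Caffe weights each loss by 1 and minimizes their sum, so both heads train jointly on the shared pool_all features. A numpy sketch of this combined objective, with softmax cross-entropy computed as Caffe's SoftmaxWithLoss does (the scores and labels below are hypothetical, for illustration only):

```python
import numpy as np

def softmax_xent(scores, labels):
    """Mean softmax cross-entropy over a batch, like Caffe's SoftmaxWithLoss."""
    shifted = scores - scores.max(axis=1, keepdims=True)  # numerical stability
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

# Hypothetical head outputs: ip1 has 2 classes, ip2 has 3 classes.
ip1 = np.array([[2.0, -1.0], [0.5, 0.5]])
ip2 = np.array([[0.1, 0.2, 3.0], [1.0, 1.0, 1.0]])
label_1 = np.array([0, 1])
label_2 = np.array([2, 0])

# Default loss_weight is 1 for each loss layer: total = loss_1 + loss_2.
total = softmax_xent(ip1, label_1) + softmax_xent(ip2, label_2)
print(total)
```

If one task should dominate, add loss_weight to the corresponding loss layer in the prototxt instead of rebalancing by hand.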
Visualize it with Netscope:
References:
1. http://blog.csdn.net/u013010889/article/details/53098346 [multi-label / multi-task input in Caffe]
2. http://blog.csdn.net/zhikangfu/article/details/45170047 [writing Caffe's data and label in two separate layers]
3. https://www.zhihu.com/question/53296707 [solved this by using HDF5Data input, adding a Slice layer, and building five small classifiers; the results were acceptable]
4. http://blog.csdn.net/u011762313/article/details/48851015 [an HDF5Data example in Caffe]