Calling `str()` on the parsed message reads out every field of a caffemodel in one go, as protobuf text format. For example:
# coding:utf-8
import _init_paths
import caffe.proto.caffe_pb2 as caffe_pb2
caffemodel_filename = 'F:/caffe_new/caffe/caffe-master/data/mnist-test/mnist-models/lenet_iter_10000.caffemodel'
model = caffe_pb2.NetParameter()
f = open(caffemodel_filename, 'rb')
model.ParseFromString(f.read())
f.close()
print str(model)  # note: "print model.__str__" would print the bound method object itself, not the text
This code already prints the train.prototxt the network was trained from, but it also prints the unnecessary `data` fields inside the `blobs` blocks. Those fields are enormous: they hold the convolution kernels and bias values learned over many iterations, and they have no place in a prototxt file. (Beware of printing `model.__str__` instead of `str(model)`: that prints the bound method object, which wraps the output in stray text at the head and tail.)
We can write the fields read via `str()` to a file and strip the unnecessary ones.
Parsing a caffemodel reads all of its blobs, which takes a long time for a large model, so this example uses the small lenet_iter_10000.caffemodel.
Below is the revised code:
# coding:utf-8
#import _init_paths
import caffe.proto.caffe_pb2 as caffe_pb2
caffemodel_filename = 'F:/caffe_new/caffe/caffe-master/data/mnist-test/mnist-models/lenet_iter_10000.caffemodel'
#caffemodel_filename = 'F:/py-faster-rcnn-master/data/imagenet_models/ZF.v2.caffemodel'
model = caffe_pb2.NetParameter()
f = open(caffemodel_filename, 'rb')
model.ParseFromString(f.read())
f.close()
import sys
save_filename = 'lenet_from_caffemodel.prototxt'
out = open(save_filename, 'w')
old = sys.stdout
sys.stdout = out    # redirect print to the file
print str(model)    # again str(model), not model.__str__
sys.stdout = old
out.close()
# On Linux, sed could trim these lines in place, but we are on Windows:
#import os
#cmd_1 = 'sed -i "1s/^.\{38\}//" ' + save_filename # delete the first 38 characters of line 1
#cmd_2 = "sed -i '$d' " + save_filename # delete the last line
#os.system(cmd_1)
#os.system(cmd_2)
# Reopen the file just written and copy its content back out, filtering out the "blobs" blocks and the layer-level "phase: TRAIN" lines.
f=open(save_filename, 'r')
lines = f.readlines() # readlines() returns the file content as a list of lines, which a for ... in ... loop can iterate over
f.close()
wr = open(save_filename, 'w')
now_have_blobs = False
for line in lines:
    content = line.strip('\n')  # strip('\n') removes the newline characters at both ends of the line
    if content == '  blobs {':  # a two-space-indented 'blobs {' opens a blobs block
        now_have_blobs = True
    elif content == '  }' and now_have_blobs:  # the matching two-space '  }' closes it
        now_have_blobs = False
        continue
    if content == '  phase: TRAIN':  # drop the layer-level phase field; the one inside include {} is indented further and is kept
        continue
    if now_have_blobs:
        continue  # skip everything inside a blobs block
    else:
        wr.write(content + '\n')  # keep this line
wr.close()
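The filter above depends on the exact two-space indentation of '  blobs {' and '  }'. As a minimal sketch (Python 3 here, on a small inline sample rather than a real caffemodel; the helper name `strip_blobs` is my own), the same idea can be made indentation-independent by tracking brace depth inside the blobs block:

```python
def strip_blobs(lines):
    """Drop every blobs { ... } block (any nesting depth) and the
    two-space-indented layer-level 'phase: TRAIN' lines."""
    out = []
    depth = 0  # > 0 while inside a blobs block
    for line in lines:
        s = line.strip()
        if depth == 0 and s == 'blobs {':
            depth = 1          # entering a blobs block
            continue
        if depth > 0:
            depth += s.count('{') - s.count('}')
            continue           # skip everything inside the block
        if line == '  phase: TRAIN':
            continue           # layer-level phase field: drop it
        out.append(line)
    return out

sample = [
    'layer {',
    '  name: "conv1"',
    '  phase: TRAIN',
    '  blobs {',
    '    shape {',
    '      dim: 20',
    '    }',
    '  }',
    '  include {',
    '    phase: TRAIN',
    '  }',
    '}',
]
print('\n'.join(strip_blobs(sample)))
```

Note how the nested `shape { ... }` inside blobs is removed along with the block, while the `phase: TRAIN` inside `include {}` survives because it is indented four spaces, not two.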
Below is the generated lenet_from_caffemodel.prototxt:
name: "LeNet"
layer {
name: "mnist"
type: "Data"
top: "data"
top: "label"
include {
phase: TRAIN
}
transform_param {
scale: 0.00390625
mean_file: "mnist-data/trainMean.binaryproto"
}
data_param {
source: "mnist-data/mtrainlmdb"
batch_size: 50
backend: LMDB
}
}
layer {
name: "conv1"
type: "Convolution"
bottom: "data"
top: "conv1"
param {
lr_mult: 1.0
}
param {
lr_mult: 2.0
}
convolution_param {
num_output: 20
kernel_size: 5
stride: 1
weight_filler {
type: "xavier"
}
bias_filler {
type: "constant"
}
}
}
layer {
name: "pool1"
type: "Pooling"
bottom: "conv1"
top: "pool1"
pooling_param {
pool: MAX
kernel_size: 2
stride: 2
}
}
layer {
name: "conv2"
type: "Convolution"
bottom: "pool1"
top: "conv2"
param {
lr_mult: 1.0
}
param {
lr_mult: 2.0
}
convolution_param {
num_output: 50
kernel_size: 5
stride: 1
weight_filler {
type: "xavier"
}
bias_filler {
type: "constant"
}
}
}
layer {
name: "pool2"
type: "Pooling"
bottom: "conv2"
top: "pool2"
pooling_param {
pool: MAX
kernel_size: 2
stride: 2
}
}
layer {
name: "ip1"
type: "InnerProduct"
bottom: "pool2"
top: "ip1"
param {
lr_mult: 1.0
}
param {
lr_mult: 2.0
}
inner_product_param {
num_output: 500
weight_filler {
type: "xavier"
}
bias_filler {
type: "constant"
}
}
}
layer {
name: "relu1"
type: "ReLU"
bottom: "ip1"
top: "ip1"
}
layer {
name: "ip2"
type: "InnerProduct"
bottom: "ip1"
top: "ip2"
param {
lr_mult: 1.0
}
param {
lr_mult: 2.0
}
inner_product_param {
num_output: 10
weight_filler {
type: "xavier"
}
bias_filler {
type: "constant"
}
}
}
layer {
name: "loss"
type: "SoftmaxWithLoss"
bottom: "ip2"
bottom: "label"
top: "loss"
loss_weight: 1.0
}
If the output ends up wrapped in `<bound method NetParameter.__str__ of ... >`, the cause is printing `model.__str__` (the bound method object) instead of `str(model)`; with `str(model)` there are no stray first and last lines to remove, so no Windows equivalent of sed is needed. The generated prototxt can then be loaded into the Netscope tool to draw the network structure diagram.
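Before loading the file into Netscope, a quick sanity check can confirm that the braces still balance after filtering and that no blobs blocks remain. A minimal stdlib-only sketch (the helper name `check_prototxt` is my own):

```python
def check_prototxt(text):
    """Return (braces_balanced, has_blobs) for prototxt-style text."""
    depth = 0
    for ch in text:
        if ch == '{':
            depth += 1
        elif ch == '}':
            depth -= 1
            if depth < 0:  # closing brace with no matching open
                return (False, 'blobs {' in text)
    return (depth == 0, 'blobs {' in text)

good = 'layer {\n  name: "conv1"\n  convolution_param {\n    num_output: 20\n  }\n}\n'
print(check_prototxt(good))  # (True, False)
```

A correctly filtered file should report `(True, False)`; a half-removed blobs block would show up as an unbalanced brace count or a leftover `blobs {` line.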