Writing train_test.prototxt and deploy.prototxt with pycaffe

import caffe
from caffe import layers as L, params as P

def write_lenet(trainlmdb,
                testlmdb,
                batch_size_train=64,
                batch_size_test=10,
                isdeploy=False):

    # our version of LeNet: a series of linear and simple nonlinear transformations
    n = caffe.NetSpec() 
    ntest = caffe.NetSpec()
    path = '/home/sailist/GIt/intel/caffe/myExamples/cell_lenet/'
    train_path = path + 'train_test.prototxt'
    deploy_path = path + 'deploy.prototxt'

    # Data layers: one reading the training LMDB (TRAIN phase),
    # one reading the test LMDB (TEST phase)
    n.data, n.label = L.Data(batch_size=batch_size_train,
                             backend=P.Data.LMDB,
                             source=trainlmdb,
                             transform_param=dict(scale=1./255, mean_value=[225, 213, 234]),
                             include=dict(phase=caffe.TRAIN),
                             ntop=2)
    ntest.data, ntest.label = L.Data(batch_size=batch_size_test,
                                     backend=P.Data.LMDB,
                                     source=testlmdb,
                                     transform_param=dict(scale=1./255),
                                     include=dict(phase=caffe.TEST),
                                     ntop=2)

    n.conv1 = L.Convolution(n.data, kernel_size=5, num_output=20, weight_filler=dict(type='xavier'))
    n.pool1 = L.Pooling(n.conv1, kernel_size=2, stride=2, pool=P.Pooling.MAX)
    n.conv2 = L.Convolution(n.pool1, kernel_size=5, num_output=50, weight_filler=dict(type='xavier'))
    n.pool2 = L.Pooling(n.conv2, kernel_size=2, stride=2, pool=P.Pooling.MAX)
    n.ip1 = L.InnerProduct(n.pool2, num_output=500, weight_filler=dict(type='xavier'))
    n.relu1 = L.ReLU(n.ip1, in_place=True)
    n.ip2 = L.InnerProduct(n.relu1, num_output=10, weight_filler=dict(type='xavier'))
    n.soft = L.Softmax(n.ip2)  # class probabilities (what the deploy net outputs)
    if not isdeploy:
        n.loss = L.SoftmaxWithLoss(n.ip2, n.label)
        n.acc = L.Accuracy(n.ip2, n.label)


    if not isdeploy:
        # train_test.prototxt: TEST-phase data layer first, then the train net
        outputs = str(ntest.to_proto()) + str(n.to_proto())
        #outputs = remove_drop(outputs)
        with open(train_path, 'w') as f:
            f.write(outputs)
    else:
        outputs = str(n.to_proto())
        #outputs = remove_drop(outputs)
        with open(deploy_path, 'w') as f:
            f.write(outputs)

    return n.to_proto()  # the same proto that was written to the prototxt file
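For reference, the TRAIN-phase data layer that this NetSpec serializes to should look roughly like the fragment below (the `source` path is a placeholder for whatever `trainlmdb` was; exact float formatting may differ between caffe versions):

```
layer {
  name: "data"
  type: "Data"
  top: "data"
  top: "label"
  include {
    phase: TRAIN
  }
  transform_param {
    scale: 0.00392157
    mean_value: 225
    mean_value: 213
    mean_value: 234
  }
  data_param {
    source: "path/to/train_lmdb"
    batch_size: 64
    backend: LMDB
  }
}
```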

With this, deploy.prototxt is generated at the same time, but its beginning still has to be edited: delete the data layer and insert the following instead:

layer {
  name: "data"
  type: "Input"
  top: "data"
  input_param { shape: { dim: 10 dim: 3 dim: 100 dim: 100 } }
}
def change_deploy():
    path = '/home/sailist/GIt/intel/caffe/myExamples/cell_all_alexnet/'
    start = path + 'start_deploy.txt'         # replacement header containing the Input layer
    deploy_path = path + 'c_deploy.prototxt'  # raw deploy file produced above
    new_path = path + "deploy.prototxt"       # final deploy file
    with open(deploy_path, 'r') as f:
        inputs = f.read()
    # cut the file at every "layer" keyword: [0] is the header, [1] the data layer
    spinput = inputs.split("layer")
    with open(start, 'r') as f2:
        spinput[0] = f2.read()  # swap in the new header
        del spinput[1]          # drop the original data layer
    inputs = "layer".join(spinput)
    with open(new_path, "w") as f2:
        f2.write(inputs)
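As a sanity check of the string surgery above, here is a self-contained toy run of the same split/delete/join steps on an in-memory prototxt (the layer contents are made up for illustration; note the trick assumes the word "layer" never appears anywhere except as the layer keyword):

```python
# A fake deploy prototxt: a header line, a Data layer, and one real layer.
proto = (
    'name: "LeNet"\n'
    'layer { name: "data" type: "Data" top: "data" }\n'
    'layer { name: "conv1" type: "Convolution" bottom: "data" }\n'
)
# Replacement header, already ending in an Input layer (like start_deploy.txt).
new_header = ('name: "LeNet"\n'
              'layer { name: "data" type: "Input" top: "data" }\n')

parts = proto.split("layer")   # parts[0] = header, parts[1] = Data layer, ...
parts[0] = new_header          # swap in the new header
del parts[1]                   # drop the original Data layer
result = "layer".join(parts)   # glue the remaining pieces back together
```

After the join, `result` keeps `conv1` intact while the old Data layer is gone and the Input layer from the header takes its place.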

I used this function to automate that edit.
Finally, all that is needed is to call:

write_lenet(train_lmdb_fname, test_lmdb_fname)
write_lenet(train_lmdb_fname, test_lmdb_fname, isdeploy=True)
change_deploy()

and both train_test.prototxt and deploy.prototxt are generated. The real key is understanding how outputs are passed between layers: each layer takes the previous layer's output blob as its input.
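That passing of outputs can be sketched without caffe at all; the hypothetical `layer` helper below just records, as NetSpec does, that each new layer's bottom blob is the previous layer's top blob:

```python
# Stand-in for NetSpec chaining: each layer consumes the previous one's top.
def layer(name, ltype, bottom=None):
    return {"name": name, "type": ltype,
            "bottom": bottom["top"] if bottom else None,
            "top": name}

data  = layer("data", "Input")
conv1 = layer("conv1", "Convolution", bottom=data)  # conv1's bottom is "data"
pool1 = layer("pool1", "Pooling", bottom=conv1)     # pool1's bottom is "conv1"
```

This is exactly the relationship `L.Convolution(n.data, ...)` and `L.Pooling(n.conv1, ...)` encode in `write_lenet` above.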
