How to define custom networks and custom layers in Caffe (Python) (Part 5)

After all the groundwork in the previous posts, we finally get to the main topic.

First, write a layer definition file, conv.prototxt:

layer {
  name: 'MyPythonLayer'
  type: 'Python'
  top: 'output'
  python_param {
    module: 'mypythonlayer'
    layer: 'MyLayer'
    param_str: "{\'data_dir\':\'../../images\',\'num\': 100}"
  }
}
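A Python layer defined this way can be wired into a larger network like any built-in layer: downstream layers simply take its top blob as their bottom. As a rough sketch only (this pooling layer is illustrative and is not part of the example below):

layer {
  name: 'pool'
  type: 'Pooling'
  bottom: 'output'
  top: 'pool'
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}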
Then, how do we write the Python module itself? Here is mypythonlayer.py:

#!/usr/bin/env python
#coding=utf-8
import sys
import numpy as np
import yaml
from PIL import Image

caffe_root = '/home/x/git/caffe/'
sys.path.insert(0, caffe_root + 'python')
import caffe

class MyLayer(caffe.Layer):

    def setup(self, bottom, top):
        # param_str is the string from python_param in the prototxt; it is a
        # Python dict literal here, so both eval() and yaml.load() can parse it
        params = eval(self.param_str)
        self.data_dir = params['data_dir']
        self.num = yaml.load(self.param_str)["num"]
        self.cat = '{}/{}.jpg'.format(self.data_dir, 'cat')
        print "Image:", self.cat
        print "Parameter num : ", self.num

    def reshape(self, bottom, top):
        # load the image, convert RGB -> BGR and HxWxC -> CxHxW as Caffe expects,
        # then resize the top blob to hold a single image
        im = Image.open(self.cat)
        in_ = np.array(im, dtype=np.float32)
        in_ = in_[:, :, ::-1]
        self.data = in_.transpose((2, 0, 1))
        top[0].reshape(1, *self.data.shape)

    def forward(self, bottom, top):
        # copy the preloaded image into the top blob
        top[0].data[...] = self.data
        print "forward=============================>cat"
        print self.data
        print type(top[0])
        print dir(top[0])
        print type(top[0].data)
        #print help(top[0].data)
        print type(top[0].data[...])

    def backward(self, top, propagate_down, bottom):
        # no bottoms and no learnable parameters, so there is nothing to backpropagate
        pass
The layer contains a few basic settings followed by parameter parsing; I added two parameters here, a data path and a num parameter.

The basic implementation shows simple examples of reading parameters and loading an image.
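One caveat: eval() will execute whatever string is placed in param_str. If all you need is to parse a dict literal like the one above, ast.literal_eval only accepts literals and is a safer drop-in. This is just a sketch of how setup() could parse the parameters that way, reusing the params/data_dir/num names from the layer above; it is not what the code above actually uses:

import ast

def setup(self, bottom, top):
    # parse the dict literal from python_param without executing arbitrary code
    params = ast.literal_eval(self.param_str)
    self.data_dir = params['data_dir']
    self.num = params['num']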

Next, call the layer from test_python.py:

#!/usr/bin/env python
#coding=utf-8
import sys

caffe_root = '/home/x/git/caffe/'
sys.path.insert(0, caffe_root + 'python')
import caffe

# the directory containing mypythonlayer.py must be on the path before the
# net is constructed, otherwise Caffe cannot import the custom layer module
sys.path.append("/home/x/git/caffe/examples/test_layer_lb/mypython")
import mypythonlayer

caffe.set_device(0)
caffe.set_mode_gpu()

print "================================="
net = caffe.Net('/home/x/git/caffe/examples/test_layer_lb/mypython/conv.prototxt', caffe.TEST)
#print net.blobs.items()
#print dir(net.blobs)
net.forward()
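After net.forward() you can also read the top blob straight from net.blobs; the blob name 'output' is the top declared in conv.prototxt. A small sketch (the shape in the comment assumes the 360x480 cat image shown in the log below):

out = net.blobs['output'].data   # numpy array backing the top blob
print out.shape                  # e.g. (1, 3, 360, 480)
print out.dtype                  # float32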
Log output:

WARNING: Logging before InitGoogleLogging() is written to STDERR
I1129 19:11:46.731223 16691 net.cpp:49] Initializing net from parameters: 
state {
  phase: TEST
}
layer {
  name: "MyPythonLayer"
  type: "Python"
  top: "output"
  python_param {
    module: "mypythonlayer"
    layer: "MyLayer"
    param_str: "{\'data_dir\':\'../../images\',\'num\': 100}"
  }
}
I1129 19:11:46.731266 16691 layer_factory.hpp:77] Creating layer MyPythonLayer
I1129 19:11:46.731314 16691 net.cpp:91] Creating Layer MyPythonLayer
I1129 19:11:46.731322 16691 net.cpp:399] MyPythonLayer -> output
I1129 19:11:46.744997 16691 net.cpp:141] Setting up MyPythonLayer
I1129 19:11:46.745023 16691 net.cpp:148] Top shape: 1 3 360 480 (518400)
I1129 19:11:46.745028 16691 net.cpp:156] Memory required for data: 2073600
I1129 19:11:46.745035 16691 net.cpp:219] MyPythonLayer does not need backward computation.
I1129 19:11:46.745039 16691 net.cpp:261] This network produces output output
I1129 19:11:46.745045 16691 net.cpp:274] Network initialization done.
=================================
Image: ../../images/cat.jpg
Parameter num :  100
forward=============================>cat
[[[  49.   50.   47. ...,   30.   36.   45.]
  [  51.   52.   48. ...,   32.   37.   45.]
  [  51.   51.   49. ...,   27.   35.   42.]
  ..., 
  [  15.   13.   13. ...,  156.  162.  164.]
  [  19.   20.   22. ...,  167.  158.  167.]
  [  25.   25.   27. ...,  188.  157.  158.]]

 [[  57.   58.   55. ...,   77.   87.   94.]
  [  57.   58.   56. ...,   82.   89.   96.]
  [  57.   57.   57. ...,   88.   95.   99.]
  ..., 
  [  47.   46.   47. ...,  173.  179.  181.]
  [  53.   56.   58. ...,  184.  175.  184.]
  [  62.   64.   66. ...,  205.  174.  175.]]

 [[  26.   27.   25. ...,   45.   50.   56.]
  [  26.   27.   25. ...,   50.   52.   58.]
  [  26.   26.   26. ...,   52.   55.   60.]
  ..., 
  [  36.   32.   36. ...,  182.  188.  190.]
  [  42.   42.   44. ...,  193.  184.  193.]
  [  46.   49.   51. ...,  214.  183.  184.]]]
<class 'caffe._caffe.Blob'>
['__class__', '__delattr__', '__dict__', '__doc__', '__format__', '__getattribute__', '__hash__', '__init__', '__module__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '__weakref__', 'channels', 'count', 'data', 'diff', 'height', 'num', 'reshape', 'shape', 'width']
<type 'numpy.ndarray'>
<type 'numpy.ndarray'>
My skills are a bit limited, and when I first looked at the parameters inside a Python layer's prototxt I was fairly confused. If you can't figure out what something in there means, just write a small script and step through it; it gradually becomes clear. For example:
a="{\'sbdd_dir\': \'../data/sbdd/dataset\', \'seed\': 1337, \'split\': \'train\', \'mean\': (104.00699, 116.66877, 122.67892)}"
params = eval(a)
print type(())
print params.items()
params_dir=params['sbdd_dir']
print params_dir
