Caffe: using the Python API as the data input layer

caffe (Convolutional Architecture for Fast Feature Embedding)

In Caffe, data management is usually handled by LMDB: raw data of all kinds is converted into a uniform key-value store, which makes it easy for the data input layer to fetch samples and also improves disk I/O efficiency.
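
For reference, this is roughly how raw arrays end up as key-value records in an LMDB (a minimal sketch using the lmdb package and caffe.io; the output path and toy data are made up):

import lmdb
import numpy as np
import caffe

# Hypothetical output LMDB and toy samples, just to show the key-value layout.
env = lmdb.open('train_lmdb', map_size=int(1e9))
with env.begin(write=True) as txn:
    for i in range(10):
        arr = np.random.rand(3, 28, 28).astype(np.float32)      # one C x H x W sample
        datum = caffe.io.array_to_datum(arr, label=i % 2)        # wrap it in a caffe Datum
        txn.put('{:08d}'.format(i), datum.SerializeToString())   # key -> serialized value
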
However, sometimes we can write the network's data input layer directly in Python, which is simple and quick to write.

Reference: https://chrischoy.github.io/research/caffe-python-layer/

Build configuration

To use a Python layer, the Python option in Makefile.config must be enabled before compiling:

WITH_PYTHON_LAYER := 1

After changing it, rebuild:

# run these in caffe_root, in order
make clean
make 
make pycaffe 

If import caffe still fails, add the following lines at the top of your Python script before importing caffe:

import sys
sys.path.append("/home/yonghuming/caffe-master/python")
sys.path.append("/home/yonghuming/caffe-master/python/caffe")

Alternatively, set it globally via an environment variable:

sudo vim /etc/profile

Add the variable:

export PYTHONPATH=/home/yonghu/caffe/python

Finally, apply it: source /etc/profile
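
To confirm the path is actually picked up, check which caffe module gets imported; the printed path should sit under your caffe_root/python directory:

import caffe
print(caffe.__file__)   # should point into <caffe_root>/python/caffe/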

First, an official example

This example is located under $caffe_root/examples/pycaffe.
The network definition, linreg.prototxt:

layer {
  type: 'Python'
  name: 'loss'
  top: 'loss'
  bottom: 'ipx'
  bottom: 'ipy'
  python_param {
    # the module name -- usually the filename -- that needs to be in $PYTHONPATH
    module: 'pyloss'
    # the layer name -- the class name in the module
    layer: 'EuclideanLossLayer'
  }
  # set loss weight so Caffe knows this is a loss layer.
  # since PythonLayer inherits directly from Layer, this isn't automatically
  # known to Caffe
  loss_weight: 1
}

The loss layer in this configuration file is the layer written in Python.

 module: 'pyloss'
 layer: 'EuclideanLossLayer'

module is the name of the Python file; layer is the name of the class inside that file.

    # the module name -- usually the filename -- that needs to be in $PYTHONPATH

Here $PYTHONPATH refers to $caffe_root/python; just put the Python file in that directory.
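
In other words, module and layer map directly onto a Python import. A quick way to verify the mapping before running Caffe (assuming pyloss.py, shown below, is already on $PYTHONPATH):

import pyloss                       # the value of module:
print(pyloss.EuclideanLossLayer)    # the class named by layer:
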
Now let's look at the pyloss.py file:

import caffe
import numpy as np


class EuclideanLossLayer(caffe.Layer):
    """
    Compute the Euclidean Loss in the same manner as the C++ EuclideanLossLayer
    to demonstrate the class interface for developing layers in Python.
    """

    def setup(self, bottom, top):
        # check input pair
        if len(bottom) != 2:
            raise Exception("Need two inputs to compute distance.")

    def reshape(self, bottom, top):
        # check input dimensions match
        if bottom[0].count != bottom[1].count:
            raise Exception("Inputs must have the same dimension.")
        # difference is shape of inputs
        self.diff = np.zeros_like(bottom[0].data, dtype=np.float32)
        # loss output is scalar
        top[0].reshape(1)

    def forward(self, bottom, top):
        self.diff[...] = bottom[0].data - bottom[1].data
        top[0].data[...] = np.sum(self.diff**2) / bottom[0].num / 2.

    def backward(self, top, propagate_down, bottom):
        for i in range(2):
            if not propagate_down[i]:
                continue
            if i == 0:
                sign = 1
            else:
                sign = -1
            bottom[i].diff[...] = sign * self.diff / bottom[i].num
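
The same math can be checked with plain numpy, without Caffe, to see exactly what forward and backward compute (the shapes here are arbitrary):

import numpy as np

x = np.random.randn(4, 3).astype(np.float32)   # stands in for bottom[0].data (batch of 4)
y = np.random.randn(4, 3).astype(np.float32)   # stands in for bottom[1].data
diff = x - y
loss = np.sum(diff ** 2) / x.shape[0] / 2.     # what forward() writes into top[0].data
grad_x = diff / x.shape[0]                     # what backward() writes into bottom[0].diff
grad_y = -diff / x.shape[0]                    # ... and into bottom[1].diff
print(loss, grad_x.shape, grad_y.shape)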

Using a Python layer as the data input layer

In my case the input data is a 28x28x6 array, i.e. its type and shape are:

print type(data)
print np.shape(data)

# <type 'numpy.ndarray'>
# (28, 28, 6)
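
These arrays are read back later with np.load in the data layer, so they would typically be written with np.save into the pos/neg folder layout used below (paths and data here are hypothetical):

import os
import numpy as np

out_dir = 'data/input/train/pos'                          # matches src_file + '/pos' below
if not os.path.isdir(out_dir):
    os.makedirs(out_dir)
sample = np.random.rand(28, 28, 6).astype(np.float32)     # one 28x28x6 sample
np.save(os.path.join(out_dir, 'sample_0000.npy'), sample)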

Images work the same way; an image read with OpenCV is also a numpy array:

import numpy as np 
import cv2
data = cv2.imread('1.jpg')
print type(data)
#<type 'numpy.ndarray'>
print np.shape(data)
# (375, 500, 3)

But be careful with the color channels: OpenCV loads images in BGR order with (height, width, channel) layout.
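
For example, converting an OpenCV image to the channel-first layout Caffe blobs use (and to RGB, if that is what the model expects) looks like this; whether the BGR-to-RGB swap is needed depends on how the model was trained:

import cv2
import numpy as np

img = cv2.imread('1.jpg')                    # (H, W, 3), BGR channel order
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)   # only if the network expects RGB
img = img.transpose(2, 0, 1)                 # (H, W, C) -> (C, H, W)
print(np.shape(img))                         # e.g. (3, 375, 500)
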
Here is the Python layer definition in my prototxt:

name: "LeNet"
layer {
  name: "Data"
  type: "Python"
  top: "data"
  top: "label"
  include {
    phase: TRAIN
  }
  python_param {
    module: "dataLayer"
    layer: "Custom_Data_Layer"
    param_str: '{"batch_size":64, "im_shape":28, "src_file":"data/input/train"}'
  }
}

param_str passes the layer's parameters as a string; the layer parses it in setup().
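
The layer below parses this string with eval(); since the string is valid JSON, json.loads is a safer alternative:

import json

param_str = '{"batch_size":64, "im_shape":28, "src_file":"data/input/train"}'
params = json.loads(param_str)
print(params["batch_size"], params["im_shape"], params["src_file"])
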
The corresponding Python file:

import caffe
import numpy as np
import os
import random

def GetTupleList(src_file, dirtag):
    # Load every .npy file under src_file/<dirtag> and pair it with its label
    # (pos -> 0, neg -> 1 in this example).
    subDirTuples = []
    folder = os.path.join(src_file, dirtag)
    fns = os.listdir(folder)
    if dirtag=='pos':
        tag = 0
    elif dirtag=='neg':
        tag = 1
    else:
        raise Exception('Invalid dirtag {}'.format(str(dirtag)))

    for fn in fns:
        path = os.path.join(folder, fn)
        data = np.load(path)
        subDirTuples.append((data, np.array([tag])))
    return subDirTuples

def readSrcFile(src_file):
    posTuples = GetTupleList(src_file, 'pos')
    print(len(posTuples))
    negTuples = GetTupleList(src_file, 'neg')
    print(len(negTuples))
    imgTuples = posTuples + negTuples

    return imgTuples

class Custom_Data_Layer(caffe.Layer):
    def setup(self, bottom, top):
        # Check top shape
        if len(top) != 2:
            raise Exception("Need to define two tops: data and label")

        # Check bottom shape
        if len(bottom) != 0:
            raise Exception("Do not define a bottom")

        # Read parameters passed via param_str in the prototxt.
        # eval() works for the dict-style string above; json.loads() is a safer choice.
        params = eval(self.param_str)
        src_file = params["src_file"]
        self.batch_size = params["batch_size"]
        self.im_shape = params["im_shape"]

        top[0].reshape(self.batch_size, 6, self.im_shape, self.im_shape)
        top[1].reshape(self.batch_size, 1)
        self.imgTuples = readSrcFile(src_file)
        self._cur = 0 # use this to check if we need to restart the list of images

    def forward(self, bottom, top):
        for itt in range(self.batch_size):
            # Load the next (image, label) pair
            im, label = self.load_next_image()
            # Here we could preprocess the image

            # Add directly to the top blob
            # NOTE: np.reshape() only reinterprets the memory layout; if im is
            # stored as (28, 28, 6) = (H, W, C), a true channel-first conversion
            # needs im.transpose(2, 0, 1) instead.
            im_data = np.reshape(im, (6, 28, 28))
            top[0].data[itt, ...] = im_data 
            top[1].data[itt, ...] = label

    def load_next_image(self):
        # If we have finished forwarding all images, then an epoch has finished
        # and it is time to start a new one
        if self._cur == len(self.imgTuples):
            self._cur = 0
            random.shuffle(self.imgTuples)

        im, label = self.imgTuples[self._cur]
        self._cur += 1
        return im, label

    def reshape(self, bottom, top):
        """
        There is no need to reshape the data, since the input is of fixed size
        (img shape and batch size)
        """
        pass

    def backward(self, top, propagate_down, bottom):
        """
        This layer does not back propagate
        """
        pass
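
A quick way to check that the layer produces the expected blobs is to load a net containing it and run one forward pass. This is a sketch: train.prototxt is a hypothetical file holding the Python Data layer above, and the pos/neg folders must already exist:

import caffe

caffe.set_mode_cpu()
net = caffe.Net('train.prototxt', caffe.TRAIN)   # net definition with the Python Data layer
net.forward()                                    # calls Custom_Data_Layer.forward()
print(net.blobs['data'].data.shape)              # (64, 6, 28, 28)
print(net.blobs['label'].data.shape)             # (64, 1)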

Done!
