A small example with caffe-segnet

I recently wanted to try image segmentation, so I first ran a simple experiment with caffe-segnet: a simple binary-classification task whose goal is cell segmentation.

The first step is setting up caffe-segnet under Ubuntu; an installation tutorial is at http://blog.csdn.net/hjxu2016/article/details/77473591

Next comes data preparation. You need the original images and their ground-truth images. Since my data is a binary problem, the ground truth I was given is a binarized black-and-white image with pixel values 0 and 255.

First the mask pixel values have to be converted from 0 and 255 to 0 and 1. Note that I store the converted 0/1 images as uint8; basic image-processing knowledge is enough to implement this.
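For reference, here is a minimal sketch of that conversion, assuming OpenCV (cv2) is available; the two directory paths are hypothetical placeholders, and the output directory is assumed to exist already:

# Convert 0/255 binary masks into 0/1 uint8 label images.
# SRC_DIR and DST_DIR are hypothetical placeholders -- change them to your own.
import glob
import os
import cv2

SRC_DIR = '/home/hjxu/caffe_examples/segnet_xu/data/test/mask_255'
DST_DIR = '/home/hjxu/caffe_examples/segnet_xu/data/test/mask'

for path in glob.glob(os.path.join(SRC_DIR, '*.png')):
    mask = cv2.imread(path, cv2.IMREAD_GRAYSCALE)  # uint8, values 0 and 255
    label = (mask > 127).astype('uint8')           # 0 stays 0, 255 becomes 1
    cv2.imwrite(os.path.join(DST_DIR, os.path.basename(path)), label)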

Then we need to make a txt list: on the left of each line goes the full path of an original image, on the right the full path of its mask, separated by a space. Be very careful that each mask path corresponds to its image path.

The list looks like this:

/home/hjxu/caffe_examples/segnet_xu/data/test/image/8900_11800.tiff /home/hjxu/caffe_examples/segnet_xu/data/test/mask/8500_10200_ConfidenceMap.png
/home/hjxu/caffe_examples/segnet_xu/data/test/image/10100_9800.tiff /home/hjxu/caffe_examples/segnet_xu/data/test/mask/8900_11800_ConfidenceMap.png
/home/hjxu/caffe_examples/segnet_xu/data/test/image/8900_9000.tiff /home/hjxu/caffe_examples/segnet_xu/data/test/mask/9300_10200_ConfidenceMap.png
/home/hjxu/caffe_examples/segnet_xu/data/test/image/8900_10200.tiff /home/hjxu/caffe_examples/segnet_xu/data/test/mask/8900_9000_ConfidenceMap.png

I generate the txt with the following Python code:

#!/usr/bin/env python2
# -*- coding: utf-8 -*-
"""
Created on Fri Sep 15 10:55:49 2017

@author: hjxu
"""
import glob
import os

IMG_PATH = '/home/hjxu/caffe_examples/segnet_xu/data/test/image'  # directory of the original images
MASK_PATH = '/home/hjxu/caffe_examples/segnet_xu/data/test/mask'  # directory of the masks (uint8 values 0 and 1)

img_paths = glob.glob(os.path.join(IMG_PATH, '*.tiff'))
mask_paths = glob.glob(os.path.join(MASK_PATH, '*.png'))
# Sorting both lists only pairs files correctly when the two sorted
# orders correspond one-to-one; check this for your own file names.
img_paths.sort()
mask_paths.sort()
image_mask_pair = list(zip(img_paths, mask_paths))

out = open('/home/hjxu/PycharmProjects/create_txt/train.txt', 'w')  # the txt list to write
for image_path, mask_path in image_mask_pair:
    out.write(image_path + ' ' + mask_path + '\n')  # note the single space between the two paths
out.close()

Once the txt file is generated, you can write Caffe's prototxt files directly. The passage below is adapted from someone else's write-up; it is quite simple, covering just two things: the class-weight computation, and the change to the upsample parameters after modifying the input image size.

Now for training. You can change the segmentation setup to fit your own task: SegNet originally has 11 classes, but my project has only two, so the network must be modified; likewise for the input size, which is originally 360×480 while I use 400×400. Adapt the network to your own requirements and results, but pay attention to two things: the upsample parameters, and the final class_weighting entries. There is one class_weighting entry per output class, and each value is determined from the label statistics of your training masks, computed as (all_label / class) / label, i.e. the total number of label pixels divided by the number of classes, then divided by that class's pixel count. The computation code is below:
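Here is a minimal sketch of that computation, assuming the masks are the uint8 0/1 images made earlier; MASK_PATH is a placeholder, and PIL and NumPy do the counting:

# Compute class_weighting values as (all_label / num_classes) / label_count.
# MASK_PATH is a placeholder -- point it at your uint8 0/1 mask directory.
import glob
import os
import numpy as np
from PIL import Image

MASK_PATH = '/home/hjxu/caffe_examples/segnet_xu/data/test/mask'
NUM_CLASSES = 2

counts = np.zeros(NUM_CLASSES, dtype=np.float64)
for path in glob.glob(os.path.join(MASK_PATH, '*.png')):
    mask = np.array(Image.open(path))
    for c in range(NUM_CLASSES):
        counts[c] += np.sum(mask == c)

weights = (counts.sum() / NUM_CLASSES) / counts  # assumes every class appears at least once
for c in range(NUM_CLASSES):
    print 'class_weighting: %.4f  # class %d' % (weights[c], c)

Each printed class_weighting value then goes into the loss_param block of the training prototxt, one entry per class.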

You should already be fairly familiar with how Caffe is used: there are three files, solver.prototxt, deploy.prototxt, and train_val.prototxt.

The originals of these three files are on GitHub: https://github.com/alexgkendall/SegNet-Tutorial
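For a two-class, 400×400 setup, the spots you would touch in the training prototxt look roughly like the excerpt below. This is illustrative only, based on my reading of the SegNet-Tutorial files: with 400×400 input and Caffe's ceil-mode pooling, the encoder maps shrink 400→200→100→50→25→13, so the decoder upsample outputs must step back through 25, 50, 100, 200, 400 (hence upsample5 needs upsample_w and upsample_h of 25), and the last decoder convolution's num_output must equal the class count. The class_weighting numbers here are made-up examples; substitute your computed values:

# Illustrative excerpt only -- layer and blob names follow the
# SegNet-Tutorial prototxt; adapt the sizes to your own network.
layer {
  name: "upsample5"
  type: "Upsample"
  bottom: "pool5"
  bottom: "pool5_mask"
  top: "pool5_D"
  upsample_param { scale: 2 upsample_w: 25 upsample_h: 25 }
}
# ... (the other upsample layers follow the same size chain) ...
layer {
  name: "loss"
  type: "SoftmaxWithLoss"
  bottom: "conv1_1_D"
  bottom: "label"
  top: "loss"
  softmax_param { engine: CAFFE }
  loss_param {
    weight_by_label_freqs: true
    class_weighting: 1.0   # background (made-up example value)
    class_weighting: 9.3   # cell (made-up example value)
  }
}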

I won't go into more detail on them here, since most readers know these files well. Next comes training: create the following sh script, train.sh

#!/usr/bin/env sh
/home/hjxu/caffe-master/caffe-segnet-segnet-cleaned/build/tools/caffe train \
-gpu 0 -solver /home/hjxu/caffe_examples/Cell_segnet/code/segnet_solver.prototxt \
-weights /home/hjxu/caffe_examples/Cell_segnet/code/VGG_ILSVRC_16_layers.caffemodel \
2>&1 | tee /home/hjxu/caffe_examples/Cell_segnet/train.txt

This calls caffe-segnet's train command with three arguments:

The first argument, -gpu 0, selects which GPU to use.

The second argument, -solver /home/hjxu/caffe_examples/Cell_segnet/code/segnet_solver.prototxt, points to the solver file among the three Caffe files.

The third argument, -weights /home/hjxu/caffe_examples/Cell_segnet/code/VGG_ILSVRC_16_layers.caffemodel, gives the weights used for initialization.
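The solver file itself is not shown in this post. For reference, here is a minimal sketch in the spirit of the SegNet-Tutorial solver; every value is illustrative, and the net path and snapshot prefix are placeholders chosen to match the paths used above:

# Illustrative solver sketch only -- all values are placeholders.
net: "/home/hjxu/caffe_examples/Cell_segnet/code/segnet_train.prototxt"
base_lr: 0.001             # starting learning rate
lr_policy: "step"
gamma: 1.0
stepsize: 10000000         # effectively a constant learning rate
momentum: 0.9
weight_decay: 0.0005
display: 20
max_iter: 40000
snapshot: 1000             # write a caffemodel every 1000 iterations
snapshot_prefix: "/home/hjxu/caffe_examples/Cell_segnet/model-xu/seg"
solver_mode: GPU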

Then you can happily get a model. But don't rush: after this model is generated it still cannot be used to test images directly, because the batch-norm statistics have to be converted for inference first. We need one more Python script, named compute_bn_statistics.py

#!/usr/bin/env python
import os
import numpy as np
from skimage.io import ImageCollection
from argparse import ArgumentParser

caffe_root = '/home/hjxu/caffe-master/caffe-segnet-segnet-cleaned/'  # Change this to the absolute directory of SegNet Caffe
import sys
sys.path.insert(0, caffe_root + 'python')

import caffe
from caffe.proto import caffe_pb2
from google.protobuf import text_format


def extract_dataset(net_message):
    assert net_message.layer[0].type == "DenseImageData"
    source = net_message.layer[0].dense_image_data_param.source
    with open(source) as f:
        data = f.read().split()
    ims = ImageCollection(data[::2])
    labs = ImageCollection(data[1::2])
    assert len(ims) == len(labs) > 0
    return ims, labs


def make_testable(train_model_path):
    # load the train net prototxt as a protobuf message
    with open(train_model_path) as f:
        train_str = f.read()
    train_net = caffe_pb2.NetParameter()
    text_format.Merge(train_str, train_net)

    # add the mean, var top blobs to all BN layers
    for layer in train_net.layer:
        if layer.type == "BN" and len(layer.top) == 1:
            layer.top.append(layer.top[0] + "-mean")
            layer.top.append(layer.top[0] + "-var")

    # remove the test data layer if present
    if train_net.layer[1].name == "data" and train_net.layer[1].include:
        train_net.layer.remove(train_net.layer[1])
        if train_net.layer[0].include:
            # remove the 'include {phase: TRAIN}' layer param
            train_net.layer[0].include.remove(train_net.layer[0].include[0])
    return train_net


def make_test_files(testable_net_path, train_weights_path, num_iterations,
                    in_h, in_w):
    # load the train net prototxt as a protobuf message
    with open(testable_net_path) as f:
        testable_str = f.read()
    testable_msg = caffe_pb2.NetParameter()
    text_format.Merge(testable_str, testable_msg)
    
    bn_layers = [l.name for l in testable_msg.layer if l.type == "BN"]
    bn_blobs = [l.top[0] for l in testable_msg.layer if l.type == "BN"]
    bn_means = [l.top[1] for l in testable_msg.layer if l.type == "BN"]
    bn_vars = [l.top[2] for l in testable_msg.layer if l.type == "BN"]

    net = caffe.Net(testable_net_path, train_weights_path, caffe.TEST)
    
    # init our blob stores with the first forward pass
    res = net.forward()
    bn_avg_mean = {bn_mean: np.squeeze(res[bn_mean]).copy() for bn_mean in bn_means}
    bn_avg_var = {bn_var: np.squeeze(res[bn_var]).copy() for bn_var in bn_vars}

    # iterate over the rest of the training set
    for i in xrange(1, num_iterations):
        res = net.forward()
        for bn_mean in bn_means:
            bn_avg_mean[bn_mean] += np.squeeze(res[bn_mean])
        for bn_var in bn_vars:
            bn_avg_var[bn_var] += np.squeeze(res[bn_var])
        print 'progress: {}/{}'.format(i, num_iterations)

    # compute average means and vars
    for bn_mean in bn_means:
        bn_avg_mean[bn_mean] /= num_iterations
    for bn_var in bn_vars:
        bn_avg_var[bn_var] /= num_iterations

    for bn_blob, bn_var in zip(bn_blobs, bn_vars):
        m = np.prod(net.blobs[bn_blob].data.shape) / np.prod(bn_avg_var[bn_var].shape)
        bn_avg_var[bn_var] *= (m / (m - 1))

    # calculate the new scale and shift blobs for all the BN layers
    scale_data = {bn_layer: np.squeeze(net.params[bn_layer][0].data)
                  for bn_layer in bn_layers}
    shift_data = {bn_layer: np.squeeze(net.params[bn_layer][1].data)
                  for bn_layer in bn_layers}

    var_eps = 1e-9
    new_scale_data = {}
    new_shift_data = {}
    for bn_layer, bn_mean, bn_var in zip(bn_layers, bn_means, bn_vars):
        gamma = scale_data[bn_layer]
        beta = shift_data[bn_layer]
        Ex = bn_avg_mean[bn_mean]
        Varx = bn_avg_var[bn_var]
        new_gamma = gamma / np.sqrt(Varx + var_eps)
        new_beta = beta - (gamma * Ex / np.sqrt(Varx + var_eps))

        new_scale_data[bn_layer] = new_gamma
        new_shift_data[bn_layer] = new_beta
    print "New data:"
    print new_scale_data.keys()
    print new_shift_data.keys()

    # assign computed new scale and shift values to net.params
    for bn_layer in bn_layers:
        net.params[bn_layer][0].data[...] = new_scale_data[bn_layer].reshape(
            net.params[bn_layer][0].data.shape
        )
        net.params[bn_layer][1].data[...] = new_shift_data[bn_layer].reshape(
            net.params[bn_layer][1].data.shape
        )
        
    # build a test net prototxt
    test_msg = testable_msg
    # replace data layers with 'input' net param
    data_layers = [l for l in test_msg.layer if l.type.endswith("Data")]
    for data_layer in data_layers:
        test_msg.layer.remove(data_layer)
    test_msg.input.append("data")
    test_msg.input_dim.append(1)
    test_msg.input_dim.append(3)
    test_msg.input_dim.append(in_h)
    test_msg.input_dim.append(in_w)
    # Set BN layers to INFERENCE so they use the new stat blobs
    # and remove mean, var top blobs.
    for l in test_msg.layer:
        if l.type == "BN":
            if len(l.top) > 1:
                dead_tops = l.top[1:]
                for dl in dead_tops:
                    l.top.remove(dl)
            l.bn_param.bn_mode = caffe_pb2.BNParameter.INFERENCE
    # replace output loss, accuracy layers with a softmax
    dead_outputs = [l for l in test_msg.layer if l.type in ["SoftmaxWithLoss", "Accuracy"]]
    out_bottom = dead_outputs[0].bottom[0]
    for dead in dead_outputs:
        test_msg.layer.remove(dead)
    test_msg.layer.add(
        name="prob", type="Softmax", bottom=[out_bottom], top=['prob']
    )
    return net, test_msg


def make_parser():
    p = ArgumentParser()
    p.add_argument('train_model')
    p.add_argument('weights')
    p.add_argument('out_dir')
    return p


if __name__ == '__main__':
    caffe.set_mode_gpu()
    p = make_parser()
    args = p.parse_args()

    # build and save testable net
    if not os.path.exists(args.out_dir):
        os.makedirs(args.out_dir)
    print "Building BN calc net..."
    testable_msg = make_testable(args.train_model)
    BN_calc_path = os.path.join(
        args.out_dir, '__for_calculating_BN_stats_' + os.path.basename(args.train_model)
    )
    with open(BN_calc_path, 'w') as f:
        f.write(text_format.MessageToString(testable_msg))

    # use testable net to calculate BN layer stats
    print "Calculate BN stats..."
    train_ims, train_labs = extract_dataset(testable_msg)
    train_size = len(train_ims)
    minibatch_size = testable_msg.layer[0].dense_image_data_param.batch_size
    num_iterations = train_size // minibatch_size + train_size % minibatch_size
    in_h, in_w = (360, 480)   # remember to change this to match your own image size
    test_net, test_msg = make_test_files(BN_calc_path, args.weights, num_iterations,
                                         in_h, in_w)
    
    # save deploy prototxt
    #print "Saving deployment prototext file..."
    #test_path = os.path.join(args.out_dir, "deploy.prototxt")
    #with open(test_path, 'w') as f:
    #    f.write(text_format.MessageToString(test_msg))
    
    print "Saving test net weights..."
    test_net.save(os.path.join(args.out_dir, "test_weights.caffemodel"))  # rename this if you want the iteration count in the file name
    print "done"

Then call the following sh script, test.sh

python /home/hjxu/caffe_examples/Cell_segnet/code/compute_bn_statistics.py \
/home/hjxu/caffe_examples/segnet_xu/code/profile/segnet_train.prototxt \
/home/hjxu/caffe_examples/Cell_segnet/model-xu/seg_iter_30000.caffemodel \
/home/hjxu/caffe_examples/segnet_xu/model/seg_30000  # compute BN statistics for SegNet

This generates a new caffemodel file under the model directory.

Now let's test our own images. First, as before, make a txt file list (the list script above can be reused with the test-set paths), then write a Python file named test_segmentation_camvid.py

import numpy as np
import matplotlib.pyplot as plt
import os.path
import json
import scipy
import argparse
import math
import pylab
from sklearn.preprocessing import normalize
caffe_root = '/home/hjxu/caffe-master/caffe-segnet-segnet-cleaned/'  # Change this to the absolute directory of SegNet Caffe
import sys
sys.path.insert(0, caffe_root + 'python')

import caffe

# Import arguments
parser = argparse.ArgumentParser()
parser.add_argument('--model', type=str, required=True)
parser.add_argument('--weights', type=str, required=True)
parser.add_argument('--iter', type=int, required=True)
args = parser.parse_args()

caffe.set_mode_gpu()

net = caffe.Net(args.model,
                args.weights,
                caffe.TEST)


for i in range(0, args.iter):

	net.forward()

	image = net.blobs['data'].data
	label = net.blobs['label'].data
	predicted = net.blobs['prob'].data
	image = np.squeeze(image[0,:,:,:])
	label = np.squeeze(label[0,:,:,:])  # squeeze the label blob too, so its shape matches ind below
	output = np.squeeze(predicted[0,:,:,:])
	ind = np.argmax(output, axis=0)

	r = ind.copy()
	g = ind.copy()
	b = ind.copy()
	r_gt = label.copy()
	g_gt = label.copy()
	b_gt = label.copy()

#	Sky = [128,128,128]
#	Building = [128,0,0]
#	Pole = [192,192,128]
#	Road_marking = [255,69,0]
#	Road = [128,64,128]
#	Pavement = [60,40,222]
#	Tree = [128,128,0]
#	SignSymbol = [192,128,128]
#	Fence = [64,64,128]
#	Car = [64,0,128]
#	Pedestrian = [64,64,0]
#	Bicyclist = [0,128,192]
#	Unlabelled = [0,0,0]

#	label_colours = np.array([Sky, Building, Pole, Road, Pavement, Tree, SignSymbol, Fence, Car, Pedestrian, Bicyclist, Unlabelled])
	BG = [0,0,0]         # background
	M = [0,255,0]        # cell (foreground)
	label_colours = np.array([BG, M])
	for l in range(0,2):
		r[ind==l] = label_colours[l,0]
		g[ind==l] = label_colours[l,1]
		b[ind==l] = label_colours[l,2]
		r_gt[label==l] = label_colours[l,0]
		g_gt[label==l] = label_colours[l,1]
		b_gt[label==l] = label_colours[l,2]

	rgb = np.zeros((ind.shape[0], ind.shape[1], 3))
	rgb[:,:,0] = r/255.0
	rgb[:,:,1] = g/255.0
	rgb[:,:,2] = b/255.0
	rgb_gt = np.zeros((ind.shape[0], ind.shape[1], 3))
	rgb_gt[:,:,0] = r_gt/255.0
	rgb_gt[:,:,1] = g_gt/255.0
	rgb_gt[:,:,2] = b_gt/255.0

	image = image/255.0

	image = np.transpose(image, (1,2,0))
	output = np.transpose(output, (1,2,0))
	image = image[:,:,(2,1,0)]


	#scipy.misc.toimage(rgb, cmin=0.0, cmax=255).save(IMAGE_FILE+'_segnet.png')  # save the prediction to a file instead of showing it

	plt.figure()
	plt.imshow(image, vmin=0, vmax=1)   # show the source image
	plt.figure()
	plt.imshow(rgb_gt, vmin=0, vmax=1)  # the given mask; if your test images have no masks, list some placeholder images so the code needs no changes
	plt.figure()
	plt.imshow(rgb, vmin=0, vmax=1)     # the predicted segmentation
	plt.show()


print 'Success!'

Then call the following sh script, named result_show_test.sh

python /home/hjxu/caffe_examples/Cell_segnet/code/test_segmentation_camvid.py \
--model /home/hjxu/caffe_examples/Cell_segnet/code/segnet_inference.prototxt \
--weights /home/hjxu/caffe_examples/segnet_xu/model/seg_30000/test_weights.caffemodel --iter 111  # Test SegNet

and you get the visualized test results. (The --iter value, 111 here, is the number of forward passes; it should cover your whole test list given the batch size in the inference prototxt.)

For more details, see:

SegNet segmentation network Caffe tutorial (part 1)

The SegNet tutorial page: http://mi.eng.cam.ac.uk/projects/segnet/tutorial.html



