A Model Training Experiment for Object Detection

This article continues the earlier post on object-detection model training with haartraining; the previous post introduced the training software and how to use it.

This article describes the experiment based on that post: preparing positive samples, preparing negative samples, and training with traincascade. The detection target is a basketball.

Simply put, positive samples are images that contain the target basketball, and negative samples are images without a basketball.

Generating positive samples

I took 27 photos of the basketball and removed the background (i.e., set the background to 0), as shown below:

Some tutorials say a single positive image is enough, because createsamples can generate many positive samples from it by rotating it, adjusting the brightness, and so on. Others say that generating about 100 samples from each original positive image is a reasonable ratio. Following that rule, I generated 2700 positive samples from my 27 images.

createsamples -info pos27\posa.txt -img pg\p27.jpg -bg neg.txt -w 64 -h 64 -maxxangle 0 -maxyangle 0 

Run this 27 times, changing the source image and output directory each time.

Generating the positive samples took some detours. At first I assumed that 27 input images would give me 2700 samples, but only 27 were produced. In other words, you can either generate a vec file with as many samples as you want from a single positive image, or get exactly one sample per image when several images are given; neither directly achieves my goal.

In fact you can generate one vec file per original image and then merge them all, but at the time I did not know the structure of a vec file or how to merge them.

Later I found the following link, a tool written in Python. I think that approach is better; you can skip the next part and jump straight to the section "How to merge vec files":

https://github.com/wulfebw/mergevec

I found the link above via https://medium.com/@rithikachowta/object-detection-lbp-cascade-classifier-generation-a1d1a1c2d0b; her write-up of the experiment is worth reading. The method is described later in this post.

What I did first was use the command above to produce 27 directories, each containing a posa.txt annotation file (one line per generated image: filename, the number of objects, and the bounding box x y w h, as in the listing further below).

Then merge these 27 posa.txt files into a single pos.txt.

The Python code I used is as follows:

# Merge the 27 posa.txt files into one pos.txt, prefixing each
# annotation line with its directory so the image paths stay valid.
for i in range(1, 28):
    pos = "/apython/train/pos" + str(i) + "/"
    with open('pos.txt', 'a') as f, open(pos + 'posa.txt', 'r') as fread:
        data = fread.read().splitlines()
        for stra in data:
            print(pos + stra, file=f)
print("finish2")

Merging the annotations from the 27 directories gives the positive-sample file pos.txt; part of its content is shown below. It has 2700 lines in total.

pos1/0097_0010_0009_0047_0047.jpg 1 10 9 47 47
pos1/0098_0005_0009_0045_0045.jpg 1 5 9 45 45
pos1/0099_0002_0004_0057_0057.jpg 1 2 4 57 57
pos1/0100_0003_0004_0055_0055.jpg 1 3 4 55 55
pos2/0001_0004_0004_0048_0048.jpg 1 4 4 48 48
pos2/0002_0001_0004_0056_0056.jpg 1 1 4 56 56
pos2/0003_0004_0002_0052_0052.jpg 1 4 2 52 52
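
Each line has the format: image path, number of objects, then x y w h for each bounding box. As a quick sanity check (my own sketch, assuming the listed paths are resolvable from the current directory), something like this can verify the entries before building the vec file:

import cv2

# Quick sanity check of pos.txt: every image must exist and its first
# bounding box must fit inside the image (format: path count x y w h).
with open('pos.txt') as f:
    for line in f:
        parts = line.split()
        path = parts[0]
        x, y, w, h = (int(v) for v in parts[2:6])
        img = cv2.imread(path)
        assert img is not None, 'missing image: ' + path
        ih, iw = img.shape[:2]
        assert x + w <= iw and y + h <= ih, 'box outside image: ' + line
print('pos.txt looks consistent')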

After this processing, the positive-sample vec file can be generated as follows:

createsamples -info pos.txt  -vec pos.vec -bg neg.txt  -num 2700 -w 64 -h 64

This produces pos.vec.
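
To confirm that pos.vec really contains 2700 samples of the expected size, its 12-byte header can be read directly (the layout, a little-endian image count, image size, and two shorts, is described in the mergevec.py docstring quoted further below); a minimal sketch:

import struct

# Read the vec header: '<iihh' = image count (int), image size w*h (int), min, max (shorts)
with open('pos.vec', 'rb') as f:
    num_images, image_size, _, _ = struct.unpack('<iihh', f.read(12))

print('samples:', num_images)            # expect 2700
print('image size (w*h):', image_size)   # expect 64*64 = 4096

Alternatively, running opencv_createsamples -w 64 -h 64 -vec pos.vec displays the stored samples one by one, as noted in the mergevec.py usage instructions.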

How to merge vec files

Download mergevec.py from https://github.com/wulfebw/mergevec; usage instructions are provided there as well. For convenience, a copy is pasted below.

The merge procedure is:

1. Put all the vec files in a single directory; I put them in a directory named pos.

2. Download mergevec.py, or copy it from below.

3. Run: python mergevec.py -v pos -o posa.vec

Here pos is the directory containing the vec files to merge, and posa.vec is the output file for the merged result.

Using the vec-merging approach is much simpler.

A single vec file (per original image) is generated like this:

opencv_createsamples -img pg/p1.jpg -vec pos/p1.vec -bg neg.txt -w 64 -h 64 -num 100 -maxxangle 0 -maxyangle 0

This has to be run in a loop 27 times. It could be done with a batch script, but I did not find a suitable batch command, so I used Python instead; see my other post:

"Extending batch commands with Python", which explains how to run all 27 commands in one go.
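
As a rough sketch of that idea (not necessarily the exact script from that post), a Python loop can call opencv_createsamples once per source image, using the same file naming as the command above (pg/p1.jpg ... pg/p27.jpg and pos/p1.vec ... pos/p27.vec):

import subprocess

# Run opencv_createsamples once per source image, producing one vec file each.
for i in range(1, 28):
    cmd = ['opencv_createsamples',
           '-img', 'pg/p{}.jpg'.format(i),
           '-vec', 'pos/p{}.vec'.format(i),
           '-bg', 'neg.txt',
           '-w', '64', '-h', '64',
           '-num', '100',
           '-maxxangle', '0', '-maxyangle', '0']
    print(' '.join(cmd))
    subprocess.run(cmd, check=True)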

The mergevec.py code is as follows:

###############################################################################
# Copyright (c) 2014, Blake Wulfe
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
###############################################################################

"""
File: mergevec.py
Author: blake.w.wulfe@gmail.com
Date: 6/13/2014
File Description:

	This file contains a function that merges .vec files called "merge_vec_files".
	I made it as a replacement for mergevec.cpp (created by Naotoshi Seo.
	See: http://note.sonots.com/SciSoftware/haartraining/mergevec.cpp.html)
	in order to avoid recompiling openCV with mergevec.cpp.

	To use the function:
	(1) Place all .vec files to be merged in a single directory (vec_directory).
	(2) Navigate to this file in your CLI (terminal or cmd) and type "python mergevec.py -v your_vec_directory -o your_output_filename".

		The first argument (-v) is the name of the directory containing the .vec files
		The second argument (-o) is the name of the output file

	To test the output of the function:
	(1) Install openCV.
	(2) Navigate to the output file in your CLI (terminal or cmd).
	(2) Type "opencv_createsamples -w img_width -h img_height -vec output_filename".
		This should show the .vec files in sequence.

"""

import sys
import glob
import struct
import argparse
import traceback


def exception_response(e):
	exc_type, exc_value, exc_traceback = sys.exc_info()
	lines = traceback.format_exception(exc_type, exc_value, exc_traceback)
	for line in lines:
		print(line)

def get_args():
	parser = argparse.ArgumentParser()
	parser.add_argument('-v', dest='vec_directory')
	parser.add_argument('-o', dest='output_filename')
	args = parser.parse_args()
	return (args.vec_directory, args.output_filename)

def merge_vec_files(vec_directory, output_vec_file):
	"""
	Iterates throught the .vec files in a directory and combines them.

	(1) Iterates through files getting a count of the total images in the .vec files
	(2) checks that the image sizes in all files are the same

	The format of a .vec file is:

	4 bytes denoting number of total images (int)
	4 bytes denoting size of images (int)
	2 bytes denoting min value (short)
	2 bytes denoting max value (short)

	ex: 	6400 0000 4605 0000 0000 0000

		hex		6400 0000  	4605 0000 		0000 		0000
			   	# images  	size of h * w		min		max
		dec	    	100     	1350			0 		0

	:type vec_directory: string
	:param vec_directory: Name of the directory containing .vec files to be combined.
				Do not end with slash. Ex: '/Users/username/Documents/vec_files'

	:type output_vec_file: string
	:param output_vec_file: Name of aggregate .vec file for output.
		Ex: '/Users/username/Documents/aggregate_vec_file.vec'

	"""

	# Check that the .vec directory does not end in '/' and if it does, remove it.
	if vec_directory.endswith('/'):
		vec_directory = vec_directory[:-1]
	# Get .vec files
	files = glob.glob('{0}/*.vec'.format(vec_directory))

	# Check to make sure there are .vec files in the directory
	if len(files) <= 0:
		print('Vec files to be mereged could not be found from directory: {0}'.format(vec_directory))
		sys.exit(1)
	# Check to make sure there are more than one .vec files
	if len(files) == 1:
		print('Only 1 vec file was found in directory: {0}. Cannot merge a single file.'.format(vec_directory))
		sys.exit(1)


	# Get the value for the first image size
	prev_image_size = 0
	try:
		with open(files[0], 'rb') as vecfile:
			content = b''.join((line) for line in vecfile.readlines())
			val = struct.unpack('<iihh', content[:12])
			prev_image_size = val[1]
	except IOError as e:
		print('An IO error occured while processing the file: {0}'.format(files[0]))
		exception_response(e)


	# Get the total number of images
	total_num_images = 0
	for f in files:
		try:
			with open(f, 'rb') as vecfile:
				content = b''.join((line) for line in vecfile.readlines())
				val = struct.unpack('<iihh', content[:12])
				num_images = val[0]
				image_size = val[1]
				if image_size != prev_image_size:
					err_msg = """The image sizes in the .vec files differ. These values must be the same. \n The image size of file {0}: {1}\n
						The image size of previous files: {0}""".format(f, image_size, prev_image_size)
					sys.exit(err_msg)

				total_num_images += num_images
		except IOError as e:
			print('An IO error occured while processing the file: {0}'.format(f))
			exception_response(e)


	# Iterate through the .vec files, writing their data (not the header) to the output file
	# '<iihh' means 'little endian, int, int, short, short'
	header = struct.pack('<iihh', total_num_images, image_size, 0, 0)
	try:
		with open(output_vec_file, 'wb') as outputfile:
			outputfile.write(header)

			for f in files:
				with open(f, 'rb') as vecfile:
					content = b''.join((line) for line in vecfile.readlines())
					outputfile.write(bytearray(content[12:]))
	except Exception as e:
		exception_response(e)


if __name__ == '__main__':
	vec_directory, output_filename = get_args()
	if not vec_directory:
		sys.exit('mergvec requires a directory of vec files. Call mergevec.py with -v /your_vec_directory')
	if not output_filename:
		sys.exit('mergevec requires an output filename. Call mergevec.py with -o your_output_filename')

	merge_vec_files(vec_directory, output_filename)

 

Generating negative samples

I actually started with the positive samples, found that generating them requires negative samples, and so the negatives were produced first. I just think of the order as positives first, then negatives, hence this section comes second.

I took a dozen or so photos and then cropped them into 3600 background images.

The cropping Python code is as follows:

import cv2
from os import listdir

path="bg/"
patha="bga/"
imagePaths = sorted(list(listdir(path)))

# loop over the input images
k=0
for imagePath in imagePaths:
    image = cv2.imread(path+imagePath)
    (h, w, d) = image.shape
    resized=cv2.resize(image, (w//3,h//3))
    (h, w, d) = resized.shape
    print(h,w)
    for i in range(h//64-1):
        for j in range(w//64-1):
            roi=resized[(i*64):(i*64+64), (j*64):(j*64+64)]
            
            filename=patha+"bg"+str(k)+".jpg"
            print(filename,roi.shape)
            cv2.imwrite(filename,roi)
            k=k+1
print("finish")

You need to tune the resize factor according to how many background photos you have and how many negatives you need; my photos had a high pixel count, so I shrank them by a factor of 3. (For example, assuming a 4032x3024 photo, a typical phone-camera resolution, downscaling by 3 gives 1344x1008, and the loops above cut (1008//64 - 1) * (1344//64 - 1) = 14 * 20 = 280 tiles from it, so a dozen or so photos yield a few thousand negatives.)

In the code, path="bg/" is the directory of photos I took, and patha="bga/" is the output directory for the cropped tiles.

With these negative-sample images in place, you still need to generate neg.txt.

The code is as follows:

from os import listdir

patha="/apython/train/bga/"

imagePaths = sorted(list(listdir(patha)))
imagePs=imagePaths[0:2700]

with open('train/neg.txt','w') as f:
# loop over the input images
    for imagePath in imagePs:
         print(patha+imagePath,file=f)
print("finish2")

Training

With the positive-sample file pos.vec and the negative-sample list neg.txt, training can begin.

The training command was:

traincascade -data xml2 -vec pos.vec -bg negs.txt -numPos 2400 -numNeg 3500 -numStages 15 -w 64 -h 64 -featureType LBP

 

With the LBP option, training took about two and a half hours; at stage 5 it reported that the required conditions were already met.

This command has many parameters, and I experimented with it many times.

The first issue is speed. -featureType has three possible values: HAAR, LBP, and HOG.

If you run traincascade without arguments it prints the help text, where you can see all the parameters.

The default is HAAR, which is very slow; LBP is in the middle, and HOG is the fastest. I initially used the default HAAR and it took 2-3 days to get a result; LBP took only about 2 hours, and HOG only about 20 minutes.

The second is -numPos. At first I thought I should simply use all the samples I had, i.e. 2700, but training exited at stage 1. Searching online, I found that this value should be about 90% of the total number of samples in the vec file (for 2700 samples that is roughly 2430; I used 2400); only then can training run to completion. The answer came from:

https://answers.opencv.org/question/776/error-in-parameter-of-traincascade/

The third parameter to note is -mode.

-mode <BASIC (default) | CORE | ALL>

This option applies to the Haar feature type: BASIC (the default) uses only upright features, CORE uses the full set of upright features, and ALL additionally includes the 45-degree rotated features, so ALL provides the most features.

Validation

Copy the xml file from the directory specified by -data into the directory of the face-detection program, replacing the face xml file.

At first the detection results were poor, so I changed how I built the xml and increased the number of negative samples to 7000.

Unlike face detection, validation does not return results quickly; this may be because of my 64x64 window size, or something like that.

# import the necessary packages
import imutils
import argparse
import cv2
import datetime

# construct the argument parser and parse the arguments
# defaults are provided so the script can run without command-line arguments
# --image defaults to balls.jpg (the test image)
# --detector defaults to cascade.xml (the trained basketball cascade)
ap = argparse.ArgumentParser()
ap.add_argument("-i", "--image", default='balls.jpg',
	help="path to the input image")
#ap.add_argument("-d", "--detector", default='haarcascade_frontalface_default.xml',
ap.add_argument("-d", "--detector", default='cascade.xml',
	help="path to Haar cacscade face detector")
args = vars(ap.parse_args())

# load our image and convert it to grayscale
image = cv2.imread(args["image"])
image =imutils.resize(image,width=500)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# load the cascade detector and run detection on the image
print('detect')
start = datetime.datetime.now()
detector = cv2.CascadeClassifier(args["detector"])
# detect objects in the image
rects = detector.detectMultiScale(gray, scaleFactor=1.25, minNeighbors=12,
	minSize=(64, 64), flags=cv2.CASCADE_SCALE_IMAGE)
# report the detection time and the number of detections
print("[INFO] detection took: {}s".format(
	(datetime.datetime.now() - start).total_seconds()))

print("[INFO] detected {} faces".format(len(rects)))

# loop over the bounding boxes and draw a rectangle around each face
if len(rects) > 0:
    for (x, y, w, h) in rects:
        cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)

# show the detected faces
print('show')
cv2.imshow("Faces", image)
start = datetime.datetime.now()
while 1:
    if cv2.waitKey(10) & 0xFF==ord('q'):# q to quit
        cv2.destroyAllWindows()
        print('finish')
        break
    if (datetime.datetime.now() - start).total_seconds()>100:
        print('finish2')
        break

A single test like this takes a bit over 5 seconds, and that is already after tuning; initially it took over 200 seconds. scaleFactor has a large influence: it was 1.05 at first and is now 1.25.
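
To see the effect of scaleFactor on speed, a small timing sketch like the following (my own illustration, reusing the same cascade.xml and test image as the script above) can compare a few values:

import datetime
import cv2
import imutils

# Compare detection time for several scaleFactor values on the same image.
image = imutils.resize(cv2.imread('balls.jpg'), width=500)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
detector = cv2.CascadeClassifier('cascade.xml')

for sf in (1.05, 1.15, 1.25):
    start = datetime.datetime.now()
    rects = detector.detectMultiScale(gray, scaleFactor=sf, minNeighbors=12,
        minSize=(64, 64), flags=cv2.CASCADE_SCALE_IMAGE)
    elapsed = (datetime.datetime.now() - start).total_seconds()
    print('scaleFactor={}: {} detections in {:.2f}s'.format(sf, len(rects), elapsed))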

 

Google's introduction to object detection:

https://github.com/tensorflow/models/tree/master/research/object_detection

Comparison of models for Google's deep-learning object detection API:

https://tensorflow-object-detection-api-tutorial.readthedocs.io/en/latest/training.html

 
