From: http://www.computer-vision-software.com/blog/2009/11/faq-opencv-haartraining/
Hi all, before posting your question, please read this FAQ carefully! You can also read the OpenCV haartraining article. If you are sure there is no answer to your question, feel free to post a comment. Please also leave comments with suggestions for improving this post; it will be updated if needed.
Positive images
Why are positive images named so?
Because a positive image contains the target object which you want the machine to detect. A negative image, by contrast, contains no such target objects.
What’s a vec file in OpenCV haartraining?
During haartraining, positive samples must all have the same width and height, as defined by the -w and -h parameters. So the original positive images are resized and packed as thumbnails into a vec file. A vec file has a header – the number of positive samples, the width and the height – and contains the positive thumbnails in its body.
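For reference, the layout can be sketched in a few lines of Python. This is a hedged reconstruction of the classic createsamples format, not an official parser; note that the header appears to store the product w*h rather than the width and height separately:

```python
import os
import struct
import tempfile

def write_vec(path, thumbs, w, h):
    """Pack 8-bit grayscale thumbs (each a flat list of w*h pixel values)
    into a vec file: a 12-byte header, then one separator byte plus
    w*h little-endian int16 pixels per sample."""
    vecsize = w * h
    with open(path, "wb") as f:
        # header: sample count, pixels per sample, two unused shorts
        f.write(struct.pack("<iihh", len(thumbs), vecsize, 0, 0))
        for thumb in thumbs:
            f.write(struct.pack("<B", 0))                    # separator byte
            f.write(struct.pack("<%dh" % vecsize, *thumb))   # pixel values

def read_vec_header(path):
    """Return (number of samples, pixels per sample) from a vec file."""
    with open(path, "rb") as f:
        return struct.unpack("<ii", f.read(8))

vec = os.path.join(tempfile.mkdtemp(), "samples.vec")
write_vec(vec, [[0] * 400, [255] * 400], 20, 20)
print(read_vec_header(vec))   # (2, 400)
```

Reading the header this way is a quick sanity check that your vec file really contains the number of samples you expect before you start a multi-day training run.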
Is it possible to merge vec files?
Yes, use Google; there are free tools written by the OpenCV community.
I have positive images; how do I create a vec file of positive samples?
There is a tool in C:\Program Files\OpenCV\apps\HaarTraining\src: createsamples.cpp. Usage:
createsamples -info positive_description.txt -vec samples.vec -w 20 -h 20
What’s a positive description file?
The point is that each positive image can contain several objects, each with a bounding rectangle: x, y, width, height. So you can write the description of an image like this:
positive_image_name num_of_objects x y width height x y width height …
A text file containing such info about the positive images is called a description file. During vec file generation only the objects are actually packed, not the whole images. Essentially, the vec file exists to speed up machine learning.
Do I always need a description file, even if I have only one object per image?
Yes, createsamples needs a description file. If you have only one object, its bounding rectangle may be the bounding rectangle of the whole image. If you want, write your own tool for vec file generation =)
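For that common one-object-per-image case, the description file is easy to generate with a short script. A sketch – the image names and sizes below are made up for illustration:

```python
import os
import tempfile

def make_description(entries, out_path):
    """entries: iterable of (image_path, width, height), where the single
    object's bounding rectangle is the whole image. Writes one
    'name 1 0 0 width height' line per image, with no blank lines."""
    with open(out_path, "w") as f:
        for name, w, h in entries:
            if " " in name:
                # spaces in paths break createsamples' parsing
                raise ValueError("space in image path: %r" % name)
            f.write("%s 1 0 0 %d %d\n" % (name, w, h))

desc = os.path.join(tempfile.mkdtemp(), "positive_desc.txt")
make_description([("positives/image1.jpg", 100, 100),
                  ("positives/image2.jpg", 120, 80)], desc)
print(open(desc).read())
```

The resulting file can be fed straight to createsamples with -info.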
Should lighting conditions and background vary across positive images?
Yes, it’s very important. Each positive image contains background besides the object. Try to fill this background with random noise and avoid a constant background.
How much background should a positive image contain?
If your positive images have many background pixels compared to object pixels, that’s bad, since haartraining could learn the background as a feature of the positive class.
Having no background pixels at all is also bad. There should be a small background frame around the object on each positive image.
Should all original positive images have the same size?
No. For training we supply a number of positive images, cropped to remove extra background down to the boundaries of the object. Cropped positive pictures are marked with a surrounding rectangle (the position and size of this rectangle are saved in the positive image list). When you make this markup, you have to retain the aspect ratio: the haartraining application will scale your positive images to size (w, h), and this process must not distort the proportions of the object.
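One way to retain proportions is to grow each marked rectangle symmetrically until it matches the -w:-h ratio before training. A sketch, assuming integer pixel boxes; the adjusted box may fall outside the image, so the caller still has to clamp it to the image borders:

```python
def fit_aspect(x, y, w, h, target_w, target_h):
    """Expand the box (x, y, w, h) around its center so that w/h matches
    target_w/target_h; returns the adjusted box. The caller must still
    clamp the result to the image bounds."""
    target = target_w / target_h
    if w / h > target:                  # too wide: grow the height
        new_h = round(w / target)
        y -= (new_h - h) // 2
        h = new_h
    else:                               # too tall (or exact): grow the width
        new_w = round(h * target)
        x -= (new_w - w) // 2
        w = new_w
    return x, y, w, h

# a 232x64 box forced to the square ratio of -w 20 -h 20
print(fit_aspect(17, 81, 232, 64, 20, 20))   # (17, -3, 232, 232)
```

Growing (rather than cropping) keeps the whole object inside the box, at the cost of admitting some extra background.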
What -w and -h should I put in createsamples? Should they always be square?
You can put any values for -w and -h, depending on the aspect ratio of the target object you want to detect. But objects smaller than this size will not be detected! For faces, the commonly used values are 24×24 and 20×20, but you may use 24×20, 20×24, etc.
For example, with an info.txt file:
pos/10003_frame.png 1 0 0 500 300
pos/10010_frame.png 1 0 0 500 300
pos/10021_frame.png 1 0 0 500 300
and the command line:
createsamples -info info.txt -vec pos.vec -w 25 -h 15
Errors during vec file generation: incorrect size of input array, 0 KB vec file, etc.
- First check your description file: positive_image_name should be an absolute path without spaces, like “C:\content\image.jpg” (not “C:\con tent\image.jpg”), or a relative path.
- Avoid empty lines in the description file.
- The resolution of each original positive image should be no less than the -w and -h parameters you put.
- Check that the positive images are available in your file system and not corrupted.
- There may be unsupported formats: JPEG, BMP and PPM are supported!
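Most of these failures can be caught before running createsamples with a small checker script. A sketch: it flags empty lines, malformed rectangle counts (which is also what a space inside an image path looks like after splitting) and boxes smaller than -w × -h:

```python
import os
import tempfile

def check_description(path, w, h):
    """Return a list of human-readable problems found in a
    createsamples description file."""
    problems = []
    for n, line in enumerate(open(path), 1):
        if not line.strip():
            problems.append("line %d: empty line" % n)
            continue
        parts = line.split()
        count = int(parts[1])
        if len(parts) != 2 + 4 * count:
            # also triggered by a space inside the image path
            problems.append("line %d: expected %d rectangles" % (n, count))
            continue
        for i in range(count):
            bw = int(parts[4 + 4 * i])   # width of the i-th rectangle
            bh = int(parts[5 + 4 * i])   # height of the i-th rectangle
            if bw < w or bh < h:
                problems.append("line %d: box %dx%d is smaller than %dx%d"
                                % (n, bw, bh, w, h))
    return problems

desc = os.path.join(tempfile.mkdtemp(), "desc.txt")
with open(desc, "w") as f:
    f.write("img1.jpg 1 0 0 100 100\nimg2.jpg 1 0 0 10 10\n")
print(check_description(desc, 20, 20))   # ['line 2: box 10x10 is smaller than 20x20']
```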
Example of vec file generation!
Let the working directory be C:\haartraining, containing createsamples.exe and a folder C:\haartraining\positives. Create a description file positive_desc.txt:
positives\image1.jpg 1 10 10 20 20
positives\image2.jpg 2 30 30 50 50 60 60 70 70
or
C:\haartraining\positives\image1.jpg 1 10 10 20 20
C:\haartraining\positives\image2.jpg 2 30 30 50 50 60 60 70 70
You should avoid empty lines and spaces in the image paths.
createsamples -info positive_desc.txt -vec samples.vec -w 20 -h 20
Negative images
What negative images should I take?
You can use any images in OpenCV-supported formats which do not contain the target objects (those present in the positive images). But they should be varied – it’s important! A good enough database is here
Should negative images have the same size?
No. But their size should not be less than the -w and -h which were put during vec file generation.
What’s the description file for negative images?
It’s just a text file, often called negative.dat, which contains full paths to the negative images, like:
image_name1.jpg
image_name2.jpg
Avoid empty lines in it.
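Such a file can be generated in one pass over a folder of negatives. A sketch – the folder layout is made up, and dummy empty files stand in for real images:

```python
import os
import tempfile

def write_negative_dat(folder, out_path, exts=(".jpg", ".bmp", ".ppm")):
    """List every supported image under `folder` into a background file,
    one path per line, with no blank lines."""
    names = sorted(n for n in os.listdir(folder) if n.lower().endswith(exts))
    with open(out_path, "w") as f:
        for n in names:
            f.write(os.path.join(folder, n) + "\n")

# demo: two fake negatives plus one non-image file that must be skipped
root = tempfile.mkdtemp()
neg = os.path.join(root, "negative")
os.mkdir(neg)
for name in ("neg1.jpg", "neg2.jpg", "notes.txt"):
    open(os.path.join(neg, name), "w").close()
write_negative_dat(neg, os.path.join(root, "negative.dat"))
print(open(os.path.join(root, "negative.dat")).read())
```

Generating the list rather than typing it by hand avoids the empty-line and typo problems mentioned above.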
How many negative/positive images should I take?
It depends on your task. For real cascades there should be about 1000 positive and 2000 negative images, e.g.
A good enough proportion is positive:negative = 1:2, but it’s not a hard rule! I would recommend first using a small number of samples, generating a cascade and testing it, then enlarging the number of samples.
Launch haartraining.exe (OpenCV\apps\HaarTraining\src)
Example of launching
The working directory is C:\haartraining, with the haartraining.exe tool and the samples.vec file.
Let the negative images be in C:\haartraining\negative; in this case negative.dat should look like this:
negative\neg1.jpg
negative\neg2.jpg
…
So in C:\haartraining launch this: haartraining -data haarcascade -vec samples.vec -bg negative.dat -nstages 20 -minhitrate 0.999 -maxfalsealarm 0.5 -npos 1000 -nneg 2000 -w 20 -h 20 -nonsym -mem 1024
- -w, -h – the same values you put during vec file generation
- -npos, -nneg – the number of positive and negative samples
- -mem – the amount of RAM the program may use
- -maxfalsealarm – the maximum false alarm a stage may have. A big false alarm makes a bad detection system (maxfalsealarm should be in [0.4, 0.5])
- -minhitrate – the minimal hit rate each stage should have
- -nstages – the number of stages in the cascade
What’s the falsealarm and hitrate of a stage?
You should read the AdaBoost theory about strong classifiers; a stage is a strong classifier. In short:
- For example, you have 1000 positive samples and want your system to detect 900 of them, so the desired hitrate = 900/1000 = 0.9. Commonly, minhitrate = 0.999 is put (why not 0.9?)
- For example, you have 1000 negative samples. Because they are negative, you don’t want your system to detect them. But your system, because it has errors, will detect some of them. Let the error be about 490 samples, so the false alarm = 490/1000 = 0.49. Commonly, maxfalsealarm = 0.5 is put.
Do falsealarm and hitrate depend on each other?
Yes, there is a dependency: you cannot put minhitrate = 1.0 and maxfalsealarm = 0.0.
First the system builds a classifier with the desired hitrate, then calculates its falsealarm; if the false alarm is higher than maxfalsealarm, the system rejects that classifier and builds the next one. During haartraining you may see output like this:
| N |%SMP|F| ST.THR     |   HR    |   FA    | EXP. ERR|
+---+----+-+------------+---------+---------+---------+
| 0 |25% |-|-1423.312590| 1.000000| 1.000000| 0.876272|
HR – hitrate
FA – falsealarm
What’s the falsealarm and hitrate of the whole cascade?
A cascade is a linked list (or tree) of stages. That’s why:
- False alarm of the cascade = false alarm of stage 1 × false alarm of stage 2 × …
- Hit rate of the cascade = hitrate of stage 1 × hitrate of stage 2 × …
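This product rule also answers the earlier “why not 0.9?” question: per-stage rates multiply across stages, so only a per-stage hitrate very close to 1.0 survives 20 multiplications.

```python
stages = 20

# with minhitrate = 0.999, the cascade still detects ~98% of objects,
# while maxfalsealarm = 0.5 per stage drives the cascade false alarm
# below one in a million
print(round(0.999 ** stages, 3))   # 0.98
print(0.5 ** stages)               # 9.5367431640625e-07

# with a per-stage hitrate of only 0.9, the cascade would keep
# just ~12% of the true objects
print(round(0.9 ** stages, 3))     # 0.122
```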
How many stages should be used?
- If you set a big number of stages, you will achieve a better false alarm rate, but generating the cascade will take more time.
- If you set a big number of stages, detection could be slower.
- If you set a big number of stages, the hitrate will be worse (0.99 × 0.99 × … etc.). Commonly 14–25 stages are enough.
- It’s useless to set many stages if you have a small number of positive and negative samples.
What are the weighttrimming, eqw, bt and nonsym options?
Really, all these parameters are related to AdaBoost; read the theory. In short:
- nonsym – if your positive samples are not X- or Y-symmetric, put -nonsym; -sym is the default!
- eqw – if you have different numbers of positive and negative images, it’s better not to put eqw
- weighttrimming – a calculation optimization. It can reduce calculation time a little, but quality may be worse
- bt – which AdaBoost algorithm to use: Real AB, Gentle AB, etc.
What are the minpos, nsplits and maxtreesplits options?
These parameters are related to clustering. In AdaBoost, different weak classifiers may be used: stump-based or tree-based. If you choose nsplits > 0, tree-based classifiers will be used and you should set up minpos and maxtreesplits.
- nsplits – minimum number of nodes in a tree
- maxtreesplits – maximum number of nodes in a tree. If maxtreesplits < nsplits, the tree will not be built
- minpos – the number of positive images that can be used by one node during training. All positive images are split between the nodes. Generally minpos should be no less than npos/nsplits.
Errors and oddities during haartraining!
- “Error (valid only for Discrete and Real AdaBoost): misclass” – it’s a warning, not an error. Some options are specific to Discrete and Real AdaBoost, so your haartraining is OK.
- The screen fills with lines like | 1000 |25%|-|-1423.312590| 1.000000| 1.000000| 0.876272| – your training is cycling; restart it. The first column should have a value < 100.
- cvAlloc fails, out of memory – you gave too many negative images, or samples.vec is too big. All these pictures are loaded into RAM.
- Pay attention that you put the same -w and -h as during vec file generation.
- Pay attention that the numbers of positive and negative samples you put in -npos and -nneg are really available.
- Avoid empty lines in the negative.dat file.
- “Required leaf false alarm rate achieved. Branch training terminated” – it’s impossible to build a classifier with a good false alarm rate on these negative images. Check that your negative images are really negative =); maxfalsealarm should be in [0.4, 0.5].
OpenCV XML haarcascade
During haartraining, txt files appear in the haarcascade folder; how can we get XML from them?
There is OpenCV/samples/c/convert_cascade.c. Use it like this:
convert_cascade --size="20x20" haarcascade haarcascade.xml
How can I test the generated XML cascade?
There is OpenCV/apps/HaarTraining/src/performance.cpp. You need positive images (not used during training) and a positive description file. Use it like this:
performance -data haarcascade -w 20 -h 20 -info positive_description.txt -ni
performance -data haarcascade.xml -info positive_description.txt -ni
Time and Speed of haar cascade generation
What is the average time to generate a cascade on a PC?
It depends on the task and your machine. I generated a cascade for face detection using these parameters: -nstages 20 -minhitrate 0.999 -maxfalsealarm 0.5 -npos 4000 -nneg 5000 -w 20 -h 20 -nonsym -mem 1024. It took 6 days on a Pentium 2.7 GHz with 2 GB RAM.
What is OpenMP?
“OpenMP (Open Multi-Processing) is an application programming interface (API) that supports multi-platform shared-memory multiprocessing programming in C, C++ and Fortran on many architectures, including Unix and Microsoft Windows platforms.” If you have a multi-core processor, you can use it. In the code you should add the OpenMP defines and set the compile options. For example, in Visual Studio 2005: Properties -> C/C++ -> Language -> OpenMP support
Is it possible to improve the speed of haartraining?
Yes, one possible way is to use parallel programming. We have implemented OpenCV haartraining using MPI for a Linux cluster. You can read about it here
Object detection with OpenCV XML cascades
Is it possible to detect rotated faces?
Yes. It is impossible to generate a single cascade which can detect faces in all orientations, but you can generate a cascade for each orientation separately. For this you need positive content with rotated faces. You can try to generate a cascade with OpenCV by adding -mode ALL, which enables tilted Haar features. But it’s badly implemented, at least in OpenCV 1.1. If you want, you can add your own features to OpenCV haartraining – it’s not too hard.
Another approach is to write a head pose estimator, then rotate your pictures so that you have a frontal face, and detect it with the default OpenCV face cascade
Is it possible to recognize gender, attention, race with Haar features?
We tried, but could not do it with OpenCV haartraining. That’s why for such classification we used our own gender and attention classifiers. Of course you can use AdaBoost for this task, which is implemented in haartraining, but we did not get good results.
Is it possible to detect faces in real time?
Yes. On a PC, the default OpenCV face detector takes about 200 ms for a 640×480 picture – about 5 fps, which is not real time. We have changed the face detector and get about 15 fps, which is real time. You can see the results here and here.
1. How do I add new features?
Is adding another cvHaarFeature() in icvCreateIntHaarFeatures() the right way?
2. Endless loop without termination – what’s the reason?
An endless loop happens if haartraining cannot extract enough negative samples (small images) from your negative images. Try replacing the negative images with bigger or more varied ones. You can stop training, change the images and restart training; it will continue from the last successful stage.
3. What size should negative images be?
The algorithm requires big negative images – the bigger the better (for example, 1280×1024 for 20×20 positive samples); don’t use images of size -w×-h, it will not work. Haartraining automatically extracts millions of small negative samples from your negative images, so it needs images with millions of subregions.
4. How do I determine the -w and -h values?
Hi,
I am having trouble understanding how to determine the values for -w and -h:
example:
createsamples -info positive_description.txt -vec samples.vec -w 20 -h 20
How are the values “-w 20” and “-h 20” determined?
Here is my situation:
example positive Image details
name: pos0001.bmp
width: 320 pixels
height: 240 pixels
In my description file for my positive images, the line for the above image looks like this:
C:\BMPpositve\pos0001.bmp 1 17 81 232 64
This means that the bounding box that surrounds the object of interest has the following size:
width: 232
height: 64
232 divided by 64 = 3.625
Above it is stated that -w and -h are just aspect ratios. What would be the appropriate values in my case?
Here are other lines from my description file:
c:\BMPpositve\pos0002.bmp 1 54 82 228 70
c:\BMPpositve\pos0003.bmp 1 56 86 162 58
c:\BMPpositve\pos0004.bmp 1 46 94 141 50
c:\BMPpositve\pos0005.bmp 1 44 98 137 45
c:\BMPpositve\pos0006.bmp 1 38 99 127 44
c:\BMPpositve\pos0007.bmp 1 11 64 279 80
c:\BMPpositve\pos0008.bmp 1 11 65 242 79
c:\BMPpositve\pos0009.bmp 1 22 68 227 76
c:\BMPpositve\pos0010.bmp 1 15 70 222 74
c:\BMPpositve\pos0011.bmp 1 14 76 204 63
c:\BMPpositve\pos0012.bmp 1 15 79 191 63
c:\BMPpositve\pos0013.bmp 1 18 82 177 57
c:\BMPpositve\pos0014.bmp 1 9 74 153 66
c:\BMPpositve\pos0015.bmp 1 6 73 186 63
c:\BMPpositve\pos0016.bmp 1 11 76 213 64
c:\BMPpositve\pos0017.bmp 1 31 116 157 41
-----
First of all, choose the -w and -h parameters; see the next paragraph for how to choose them. Secondly, make all bounding boxes in your description file the same ratio as the ratio of -w to -h. That may mean you need to crop some of your current boxes and expand the others. Don’t use bounding boxes of different aspect ratios: they will all be scaled to (w, h), proportions will be distorted, and the algorithm will never see in nature the object it was trained on.
As for the values of -w and -h: all your images from the description file will be cropped by their bounding boxes and rescaled to -w×-h. If you choose a small size, it can be impossible to recognize the object in such a small picture. But if you choose a big size, it can be impossible for the algorithm to distinguish the meaningful features of the object from random noise. So the quality of recognition depends significantly on these sizes. You should choose them by trial and error. For face recognition, sizes of 20×20 or 24×24 are commonly used; start from something like this. Make sure that you can (with your own eyes) recognize the object downscaled to the chosen size. Don’t use a size bigger than necessary.
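For the description file quoted in the question, the mean box ratio suggests a starting point directly. A sketch using the widths and heights from those lines; the final -w and -h are still a trial-and-error choice:

```python
# (width, height) pairs taken from the commenter's bounding boxes
boxes = [(232, 64), (228, 70), (162, 58), (141, 50), (137, 45),
         (127, 44), (279, 80), (242, 79), (227, 76), (222, 74),
         (204, 63), (191, 63), (177, 57), (153, 66), (186, 63),
         (213, 64), (157, 41)]

mean_ratio = sum(w / h for w, h in boxes) / len(boxes)

# pick a height, derive the width from the mean ratio
h = 20
w = round(mean_ratio * h)
print(round(mean_ratio, 2), w, h)   # 3.1 62 20
```

So these boxes average a ratio of about 3.1:1, and something like -w 62 -h 20 (after making every box that ratio) would be a reasonable first attempt.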