剑指天空's Blog

Image Processing

26 Image Parts and Segmentation: The Average Background Method and the Codebook Background Learning Method

Main principles:

1. Average background method: over a set of sample frames, accumulate two per-pixel statistics: the mean pixel value and the mean frame-to-frame absolute difference. Each statistic is kept in its own accumulator image, and from them per-pixel high and low threshold images are derived. To classify a test frame, every pixel is compared against its own [low, high] range: a pixel inside the range is background; a pixel outside it is foreground.

2. Codebook background learning method (relatively memory-hungry):
<1> No moving foreground during learning: first, create one codebook per pixel of the sample images.
Then, for every pixel of every sample frame, record the observed pixel value in that pixel's codebook, so values that recur in the background keep refreshing their codebook entries. Finally, for a test frame, check each pixel against its codebook: if a matching entry exists, the pixel is background; if not, it is foreground.

<2> Moving foreground during learning:
Compared with the no-moving-foreground case, there is one extra step: deleting stale codewords (a codeword is a single entry inside a codebook). While accumulating samples, each codeword's last-refresh time is updated. Because background values appear far more often than foreground values, clearing stale codewords removes exactly the entries created by moving foreground objects. Classification of a test frame is then the same: a pixel with a matching codeword is background, otherwise it is foreground.


The corresponding code:

<1> Main program: drives the average background method and the codebook background learning method through their interfaces.

//Command-line arguments: 1 50 D:\\openvc_project\\learn_opencv_code\\LearningOpenCV_Code\\tree.avi
#include "cv.h"
#include "highgui.h"
#include <stdio.h>
#include <stdlib.h>
#include <ctype.h>
#include "AvgBackground.h"
#include "cv_yuv_codebook.h"

//VARIABLES for CODEBOOK METHOD:
codeBook *cB;   //This will be our linear model of the image, a vector
                //of length = height*width
int maxMod[CHANNELS];   //Add these (possibly negative) numbers onto the max
                        //level of a code_element when determining if a new pixel is foreground
int minMod[CHANNELS];   //Subtract these (possibly negative) numbers from the min
                        //level of a code_element when determining if a pixel is foreground
unsigned cbBounds[CHANNELS]; //Code Book bounds for learning
bool ch[CHANNELS];      //This sets what channels should be adjusted for background bounds
int nChannels = CHANNELS;
int imageLen = 0;
uchar *pColor; //YUV pointer

void help() {
    printf("\nLearn background and find foreground using simple average and average difference learning method:\n"
        "\nUSAGE:\n  ch9_background startFrameCollection# endFrameCollection# [movie filename, else from camera]\n"
        "If from AVI, then optionally add HighAvg, LowAvg, HighCB_Y LowCB_Y HighCB_U LowCB_U HighCB_V LowCB_V\n\n"
        "***Keep the focus on the video windows, NOT the console***\n\n"
        "INTERACTIVE PARAMETERS:\n"
        "\tESC,q,Q  - quit the program\n"
        "\th    - print this help\n"
        "\tp    - pause toggle\n"
        "\ts    - single step\n"
        "\tr    - run mode (single step off)\n"
        "=== AVG PARAMS ===\n"
        "\t-    - bump high threshold UP by 0.25\n"
        "\t=    - bump high threshold DOWN by 0.25\n"
        "\t[    - bump low threshold UP by 0.25\n"
        "\t]    - bump low threshold DOWN by 0.25\n"
        "=== CODEBOOK PARAMS ===\n"
        "\ty,u,v- only adjust channel 0(y) or 1(u) or 2(v) respectively\n"
        "\ta    - adjust all 3 channels at once\n"
        "\tb    - adjust channels 1(u) and 2(v) at once\n"
        "\ti,o  - bump upper threshold up,down by 1\n"
        "\tk,l  - bump lower threshold up,down by 1\n"
        );
}

//
//USAGE:  ch9_background startFrameCollection# endFrameCollection# [movie filename, else from camera]
//If from AVI, then optionally add HighAvg, LowAvg, HighCB_Y LowCB_Y HighCB_U LowCB_U HighCB_V LowCB_V
//
int main(int argc, char** argv)
{
    IplImage* rawImage = 0, *yuvImage = 0; //yuvImage is for codebook method
    IplImage *ImaskAVG = 0,*ImaskAVGCC = 0;
    IplImage *ImaskCodeBook = 0,*ImaskCodeBookCC = 0;
    CvCapture* capture = 0;

    int startcapture = 1;
    int endcapture = 30;
    int c,n;

    maxMod[0] = 3;  //Set color thresholds to default values
    minMod[0] = 10;
    maxMod[1] = 1;
    minMod[1] = 1;
    maxMod[2] = 1;
    minMod[2] = 1;
    float scalehigh = HIGH_SCALE_NUM;
    float scalelow = LOW_SCALE_NUM;
//--------------------------------------【Get the video source】----------------------------------------
//------Decide whether the video comes from a file or from a camera
    if(argc < 3) {
        printf("ERROR: Too few parameters\n");
        help();
    }else{
        if(argc == 3){
            printf("Capture from Camera\n");
            capture = cvCaptureFromCAM( 0 );
        }
        else {
            printf("Capture from file %s\n",argv[3]);
    //      capture = cvCaptureFromFile( argv[3] );
            capture = cvCreateFileCapture( argv[3] );
            if(!capture) { printf("Couldn't open %s\n",argv[3]); return -1;}
        }
//------------------------------------------------------------------------------------------

//---------------------------------【Set parameters from the command line】-----------------------------------
        if(isdigit(argv[1][0])) { //Start frame of background capture
            startcapture = atoi(argv[1]);
            printf("startcapture = %d\n",startcapture);
        }
        if(isdigit(argv[2][0])) { //End frame of background capture
            endcapture = atoi(argv[2]);
            printf("endcapture = %d\n",endcapture);
        }
        if(argc > 4){ //See if parameters are set from command line
            //FOR AVG MODEL
            if(argc >= 5){
                if(isdigit(argv[4][0])){
                    scalehigh = (float)atoi(argv[4]);
                }
            }
            if(argc >= 6){
                if(isdigit(argv[5][0])){
                    scalelow = (float)atoi(argv[5]);
                }
            }
            //FOR CODEBOOK MODEL, CHANNEL 0
            if(argc >= 7){
                if(isdigit(argv[6][0])){
                    maxMod[0] = atoi(argv[6]);
                }
            }
            if(argc >= 8){
                if(isdigit(argv[7][0])){
                    minMod[0] = atoi(argv[7]);
                }
            }
            //Channel 1
            if(argc >= 9){
                if(isdigit(argv[8][0])){
                    maxMod[1] = atoi(argv[8]);
                }
            }
            if(argc >= 10){
                if(isdigit(argv[9][0])){
                    minMod[1] = atoi(argv[9]);
                }
            }
            //Channel 2
            if(argc >= 11){
                if(isdigit(argv[10][0])){
                    maxMod[2] = atoi(argv[10]);
                }
            }
            if(argc >= 12){
                if(isdigit(argv[11][0])){
                    minMod[2] = atoi(argv[11]);
                }
            }
        }
    }
//------------------------------------------------------------------------------------------
    //MAIN PROCESSING LOOP:
    bool pause = false;
    bool singlestep = false;

    if( capture )
    {
        cvNamedWindow( "Raw", 1 );
        cvNamedWindow( "AVG_ConnectComp",1);
        cvNamedWindow( "ForegroundCodeBook",1);
        cvNamedWindow( "CodeBook_ConnectComp",1);
        cvNamedWindow( "ForegroundAVG",1); //Create the display windows
        int i = -1;

        for(;;)
        {
            if(!pause){
//              if( !cvGrabFrame( capture ))
//                  break;
//              rawImage = cvRetrieveFrame( capture );
                rawImage = cvQueryFrame( capture ); //Grab the next frame
                ++i; //Count of frames grabbed so far
//              printf("%d\n",i);
                if(!rawImage) //End of video: no more frames
                    break;
                //REMOVE THIS FOR GENERAL OPERATION, JUST A CONVENIENCE WHEN RUNNING WITH THE SMALL tree.avi FILE
                if(i == 56){ //Stop grabbing new frames after frame 56
                    pause = 1;
                    printf("\n\nVideo paused for your convenience at frame 56 to work with demo\n"
                    "You may adjust parameters, single step or continue running\n\n");
                    help();
                }
            }
            if(singlestep){ //In single-step mode, pause again after each frame
                pause = true;
            }
            //First time:
            if(0 == i) { //Processing done only on the first frame
                printf("\n . . . wait for it . . .\n"); //Just in case you wonder why the image is white at first

//---------------------------------【AVG METHOD ALLOCATION】-------------------------------
                AllocateImages(rawImage); //Allocate the images the average background method needs, sized from the first frame
                scaleHigh(scalehigh); //Set the high threshold scale for the average background method
                scaleLow(scalelow); //Set the low threshold scale for the average background method
                ImaskAVG = cvCreateImage( cvGetSize(rawImage), IPL_DEPTH_8U, 1 ); //Foreground mask for the average method
                ImaskAVGCC = cvCreateImage( cvGetSize(rawImage), IPL_DEPTH_8U, 1 ); //Copy of that mask, used for connected components
                cvSet(ImaskAVG,cvScalar(255)); //Start all white (everything background)
//------------------------------------------------------------------------------------------

//---------------------------------【CODEBOOK METHOD ALLOCATION】---------------------------
                yuvImage = cvCloneImage(rawImage);
                ImaskCodeBook = cvCreateImage( cvGetSize(rawImage), IPL_DEPTH_8U, 1 );
                ImaskCodeBookCC = cvCreateImage( cvGetSize(rawImage), IPL_DEPTH_8U, 1 );
                cvSet(ImaskCodeBook,cvScalar(255));
                imageLen = rawImage->width*rawImage->height; //One codebook per pixel; each codebook covers all three channels
                cB = new codeBook [imageLen]; //Allocate the codebooks
//------------------------------------------------------------------------------------------
//---------------------------------【Initialize the codebooks】---------------------------------------
                for(int f = 0; f<imageLen; f++)
                {
                    cB[f].numEntries = 0;
                }
//------------------------------------------------------------------------------------------
//---------------------------------【Initialize each channel's learning bounds factor】---------------------------
                for(int nc=0; nc<nChannels;nc++)
                {
                    cbBounds[nc] = 10; //Learning bounds factor
                }
//------------------------------------------------------------------------------------------
                ch[0] = true; //Allow threshold setting simultaneously for all channels
                ch[1] = true;
                ch[2] = true;
            }
            //If we've got an rawImage and are good to go:                
            if( rawImage )
            {
                cvCvtColor( rawImage, yuvImage, CV_BGR2YCrCb );//YUV For codebook method
                //This is where we build our background model
//---------------------------------【Accumulate statistics from the start frame to the end frame】--------------------------
                if( !pause && i >= startcapture && i < endcapture  ){
                    //LEARNING THE AVERAGE AND AVG DIFF BACKGROUND
                    accumulateBackground(rawImage); //Accumulate frames; the mean (IavgF/count) and the average difference (IdiffF/count) are derived later
                    //LEARNING THE CODEBOOK BACKGROUND
                    pColor = (uchar *)((yuvImage)->imageData);
                    for(int c=0; c<imageLen; c++)
                    {
                        cvupdateCodeBook(pColor, cB[c], cbBounds, nChannels); //Refresh a matching codeword, widen its bounds, or create a new one
                        pColor += 3; //Three YUV channels per pixel
                    }
                }
//-------------------------------------------------------------------------------------------

//----------------------------【At the last learning frame, build the average-background model】-----------------------
                //When done, create the background model
                if(i == endcapture){ //Last frame of the learning period: build the model
                    createModelsfromStats(); //Build the average-background model, producing the threshold images IhiF and IlowF
                }
//-------------------------------------------------------------------------------------------

//-------------------------------【Find foreground pixels with the average background method】----------------------------------
                //Find the foreground if any
                if(i >= endcapture) {
                    //FIND FOREGROUND BY AVG METHOD:
                    backgroundDiff(rawImage,ImaskAVG); //ImaskAVG becomes the foreground/background mask
                    cvCopy(ImaskAVG,ImaskAVGCC); //Work on a copy of the mask
                    cvconnectedComponents(ImaskAVGCC); //Clean the mask and draw component contours on it
//-------------------------------------------------------------------------------------------

//-------------------------------【Find foreground pixels with the codebook method】----------------------------------
                    //FIND FOREGROUND BY CODEBOOK METHOD
                    uchar maskPixelCodeBook;
                    pColor = (uchar *)((yuvImage)->imageData); //3 channel yuv image
                    uchar *pMask = (uchar *)((ImaskCodeBook)->imageData); //1 channel image
                    for(int c=0; c<imageLen; c++)
                    {
                        //Check the pixel against its codebook: background -> 0, foreground -> 255
                        maskPixelCodeBook = cvbackgroundDiff(pColor, cB[c], nChannels, minMod, maxMod);
                        *pMask++ = maskPixelCodeBook;
                        pColor += 3;
                    }
                    //This part just to visualize bounding boxes and centers if desired
                    cvCopy(ImaskCodeBook,ImaskCodeBookCC);
                    cvconnectedComponents(ImaskCodeBookCC); //Clean the mask and draw component contours on it
                }
//-------------------------------------------------------------------------------------------
                //Display
                cvShowImage( "Raw", rawImage ); //The raw frame
                cvShowImage( "AVG_ConnectComp",ImaskAVGCC); //Connected components of the foreground mask [average method]
                cvShowImage( "ForegroundAVG",ImaskAVG); //Foreground mask [average method]
                cvShowImage( "ForegroundCodeBook",ImaskCodeBook); //Foreground mask [codebook method]
                cvShowImage( "CodeBook_ConnectComp",ImaskCodeBookCC); //Connected components of the foreground mask [codebook method]

                //USER INPUT:
                c = cvWaitKey(10)&0xFF;
                //End processing on ESC, q or Q
                if(c == 27 || c == 'q' || c == 'Q')
                    break;
                //Else check for user input
//-------------------------------【Adjust parameters】------------------------------------------------
                switch(c)
                {
                    case 'h':
                        help();
                        break;
                    case 'p':
                        pause ^= 1;
                        break;
                    case 's':
                        singlestep = 1;
                        pause = false;
                        break;
                    case 'r':
                        pause = false;
                        singlestep = false;
                        break;
                    //AVG BACKROUND PARAMS
                    case '-':
                        if(i > endcapture){
                            scalehigh += 0.25;
                            printf("AVG scalehigh=%f\n",scalehigh);
                            scaleHigh(scalehigh);
                        }
                        break;
                    case '=':
                        if(i > endcapture){
                            scalehigh -= 0.25;
                            printf("AVG scalehigh=%f\n",scalehigh);
                            scaleHigh(scalehigh);
                        }
                        break;
                    case '[':
                        if(i > endcapture){
                            scalelow += 0.25;
                            printf("AVG scalelow=%f\n",scalelow);
                            scaleLow(scalelow);
                        }
                        break;
                    case ']':
                        if(i > endcapture){
                            scalelow -= 0.25;
                            printf("AVG scalelow=%f\n",scalelow);
                            scaleLow(scalelow);
                        }
                        break;
                //CODEBOOK PARAMS
                case 'y':
                case '0':
                        ch[0] = 1;
                        ch[1] = 0;
                        ch[2] = 0;
                        printf("CodeBook YUV Channels active: ");
                        for(n=0; n<nChannels; n++)
                                printf("%d, ",ch[n]);
                        printf("\n");
                        break;
                case 'u':
                case '1':
                        ch[0] = 0;
                        ch[1] = 1;
                        ch[2] = 0;
                        printf("CodeBook YUV Channels active: ");
                        for(n=0; n<nChannels; n++)
                                printf("%d, ",ch[n]);
                        printf("\n");
                        break;
                case 'v':
                case '2':
                        ch[0] = 0;
                        ch[1] = 0;
                        ch[2] = 1;
                        printf("CodeBook YUV Channels active: ");
                        for(n=0; n<nChannels; n++)
                                printf("%d, ",ch[n]);
                        printf("\n");
                        break;
                case 'a': //All
                case '3':
                        ch[0] = 1;
                        ch[1] = 1;
                        ch[2] = 1;
                        printf("CodeBook YUV Channels active: ");
                        for(n=0; n<nChannels; n++)
                                printf("%d, ",ch[n]);
                        printf("\n");
                        break;
                case 'b':  //both u and v together
                        ch[0] = 0;
                        ch[1] = 1;
                        ch[2] = 1;
                        printf("CodeBook YUV Channels active: ");
                        for(n=0; n<nChannels; n++)
                                printf("%d, ",ch[n]);
                        printf("\n");
                        break;
                case 'i': //modify max classification bounds (max bound goes higher)
                    for(n=0; n<nChannels; n++){
                        if(ch[n])
                            maxMod[n] += 1;
                        printf("%.4d,",maxMod[n]);
                    }
                    printf(" CodeBook High Side\n");
                    break;
                case 'o': //modify max classification bounds (max bound goes lower)
                    for(n=0; n<nChannels; n++){
                        if(ch[n])
                            maxMod[n] -= 1;
                        printf("%.4d,",maxMod[n]);
                    }
                    printf(" CodeBook High Side\n");
                    break;
                case 'k': //modify min classification bounds (min bound goes lower)
                    for(n=0; n<nChannels; n++){
                        if(ch[n])
                            minMod[n] += 1;
                        printf("%.4d,",minMod[n]);
                    }
                    printf(" CodeBook Low Side\n");
                    break;
                case 'l': //modify min classification bounds (min bound goes higher)
                    for(n=0; n<nChannels; n++){
                        if(ch[n])
                            minMod[n] -= 1;
                        printf("%.4d,",minMod[n]);
                    }
                    printf(" CodeBook Low Side\n");
                    break;
                }

            }
        }       
//-------------------------------------------------------------------------------------------
//-------------------------------【Release resources】------------------------------------------------
        cvReleaseCapture( &capture );
        cvDestroyWindow( "Raw" );
        cvDestroyWindow( "ForegroundAVG" );
        cvDestroyWindow( "AVG_ConnectComp");
        cvDestroyWindow( "ForegroundCodeBook");
        cvDestroyWindow( "CodeBook_ConnectComp");
        DeallocateImages();
        if(yuvImage) cvReleaseImage(&yuvImage);
        if(ImaskAVG) cvReleaseImage(&ImaskAVG);
        if(ImaskAVGCC) cvReleaseImage(&ImaskAVGCC);
        if(ImaskCodeBook) cvReleaseImage(&ImaskCodeBook);
        if(ImaskCodeBookCC) cvReleaseImage(&ImaskCodeBookCC);
        delete [] cB;
    }
//-------------------------------------------------------------------------------------------
    else{
        printf("\n\nDarn, something wrong with the parameters\n\n");
        help();
    }
    return 0;
}

<2> Implementation of the average background method:
AvgBackground.h:

///////////////////////////////////////////////////////////////////////////////////////////////////////////////////
// Accumulate average and ~std (really absolute difference) image and use this to detect background and foreground
//
// Typical way of using this is to:
//  AllocateImages();
//  //loop for N images to accumulate background differences
//  accumulateBackground();
//  //When done, turn this into our avg and std model with high and low bounds
//  createModelsfromStats();
//  //Then use the function to return background in a mask (255 == foreground, 0 == background)
//  backgroundDiff(IplImage *I,IplImage *Imask, int num);
//  //Then tune the high and low difference from average image background acceptance thresholds
//  float scalehigh,scalelow; //Set these, defaults are 7 and 6. Note: scalelow is how many average differences below average
//  scaleHigh(scalehigh);
//  scaleLow(scalelow);
//  //That is, change the scale high and low bounds for what should be background to make it work.
//  //Then continue detecting foreground in the mask image
//  backgroundDiff(IplImage *I,IplImage *Imask, int num);
//
//NOTES: num is camera number which varies from 0 ... NUM_CAMERAS - 1.  Typically you only have one camera, but this routine allows
//       you to index many.
//
#ifndef AVGSEG_
#define AVGSEG_


#include "cv.h"             // define all of the opencv classes etc.
#include "highgui.h"
#include "cxcore.h"

//IMPORTANT DEFINES:
#define NUM_CAMERAS   1             //This function can handle an array of cameras
#define HIGH_SCALE_NUM 7.0          //How many average differences from average image on the high side == background
#define LOW_SCALE_NUM 6.0       //How many average differences from average image on the low side == background

void AllocateImages(IplImage *I);
void DeallocateImages();
void accumulateBackground(IplImage *I, int number=0);
void scaleHigh(float scale = HIGH_SCALE_NUM, int num = 0);
void scaleLow(float scale = LOW_SCALE_NUM, int num = 0);
void createModelsfromStats();
void backgroundDiff(IplImage *I,IplImage *Imask, int num = 0);

#endif

AvgBackground.cpp:

#include "AvgBackground.h"


//GLOBALS

IplImage *IavgF[NUM_CAMERAS],*IdiffF[NUM_CAMERAS], *IprevF[NUM_CAMERAS], *IhiF[NUM_CAMERAS], *IlowF[NUM_CAMERAS];
IplImage *Iscratch,*Iscratch2,*Igray1,*Igray2,*Igray3,*Imaskt;
IplImage *Ilow1[NUM_CAMERAS],*Ilow2[NUM_CAMERAS],*Ilow3[NUM_CAMERAS],*Ihi1[NUM_CAMERAS],*Ihi2[NUM_CAMERAS],*Ihi3[NUM_CAMERAS];

float Icount[NUM_CAMERAS];

void AllocateImages(IplImage *I)  //I is just a sample for allocation purposes
{
    for(int i = 0; i<NUM_CAMERAS; i++){
        IavgF[i] = cvCreateImage( cvGetSize(I), IPL_DEPTH_32F, 3 );
        IdiffF[i] = cvCreateImage( cvGetSize(I), IPL_DEPTH_32F, 3 );
        IprevF[i] = cvCreateImage( cvGetSize(I), IPL_DEPTH_32F, 3 );
        IhiF[i] = cvCreateImage( cvGetSize(I), IPL_DEPTH_32F, 3 );
        IlowF[i] = cvCreateImage(cvGetSize(I), IPL_DEPTH_32F, 3 );
        Ilow1[i] = cvCreateImage( cvGetSize(I), IPL_DEPTH_32F, 1 );
        Ilow2[i] = cvCreateImage( cvGetSize(I), IPL_DEPTH_32F, 1 );
        Ilow3[i] = cvCreateImage( cvGetSize(I), IPL_DEPTH_32F, 1 );
        Ihi1[i] = cvCreateImage( cvGetSize(I), IPL_DEPTH_32F, 1 );
        Ihi2[i] = cvCreateImage( cvGetSize(I), IPL_DEPTH_32F, 1 );
        Ihi3[i] = cvCreateImage( cvGetSize(I), IPL_DEPTH_32F, 1 );
        cvZero(IavgF[i]  );
        cvZero(IdiffF[i]  );
        cvZero(IprevF[i]  );
        cvZero(IhiF[i] );
        cvZero(IlowF[i]  );     
        Icount[i] = 0.00001; //Protect against divide by zero
    }
    Iscratch = cvCreateImage( cvGetSize(I), IPL_DEPTH_32F, 3 );
    Iscratch2 = cvCreateImage( cvGetSize(I), IPL_DEPTH_32F, 3 );
    Igray1 = cvCreateImage( cvGetSize(I), IPL_DEPTH_32F, 1 );
    Igray2 = cvCreateImage( cvGetSize(I), IPL_DEPTH_32F, 1 );
    Igray3 = cvCreateImage( cvGetSize(I), IPL_DEPTH_32F, 1 );
    Imaskt = cvCreateImage( cvGetSize(I), IPL_DEPTH_8U, 1 );

    cvZero(Iscratch);
    cvZero(Iscratch2 );
}

void DeallocateImages()
{
    for(int i=0; i<NUM_CAMERAS; i++){
        cvReleaseImage(&IavgF[i]);
        cvReleaseImage(&IdiffF[i] );
        cvReleaseImage(&IprevF[i] );
        cvReleaseImage(&IhiF[i] );
        cvReleaseImage(&IlowF[i] );
        cvReleaseImage(&Ilow1[i]  );
        cvReleaseImage(&Ilow2[i]  );
        cvReleaseImage(&Ilow3[i]  );
        cvReleaseImage(&Ihi1[i]   );
        cvReleaseImage(&Ihi2[i]   );
        cvReleaseImage(&Ihi3[i]  );
    }
    cvReleaseImage(&Iscratch);
    cvReleaseImage(&Iscratch2);

    cvReleaseImage(&Igray1  );
    cvReleaseImage(&Igray2 );
    cvReleaseImage(&Igray3 );

    cvReleaseImage(&Imaskt);
}

// Accumulate the background statistics for one more frame
// We accumulate the images, the image differences and the count of images for the 
//    the routine createModelsfromStats() to work on after we're done accumulating N frames.
// I        Background image, 3 channel, 8u
// number   Camera number
void accumulateBackground(IplImage *I, int number)
{
    static int first = 1;
    cvCvtScale(I,Iscratch,1,0); //Convert to a float image
    if (!first){
        cvAcc(Iscratch,IavgF[number]);
        cvAbsDiff(Iscratch,IprevF[number],Iscratch2);
        cvAcc(Iscratch2,IdiffF[number]);
        Icount[number] += 1.0;
    }
    first = 0;
    cvCopy(Iscratch,IprevF[number]);
}

// Scale the average difference from the average image high acceptance threshold
void scaleHigh(float scale, int num)
{
    cvConvertScale(IdiffF[num],Iscratch,scale); //Converts with rounding and saturation
    cvAdd(Iscratch,IavgF[num],IhiF[num]);
    cvCvtPixToPlane( IhiF[num], Ihi1[num],Ihi2[num],Ihi3[num], 0 );
}

// Scale the average difference from the average image low acceptance threshold
void scaleLow(float scale, int num)
{
    cvConvertScale(IdiffF[num],Iscratch,scale); //Converts with rounding and saturation
    cvSub(IavgF[num],Iscratch,IlowF[num]);
    cvCvtPixToPlane( IlowF[num], Ilow1[num],Ilow2[num],Ilow3[num], 0 );
}

//Once you've learned the background long enough, turn it into a background model
void createModelsfromStats()
{
    for(int i=0; i<NUM_CAMERAS; i++)
    {
        cvConvertScale(IavgF[i],IavgF[i],(double)(1.0/Icount[i]));
        cvConvertScale(IdiffF[i],IdiffF[i],(double)(1.0/Icount[i]));
        cvAddS(IdiffF[i],cvScalar(1.0,1.0,1.0),IdiffF[i]);  //Make sure the average difference is never zero
        scaleHigh(HIGH_SCALE_NUM,i);
        scaleLow(LOW_SCALE_NUM,i);
    }
}

// Create a binary: 0,255 mask where 255 means foreground pixel
// I        Input image, 3 channel, 8u
// Imask    Mask image to be created, 1 channel 8u
// num      Camera number
// Tests whether each pixel of the current image falls inside the learned background range
void backgroundDiff(IplImage *I,IplImage *Imask, int num)  //Mask should be grayscale
{
    cvCvtScale(I,Iscratch,1,0); //To float;
    //Channel 1
    cvCvtPixToPlane( Iscratch, Igray1,Igray2,Igray3, 0 );
    cvInRange(Igray1,Ilow1[num],Ihi1[num],Imask); //255 where the pixel lies inside the learned background range
    //Channel 2
    cvInRange(Igray2,Ilow2[num],Ihi2[num],Imaskt);
    cvOr(Imask,Imaskt,Imask);
    //Channel 3
    cvInRange(Igray3,Ilow3[num],Ihi3[num],Imaskt);
    cvOr(Imask,Imaskt,Imask);
    //Finally, invert the results
    cvSubRS( Imask, cvScalar(255), Imask); //After inversion: foreground = 255, background = 0

}

//////////////////////////////////////////////////////////////////////////
/*
//Utility comparison function
gbCmp(IplImage *I1, IplImage *I2, IplImage *Imask, int op)
{
    int len = I1->width*I1->height;
    int x;
    float *fp1 = (float *)I1->imageData;
    float *fp2 = (float *)I2->imageData;
    char *cp = Imask->imageData;
    if(op == CV_CMP_GT)
    {
        for(x=0;x<len;x++)
        {
            if(*fp1++ > *fp2++)
                *cp++ = 255;
            else
                *cp++ = 0;
        }
    }
    else
    {
        for(x=0;x<len;x++)
        {
            if(*fp1++ < *fp2++)
                *cp++ = 255;
            else
                *cp++ = 0;
        }
    }
}


void backgroundDiff(IplImage *I,IplImage *Imask, int num)  //Mask should be grayscale
{
    cvCvtScale(I,Iscratch,1,0); //To float;
    cvCvtPixToPlane( Iscratch, Igray1,Igray2,Igray3, 0 );

    gbCmp(Igray1,Ihi1[num],Imask,CV_CMP_GT);
    gbCmp(Igray2,Ihi2[num],Imaskt,CV_CMP_GT);
    cvOr(Imask,Imaskt,Imask);
    gbCmp(Igray3,Ihi3[num],Imaskt,CV_CMP_GT);
    cvOr(Imask,Imaskt,Imask);

    gbCmp(Igray1,Ilow1[num],Imaskt,CV_CMP_LT);
    cvOr(Imask,Imaskt,Imask);
    gbCmp(Igray2,Ilow2[num],Imaskt,CV_CMP_LT);
    cvOr(Imask,Imaskt,Imask);
    gbCmp(Igray3,Ilow3[num],Imaskt,CV_CMP_LT);
    cvOr(Imask,Imaskt,Imask);
    //Some morphology
//  cvErode( Imask, Imask, NULL, 1);
//  cvMorphologyEx(Imask, Imask, NULL, NULL, CV_MOP_CLOSE, 1);
}
*/

<3> Implementation of the codebook background learning method:
cv_yuv_codebook.h:

////////YUV CODEBOOK ////////////////////////////////////////////////////
// Gary Bradski, a pre-vacation doodle July 14, 2005
// Note that this is a YUV pixel model, must have one for each YUV pixel that you care about



/////////////////////////////////////////////////////////////////////////////////////////// 
/* How to call externally

//CONVERT IMAGE TO YUV
cvCvtColor( image, yuvImage, CV_BGR2YCrCb );


  //DECLARATIONS:
#include "yuv_codebook.h"
// #define CHANNELS 3   //Could also use just 1 ("Y", brightness), but this is set in this header file.
  //VARIABLES:
codeBook *cB;  //This will be our linear model of the image, a vector of length = height*width
int maxMod[CHANNELS]; //Add these (possibly negative) numbers onto the max level of a code_element when determining if a new pixel is foreground
int minMod[CHANNELS]; //Subtract these (possibly negative) numbers from the min level of a code_element when determining if a pixel is foreground
unsigned cbBounds[CHANNELS]; //Code Book bounds for learning
int nChannels = CHANNELS;
int imageLen;
bool ch[CHANNELS];
...
//ALLOCATE IT WHEN YOU KNOW THE IMAGE SIZE
imageLen = image->width*image->height;
cB = new codeBook [imageLen];
for(int f = 0; f<imageLen; f++)
{
   cB[f].numEntries = 0;
}
for(n=0; n<nChannels;n++)
{
    cbBounds[n] = 10; //Learning bounds factor
}
maxMod[0] = 3;  //Set color thresholds to more likely values
minMod[0] = 10;
maxMod[1] = 1;
minMod[1] = 1;
maxMod[2] = 1;
minMod[2] = 1;
...
//LEARNING BACKGROUND
uchar *pColor; //YUV pointer

if(learn)
{
    pColor = (uchar *)((yuv)->imageData);
    for(c=0; c<imageLen; c++)
    {
        cvupdateCodeBook(pColor, cB[c], cbBounds, nChannels);
        pColor += 3;
    }
    learnCnt += 1;
}


//ELIMINATE SPURIOUS CODEBOOK ENTRIES (FOR SPEED)

int cleanedCnt; //will hold number of codebook entries eliminated
cleanedCnt = 0;
for(c=0; c<imageLen; c++)
{
    cleanedCnt += cvclearStaleEntries(cB[c]);
}
...


//BACKGROUND SEGMENTATION
uchar *pMask,*pColor;
//For connected components bounding box and center of mass if wanted, else can leave out by default
int num = 5; //Just chose 5 arbitrarily, could be 1, 20, anything
CvRect bbs[5];
CvPoint centers[5];

if(modelExists)
{
    pColor = (uchar *)((yuv)->imageData); //3 channel yuv image
    pMask = (uchar *)((mask)->imageData); //1 channel image
    for(c=0; c<imageLen; c++)
    {
        maskQ = cvbackgroundDiff(pColor, cB[c], nChannels, minMod, maxMod);
        *pMask++ = maskQ;
        pColor += 3;
    }
    //This part just to visualize bounding boxes and centers if desired
    cvCopy(mask,maskCC);
    num = 5; //
    cvconnectedComponents(maskCC,1,4.0, &num, bbs, centers);
    for(int f=0; f<num; f++)
    {
        CvPoint pt1, pt2; //Draw the bounding box in white
        pt1.x = bbs[f].x;
        pt1.y = bbs[f].y;
        pt2.x = bbs[f].x+bbs[f].width;
        pt2.y = bbs[f].y+bbs[f].height;
        cvRectangle(maskCC,pt1,pt2, CV_RGB(255,255,255),2);
        pt1.x = centers[f].x - 3; //Draw the center of mass in black
        pt1.y = centers[f].y - 3;
        pt2.x = centers[f].x +3;
        pt2.y = centers[f].y + 3;
        cvRectangle(maskCC,pt1,pt2, CV_RGB(0,0,0),2);
    }
    mw.paint(maskCC,0,1,0);
}

...
//EXAMPLE OF HOW TO ADJUST BACKGROUND THRESHOLDS
ch[0] = 0; //ch[0]=>y, ch[1]=>u, ch[2]=>v
ch[1] = 1;
ch[2] = 1;
. . .
                case '0':
                        ch[0] = 1;
                        ch[1] = 0;
                        ch[2] = 0;
                        printf("Channels active: ");
                        for(n=0; n<nChannels; n++)
                                printf("%d, ",ch[n]);
                        printf("\n");
                        break;
                case '1':
                        ch[0] = 0;
                        ch[1] = 1;
                        ch[2] = 0;
                        printf("Channels active: ");
                        for(n=0; n<nChannels; n++)
                                printf("%d, ",ch[n]);
                        printf("\n");
                        break;
                case '2':
                        ch[0] = 0;
                        ch[1] = 0;
                        ch[2] = 1;
                        printf("Channels active: ");
                        for(n=0; n<nChannels; n++)
                                printf("%d, ",ch[n]);
                        printf("\n");
                        break;
                case '3':
                        ch[0] = 1;
                        ch[1] = 1;
                        ch[2] = 1;
                        printf("Channels active: ");
                        for(n=0; n<nChannels; n++)
                                printf("%d, ",ch[n]);
                        printf("\n");
                        break;
                case '4':
                        ch[0] = 0;
                        ch[1] = 1;
                        ch[2] = 1;
                        printf("Channels active: ");
                        for(n=0; n<nChannels; n++)
                                printf("%d, ",ch[n]);
                        printf("\n");
                        break;

. . . 
case 'u': //raise max classification bounds
    for(n=0; n<nChannels; n++){
        if(ch[n])
            maxMod[n] += 1;
        printf("%.4d,",maxMod[n]);
    }
    printf("\n");
    break;
case 'i': //lower max classification bounds
    for(n=0; n<nChannels; n++){
        if(ch[n])
            maxMod[n] -= 1;
        printf("%.4d,",maxMod[n]);
    }
    printf("\n");
    break;
case ',': //modify min classification bounds (min bound goes lower)
    for(n=0; n<nChannels; n++){
        if(ch[n])
            minMod[n] += 1;
        printf("%.4d,",minMod[n]);
    }
    printf("\n");
    break;
case '.': //modify min classification bounds (min bound goes higher)
    for(n=0; n<nChannels; n++){
        if(ch[n])
            minMod[n] -= 1;
        printf("%.4d,",minMod[n]);
    }
    printf("\n");
    break;
...
//CLEAN UP
delete [] cB;
*/
///////////////////////////////////////////////////////////////////////////////////////////////


cv_yuv_codebook.h:
#ifndef CVYUV_CB
#define CVYUV_CB


#include <cv.h>             // define all of the opencv classes etc.
#include <highgui.h>
#include <cxcore.h>

#define CHANNELS 3

typedef struct ce {
    uchar learnHigh[CHANNELS]; //High side threshold for learning
    uchar learnLow[CHANNELS];  //Low side threshold for learning
    uchar max[CHANNELS];    //High side of box boundary
    uchar min[CHANNELS];    //Low side of box boundary
    int t_last_update;      //Bookkeeping to allow us to kill stale entries
    int stale;              //max negative run (biggest period of inactivity)
} code_element;

typedef struct code_book {
    code_element **cb;
    int numEntries;
    int t;                  //count every access
} codeBook;

///////////////////////////////////////////////////////////////////////////////////
// int updateCodeBook(uchar *p, codeBook &c, unsigned cbBounds)
// Updates the codebook entry with a new data point
//
// p            Pointer to a YUV pixel
// c            Codebook for this pixel
// cbBounds     Learning bounds for codebook (Rule of thumb: 10)
// numChannels  Number of color channels we're learning
//
// NOTES:
//      cbBounds must be of size cbBounds[numChannels]
//
// RETURN
//  codebook index
int cvupdateCodeBook(uchar *p, codeBook &c, unsigned *cbBounds, int numChannels = 3);

///////////////////////////////////////////////////////////////////////////////////
// uchar cvbackgroundDiff(uchar *p, codeBook &c, int minMod, int maxMod)
// Given a pixel and a code book, determine if the pixel is covered by the codebook
//
// p        pixel pointer (YUV interleaved)
// c        codebook reference
// numChannels  Number of channels we are testing
// maxMod   Add this (possibly negative) number onto the max level of a code_element when determining if a new pixel is foreground
// minMod   Subtract this (possibly negative) number from the min level of a code_element when determining if a pixel is foreground
//
// NOTES: 
// minMod and maxMod must have length numChannels, e.g. 3 channels => minMod[3], maxMod[3].
//
// Return
// 0 => background, 255 => foreground
uchar cvbackgroundDiff(uchar *p, codeBook &c, int numChannels, int *minMod, int *maxMod);



//UTILITES////////////////////////////////////////////////////////////////////////////////////

/////////////////////////////////////////////////////////////////////////////////
//int clearStaleEntries(codeBook &c)
// After you've learned for some period of time, periodically call this to clear out stale codebook entries
//
//c     Codebook to clean up
//
// Return
// number of entries cleared
int cvclearStaleEntries(codeBook &c);

/////////////////////////////////////////////////////////////////////////////////
//int countSegmentation(codeBook *c, IplImage *I)
//
//Count how many pixels are detected as foreground
// c    Codebook
// I    Image (yuv, 24 bits)
// numChannels  Number of channels we are testing
// maxMod   Add this (possibly negative) number onto the max level of a code_element when determining if a new pixel is foreground
// minMod   Subtract this (possibly negative) number from the min level of a code_element when determining if a pixel is foreground
//
// NOTES: 
// minMod and maxMod must have length numChannels, e.g. 3 channels => minMod[3], maxMod[3].
//
//Return
// Count of fg pixels
//
int cvcountSegmentation(codeBook *c, IplImage *I, int numChannels, int *minMod, int *maxMod);

///////////////////////////////////////////////////////////////////////////////////////////
//void cvconnectedComponents(IplImage *mask, int poly1_hull0, float perimScale, int *num, CvRect *bbs, CvPoint *centers)
// This cleans up the foreground segmentation mask derived from calls to cvbackgroundDiff
//
// mask         Is a grayscale (8 bit depth) "raw" mask image which will be cleaned up
//
// OPTIONAL PARAMETERS:
// poly1_hull0  If set, approximate connected component by (DEFAULT) polygon, or else convex hull (0)
// perimScale   Len = image (width+height)/perimScale.  If contour len < this, delete that contour (DEFAULT: 4)
// num          Maximum number of rectangles and/or centers to return, on return, will contain number filled (DEFAULT: NULL)
// bbs          Pointer to bounding box rectangle vector of length num.  (DEFAULT SETTING: NULL)
// centers      Pointer to contour centers vector of length num (DEFAULT: NULL)
//
void cvconnectedComponents(IplImage *mask, int poly1_hull0=1, float perimScale=4.0, int *num=NULL, CvRect *bbs=NULL, CvPoint *centers=NULL);

#endif

cv_yuv_codebook.cpp:

////////YUV CODEBOOK
// Gary Bradski, July 14, 2005


#include "cv_yuv_codebook.h"

//GLOBALS FOR ALL CAMERA MODELS

//For connected components:
int CVCONTOUR_APPROX_LEVEL = 2;   // Approximation threshold - the bigger it is, the simpler the boundary
int CVCLOSE_ITR = 1;                // How many iterations of erosion and/or dilation there should be
//#define CVPERIMSCALE 4            // image (width+height)/PERIMSCALE.  If contour length < this, delete that contour

//For learning background

//Just some convenience macros
#define CV_CVX_WHITE    CV_RGB(0xff,0xff,0xff)
#define CV_CVX_BLACK    CV_RGB(0x00,0x00,0x00)


///////////////////////////////////////////////////////////////////////////////////
// int updateCodeBook(uchar *p, codeBook &c, unsigned cbBounds)
// Updates the codebook entry with a new data point
//
// p            Pointer to a YUV pixel
// c            Codebook for this pixel
// cbBounds     Learning bounds for codebook (Rule of thumb: 10)
// numChannels  Number of color channels we're learning
//
// NOTES:
//      cbBounds must be of size cbBounds[numChannels]
//
// RETURN
//  codebook index
int cvupdateCodeBook(uchar *p, codeBook &c, unsigned *cbBounds, int numChannels)
{

    if(c.numEntries == 0) c.t = 0;
    c.t += 1;       //Record learning event
    //SET HIGH AND LOW BOUNDS
    int n;
    unsigned int high[3],low[3];
    for(n=0; n<numChannels; n++)
    {
        high[n] = *(p+n)+*(cbBounds+n);
        if(high[n] > 255) high[n] = 255;
        //low[] is unsigned, so guard against underflow rather than testing low[n] < 0
        if(*(p+n) > *(cbBounds+n))
            low[n] = *(p+n)-*(cbBounds+n);
        else
            low[n] = 0;
    }
    int matchChannel;
    //SEE IF THIS FITS AN EXISTING CODEWORD
    int i;
    for(i=0; i<c.numEntries; i++)
    {
        matchChannel = 0;
        for(n=0; n<numChannels; n++)
        {
            if((c.cb[i]->learnLow[n] <= *(p+n)) && (*(p+n) <= c.cb[i]->learnHigh[n])) //Found an entry for this channel
            {
                matchChannel++;
            }
        }
        if(matchChannel == numChannels) //If an entry was found over all channels
        {
            c.cb[i]->t_last_update = c.t;
            //adjust this codeword's box to cover the new pixel value
            for(n=0; n<numChannels; n++)
            {
                if(c.cb[i]->max[n] < *(p+n))
                {
                    c.cb[i]->max[n] = *(p+n);
                }
                else if(c.cb[i]->min[n] > *(p+n))
                {
                    c.cb[i]->min[n] = *(p+n);
                }
            }
            break;
        }
    }

    //OVERHEAD TO TRACK POTENTIAL STALE ENTRIES
    for(int s=0; s<c.numEntries; s++)
    {
        //This garbage is to track which codebook entries are going stale
        int negRun = c.t - c.cb[s]->t_last_update;
        if(c.cb[s]->stale < negRun) c.cb[s]->stale = negRun;
    }


    //ENTER A NEW CODE WORD IF NEEDED
    if(i == c.numEntries)  //No existing code word found, make a new one
    {
        code_element **foo = new code_element* [c.numEntries+1];
        for(int ii=0; ii<c.numEntries; ii++)
        {
            foo[ii] = c.cb[ii];
        }
        foo[c.numEntries] = new code_element;
        if(c.numEntries) delete [] c.cb; //free the old pointer array
        c.cb = foo; //point at the enlarged array
        for(n=0; n<numChannels; n++)
        {
            c.cb[c.numEntries]->learnHigh[n] = high[n];
            c.cb[c.numEntries]->learnLow[n] = low[n];
            c.cb[c.numEntries]->max[n] = *(p+n);
            c.cb[c.numEntries]->min[n] = *(p+n);
        }
        c.cb[c.numEntries]->t_last_update = c.t;
        c.cb[c.numEntries]->stale = 0;
        c.numEntries += 1;
    }

    //SLOWLY ADJUST LEARNING BOUNDS
    for(n=0; n<numChannels; n++)
    {
        if(c.cb[i]->learnHigh[n] < high[n]) c.cb[i]->learnHigh[n] += 1;
        if(c.cb[i]->learnLow[n] > low[n]) c.cb[i]->learnLow[n] -= 1;
    }

    return(i);
}

///////////////////////////////////////////////////////////////////////////////////
// uchar cvbackgroundDiff(uchar *p, codeBook &c, int minMod, int maxMod)
// Given a pixel and a code book, determine if the pixel is covered by the codebook
//
// p        pixel pointer (YUV interleaved)
// c        codebook reference
// numChannels  Number of channels we are testing
// maxMod   Add this (possibly negative) number onto the max level of a code_element when determining if a new pixel is foreground
// minMod   Subtract this (possibly negative) number from the min level of a code_element when determining if a pixel is foreground
//
// NOTES:
// minMod and maxMod must have length numChannels, e.g. 3 channels => minMod[3], maxMod[3].
//
// Return
// 0 => background, 255 => foreground
uchar cvbackgroundDiff(uchar *p, codeBook &c, int numChannels, int *minMod, int *maxMod)
{
    int matchChannel;
    //SEE IF THIS FITS AN EXISTING CODEWORD
    int i;
    for(i=0; i<c.numEntries; i++)
    {
        matchChannel = 0;
        for(int n=0; n<numChannels; n++)
        {
            if((c.cb[i]->min[n] - minMod[n] <= *(p+n)) && (*(p+n) <= c.cb[i]->max[n] + maxMod[n]))
            {
                matchChannel++; //Found an entry for this channel
            }
            else
            {
                break;
            }
        }
        if(matchChannel == numChannels)
        {
            break; //Found an entry that matched all channels
        }
    }
    if(i >= c.numEntries) return(255);
    return(0);
}


//UTILITES/////////////////////////////////////////////////////////////////////////////////////
/////////////////////////////////////////////////////////////////////////////////
//int clearStaleEntries(codeBook &c)
// After you've learned for some period of time, periodically call this to clear out stale codebook entries
//
//c     Codebook to clean up
//
// Return
// number of entries cleared
int cvclearStaleEntries(codeBook &c)
{
    int staleThresh = c.t>>1;
    int *keep = new int [c.numEntries];
    int keepCnt = 0;
    //SEE WHICH CODEBOOK ENTRIES ARE TOO STALE
    for(int i=0; i<c.numEntries; i++)
    {
        if(c.cb[i]->stale > staleThresh)
            keep[i] = 0; //Mark for destruction
        else
        {
            keep[i] = 1; //Mark to keep
            keepCnt += 1;
        }
    }
    //KEEP ONLY THE GOOD
    c.t = 0;                        //Full reset on stale tracking
    code_element **foo = new code_element* [keepCnt];
    int k=0;
    for(int ii=0; ii<c.numEntries; ii++)
    {
        if(keep[ii])
        {
            foo[k] = c.cb[ii];
            foo[k]->stale = 0;      //We have to refresh these entries for next clearStale
            foo[k]->t_last_update = 0;
            k++;
        }
    }
    //CLEAN UP
    delete [] keep;
    delete [] c.cb;
    c.cb = foo;
    int numCleared = c.numEntries - keepCnt;
    c.numEntries = keepCnt;
    return(numCleared);
}

/////////////////////////////////////////////////////////////////////////////////
//int countSegmentation(codeBook *c, IplImage *I)
//
//Count how many pixels are detected as foreground
// c    Codebook
// I    Image (yuv, 24 bits)
// numChannels  Number of channels we are testing
// maxMod   Add this (possibly negative) number onto the max level of a code_element when determining if a new pixel is foreground
// minMod   Subtract this (possibly negative) number from the min level of a code_element when determining if a pixel is foreground
//
// NOTES:
// minMod and maxMod must have length numChannels, e.g. 3 channels => minMod[3], maxMod[3].
//
//Return
// Count of fg pixels
//
int cvcountSegmentation(codeBook *c, IplImage *I, int numChannels, int *minMod, int *maxMod)
{
    int count = 0,i;
    uchar *pColor;
    int imageLen = I->width * I->height;

    //GET BASELINE NUMBER OF FG PIXELS FOR Iraw
    pColor = (uchar *)((I)->imageData);
    for(i=0; i<imageLen; i++)
    {
        if(cvbackgroundDiff(pColor, c[i], numChannels, minMod, maxMod))
            count++;
        pColor += 3;
    }
    return(count);
}


///////////////////////////////////////////////////////////////////////////////////////////
//void cvconnectedComponents(IplImage *mask, int poly1_hull0, float perimScale, int *num, CvRect *bbs, CvPoint *centers)
// This cleans up the foreground segmentation mask derived from calls to cvbackgroundDiff
//
// mask         Is a grayscale (8 bit depth) "raw" mask image which will be cleaned up
//
// OPTIONAL PARAMETERS:
// poly1_hull0  If set, approximate connected component by (DEFAULT) polygon, or else convex hull (0)
// perimScale   Len = image (width+height)/perimScale.  If contour len < this, delete that contour (DEFAULT: 4)
// num          Maximum number of rectangles and/or centers to return, on return, will contain number filled (DEFAULT: NULL)
// bbs          Pointer to bounding box rectangle vector of length num.  (DEFAULT SETTING: NULL)
// centers      Pointer to contour centers vector of length num (DEFAULT: NULL)
//mask: the foreground/background mask image
//void cvconnectedComponents(IplImage *mask, int poly1_hull0=1, float perimScale=4.0, int *num=NULL, CvRect *bbs=NULL, CvPoint *centers=NULL);
//pass in the mask image; the cleaned-up contours are drawn back onto it
void cvconnectedComponents(IplImage *mask, int poly1_hull0, float perimScale, int *num, CvRect *bbs, CvPoint *centers)
{
static CvMemStorage*    mem_storage = NULL;
static CvSeq*           contours    = NULL;
//CLEAN UP RAW MASK
    cvMorphologyEx( mask, mask, NULL, NULL, CV_MOP_OPEN, CVCLOSE_ITR );  //morphological opening (removes small noise)
    cvMorphologyEx( mask, mask, NULL, NULL, CV_MOP_CLOSE, CVCLOSE_ITR ); //morphological closing (fills small holes)

//FIND CONTOURS AROUND ONLY BIGGER REGIONS
    if( mem_storage==NULL ) mem_storage = cvCreateMemStorage(0);
    else cvClearMemStorage(mem_storage);

    CvContourScanner scanner = cvStartFindContours(mask,mem_storage,sizeof(CvContour),CV_RETR_EXTERNAL,CV_CHAIN_APPROX_SIMPLE);
    CvSeq* c;
    int numCont = 0;
//-------------------------------[contour processing]---------------------------------------
    while( (c = cvFindNextContour( scanner )) != NULL )
    {
        double len = cvContourPerimeter( c );
        double q = (mask->height + mask->width) /perimScale;   //calculate perimeter len threshold
        if( len < q ) //Get rid of blob if its perimeter is too small
        {
            cvSubstituteContour( scanner, NULL ); //delete contours whose perimeter is too small
        }
        else //Smooth its edges if it's large enough
        {
            CvSeq* c_new;
            if(poly1_hull0) //Polygonal approximation of the segmentation
                      //cvApproxPoly: convert the Freeman chain code into a polygonal approximation
                c_new = cvApproxPoly(c,sizeof(CvContour),mem_storage,CV_POLY_APPROX_DP, CVCONTOUR_APPROX_LEVEL,0);
            else //Convex Hull of the segmentation
                c_new = cvConvexHull2(c,mem_storage,CV_CLOCKWISE,1); //compute the convex hull (polygon enclosing the contour)
            cvSubstituteContour( scanner, c_new ); //replace the corresponding contour in the scanner's sequence
            numCont++;
        }
    }
    contours = cvEndFindContours( &scanner ); //pairs with cvStartFindContours; finishes scanning and returns the first contour
//-------------------------------------------------------------------------------------------
// PAINT THE FOUND REGIONS BACK INTO THE IMAGE
    cvZero( mask );
    IplImage *maskTemp;
    //CALC CENTER OF MASS AND OR BOUNDING RECTANGLES
    if(num != NULL)
    {
        int N = *num, numFilled = 0, i=0;
        CvMoments moments;
        double M00, M01, M10;
        maskTemp = cvCloneImage(mask);
        for(i=0, c=contours; c != NULL; c = c->h_next,i++ )
        {
            if(i < N) //Only process up to *num of them
            {
                cvDrawContours(maskTemp,c,CV_CVX_WHITE, CV_CVX_WHITE,-1,CV_FILLED,8);
                //Find the center of each contour
                if(centers != NULL)
                {
                    cvMoments(maskTemp,&moments,1);
                    M00 = cvGetSpatialMoment(&moments,0,0);
                    M10 = cvGetSpatialMoment(&moments,1,0);
                    M01 = cvGetSpatialMoment(&moments,0,1);
                    centers[i].x = (int)(M10/M00);
                    centers[i].y = (int)(M01/M00);
                }
                //Bounding rectangles around blobs
                if(bbs != NULL)
                {
                    bbs[i] = cvBoundingRect(c);
                }
                cvZero(maskTemp);
                numFilled++;
            }
            //Draw filled contours into mask
            cvDrawContours(mask,c,CV_CVX_WHITE,CV_CVX_WHITE,-1,CV_FILLED,8); //draw to central mask
        } //end looping over contours
        *num = numFilled;
        cvReleaseImage( &maskTemp);
    }
    //ELSE JUST DRAW PROCESSED CONTOURS INTO THE MASK
    else
    {
        for( c=contours; c != NULL; c = c->h_next )
        {
            cvDrawContours(mask,c,CV_CVX_WHITE, CV_CVX_BLACK,-1,CV_FILLED,8);
        }
    }
}


<4> Updating the codebook: deleting stale code elements (a code element is one entry in the codebook):
ClearStaleCB_Entries.cpp:

///////////////////////////////////////////////////////////////////
//int cvClearStaleEntries(codeBook &c)
// During learning, after you've learned for some period of time, 
// periodically call this to clear out stale codebook entries
//
// c   Codebook to clean up
//
// Return
// number of entries cleared
//Used when learning a background that contains moving foreground objects
int cvClearStaleEntries(codeBook &c){
   int staleThresh = c.t>>1; 
   int *keep = new int [c.numEntries];
   int keepCnt = 0;
   //SEE WHICH CODEBOOK ENTRIES ARE TOO STALE
   for(int i=0; i<c.numEntries; i++){
      if(c.cb[i]->stale > staleThresh)
         keep[i] = 0; //Mark for destruction
      else
      {
         keep[i] = 1; //Mark to keep
         keepCnt += 1;
      }
   }
   //KEEP ONLY THE GOOD
   c.t = 0;         //Full reset on stale tracking
   code_element **foo = new code_element* [keepCnt];
   int k=0;
   for(int ii=0; ii<c.numEntries; ii++){
      if(keep[ii])
      {
         foo[k] = c.cb[ii];
         //We have to refresh these entries for next clearStale
         foo[k]->stale = 0;
         foo[k]->t_last_update = 0;
         k++;
      }
   }
   //CLEAN UP
   delete [] keep;   
   delete [] c.cb;
   c.cb = foo;
   int numCleared = c.numEntries - keepCnt;
   c.numEntries = keepCnt;
   return(numCleared);
}

Final result:

(result screenshots omitted)

As the images show, the codebook background-learning method gives somewhat better results than the averaging background method.
