A program framework for binocular (stereo) vision: the code, plus some notes on it.

I finished most of this framework about a week ago and have since been revising it to make experimentation easier. The revision is not done yet. Lately I have a problem: I stare blankly at the code and just cannot get into it, so I plan to revise the code while writing this article.

The two critical parts of binocular vision are camera calibration and feature extraction from the images. Neither is in this program yet: this program is only a platform for binocular vision, a scaffold on which the actual vision work can be built.

Stereo vision for ranging is usually done with CCD cameras attached to a frame-grabber card; image acquisition is then trivial, since the card ships with its own capture functions. Here, however, we use USB cameras, i.e. ordinary webcams meant for video chat. To run stereo-vision experiments with such cameras, you have to capture the video streams yourself, which means writing the code yourself. On Windows there are two common ways to capture a video stream: VFW (Video for Windows) and DirectShow. The program below uses DirectShow.

Now let me walk through the program.

The program consists of video capture, the stereo-vision algorithm, the numerical routines, and the overall control logic. To perform stereo vision in real time, rather than just grabbing two still images and processing them offline, the control part uses several threads. The thread functions are global functions; they drive the member functions of the video-capture class, the stereo-vision class, and the numerical class to do the actual work.
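The handshake between capture and processing described above can be sketched in portable form. The real program signals Win32 events (SetEvent / WaitForSingleObject) between the grabber callback and the global thread functions; the sketch below, with my own hypothetical names rather than the program's, shows the same pattern with a condition variable.

```cpp
#include <cassert>
#include <condition_variable>
#include <mutex>
#include <thread>

// Portable stand-ins for the Win32 event handshake used in the real program.
std::mutex              g_mtx;
std::condition_variable g_frameReady;   // plays the role of hEventProcessData1
std::condition_variable g_frameDone;
bool g_hasFrame  = false;               // plays the role of g_bOneShot1
bool g_quit      = false;
int  g_processed = 0;

// The global processing-thread function: wait for a frame, process it, repeat.
void processingThread()
{
    std::unique_lock<std::mutex> lk(g_mtx);
    for (;;)
    {
        g_frameReady.wait(lk, [] { return g_hasFrame || g_quit; });
        if (g_quit) return;
        ++g_processed;              // stand-in for the stereo-vision work
        g_hasFrame = false;
        g_frameDone.notify_one();
    }
}

// Simulate the capture side delivering three frames, then shut down.
int runCaptureDemo()
{
    std::thread worker(processingThread);
    for (int i = 0; i < 3; ++i)
    {
        std::unique_lock<std::mutex> lk(g_mtx);
        g_hasFrame = true;          // like SetEvent(hEventProcessData1)
        g_frameReady.notify_one();
        g_frameDone.wait(lk, [] { return !g_hasFrame; });
    }
    {
        std::lock_guard<std::mutex> lk(g_mtx);
        g_quit = true;
    }
    g_frameReady.notify_one();
    worker.join();
    return g_processed;
}
```

The second condition variable makes the demo deterministic: the producer waits for each frame to be consumed before posting the next, just as the real program's processing threads signal "finish" events back.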

First, the video-stream capture code for the two cameras.

This is just the process of building a DirectShow filter graph; I wrapped it in a class.

//buildTwoCamFilterGraph.h

#ifndef _BUILDTWOCAMFILTERGRAPH_H_
#define _BUILDTWOCAMFILTERGRAPH_H_

#include <DShow.h>
#include <Qedit.h>   //declares ISampleGrabberCB; a class below derives from it, so this header is required
#include <Windows.h>
#include <Streams.h> //DirectShow base-class library (CMediaType etc.)

#include "dataBuffer.h"


#pragma comment(lib,"strmiids.lib") 
#pragma comment(lib,"strmbase.lib")


//The helper class below is used when grabbing the image data of a wanted frame
//

extern DATABUFFER cb1;//for camera 1
extern DATABUFFER cb2;//for camera 2

extern bool g_bOneShot1;
extern bool g_bOneShot2;

extern HANDLE hEventProcessData1;
extern HANDLE hEventProcessData2;

// Note: this object is a SEMI-COM object, and can only be created statically.
// We use this little semi-com object to handle the sample-grab-callback,
// since the callback must provide a COM interface. We could have had an interface
// where you provided a function-call callback, but that's really messy, so we
// did it this way. You can put anything you want into this C++ object, even
// a pointer to a CDialog. Be aware of multi-thread issues though.
//
//


class CSampleGrabberCB : public ISampleGrabberCB
{
public:
 // these will get set by the main thread below. We need to
    // know this in order to write out the bmp
 long lWidth;
 long lHeight;

 DATABUFFER *pcb;
 bool *pbOneShot;
 
 HANDLE *phEventProcessData;

 TCHAR m_szCapDir[MAX_PATH];// the directory we want to capture to
 TCHAR m_szSnappedName[MAX_PATH];

  //Constructor
 CSampleGrabberCB(DATABUFFER *pdataBuffer,bool *pg_bOneShot,HANDLE *phEvent):
 pcb(pdataBuffer),pbOneShot(pg_bOneShot),phEventProcessData(phEvent)
 {
  ZeroMemory(m_szCapDir,sizeof(m_szCapDir));
  ZeroMemory(m_szSnappedName,sizeof(m_szSnappedName));
  }

  // fake out any COM ref counting
 STDMETHODIMP_(ULONG) AddRef(){return 2;}
 STDMETHODIMP_(ULONG) Release(){return 1;}

  // fake out any COM QI'ing
 STDMETHODIMP QueryInterface(REFIID riid,void **ppv)
 {
  if(riid==IID_ISampleGrabberCB||riid==IID_IUnknown)
  {
   *ppv=(void*)static_cast<ISampleGrabberCB*>(this);
   return NOERROR;
  }
  return E_NOINTERFACE;
 }

 // we don't implement this interface for this example
 STDMETHODIMP SampleCB(double SampleTime,IMediaSample *pSample)
 {
  return 0;
 }

 //Below is the reason SampleCB is not used (from the SDK sample):
 
    // The sample grabber is calling us back on its deliver thread.
    // This is NOT the main app thread!
    //
    //           !!!!! WARNING WARNING WARNING !!!!!
    //
    // On Windows 9x systems, you are not allowed to call most of the
    // Windows API functions in this callback.  Why not?  Because the
    // video renderer might hold the global Win16 lock so that the video
    // surface can be locked while you copy its data.  This is not an
    // issue on Windows 2000, but is a limitation on Win95,98,98SE, and ME.
    // Calling a 16-bit legacy function could lock the system, because
    // it would wait forever for the Win16 lock, which would be forever
    // held by the video renderer.
    //
    // As a workaround, copy the bitmap data during the callback,
    // post a message to our app, and write the data later.
 //

 STDMETHODIMP BufferCB( double dblSampleTime, BYTE * pBuffer, long lBufferSize )
    {
        // this flag will get set to true in order to take a picture
        //
  //This callback runs for every frame that arrives, so a check is needed:
  //only extract and save the image data when a frame has actually been requested.
       if( !*pbOneShot )
            return 0;

        if (!pBuffer)
            return E_POINTER;

        //if( cb.lBufferSize < lBufferSize )
  if(pcb->lBufferSize!=lBufferSize)
        {
            delete [] pcb->pBuffer;
            pcb->pBuffer = NULL;
            pcb->lBufferSize = 0;
        }

        // Since we can't access Windows API functions in this callback, just
        // copy the bitmap data to a global structure for later reference.
        pcb->dblSampleTime = dblSampleTime;

  //-------------------------------------
  //The code below records the arrival times of the saved frames, so the frame
  //rate can be displayed at the end. Note: the data passed in here looks wrong
  //to me, so although the code stays for now, it can be skipped.

  pcb->dblSampleTimeBefore=pcb->dblSampleTimeNow;
  pcb->dblSampleTimeNow=dblSampleTime;
  pcb->dblSampleTimeBeforeAll+=pcb->dblSampleTimeBefore;
  pcb->dblSampleTimeNowAll+=pcb->dblSampleTimeNow;

  pcb->dblSampleTimeBetween=pcb->dblSampleTimeNow-pcb->dblSampleTimeBefore;

  pcb->cnt++;


  //------------------------------------------

        // If we haven't yet allocated the data buffer, do it now.
        // Just allocate what we need to store the new bitmap.
        if (!pcb->pBuffer)
        {
            pcb->pBuffer = new BYTE[lBufferSize];
            pcb->lBufferSize = lBufferSize;
        }

        if( !pcb->pBuffer )
        {
            pcb->lBufferSize = 0;
            return E_OUTOFMEMORY;
        }

        // Copy the bitmap data into our global buffer
        memcpy(pcb->pBuffer, pBuffer, lBufferSize);

  //set the size of the bitmap
  //
  pcb->lWidth=lWidth;
  pcb->lHeight=lHeight;

  //At this point we notify that the wanted frame has been captured and is ready
  //when the data is ready,
        // Post a message to our application, telling it to come back
        // and write the saved data to a bitmap file on the user's disk.
  //here, instead, we signal an event

  //set the g_bOneShot flag to false
  //
  *pbOneShot=false;
  //and signal the event
  //
  SetEvent(*phEventProcessData);
        return 0;
    }
};

//Here begins the actual class that captures the video streams of the two cameras
class BuildTwoCamFilterGraph
{
public:
 BuildTwoCamFilterGraph(DATABUFFER *pcb1,DATABUFFER *pcb2);
 ~BuildTwoCamFilterGraph();

 //rebuild the filter graphs for both cameras
 //
 void rebuildFilterGraph(void);
 //rebuild the filter graph of a single camera
 //
     void rebuildFilterGraph1(void);
 void rebuildFilterGraph2(void);
 //tear down the filter connections etc. for both cameras
 //
 void teardownFilterGraph(void);
 //tear down the filter connections etc. for a single camera
 //
 void teardownFilterGraph1(void);
 void teardownFilterGraph2(void);
 

 //the inline functions-----------------------------------
 //
 //the filter graph of the first camera
 //void setPinCam1(void);no use now ,this must put to another class
 //void setCaptureFilterCam1(void);
 IVideoWindow* getIVideoWindow1()
 {
  return m_pVidWin1;
 }
 IMediaControl* getIMediaControl1()
 {
  return m_pMediaControl1;
 }
 IGraphBuilder* getIGraphBuilder1()
 {
  return m_pGraph1;
 }
 IBaseFilter* getCaptureFilter1()
 {
  return m_pVCap1;
 }
 ICaptureGraphBuilder2* getICaptureGraphBuilder21()
 {
        return m_pGraphBuilder21;
 }

 //the filter graph of the second camera
 //void setPinCam2(void);no use now ,this must put to another class
 //void setCaptureFilterCam2(void);
 IVideoWindow* getIVideoWindow2()
 {
  return m_pVidWin2;
 }
 IMediaControl* getIMediaControl2()
 {
  return m_pMediaControl2;
 }
 IGraphBuilder* getIGraphBuilder2()
 {
  return m_pGraph2;
 }
 IBaseFilter* getCaptureFilter2()
 {
  return m_pVCap2;
 }
 ICaptureGraphBuilder2* getICaptureGraphBuilder22()
 {
        return m_pGraphBuilder22;
 }


 //set the bool value
 void setFilterGraph1Run(bool bValue)
 {
  m_bPreview1=bValue;
 }
 void setFilterGraph2Run(bool bValue)
 {
  m_bPreview2=bValue;
 }
 void setFilterGraph1Build(bool bValue)
 {
  m_bFilterGraphCam1=bValue;
 }
 void setFilterGraph2Build(bool bValue)
 {
  m_bFilterGraphCam2=bValue;
 }


 //have the filter graphs been built?
 bool isFilterGraph1Built(void)
 {
  return m_bFilterGraphCam1;
 }
 bool isFilterGraph2Built(void)
 {
  return m_bFilterGraphCam2;
 }

 //are the filter graphs running?
    bool isFilterGraph1Run(void)
 {
  return m_bPreview1;
 }

 bool isFilterGraph2Run(void)
 {
  return m_bPreview2;
 }


 private:
 //used to set display properties etc.
 IVideoWindow* m_pVidWin1;
 IVideoWindow* m_pVidWin2;

 //used to build the filter graph (stop/run/pause is done through IMediaControl below)
    IGraphBuilder* m_pGraph1;
 IGraphBuilder* m_pGraph2;

 //helps build a capture filter graph conveniently
 ICaptureGraphBuilder2* m_pGraphBuilder21;
 ICaptureGraphBuilder2* m_pGraphBuilder22;

 //the filter that grabs image frames
 IBaseFilter *m_pGrabberF1;
 IBaseFilter *m_pGrabberF2;

 //the interface on the sample-grabber filter
 ISampleGrabber *m_pGrabber1;
 ISampleGrabber *m_pGrabber2;

 // the capture filter
 IBaseFilter* m_pVCap1;
 IBaseFilter* m_pVCap2;

 //the render filter
 IBaseFilter *m_pRenderer1;
 IBaseFilter *m_pRenderer2;

 //the media control
 IMediaControl* m_pMediaControl1;
 IMediaControl* m_pMediaControl2;

 //has the filter graph been built?
 //
 bool m_bFilterGraphCam1;
 bool m_bFilterGraphCam2;

 //is preview running, i.e. has Run() been called?
 //
 bool m_bPreview1;
 bool m_bPreview2;

 //helper functions----------------------------------
 //

 //Device enum function for two camera,and set the capture filter directly
 //
 void deviceEnum(void);

 void InitStillGraph(IGraphBuilder **ppGraph,
                  ICaptureGraphBuilder2 **ppGraphBuilder2,
      IBaseFilter **ppVCap,
      IBaseFilter **ppGrabberF,
      IBaseFilter **ppRenderer,
      ISampleGrabber **ppGrabber//,
      //IVideoWindow **ppVidWin,
      //IMediaControl **ppMediaControl
      );


 //helper functions for disconnecting the filters
 //
 void NukeDownstream(IBaseFilter *pf,IGraphBuilder *pFg);
 void TearDownGraph(IGraphBuilder *pFg,IVideoWindow *pVW,IBaseFilter *pVCap);

 /*----------------------------------------------------------------------------------
 This semi_COM object will receive Sample callbacks for us
 ---------------------------------------------------------------------------------*/
 CSampleGrabberCB mCB1;
 CSampleGrabberCB mCB2;


};


#endif
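Stripped of COM, the grab logic in BufferCB above reduces to a simple one-shot pattern: every frame is offered to the callback, but the data is copied out only while the one-shot flag is set. A hypothetical, COM-free sketch of that pattern (names are mine, not the program's):

```cpp
#include <cassert>
#include <vector>

// COM-free equivalent of CSampleGrabberCB::BufferCB above: frames are
// dropped unless a grab was requested, and the request is cleared after
// one copy so only a single frame is taken per request.
struct OneShotGrabber
{
    std::vector<unsigned char> buffer;   // plays the role of DATABUFFER::pBuffer
    bool wantFrame = false;              // plays the role of g_bOneShot1/2
    int  grabbed   = 0;                  // stand-in for SetEvent(...)

    // Called once per incoming frame, like BufferCB.
    void onFrame(const unsigned char* p, long n)
    {
        if (!wantFrame) return;          // no grab requested: drop the frame
        buffer.assign(p, p + n);         // copy, like the memcpy in BufferCB
        wantFrame = false;               // one shot only
        ++grabbed;
    }
};
```

This is why SetOneShot(FALSE) is passed to the sample grabber: the stream keeps running, and the callback itself decides which frames to keep.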

//buildTwoCamFilterGraph.cpp

#include "stdafx.h"

#include "buildTwoCamFilterGraph.h"

#include "dataBuffer.h"//store the definition of the global structure

/*----------------------------------------------------------
The constructor builds the two filter graphs.
First it initializes COM and enumerates the capture devices,
then it builds the two filter graphs.
---------------------------------------------------------*/
BuildTwoCamFilterGraph::BuildTwoCamFilterGraph(DATABUFFER *pcb1,DATABUFFER *pcb2):
m_pVidWin1(NULL),
m_pVidWin2(NULL),
m_pGraph1(NULL),
m_pGraph2(NULL),
m_pGraphBuilder21(NULL),
m_pGraphBuilder22(NULL),
m_pGrabberF1(NULL),
m_pGrabberF2(NULL),
m_pGrabber1(NULL),
m_pGrabber2(NULL),
m_pVCap1(NULL),
m_pVCap2(NULL),
m_pRenderer1(NULL),
m_pRenderer2(NULL),
m_pMediaControl1(NULL),
m_pMediaControl2(NULL),
m_bFilterGraphCam1(false),
m_bFilterGraphCam2(false),
m_bPreview1(false),
m_bPreview2(false),
mCB1(pcb1,&g_bOneShot1,&hEventProcessData1),
mCB2(pcb2,&g_bOneShot2,&hEventProcessData2)
{
 //mCB1=CSampleGrabberCB(cb1);
 //mCB2=CSampleGrabberCB(cb2);
 HRESULT hr;
 //step1--------------
    // Initialize the COM library.
 //
 CoInitialize(NULL);

 //step2---------------
 //enum capture filter
 //
 deviceEnum();

 //step3---------------
 //connect filters,build the filter graph
 //
 //1.for camera 1.................................................
 //
 InitStillGraph(&m_pGraph1,
             &m_pGraphBuilder21,
       &m_pVCap1,
       &m_pGrabberF1,
       &m_pRenderer1,
       &m_pGrabber1       );

 // ask for the connection media type so we know how big
    // it is, so we can write out bitmaps
    //
    AM_MEDIA_TYPE mt;
    hr = m_pGrabber1->GetConnectedMediaType( &mt );
    if ( FAILED( hr) )//something went wrong
    {
        MessageBox(NULL,TEXT("Something wrong"), TEXT("Could not read the connected media type"),0);
        return ;//without a valid media type, vih below would be garbage
    }
   
    VIDEOINFOHEADER * vih = (VIDEOINFOHEADER*) mt.pbFormat;
    mCB1.lWidth  = vih->bmiHeader.biWidth;
    mCB1.lHeight = vih->bmiHeader.biHeight;
    FreeMediaType( mt );

 // don't buffer the samples as they pass through
    //
    hr = m_pGrabber1->SetBufferSamples( FALSE );
 if (SUCCEEDED(hr))
 {
  MessageBox(NULL,TEXT("SetBufferSamples success"),TEXT("ok"),0);
 }
 if (FAILED(hr))
 {
  MessageBox(NULL,TEXT("SetBufferSamples FAILED"),TEXT("WRONG"),0);
 }


    // only grab one at a time, stop stream after
    // grabbing one sample
    //
    hr = m_pGrabber1->SetOneShot( FALSE );
 if (SUCCEEDED(hr))
 {
  MessageBox(NULL,TEXT("SetOneShot success"),TEXT("ok"),0);
 }
 if (FAILED(hr))
 {
  MessageBox(NULL,TEXT("SetOneShot FAILED"),TEXT("WRONG"),0);
 }

    // set the callback, so we can grab the one sample
    //
    hr = m_pGrabber1->SetCallback(&mCB1, 1);
 if (SUCCEEDED(hr))
 {
  MessageBox(NULL,TEXT("SetCallback success"),TEXT("ok"),0);
 }
 if (FAILED(hr))
 {
  MessageBox(NULL,TEXT("SetCallback FAILED"),TEXT("WRONG"),0);
 }

 //get this ,just for the use to show out in the screen
 //
 hr = m_pGraph1->QueryInterface(IID_IVideoWindow, (void**)&m_pVidWin1);

 //get the Media Control object.and run the filter graph
 //
 hr = m_pGraph1->QueryInterface(IID_IMediaControl, (void **)&m_pMediaControl1);


 //2.for camera 2.....................................................
 //
 InitStillGraph( &m_pGraph2,
     &m_pGraphBuilder22,
     &m_pVCap2,
     &m_pGrabberF2,
     &m_pRenderer2,
     &m_pGrabber2     );

 // ask for the connection media type so we know how big
    // it is, so we can write out bitmaps
    //

    hr = m_pGrabber2->GetConnectedMediaType( &mt );
    if ( FAILED( hr) )
    {
        MessageBox(NULL,TEXT("Something wrong"), TEXT("Could not read the connected media type"),0);
        return ;
    }
   
    vih = (VIDEOINFOHEADER*) mt.pbFormat;
    mCB2.lWidth  = vih->bmiHeader.biWidth;
    mCB2.lHeight = vih->bmiHeader.biHeight;
    FreeMediaType( mt );

 // don't buffer the samples as they pass through
    //
    hr = m_pGrabber2->SetBufferSamples( FALSE );

    // only grab one at a time, stop stream after
    // grabbing one sample
    //
    hr = m_pGrabber2->SetOneShot( FALSE );

    // set the callback, so we can grab the one sample
    //
    hr = m_pGrabber2->SetCallback(&mCB2, 1);

 if(SUCCEEDED(hr))
 {
  MessageBox(NULL,TEXT("SetCallback success"),TEXT("ok"),0);
 }

 //get this ,just for the use to show out in the screen
 //
 hr = m_pGraph2->QueryInterface(IID_IVideoWindow, (void**)&m_pVidWin2);

 //get the Media Control object.and run the filter graph
 //
 hr = m_pGraph2->QueryInterface(IID_IMediaControl, (void **)&m_pMediaControl2);


 //now set the flag to be true
 //
 m_bFilterGraphCam1=true;
 m_bFilterGraphCam2=true;
}

/*--------------------------------------------------------------
The destructor: release all the resources.
----------------------------------------------------------------*/
BuildTwoCamFilterGraph::~BuildTwoCamFilterGraph()
{
 //step1:stop preview
 //
 m_pMediaControl1->Stop();
 m_pMediaControl2->Stop();

  //step2:release resource
 //
 m_pVidWin1->Release();
 m_pGraph1->Release();
 m_pGraphBuilder21->Release();
 m_pVCap1->Release();
 m_pMediaControl1->Release();
 m_pGrabberF1->Release();
 m_pGrabber1->Release();
 m_pRenderer1->Release();

 m_pVidWin2->Release();
 m_pGraph2->Release();
 m_pGraphBuilder22->Release();
 m_pVCap2->Release();
 m_pMediaControl2->Release();
 m_pGrabberF2->Release();
 m_pGrabber2->Release();
 m_pRenderer2->Release();
 //step3:
 //
 CoUninitialize();

}

/*-----------------------------------------------------------------
This function rebuilds both filter graphs, because sometimes the filter
connections have to be torn down. At that point only the capture filters
are left, so rebuilding can reuse the helper functions from the constructor.
^_^ The code below is copied straight from the constructor; my encapsulation
is not great.
-------------------------------------------------------------------*/
void BuildTwoCamFilterGraph::rebuildFilterGraph(void)
{
 HRESULT hr;
    //connect filters,build the filter graph
 //
 //1.for camera 1.................................................
 //
 InitStillGraph(&m_pGraph1,
             &m_pGraphBuilder21,
       &m_pVCap1,
       &m_pGrabberF1,
       &m_pRenderer1,
       &m_pGrabber1       );

 // ask for the connection media type so we know how big
    // it is, so we can write out bitmaps
    //
    AM_MEDIA_TYPE mt;
    hr = m_pGrabber1->GetConnectedMediaType( &mt );
    if ( FAILED( hr) )
    {
        MessageBox(NULL,TEXT("Something wrong"), TEXT("Could not read the connected media type"),0);
        return ;
    }
   
    VIDEOINFOHEADER * vih = (VIDEOINFOHEADER*) mt.pbFormat;
    mCB1.lWidth  = vih->bmiHeader.biWidth;
    mCB1.lHeight = vih->bmiHeader.biHeight;
    FreeMediaType( mt );

 // don't buffer the samples as they pass through
    //
    hr = m_pGrabber1->SetBufferSamples( FALSE );

    // only grab one at a time, stop stream after
    // grabbing one sample
    //
    hr = m_pGrabber1->SetOneShot( FALSE );

    // set the callback, so we can grab the one sample
    //
    hr = m_pGrabber1->SetCallback(&mCB1, 1);

 //get this ,just for the use to show out in the screen
 //
 hr = m_pGraph1->QueryInterface(IID_IVideoWindow, (void**)&m_pVidWin1);

 //get the Media Control object.and run the filter graph
 //
 hr = m_pGraph1->QueryInterface(IID_IMediaControl, (void **)&m_pMediaControl1);


 //2.for camera 2.....................................................
 //
 InitStillGraph( &m_pGraph2,
     &m_pGraphBuilder22,
     &m_pVCap2,
     &m_pGrabberF2,
     &m_pRenderer2,
     &m_pGrabber2     );

 // ask for the connection media type so we know how big
    // it is, so we can write out bitmaps
    //

    hr = m_pGrabber2->GetConnectedMediaType( &mt );
    if ( FAILED( hr) )
    {
        MessageBox(NULL,TEXT("Something wrong"), TEXT("Could not read the connected media type"),0);
        return ;
    }
   
    vih = (VIDEOINFOHEADER*) mt.pbFormat;
    mCB2.lWidth  = vih->bmiHeader.biWidth;
    mCB2.lHeight = vih->bmiHeader.biHeight;
    FreeMediaType( mt );

 // don't buffer the samples as they pass through
    //
    hr = m_pGrabber2->SetBufferSamples( FALSE );

    // only grab one at a time, stop stream after
    // grabbing one sample
    //
    hr = m_pGrabber2->SetOneShot( FALSE );

    // set the callback, so we can grab the one sample
    //
    hr = m_pGrabber2->SetCallback(&mCB2, 1);

 //get this ,just for the use to show out in the screen
 //
 hr = m_pGraph2->QueryInterface(IID_IVideoWindow, (void**)&m_pVidWin2);

 //get the Media Control object.and run the filter graph
 //
 hr = m_pGraph2->QueryInterface(IID_IMediaControl, (void **)&m_pMediaControl2);


 //now set the flag to be true
 //
 m_bFilterGraphCam1=true;
 m_bFilterGraphCam2=true;

}


void BuildTwoCamFilterGraph::rebuildFilterGraph1(void)
{
 HRESULT hr;
    //connect filters,build the filter graph
 //
 //1.for camera 1.................................................
 //
 InitStillGraph(&m_pGraph1,
             &m_pGraphBuilder21,
       &m_pVCap1,
       &m_pGrabberF1,
       &m_pRenderer1,
       &m_pGrabber1       );

 // ask for the connection media type so we know how big
    // it is, so we can write out bitmaps
    //
    AM_MEDIA_TYPE mt;
    hr = m_pGrabber1->GetConnectedMediaType( &mt );
    if ( FAILED( hr) )
    {
        MessageBox(NULL,TEXT("Something wrong"), TEXT("Could not read the connected media type"),0);
        return ;
    }
   
    VIDEOINFOHEADER * vih = (VIDEOINFOHEADER*) mt.pbFormat;
    mCB1.lWidth  = vih->bmiHeader.biWidth;
    mCB1.lHeight = vih->bmiHeader.biHeight;
    FreeMediaType( mt );

 // don't buffer the samples as they pass through
    //
    hr = m_pGrabber1->SetBufferSamples( FALSE );

    // only grab one at a time, stop stream after
    // grabbing one sample
    //
    hr = m_pGrabber1->SetOneShot( FALSE );

    // set the callback, so we can grab the one sample
    //
    hr = m_pGrabber1->SetCallback(&mCB1, 1);

 //get this ,just for the use to show out in the screen
 //
 hr = m_pGraph1->QueryInterface(IID_IVideoWindow, (void**)&m_pVidWin1);

 //get the Media Control object.and run the filter graph
 //
 hr = m_pGraph1->QueryInterface(IID_IMediaControl, (void **)&m_pMediaControl1);

 m_bFilterGraphCam1=true;
 
}
void BuildTwoCamFilterGraph::rebuildFilterGraph2(void)
{
 HRESULT hr;
 AM_MEDIA_TYPE mt;
 VIDEOINFOHEADER * vih;
 
 //2.for camera 2.....................................................
 //
 InitStillGraph( &m_pGraph2,
     &m_pGraphBuilder22,
     &m_pVCap2,
     &m_pGrabberF2,
     &m_pRenderer2,
     &m_pGrabber2     );

 // ask for the connection media type so we know how big
    // it is, so we can write out bitmaps
    //

    hr = m_pGrabber2->GetConnectedMediaType( &mt );
    if ( FAILED( hr) )
    {
        MessageBox(NULL,TEXT("Something wrong"), TEXT("Could not read the connected media type"),0);
        return ;
    }
   
    vih = (VIDEOINFOHEADER*) mt.pbFormat;
    mCB2.lWidth  = vih->bmiHeader.biWidth;
    mCB2.lHeight = vih->bmiHeader.biHeight;
    FreeMediaType( mt );

 // don't buffer the samples as they pass through
    //
    hr = m_pGrabber2->SetBufferSamples( FALSE );

    // only grab one at a time, stop stream after
    // grabbing one sample
    //
    hr = m_pGrabber2->SetOneShot( FALSE );

    // set the callback, so we can grab the one sample
    //
    hr = m_pGrabber2->SetCallback(&mCB2, 1);

 //get this ,just for the use to show out in the screen
 //
 hr = m_pGraph2->QueryInterface(IID_IVideoWindow, (void**)&m_pVidWin2);

 //get the Media Control object.and run the filter graph
 //
 hr = m_pGraph2->QueryInterface(IID_IMediaControl, (void **)&m_pMediaControl2);


 //now set the flag to be true
 //
 m_bFilterGraphCam2=true;

}


//Helper functions below-------------------------------------
//
void BuildTwoCamFilterGraph::teardownFilterGraph()
{
 //for camera 1
 TearDownGraph(m_pGraph1,m_pVidWin1,m_pVCap1);
 m_bFilterGraphCam1=false;
 m_bPreview1=false;

 //for camera 2
 TearDownGraph(m_pGraph2,m_pVidWin2,m_pVCap2);
 m_bFilterGraphCam2=false;
 m_bPreview2=false;
}

void BuildTwoCamFilterGraph::teardownFilterGraph1()
{
 //for camera 1
 TearDownGraph(m_pGraph1,m_pVidWin1,m_pVCap1);
 m_bFilterGraphCam1=false;
 m_bPreview1=false;
}

void BuildTwoCamFilterGraph::teardownFilterGraph2()
{
 //for camera 2
 TearDownGraph(m_pGraph2,m_pVidWin2,m_pVCap2);
 m_bFilterGraphCam2=false;
 m_bPreview2=false;
}

/*-----------------------------------------------------------
This function builds one filter graph.
------------------------------------------------------------*/
void BuildTwoCamFilterGraph::InitStillGraph(IGraphBuilder **ppGraph,
              ICaptureGraphBuilder2 **ppGraphBuilder2,
     IBaseFilter **ppVCap,
     IBaseFilter **ppGrabberF,
     IBaseFilter **ppRenderer,
     ISampleGrabber **ppGrabber     )
{
 //we want to change the values of the caller's pointers,
 //so first work on local copies
 IGraphBuilder *pGraph=*ppGraph;
 ICaptureGraphBuilder2 *pGraphBuilder2=*ppGraphBuilder2;
 IBaseFilter *pVCap=*ppVCap;
 IBaseFilter *pGrabberF=*ppGrabberF;
 IBaseFilter *pRenderer=*ppRenderer;
 ISampleGrabber *pGrabber=*ppGrabber;


 HRESULT hr;
    //step 1:---------------------
    //create a filter graph
 //
 hr =  CoCreateInstance(CLSID_FilterGraph, NULL,
 CLSCTX_INPROC_SERVER, IID_IGraphBuilder, (void **)&pGraph);
 if(FAILED(hr))
 {
  MessageBox(NULL,TEXT("cannot create the filter graph manager!"),TEXT("Something wrong"),0);
  return;
 }

 //step 2:-----------------------
 // Create the Capture Graph Builder.
 //
 hr = CoCreateInstance(CLSID_CaptureGraphBuilder2, NULL,
  CLSCTX_INPROC_SERVER, IID_ICaptureGraphBuilder2,
  (void **)&pGraphBuilder2);
 hr=pGraphBuilder2->SetFiltergraph(pGraph);
 if (SUCCEEDED(hr))
 {
  
  MessageBox(NULL,TEXT("m_pGraphBuilder2 success"),TEXT("ok"),0);
 }
 else
 {
  MessageBox(NULL,TEXT("m_pGraphBuilder2 failed"),TEXT("Something wrong"),0);
  return;
 }

 //step 3:---------------------------
 //connect the filters,do some setting
 //
 //create a sample grabber,stolen from sdk
 // Create the Sample Grabber.
 hr = CoCreateInstance(CLSID_SampleGrabber, NULL, CLSCTX_INPROC_SERVER,
  IID_IBaseFilter, (void**)&pGrabberF);
 if (FAILED(hr))
 {
  MessageBox(NULL,"GrabberF create failed","wrong",0);
  return ;
 }
 if (SUCCEEDED(hr))
 {
  MessageBox(NULL,"GrabberF create succeeded","right",0);
 }

 hr=pGrabberF->QueryInterface(IID_ISampleGrabber, (void**)&pGrabber);
 if(FAILED(hr))
 {
  MessageBox(NULL,"pGrabber failed","wrong",0);
 }
 if (SUCCEEDED(hr))
 {
  MessageBox(NULL,"pGrabber succeed","ok",0);
 }
 
 // force it to connect to video, 24 bit
    //
    CMediaType VideoType;
    VideoType.SetType( &MEDIATYPE_Video );
    VideoType.SetSubtype( &MEDIASUBTYPE_RGB24 );
    hr = pGrabber->SetMediaType( &VideoType ); // shouldn't fail
    if( FAILED( hr ) )
    {
        MessageBox( NULL,TEXT("Could not set media type"),TEXT("Something wrong"),0);
        return ;
    }


 //add the capture filter to the filter graph
 //
 hr = pGraph->AddFilter(pVCap, L"Capture Filter ");
 if(SUCCEEDED(hr))
 {
  MessageBox(NULL,TEXT("add capture filter to the filter graph success"),"ok",0);
 }

 // add the grabber to the graph
    //
    hr = pGraph->AddFilter( pGrabberF, L"Grabber" );
    if( FAILED( hr ) )
    {
        MessageBox( NULL,TEXT("Could not put sample grabber in graph"),TEXT("Something wrong"),0);
        return ;
    }

 //connect the filter
 //IBaseFilter* pRenderer;
 // try to render preview pin
    hr = pGraphBuilder2->RenderStream(
                            &PIN_CATEGORY_PREVIEW,
                            &MEDIATYPE_Video,
                            pVCap,
                            pGrabberF,
                            pRenderer);

 if (FAILED(hr))
 {
  MessageBox(NULL,"cannot renderStream","wrong",0);
 }
 if (SUCCEEDED(hr))
 {
  MessageBox(NULL,"renderStream succeed","ok",0);
 }


 *ppGraph=pGraph;
 *ppGraphBuilder2=pGraphBuilder2;
 *ppVCap=pVCap;
 *ppGrabberF=pGrabberF;
 *ppRenderer=pRenderer;
 *ppGrabber=pGrabber;
 }

/*-----------------------------------------------------------
This function only obtains the two capture filters and stores them in two
private members; it does not add them to any filter graph yet.
-----------------------------------------------------------*/

void BuildTwoCamFilterGraph::deviceEnum()
{
 HRESULT hr;
 // ------------------------------------------------------------------
 //                   enumerate all video capture devices

 // Create the System Device Enumerator.
    ICreateDevEnum *pCreateDevEnum=0;
    hr = CoCreateInstance(CLSID_SystemDeviceEnum, NULL, CLSCTX_INPROC_SERVER,
                          IID_ICreateDevEnum, (void**)&pCreateDevEnum);
 if(hr != NOERROR)
    {
        MessageBox(NULL,TEXT("Error Creating Device Enumerator"),TEXT("Error Creating Device Enumerator"),0);
        return ;
    }

 //Create an enumerator for the video capture category
 IEnumMoniker *pEnum=0;
    hr = pCreateDevEnum->CreateClassEnumerator(CLSID_VideoInputDeviceCategory, &pEnum, 0);
    if(hr != NOERROR)
    {
        MessageBox(NULL,TEXT("Sorry, you have no video capture hardware.\r\n\r\n")
               TEXT("Video capture will not function properly."),TEXT("Something wrong"),0);
  return ;
    }

 IMoniker *pMoniker = NULL;
 if (pEnum->Next(1, &pMoniker, NULL) == S_OK)  //get the first one
 {

  hr = pMoniker->BindToObject(0, 0, IID_IBaseFilter, (void**)&m_pVCap1);
  if (SUCCEEDED(hr))
  {
   MessageBox(NULL,"capture filter 1 succeeded","ok",0);
  }
  if (FAILED(hr))
  {
   MessageBox(NULL,"capture filter 1 failed","wrong",0);
  }

  pMoniker->Release();//release the first moniker before reusing the pointer
  pMoniker = NULL;
 }
 if (pEnum->Next(1, &pMoniker, NULL) == S_OK)  //get the second one
 {

  hr = pMoniker->BindToObject(0, 0, IID_IBaseFilter, (void**)&m_pVCap2);
  if (SUCCEEDED(hr))
  {
   MessageBox(NULL,"capture filter 2 succeeded","ok",0);
  }
  if (FAILED(hr))
  {
   MessageBox(NULL,"capture filter 2 failed","wrong",0);
  }

 }
 else
  MessageBox(NULL,TEXT("a second camera is required on this computer"),TEXT("something wrong"),0);
 if (pMoniker)
  pMoniker->Release();
 pEnum->Release();
 pCreateDevEnum->Release();

}


//-----------------------------------------------------------------
//The two functions below were copied from the sample program as helpers:
//before the pins of the capture graph can be reconfigured, all filter
//connections must be broken first.
//
/*---------------------------------------------------------------------------------
 Tear down everything downstream of a given filter.
Removes, by recursion, every filter downstream of the given filter in the graph.
stolen from amcap
----------------------------------------------------------------------------------*/
void BuildTwoCamFilterGraph::NukeDownstream(IBaseFilter *pf,IGraphBuilder *pFg)
{
    IPin *pP=0, *pTo=0;
    ULONG u;
    IEnumPins *pins = NULL;
    PIN_INFO pininfo;

    if (!pf)
        return;

    HRESULT hr = pf->EnumPins(&pins);
    pins->Reset();//rewind the enumeration to the first pin

    while(hr == NOERROR)
    {
        hr = pins->Next(1, &pP, &u);//get the next pin on this filter
        if(hr == S_OK && pP)
        {
            pP->ConnectedTo(&pTo);//find the pin on the other filter this pin connects to
            if(pTo)
            {
                hr = pTo->QueryPinInfo(&pininfo);
                if(hr == NOERROR)
                {
                    if(pininfo.dir == PINDIR_INPUT)//only follow input pins
                    {
                        NukeDownstream(pininfo.pFilter,pFg);//recurse
                        pFg->Disconnect(pTo);//disconnect both ends of the connection
                        pFg->Disconnect(pP);
                        pFg->RemoveFilter(pininfo.pFilter);//in the recursion, filters
         //with only input pins are removed first
                    }
                    pininfo.pFilter->Release();
                }
                pTo->Release();
            }
            pP->Release();
        }
    }//end while

    if(pins)
        pins->Release();
}

/*---------------------------------------------------------------------------------
 Tear down everything downstream of the capture filters, so we can build
 a different capture graph.  Notice that we never destroy the capture filters
 and WDM filters upstream of them, because then all the capture settings
 we've set would be lost.
Removes every filter downstream of the capture filter so that a new capture
graph can be built. The capture filter itself is kept: removing it would
throw away all the capture settings we have made.
stolen from sdk amcap
---------------------------------------------------------------------------------*/
void BuildTwoCamFilterGraph::TearDownGraph(IGraphBuilder *pFg,IVideoWindow *pVW,IBaseFilter *pVCap)
{
    if(pVW)//stop video rendering through IVideoWindow
    {
        // stop drawing in our window, or we may get weird repaint effects
        pVW->put_Owner(NULL);
        pVW->put_Visible(OAFALSE);
        pVW->Release();
        pVW = NULL;
    }

    // destroy the graph downstream of our capture filters
    if(pVCap)
        NukeDownstream(pVCap,pFg);//tears down everything downstream of the given filter,
                              //but the filter itself is kept
 //after this the remaining filters should still be Release()d
}

Below is the implementation of the binocular-vision class. It still contains some experimental code, for example saving the captured video frames as BMP images; that part stays for now. The global multithreaded functions only have to call the image-processing functions instead, but those image-processing functions have not been written yet.
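Saving a grabbed frame as a BMP, as copyBitmap() is meant to do, amounts to prepending the two BMP headers to the raw RGB24 buffer delivered by the sample grabber. A portable sketch under that assumption (the struct and function names are mine, not the program's; the real code would use BITMAPFILEHEADER/BITMAPINFOHEADER from the Windows headers):

```cpp
#include <cassert>
#include <cstdint>
#include <cstdio>
#include <vector>

#pragma pack(push, 1)
struct BmpFileHeader { uint16_t type; uint32_t size; uint16_t r1, r2; uint32_t offBits; };
struct BmpInfoHeader { uint32_t size; int32_t width, height; uint16_t planes, bitCount;
                       uint32_t compression, sizeImage; int32_t xppm, yppm;
                       uint32_t clrUsed, clrImportant; };
#pragma pack(pop)

bool writeBmp24(const char* path, const uint8_t* pixels, long w, long h)
{
    // DirectShow RGB24 buffers are bottom-up with rows padded to 4 bytes,
    // which is exactly what the BMP format expects, so no conversion is needed.
    uint32_t stride  = ((w * 3 + 3) / 4) * 4;
    uint32_t imgSize = stride * h;

    BmpFileHeader fh = {};
    fh.type    = 0x4D42;                           // "BM"
    fh.offBits = sizeof(fh) + sizeof(BmpInfoHeader);
    fh.size    = fh.offBits + imgSize;

    BmpInfoHeader ih = {};
    ih.size = sizeof(ih); ih.width = w; ih.height = h;
    ih.planes = 1; ih.bitCount = 24; ih.sizeImage = imgSize;

    FILE* f = std::fopen(path, "wb");
    if (!f) return false;
    std::fwrite(&fh, sizeof(fh), 1, f);
    std::fwrite(&ih, sizeof(ih), 1, f);
    std::fwrite(pixels, 1, imgSize, f);
    std::fclose(f);
    return true;
}
```

The packed headers are 14 + 40 bytes, so the pixel data always starts at offset 54.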

//binoview.h

#ifndef _BINOVIEW_H_
#define _BINOVIEW_H_

#include <Windows.h>
#include <stdlib.h>//used to convert numbers to char*


#include "dataBuffer.h"//store the definition of the global structure


//extern DATABUFFER cb1;//for camera 1
//extern DATABUFFER cb2;//for camera 2

/*---------------------------------------------------
Note: this class implements the binocular (stereo) vision algorithm.
-----------------------------------------------------*/
//The structure below stores the position of a point in an image
//
typedef struct tagPositioninImage
{
 double x;
 double y;
}IMGPOS;

//The structure below stores the position of an object in the world coordinate system
typedef struct tagPositioninWorld
{
 double x;
 double y;
 double z;
}WORLDPOS;


//Position of the object to be recognized: in the world frame and in the two images
//
typedef struct tagObjectPosition
{
 //the position of the object in each of the two images
 IMGPOS inImage1;
 IMGPOS inImage2;

 //the position of the object in the world coordinate system
 WORLDPOS inWorld;
}OBJECTPOS;

class BinoView
{
public:
 BinoView(DATABUFFER *dataBuffer1,DATABUFFER *dataBuffer2);
 ~BinoView();

 //process the two images to obtain the position of the target object in each
 //
 void processImg1(void);
 void processImg2(void);

 //this is the last step of the binocular vision
 //
 void binoCularVision(void);

 //the inline function
 //
 OBJECTPOS getBall(void){return m_ball;}
 OBJECTPOS getDoor(void){return m_door;}
 OBJECTPOS getPillar1(void){return m_pillar1;}
 OBJECTPOS getPillar2(void){return m_pillar2;}

 //These functions compute the M matrix from the camera's intrinsic and extrinsic matrices.
 //The idea is to run them whenever the intrinsic or extrinsic parameters change.
 void cam1MMatrix(void);
 void cam2MMatrix(void);


 //The functions below are for testing-----------------------------
 //
  
 //The two functions below save the data of the corresponding buffer as a BMP image
 //
 //save to bmp for buffer1:
 //
 void copyBitmap1(void);

 //save to bmp for buffer2:
 //
 void copyBitmap2(void);

 //a handy method ,just to store the data into bitmap
 //
 bool copyBitmap(DATABUFFER *pdataBuffer,LPCTSTR m_szSnappedName);

 //The function below is empty; it exists only for testing
 //
 void theLastStep(void);

 //---------------------------------------------
 //this is a litter try
 //
 int count;
 //---------------------------------------------

 //the camera parameters------------------------------
 //
 //camera1:
 double cam1M[3][4];//projection matrix M of camera 1, M1×M2

 double cam1M1[3][4];//intrinsic matrix of camera 1
 double cam1M2[4][4];//extrinsic matrix of camera 1

 //camera2:
 double cam2M[3][4];//projection matrix M of camera 2, M1×M2

 double cam2M1[3][4];//intrinsic matrix of camera 2
 double cam2M2[4][4];//extrinsic matrix of camera 2


private:
 //pointers to the frame data to be processed
 //
 DATABUFFER *pbufferCam1;
 DATABUFFER *pbufferCam2;


 //positions of the matched objects in the two images
 //

 OBJECTPOS m_ball;//now only consider ball first
 OBJECTPOS m_door;//the goal
 OBJECTPOS m_pillar1;//corner pillar 1
 OBJECTPOS m_pillar2;//corner pillar 2


 //----------------------------------------------
 //helper function
 //
 void processImg(DATABUFFER *pbufferCam,int numOfImg);

 
};

#endif

//binoview.cpp

#include "stdafx.h"

#include "binoview.h"

#include "MathUtilities.h"//a class of numerical routines

//events used for thread synchronization; they are defined at global
//scope elsewhere, so they can only be declared extern here
extern HANDLE hEventGetData;
extern HANDLE hEventProcessData1;
extern HANDLE hEventProcessData2;
extern HANDLE hEventProcessData1Finish;
extern HANDLE hEventProcessData2Finish;


//the method in public area------------------------------------------
//
BinoView::BinoView(DATABUFFER *pdataBuffer1,DATABUFFER *pdataBuffer2):
pbufferCam1(pdataBuffer1),
pbufferCam2(pdataBuffer2)
{
 //initialize the camera parameters
 //
 //the commented-out lines below do not compile (arrays are not
 //assignable in C++); they are kept only to document the intended values
 /*cam1M[3][4]={{0,0,0,0},{0,0,0,0},{0,0,0,0}};
 cam2M[3][4]={{0,0,0,0},{0,0,0,0},{0,0,0,0}};

 cam1M1[3][4]={{0,0,0,0},{0,0,0,0},{0,0,1,0}};
 cam1M2[4][4]={{0,0,0,0},{0,0,0,0},{0,0,0,0},{0,0,0,1}};

 cam2M1[3][4]={{0,0,0,0},{0,0,0,0},{0,0,1,0}};
 cam2M2[4][4]={{0,0,0,0},{0,0,0,0},{0,0,0,0},{0,0,0,1}};*/

 for (int i=0;i<3;i++)
 {
  for (int j=0; j<4; j++)
  {
   cam1M[i][j]=0;
   cam2M[i][j]=0;
   cam1M1[i][j]=0;
   cam2M1[i][j]=0;
  }
 }


 for (int i=0;i<4;i++)
 {
  for (int j=0; j<4; j++)
  {
   cam1M2[i][j]=0;
   cam2M2[i][j]=0;
  }
 }

 //special value
 cam1M1[2][2]=1;
 cam2M1[2][2]=1;

 cam1M2[3][3]=1;
 cam2M2[3][3]=1;

 //for the ball
 m_ball.inImage1.x=0;
 m_ball.inImage1.y=0;
 m_ball.inImage2.x=0;
 m_ball.inImage2.y=0;
 m_ball.inWorld.x=0;
 m_ball.inWorld.y=0;
 m_ball.inWorld.z=0;

 //for the door
 m_door.inImage1.x=0;
 m_door.inImage1.y=0;
 m_door.inImage2.x=0;
 m_door.inImage2.y=0;
 m_door.inWorld.x=0;
 m_door.inWorld.y=0;
 m_door.inWorld.z=0;

 //for the pillar1
 m_pillar1.inImage1.x=0;
 m_pillar1.inImage1.y=0;
 m_pillar1.inImage2.x=0;
 m_pillar1.inImage2.y=0;
 m_pillar1.inWorld.x=0;
 m_pillar1.inWorld.y=0;
 m_pillar1.inWorld.z=0;

 //for the pillar2
 m_pillar2.inImage1.x=0;
 m_pillar2.inImage1.y=0;
 m_pillar2.inImage2.x=0;
 m_pillar2.inImage2.y=0;
 m_pillar2.inWorld.x=0;
 m_pillar2.inWorld.y=0;
 m_pillar2.inWorld.z=0;


 


 //just for debug
 //
 count=0;
}

BinoView::~BinoView()
{
}

//the next method just to get the position in image
//
void BinoView::processImg1(void)
{
 processImg(pbufferCam1,1);

 //set the value by hand,just for debug
 //
 m_ball.inImage1.x=20;
 m_ball.inImage1.y=20;
 //set the event
    SetEvent(hEventProcessData1Finish);
}

void BinoView::processImg2(void)
{
 processImg(pbufferCam2,2);

 //set the value by hand,just for debug
 //
 m_ball.inImage2.x=20;
 m_ball.inImage2.y=20;
 //set the event
 SetEvent(hEventProcessData2Finish);
}


//this method performs the last step of the binocular vision
//
//before this step runs, the object positions in both images have
//already been obtained and the M matrices have been computed;
//there is no error handling here yet
//note: the M matrices are NOT computed here; the plan is to compute
//them whenever the user supplies the camera parameters, so this
//function only has to do the final step of the stereo computation
void BinoView::binoCularVision(void)
{
 //compute the final result by least squares from the matched points
 //in the two images; four objects are involved: the ball, the goal
 //and the two corner pillars; if the stereo computation cannot be
 //performed for an object, its world position is set to -1
 //
 double k[4][3];
 double u[4][1];
 double m[3][1];//the result

 //the middle result
 //
 double transeposeK[3][4];
 double multiKK[3][3];
 double multiKKK[3][4];

 //for ball----------------------------
 //
 if ((m_ball.inImage1.x>0)
  &&(m_ball.inImage1.y>0)
  &&(m_ball.inImage2.x>0)
  &&(m_ball.inImage2.y>0))
 {
  //give value to matrix k and matrix u
  //
  k[0][0]=m_ball.inImage1.x*cam1M[2][0]-cam1M[0][0];
  k[0][1]=m_ball.inImage1.x*cam1M[2][1]-cam1M[0][1];
  k[0][2]=m_ball.inImage1.x*cam1M[2][2]-cam1M[0][2];

  k[1][0]=m_ball.inImage1.y*cam1M[2][0]-cam1M[1][0];
  k[1][1]=m_ball.inImage1.y*cam1M[2][1]-cam1M[1][1];
  k[1][2]=m_ball.inImage1.y*cam1M[2][2]-cam1M[1][2];

  k[2][0]=m_ball.inImage2.x*cam2M[2][0]-cam2M[0][0];
  k[2][1]=m_ball.inImage2.x*cam2M[2][1]-cam2M[0][1];
  k[2][2]=m_ball.inImage2.x*cam2M[2][2]-cam2M[0][2];

  k[3][0]=m_ball.inImage2.y*cam2M[2][0]-cam2M[1][0];
  k[3][1]=m_ball.inImage2.y*cam2M[2][1]-cam2M[1][1];
  k[3][2]=m_ball.inImage2.y*cam2M[2][2]-cam2M[1][2];

  u[0][0]=cam1M[0][3]-m_ball.inImage1.x*cam1M[2][3];
  u[1][0]=cam1M[1][3]-m_ball.inImage1.y*cam1M[2][3];
  u[2][0]=cam2M[0][3]-m_ball.inImage2.x*cam2M[2][3];
  u[3][0]=cam2M[1][3]-m_ball.inImage2.y*cam2M[2][3];

  //least squares begin: m = (K^T K)^-1 K^T u
  //
  MathUtilities::MatrixTransepose(k[0],4,3,transeposeK[0]);
  MathUtilities::MatrixMultiple(transeposeK[0],k[0],3,4,3,multiKK[0]);
  MathUtilities::MatrixInv(multiKK[0],3);
  MathUtilities::MatrixMultiple(multiKK[0],transeposeK[0],3,3,4,multiKKK[0]);
  MathUtilities::MatrixMultiple(multiKKK[0],u[0],3,4,1,m[0]);

  //now store the result
  //
  m_ball.inWorld.x=m[0][0];
  m_ball.inWorld.y=m[1][0];
  m_ball.inWorld.z=m[2][0];


 }
 else
 {
  m_ball.inWorld.x=-1;  
  m_ball.inWorld.y=-1;
  m_ball.inWorld.z=-1;
 }

 //for door------------------------------
 if ((m_door.inImage1.x>0)
  &&(m_door.inImage1.y>0)
  &&(m_door.inImage2.x>0)
  &&(m_door.inImage2.y>0))
 {
  //give value to matrix k and matrix u
  //
  k[0][0]=m_door.inImage1.x*cam1M[2][0]-cam1M[0][0];
  k[0][1]=m_door.inImage1.x*cam1M[2][1]-cam1M[0][1];
  k[0][2]=m_door.inImage1.x*cam1M[2][2]-cam1M[0][2];

  k[1][0]=m_door.inImage1.y*cam1M[2][0]-cam1M[1][0];
  k[1][1]=m_door.inImage1.y*cam1M[2][1]-cam1M[1][1];
  k[1][2]=m_door.inImage1.y*cam1M[2][2]-cam1M[1][2];

  k[2][0]=m_door.inImage2.x*cam2M[2][0]-cam2M[0][0];
  k[2][1]=m_door.inImage2.x*cam2M[2][1]-cam2M[0][1];
  k[2][2]=m_door.inImage2.x*cam2M[2][2]-cam2M[0][2];

  k[3][0]=m_door.inImage2.y*cam2M[2][0]-cam2M[1][0];
  k[3][1]=m_door.inImage2.y*cam2M[2][1]-cam2M[1][1];
  k[3][2]=m_door.inImage2.y*cam2M[2][2]-cam2M[1][2];

  u[0][0]=cam1M[0][3]-m_door.inImage1.x*cam1M[2][3];
  u[1][0]=cam1M[1][3]-m_door.inImage1.y*cam1M[2][3];
  u[2][0]=cam2M[0][3]-m_door.inImage2.x*cam2M[2][3];
  u[3][0]=cam2M[1][3]-m_door.inImage2.y*cam2M[2][3];

  //least squares begin: m = (K^T K)^-1 K^T u
  //
  MathUtilities::MatrixTransepose(k[0],4,3,transeposeK[0]);
  MathUtilities::MatrixMultiple(transeposeK[0],k[0],3,4,3,multiKK[0]);
  MathUtilities::MatrixInv(multiKK[0],3);
  MathUtilities::MatrixMultiple(multiKK[0],transeposeK[0],3,3,4,multiKKK[0]);
  MathUtilities::MatrixMultiple(multiKKK[0],u[0],3,4,1,m[0]);

  //now store the result
  //
  m_door.inWorld.x=m[0][0];
  m_door.inWorld.y=m[1][0];
  m_door.inWorld.z=m[2][0];

 }
 else
 {
  m_door.inWorld.x=-1;  
  m_door.inWorld.y=-1;
  m_door.inWorld.z=-1;
 }

 //for pillar1--------------------------------
 //
 if ((m_pillar1.inImage1.x>0)
  &&(m_pillar1.inImage1.y>0)
  &&(m_pillar1.inImage2.x>0)
  &&(m_pillar1.inImage2.y>0))
 {
  //give value to matrix k and matrix u
  //
  k[0][0]=m_pillar1.inImage1.x*cam1M[2][0]-cam1M[0][0];
  k[0][1]=m_pillar1.inImage1.x*cam1M[2][1]-cam1M[0][1];
  k[0][2]=m_pillar1.inImage1.x*cam1M[2][2]-cam1M[0][2];

  k[1][0]=m_pillar1.inImage1.y*cam1M[2][0]-cam1M[1][0];
  k[1][1]=m_pillar1.inImage1.y*cam1M[2][1]-cam1M[1][1];
  k[1][2]=m_pillar1.inImage1.y*cam1M[2][2]-cam1M[1][2];

  k[2][0]=m_pillar1.inImage2.x*cam2M[2][0]-cam2M[0][0];
  k[2][1]=m_pillar1.inImage2.x*cam2M[2][1]-cam2M[0][1];
  k[2][2]=m_pillar1.inImage2.x*cam2M[2][2]-cam2M[0][2];

  k[3][0]=m_pillar1.inImage2.y*cam2M[2][0]-cam2M[1][0];
  k[3][1]=m_pillar1.inImage2.y*cam2M[2][1]-cam2M[1][1];
  k[3][2]=m_pillar1.inImage2.y*cam2M[2][2]-cam2M[1][2];

  u[0][0]=cam1M[0][3]-m_pillar1.inImage1.x*cam1M[2][3];
  u[1][0]=cam1M[1][3]-m_pillar1.inImage1.y*cam1M[2][3];
  u[2][0]=cam2M[0][3]-m_pillar1.inImage2.x*cam2M[2][3];
  u[3][0]=cam2M[1][3]-m_pillar1.inImage2.y*cam2M[2][3];

  //least squares begin: m = (K^T K)^-1 K^T u
  //
  MathUtilities::MatrixTransepose(k[0],4,3,transeposeK[0]);
  MathUtilities::MatrixMultiple(transeposeK[0],k[0],3,4,3,multiKK[0]);
  MathUtilities::MatrixInv(multiKK[0],3);
  MathUtilities::MatrixMultiple(multiKK[0],transeposeK[0],3,3,4,multiKKK[0]);
  MathUtilities::MatrixMultiple(multiKKK[0],u[0],3,4,1,m[0]);

  //now store the result
  //
  m_pillar1.inWorld.x=m[0][0];
  m_pillar1.inWorld.y=m[1][0];
  m_pillar1.inWorld.z=m[2][0];

 }
 else
 {
  m_pillar1.inWorld.x=-1;  
  m_pillar1.inWorld.y=-1;
  m_pillar1.inWorld.z=-1;
 }

 //for pillar2-----------------------------
 //
 if ((m_pillar2.inImage1.x>0)
  &&(m_pillar2.inImage1.y>0)
  &&(m_pillar2.inImage2.x>0)
  &&(m_pillar2.inImage2.y>0))
 {
  //give value to matrix k and matrix u
  //
  k[0][0]=m_pillar2.inImage1.x*cam1M[2][0]-cam1M[0][0];
  k[0][1]=m_pillar2.inImage1.x*cam1M[2][1]-cam1M[0][1];
  k[0][2]=m_pillar2.inImage1.x*cam1M[2][2]-cam1M[0][2];

  k[1][0]=m_pillar2.inImage1.y*cam1M[2][0]-cam1M[1][0];
  k[1][1]=m_pillar2.inImage1.y*cam1M[2][1]-cam1M[1][1];
  k[1][2]=m_pillar2.inImage1.y*cam1M[2][2]-cam1M[1][2];

  k[2][0]=m_pillar2.inImage2.x*cam2M[2][0]-cam2M[0][0];
  k[2][1]=m_pillar2.inImage2.x*cam2M[2][1]-cam2M[0][1];
  k[2][2]=m_pillar2.inImage2.x*cam2M[2][2]-cam2M[0][2];

  k[3][0]=m_pillar2.inImage2.y*cam2M[2][0]-cam2M[1][0];
  k[3][1]=m_pillar2.inImage2.y*cam2M[2][1]-cam2M[1][1];
  k[3][2]=m_pillar2.inImage2.y*cam2M[2][2]-cam2M[1][2];

  u[0][0]=cam1M[0][3]-m_pillar2.inImage1.x*cam1M[2][3];
  u[1][0]=cam1M[1][3]-m_pillar2.inImage1.y*cam1M[2][3];
  u[2][0]=cam2M[0][3]-m_pillar2.inImage2.x*cam2M[2][3];
  u[3][0]=cam2M[1][3]-m_pillar2.inImage2.y*cam2M[2][3];

  //least squares begin: m = (K^T K)^-1 K^T u
  //
  MathUtilities::MatrixTransepose(k[0],4,3,transeposeK[0]);
  MathUtilities::MatrixMultiple(transeposeK[0],k[0],3,4,3,multiKK[0]);
  MathUtilities::MatrixInv(multiKK[0],3);
  MathUtilities::MatrixMultiple(multiKK[0],transeposeK[0],3,3,4,multiKKK[0]);
  MathUtilities::MatrixMultiple(multiKKK[0],u[0],3,4,1,m[0]);

  //now store the result
  //
  m_pillar2.inWorld.x=m[0][0];
  m_pillar2.inWorld.y=m[1][0];
  m_pillar2.inWorld.z=m[2][0];

 }
 else
 {
  m_pillar2.inWorld.x=-1;  
  m_pillar2.inWorld.y=-1;
  m_pillar2.inWorld.z=-1;
 }


 //now everything finished
 //
 //set the event
 //
 SetEvent(hEventGetData);

}

 

//the methods in private area----------------------------------------------
//
//the parameters of this function are poorly designed: the second one
//indicates which of the two images is being processed, so that the
//results can be stored in the right slot
//
void BinoView::processImg(DATABUFFER *pbufferCam,int numOfImg)
{
 //step1:-----------------------------------------------------
 //the image-processing part: process the image and extract the image
 //coordinates of the ball, the goal and the corner pillars; if an
 //object is determined to be absent from the image, its image
 //coordinates are set to -1
 //
 //some temporary variables
 //
 IMGPOS ball;//now only consider ball first
 IMGPOS door;//the goal
 IMGPOS pillar1;//corner pillar 1
 IMGPOS pillar2;//corner pillar 2


 //image processing begins here
 //
 //just give the result, only for debug
 //(ball must be initialized too; otherwise it is copied uninitialized below)
 //
 ball.x=-1;
 ball.y=-1;
 door.x=-1;
 door.y=-1;
 pillar1.x=-1;
 pillar1.y=-1;
 pillar2.x=-1;
 pillar2.y=-1;


 //step2:--------------------------------------------
 //store the results according to which image was processed
 //
 if (numOfImg==1)
 {
  m_ball.inImage1=ball;
  m_door.inImage1=door;
  m_pillar1.inImage1=pillar1;
  m_pillar2.inImage1=pillar2;
 }
 else
 {
  m_ball.inImage2=ball;
  m_door.inImage2=door;
  m_pillar1.inImage2=pillar1;
  m_pillar2.inImage2=pillar2;
 }
}

//compute the M matrix of camera 1
void BinoView::cam1MMatrix(void)
{
 MathUtilities::MatrixMultiple(cam1M1[0],cam1M2[0],3,4,4,cam1M[0]);
}

//compute the M matrix of camera 2
void BinoView::cam2MMatrix(void)
{
 MathUtilities::MatrixMultiple(cam2M1[0],cam2M2[0],3,4,4,cam2M[0]);
}
//---------------------------------------------------------------------
//the function below is mainly for testing
//
void BinoView::theLastStep()
{
 Sleep(100);

 //bump the debug counter
 //
    count++;
 //set the event
 //
 SetEvent(hEventGetData);
}

bool BinoView::copyBitmap(DATABUFFER *pdataBuffer,LPCTSTR m_szSnappedName)
{
 //write out a bmp file
 //
 HANDLE hf=CreateFile(m_szSnappedName,GENERIC_WRITE, FILE_SHARE_READ, NULL,
                            CREATE_ALWAYS, NULL, NULL );
 if(hf==INVALID_HANDLE_VALUE)
  return 0;

 //write out the file header
 //
 BITMAPFILEHEADER bfh;
 memset(&bfh,0,sizeof(BITMAPFILEHEADER));
 bfh.bfType='MB';//0x4D42: the "BM" magic, byte-swapped for little-endian
 //size of the whole bitmap file
 bfh.bfSize=sizeof(BITMAPFILEHEADER)+sizeof(BITMAPINFOHEADER)+pdataBuffer->lBufferSize;
 //offset from the start of the file to the pixel data
 bfh.bfOffBits=sizeof(BITMAPFILEHEADER)+sizeof(BITMAPINFOHEADER);

 DWORD dwWritten=0;
 WriteFile(hf,&bfh,sizeof(bfh),&dwWritten,NULL);

 //and the bitmap format
 //
 BITMAPINFOHEADER bih;
 memset(&bih,0,sizeof(BITMAPINFOHEADER));
 bih.biSize=sizeof(BITMAPINFOHEADER);
 bih.biWidth=pdataBuffer->lWidth;
 bih.biHeight=pdataBuffer->lHeight;
 bih.biPlanes=1;
 bih.biBitCount=24;

 dwWritten=0;
 WriteFile(hf,&bih,sizeof(bih),&dwWritten,NULL);

 //and the bits themselves
 dwWritten=0;
 WriteFile(hf,pdataBuffer->pBuffer,pdataBuffer->lBufferSize,&dwWritten,NULL);

 CloseHandle(hf);
 //bFileWritten=TRUE;

 //save bitmapinfoheader for later use when repainting the window
 //memcpy(&(cb2.bih),&bih,sizeof(bih));

 return TRUE;
}

void BinoView::copyBitmap1(void)
{
 //copyBitmap(pbufferCam1,"E:\\captureBmp\\cam1.bmp");
 static int cnt=0;
 char buffer[50];
 _itoa(cnt,buffer,10);

 //use a stack buffer: the original heap allocation was never freed
 char pathName[MAX_PATH];
 const char *path="E:\\captureBmp\\cam1";
 strcpy(pathName,path);
 strcat(pathName,buffer);
 strcat(pathName,".bmp");

 copyBitmap(pbufferCam1,pathName);

 cnt++;
 SetEvent(hEventProcessData1Finish);
}
void BinoView::copyBitmap2(void)
{
 //copyBitmap(pbufferCam2,"E:\\captureBmp\\cam2.bmp");

 static int cnt=0;
 char buffer[50];
 _itoa(cnt,buffer,10);

 //use a stack buffer: the original heap allocation was never freed
 char pathName[MAX_PATH];
 const char *path="E:\\captureBmp\\cam2";
 strcpy(pathName,path);
 strcat(pathName,buffer);
 strcat(pathName,".bmp");

 copyBitmap(pbufferCam2,pathName);

 cnt++;

 SetEvent(hEventProcessData2Finish);

}

Numerical routines are an indispensable part of binocular vision. The class below is very simple: it just wraps source code from a book on numerical methods into a class that currently has only a few functions. Those few functions, however, are sufficient for the stereo computation.

//MathUtilities.h

#ifndef MATHUTILITIES_H
#define MATHUTILITIES_H

#include "math.h"

class MathUtilities
{
public:
 //multiply two matrices and store the product
 //
 static void MatrixMultiple(double a[],//the matrix on the left of the product
                double b[],//the matrix on the right of the product
                int m,//rows of a, and rows of the result c
                int n,//columns of a, and rows of b
                int k,//columns of b, and columns of the result c
                double c[]//the result matrix c
                );

 //transpose matrix a and store the result in b
 //
 static void MatrixTransepose(double a[],//the input matrix
                int m,//rows of a
                int n,//columns of a
                double b[]);

 //invert a real matrix in place by Gauss-Jordan elimination with
 //full pivoting; returns 0 if the matrix is singular
 //
 static int MatrixInv(double a[],//on input the matrix to invert; on return its inverse
            int n);//order of the matrix


};

#endif

//MathUtilities.cpp

#include "stdafx.h"
#include "MathUtilities.h"

#include <stdio.h> //printf in MatrixInv
#include <stdlib.h>//malloc/free in MatrixInv

//matrix multiplication------------------------------------------------------------------------------
//
void MathUtilities::MatrixMultiple(double a[],//the matrix on the left of the product
                  double b[],//the matrix on the right of the product
      int m,//rows of a, and rows of the result c
      int n,//columns of a, and rows of b
      int k,//columns of b, and columns of the result c
      double c[]//the result matrix c
      )
  { int i,j,l,u;
    for (i=0; i<=m-1; i++)
    for (j=0; j<=k-1; j++)
      { u=i*k+j; c[u]=0.0;
        for (l=0; l<=n-1; l++)
          c[u]=c[u]+a[i*n+l]*b[l*k+j];
      }
    return;
  }


                    
//transpose matrix a and store the result in b
//
void MathUtilities::MatrixTransepose(double a[],//the input matrix
                          int m,//rows of a
        int n,//columns of a
                          double b[])//the result matrix
{
 for (int i=0;i<m;i++)
 {
  for (int j=0;j<n;j++)
  {
   b[j*m+i]=a[i*n+j];
  }
 }
}

//Gauss-Jordan inversion of a real matrix with full pivoting
//
int MathUtilities::MatrixInv(double a[],//on input the matrix to invert; on return its inverse
        int n)//order of the matrix
  { int *is,*js,i,j,k,l,u,v;
    double d,p;
    is=(int *)malloc(n*sizeof(int));
    js=(int *)malloc(n*sizeof(int));
    for (k=0; k<=n-1; k++)
      { d=0.0;
        for (i=k; i<=n-1; i++)
        for (j=k; j<=n-1; j++)
          { l=i*n+j; p=fabs(a[l]);
            if (p>d) { d=p; is[k]=i; js[k]=j;}
          }
        if (d+1.0==1.0)
          { free(is); free(js); printf("err**not inv\n");
            return(0);
          }
        if (is[k]!=k)
          for (j=0; j<=n-1; j++)
            { u=k*n+j; v=is[k]*n+j;
              p=a[u]; a[u]=a[v]; a[v]=p;
            }
        if (js[k]!=k)
          for (i=0; i<=n-1; i++)
            { u=i*n+k; v=i*n+js[k];
              p=a[u]; a[u]=a[v]; a[v]=p;
            }
        l=k*n+k;
        a[l]=1.0/a[l];
        for (j=0; j<=n-1; j++)
          if (j!=k)
            { u=k*n+j; a[u]=a[u]*a[l];}
        for (i=0; i<=n-1; i++)
          if (i!=k)
            for (j=0; j<=n-1; j++)
              if (j!=k)
                { u=i*n+j;
                  a[u]=a[u]-a[i*n+k]*a[k*n+j];
                }
        for (i=0; i<=n-1; i++)
          if (i!=k)
            { u=i*n+k; a[u]=-a[u]*a[l];}
      }
    for (k=n-1; k>=0; k--)
      { if (js[k]!=k)
          for (j=0; j<=n-1; j++)
            { u=k*n+j; v=js[k]*n+j;
              p=a[u]; a[u]=a[v]; a[v]=p;
            }
        if (is[k]!=k)
          for (i=0; i<=n-1; i++)
            { u=i*n+k; v=i*n+is[k];
              p=a[u]; a[u]=a[v]; a[v]=p;
            }
      }
    free(is); free(js);
    return(1);
  }

The structure defined in this header holds the information of each captured video frame.

//dataBuffer.h

#ifndef _DATABUFFER_H_
#define _DATABUFFER_H_


//Structures
//the following structure used to store the data
typedef struct _callbackinfo
{
 double dblSampleTime;
 long lBufferSize;
 BYTE *pBuffer;
 //--------------
 //the members below are used to compute the final frames-per-second figure
 double dblSampleTimeBefore;
 double dblSampleTimeNow;
 double dblSampleTimeBeforeAll;
 double dblSampleTimeNowAll;
 double dblSampleTimeBetween;

 double cnt;
 //-------------
 //BITMAPINFOHEADER bih;
 long lWidth;
 long lHeight;
}DATABUFFER;


#endif

The main classes are now implemented. What follows is the main program, which calls the classes above to capture and process the video streams in real time. It contains a few global thread functions, a few events, and two global data areas that hold the video frame data; I put all of this into the file that implements the dialog class, so the dialog class here is really just an ordinary class implementing the UI. As a result, the file looks rather messy.

// twoCamCaptureBmpDlg.h : header file
//

#pragma once

#include <DShow.h>
#include <Qedit.h> //ISampleGrabberCB
#include <Windows.h>

#include <Streams.h>
#include "afxwin.h"

#pragma comment(lib,"strmiids.lib") 
#pragma comment(lib,"strmbase.lib")

// CtwoCamCaptureBmpDlg 对话框
class CtwoCamCaptureBmpDlg : public CDialog
{
// Construction
public:
 CtwoCamCaptureBmpDlg(CWnd* pParent = NULL); // standard constructor

// Dialog data
 enum { IDD = IDD_TWOCAMCAPTUREBMP_DIALOG };

 protected:
 virtual void DoDataExchange(CDataExchange* pDX); // DDX/DDV support


// Implementation
protected:
 HICON m_hIcon;

 // generated message-map functions
 virtual BOOL OnInitDialog();
 afx_msg void OnSysCommand(UINT nID, LPARAM lParam);
 afx_msg void OnPaint();
 afx_msg HCURSOR OnQueryDragIcon();
 DECLARE_MESSAGE_MAP()

private:
 //some helper functions
 //

 //configure the capture filter
 //
 void setCaptureFilter(IBaseFilter *pVCap);

 //the handle of the thread
 //
 HANDLE hGetData;
 HANDLE hProcessData1;
 HANDLE hProcessData2;
 HANDLE hGetResult;

public:
 // the window for the preview of camera 1
 CStatic m_previewCam1;
 // the window for the preview of camera 2
 CStatic m_previewCam2;

 //----------------------
 //the public method
 //
 void showResult(void);//for now this function does nothing useful, but it will later

 afx_msg BOOL OnEraseBkgnd(CDC* pDC);
 afx_msg void OnBnClickedBegin();
 afx_msg void OnBnClickedSavegrf();
 afx_msg void OnBnClickedSetpincam1();
 afx_msg void OnBnClickedSetfiltercam1();
 afx_msg void OnBnClickedSetpincam2();
 afx_msg void OnBnClickedSetfiltercam2();
 afx_msg void OnBnClickedStop();
 afx_msg void OnBnClickedStopcap();
 afx_msg void OnBnClickedCamval1();
 afx_msg void OnBnClickedCamval2();
 afx_msg void OnBnClickedProcessbmp();
};

// twoCamCaptureBmpDlg.cpp : implementation file
//

#include "stdafx.h"
#include "twoCamCaptureBmp.h"
#include "twoCamCaptureBmpDlg.h"



#ifdef _DEBUG
#define new DEBUG_NEW
#endif

//---------------------------------------
//headers
//

#include "dataBuffer.h"//store the definition of the global structure
#include "BuildTwoCamFilterGraph.h"
#include "binoview.h"

//dialogs for displaying and setting the camera parameters
#include "CamValue1.h"
#include "CamValue2.h"

//------------------------------------------------------
//the global data
//
DATABUFFER cb1={0};
DATABUFFER cb2={0};
bool g_bOneShot1=false;
bool g_bOneShot2=false;

//a pointer to the dialog, made global purely for convenience
//
CtwoCamCaptureBmpDlg *pDlg=NULL;

BuildTwoCamFilterGraph *twoCam=new BuildTwoCamFilterGraph(&cb1,&cb2);

BinoView *binoCular=new BinoView(&cb1,&cb2);

//-----------------------------------------------
//the events
//the very first get-data event is signaled when the user
//presses the begin button
//
HANDLE hEventGetData=CreateEvent(NULL,
         FALSE,//auto-reset after being signaled
         FALSE,//initially non-signaled
         TEXT("getDataEvent")
         );
HANDLE hEventProcessData1=CreateEvent(NULL,
           FALSE,
           FALSE,
           TEXT("processData1Event")
           );
HANDLE hEventProcessData2=CreateEvent(NULL,
           FALSE,
           FALSE,
           TEXT("processData2Event")
           );
HANDLE hEventProcessData1Finish=CreateEvent(NULL,
             FALSE,
             FALSE,
             TEXT("finishProcessData1Event")
             );
HANDLE hEventProcessData2Finish=CreateEvent(NULL,
           FALSE,
           FALSE,
           TEXT("finishProcessData2Event")
           );
//used for WaitForMultipleObjects
//
HANDLE hEventProcessDataFinish[2]=
{hEventProcessData1Finish,hEventProcessData2Finish};


//-----------------------------------
//the global function
//
/*-------------------------------------------------------
The thread below starts the data-acquisition step (it simply
sets g_bOneShot), but it has to wait for an event itself
--------------------------------------------------------*/
DWORD WINAPI g_getData(LPVOID p)
{
 while(true)
 {
  WaitForSingleObject(hEventGetData,INFINITE);
  //MessageBox(NULL,"getData signaled","ok",0);
  g_bOneShot1=true;
  g_bOneShot2=true;
  //note: the processing threads are not activated here; they are only
  //activated once the data has been fully copied into the buffers
 }
}

/*--------------------------------------------------------
The thread below starts the routine that processes the image data
captured from camera 1; it must wait until the data has been fully
acquired, so it too waits on an event
----------------------------------------------------------*/
DWORD WINAPI g_processData1(LPVOID p)
{
 while(true)
 {
  WaitForSingleObject(hEventProcessData1,INFINITE);
  //MessageBox(NULL,"hEventProcessData1 signaled","ok",0);
  //now process the data from camera 1
  //
  //this method is for debug
  //
  //binoCular->copyBitmap1();
  binoCular->processImg1();
  
  //when the data has been processed, hEventProcessData1Finish is signaled
  //
  //i think the event must be set by the called method itself
  //SetEvent(hEventProcessData1Finish);
 }
}

/*---------------------------------------------------------
The thread below starts the routine that processes the data captured
from camera 2; it must wait until the data has been fully acquired,
so it also needs to wait on an event
-----------------------------------------------------------*/
DWORD WINAPI g_processData2(LPVOID p)
{
 while(true)
 {
  WaitForSingleObject(hEventProcessData2,INFINITE);
  //MessageBox(NULL,"hEventProcessData2 signaled","ok",0);
  //now process the data from camera 2
  //
  //this method is just for debug
  //
  //binoCular->copyBitmap2();

  binoCular->processImg2();
  
  //when the data has been processed, hEventProcessData2Finish is signaled
  //
  //I think the event must be set by the called method itself,
  //so we can be sure the function has finished

  //SetEvent(hEventProcessData2Finish);
 }
}

/*---------------------------------------------------------------
The thread below fuses the results produced by the two threads above,
so it can only run after both of them have completed one pass; it
therefore waits on two events. Once all the data has been processed,
it signals an event so that the data-acquisition thread can run again
---------------------------------------------------------------*/
DWORD WINAPI g_getResult(LPVOID p)
{
 while(true)
 {
  WaitForMultipleObjects(2,hEventProcessDataFinish,TRUE,INFINITE);
  //MessageBox(NULL,"hEventProcessDataFinish signaled","ok",0);
  //here all data in the databuffer has been processed,
  //so you can use the data and obtain the final result
  //
  //this method is just for debug
  //
        //binoCular->theLastStep();

  binoCular->binoCularVision();

  //this calls a function of the dialog
  //
  pDlg->showResult();


  //now signal the event hEventGetData
  //
  //maybe this event should be set by the called function
  //
  //SetEvent(hEventGetData);
 }
}


// CAboutDlg dialog used for the application's About menu item

class CAboutDlg : public CDialog
{
public:
 CAboutDlg();

// Dialog data
 enum { IDD = IDD_ABOUTBOX };

 protected:
 virtual void DoDataExchange(CDataExchange* pDX);    // DDX/DDV support

// Implementation
protected:
 DECLARE_MESSAGE_MAP()
public:
 afx_msg void OnClose();
};

CAboutDlg::CAboutDlg() : CDialog(CAboutDlg::IDD)
{
}

void CAboutDlg::DoDataExchange(CDataExchange* pDX)
{
 CDialog::DoDataExchange(pDX);
}

BEGIN_MESSAGE_MAP(CAboutDlg, CDialog)
 ON_WM_CLOSE()
END_MESSAGE_MAP()


// CtwoCamCaptureBmpDlg dialog

CtwoCamCaptureBmpDlg::CtwoCamCaptureBmpDlg(CWnd* pParent /*=NULL*/)
 : CDialog(CtwoCamCaptureBmpDlg::IDD, pParent)
{
 m_hIcon = AfxGetApp()->LoadIcon(IDR_MAINFRAME);
}

void CtwoCamCaptureBmpDlg::DoDataExchange(CDataExchange* pDX)
{
 CDialog::DoDataExchange(pDX);
 DDX_Control(pDX, IDC_WINDOWCAM1, m_previewCam1);
 DDX_Control(pDX, IDC_WINDOWCAM2, m_previewCam2);
}

BEGIN_MESSAGE_MAP(CtwoCamCaptureBmpDlg, CDialog)
 ON_WM_SYSCOMMAND()
 ON_WM_PAINT()
 ON_WM_QUERYDRAGICON()
 //}}AFX_MSG_MAP
 ON_WM_ERASEBKGND()
 ON_BN_CLICKED(IDC_BEGIN, OnBnClickedBegin)
 ON_BN_CLICKED(IDC_SAVEGRF, OnBnClickedSavegrf)
 ON_BN_CLICKED(IDC_SETPINCAM1, OnBnClickedSetpincam1)
 ON_BN_CLICKED(IDC_SETFILTERCAM1, OnBnClickedSetfiltercam1)
 ON_BN_CLICKED(IDC_SETPINCAM2, OnBnClickedSetpincam2)
 ON_BN_CLICKED(IDC_SETFILTERCAM2, OnBnClickedSetfiltercam2)
 ON_BN_CLICKED(IDC_STOP, OnBnClickedStop)
 ON_BN_CLICKED(IDC_STOPCAP, OnBnClickedStopcap)
 ON_BN_CLICKED(IDC_CAMVAL1, OnBnClickedCamval1)
 ON_BN_CLICKED(IDC_CAMVAL2, OnBnClickedCamval2)
 ON_BN_CLICKED(IDC_PROCESSBMP, OnBnClickedProcessbmp)
END_MESSAGE_MAP()


// CtwoCamCaptureBmpDlg message handlers

BOOL CtwoCamCaptureBmpDlg::OnInitDialog()
{
 CDialog::OnInitDialog();

 // Add the "About..." menu item to the system menu.

 // IDM_ABOUTBOX must be in the system command range.
 ASSERT((IDM_ABOUTBOX & 0xFFF0) == IDM_ABOUTBOX);
 ASSERT(IDM_ABOUTBOX < 0xF000);

 CMenu* pSysMenu = GetSystemMenu(FALSE);
 if (pSysMenu != NULL)
 {
  CString strAboutMenu;
  strAboutMenu.LoadString(IDS_ABOUTBOX);
  if (!strAboutMenu.IsEmpty())
  {
   pSysMenu->AppendMenu(MF_SEPARATOR);
   pSysMenu->AppendMenu(MF_STRING, IDM_ABOUTBOX, strAboutMenu);
  }
 }

 // Set the icon for this dialog. The framework does this automatically
 // when the application's main window is not a dialog
 SetIcon(m_hIcon, TRUE);   // set big icon
 SetIcon(m_hIcon, FALSE);  // set small icon

 // TODO: add extra initialization here


 // Since we're embedding video in a child window of a dialog,
    // we must set the WS_CLIPCHILDREN style to prevent the bounding
    // rectangle from drawing over our video frames.
    //
    // Neglecting to set this style can lead to situations when the video
    // is erased and replaced with the default color of the bounding rectangle.
    m_previewCam1.ModifyStyle(0, WS_CLIPCHILDREN);
 m_previewCam2.ModifyStyle(0, WS_CLIPCHILDREN);


 
 //---------------------------------------------------------------------
 //set things up so that the captured video is shown on screen
 //Setting the Video Window
 //note: these settings can only be made after the filter graph has been built

 //for the camera 1
 //
 HRESULT hr;
 hr = (twoCam->getIVideoWindow1())->put_Owner((OAHWND) m_previewCam1.GetSafeHwnd());
 if (SUCCEEDED(hr))
 {
  // The video window must have the WS_CHILD style
  hr = (twoCam->getIVideoWindow1())->put_WindowStyle(WS_CHILD);
  // Read coordinates of video container window
  RECT rc;
  m_previewCam1.GetClientRect(&rc);
  long width =  rc.right - rc.left;
  long height = rc.bottom - rc.top;
  // Ignore the video's original size and stretch to fit bounding rectangle
  hr = (twoCam->getIVideoWindow1())->SetWindowPosition(rc.left, rc.top, width, height);
  (twoCam->getIVideoWindow1())->put_Visible(OATRUE);
 }

 //for the camera2
 //
 hr = (twoCam->getIVideoWindow2())->put_Owner((OAHWND) m_previewCam2.GetSafeHwnd());
 if (SUCCEEDED(hr))
 {
  // The video window must have the WS_CHILD style
  hr = (twoCam->getIVideoWindow2())->put_WindowStyle(WS_CHILD);
  // Read coordinates of video container window
  RECT rc;
  m_previewCam2.GetClientRect(&rc);
  long width =  rc.right - rc.left;
  long height = rc.bottom - rc.top;
  // Ignore the video's original size and stretch to fit bounding rectangle
  hr = (twoCam->getIVideoWindow2())->SetWindowPosition(rc.left, rc.top, width, height);
  (twoCam->getIVideoWindow2())->put_Visible(OATRUE);
 }

 //now set something in order to preview as soon as the window built
 //

 hr=(twoCam->getIMediaControl1())->Run();
 if (SUCCEEDED(hr))
 {
  MessageBox("run the graph");
 }
 (twoCam->getIMediaControl2())->Run();

 twoCam->setFilterGraph1Run(true);
 twoCam->setFilterGraph2Run(true);

 //now give the pDlg a value
 //
 pDlg=this;
 return TRUE;  // return TRUE unless you set the focus to a control
}

void CtwoCamCaptureBmpDlg::OnSysCommand(UINT nID, LPARAM lParam)
{
 if ((nID & 0xFFF0) == IDM_ABOUTBOX)
 {
  CAboutDlg dlgAbout;
  dlgAbout.DoModal();
 }
 else
 {
  CDialog::OnSysCommand(nID, lParam);
 }
}

// If you add a minimize button to your dialog, you will need the code below
// to draw the icon. For MFC applications using the document/view model,
// this is automatically done for you by the framework.

void CtwoCamCaptureBmpDlg::OnPaint()
{
 if (IsIconic())
 {
  CPaintDC dc(this); // device context for painting

  SendMessage(WM_ICONERASEBKGND, reinterpret_cast<WPARAM>(dc.GetSafeHdc()), 0);

  // center the icon in the client rectangle
  int cxIcon = GetSystemMetrics(SM_CXICON);
  int cyIcon = GetSystemMetrics(SM_CYICON);
  CRect rect;
  GetClientRect(&rect);
  int x = (rect.Width() - cxIcon + 1) / 2;
  int y = (rect.Height() - cyIcon + 1) / 2;

  // draw the icon
  dc.DrawIcon(x, y, m_hIcon);
 }
 }
 else
 {
  CDialog::OnPaint();
 }
}

//the system calls this function to obtain the cursor to display while the user drags the minimized window
HCURSOR CtwoCamCaptureBmpDlg::OnQueryDragIcon()
{
 return static_cast<HCURSOR>(m_hIcon);
}

/*------------------------------------------------------------
the function below configures the capture filter via its property pages
-------------------------------------------------------------*/
void CtwoCamCaptureBmpDlg::setCaptureFilter(IBaseFilter *pVCap)
{
 //use the currently active window as the parent of the property frame
 //(the original "CWnd tt; tt.GetActiveWindow();" discarded the returned
 //pointer and left tt.m_hWnd NULL)
 CWnd *pParentWnd=CWnd::GetActiveWindow();

 ISpecifyPropertyPages *pProp;
 HRESULT hr = pVCap->QueryInterface(IID_ISpecifyPropertyPages, (void **)&pProp);
 if (SUCCEEDED(hr))
 {
  // Get the filter's name and IUnknown pointer.
  FILTER_INFO FilterInfo;
  hr = pVCap->QueryFilterInfo(&FilterInfo);
  IUnknown *pFilterUnk;
  pVCap->QueryInterface(IID_IUnknown, (void **)&pFilterUnk);

  // Show the page.
  CAUUID caGUID;
  pProp->GetPages(&caGUID);
  pProp->Release();
  OleCreatePropertyFrame(
   pParentWnd->GetSafeHwnd(),   // Parent window
   0, 0,                   // Reserved
   FilterInfo.achName,     // Caption for the dialog box
   1,                      // Number of objects (just the filter)
   &pFilterUnk,            // Array of object pointers.
   caGUID.cElems,          // Number of property pages
   caGUID.pElems,          // Array of property page CLSIDs
   0,                      // Locale identifier
   0, NULL                 // Reserved
  );

  // Clean up.
  pFilterUnk->Release();
  FilterInfo.pGraph->Release();
  CoTaskMemFree(caGUID.pElems);
 }
}

BOOL CtwoCamCaptureBmpDlg::OnEraseBkgnd(CDC* pDC)
{
 // exclude the two preview windows from background erasing
 CRect rc;
 m_previewCam1.GetWindowRect(&rc);
 ScreenToClient(&rc);
 pDC->ExcludeClipRect(&rc);

 m_previewCam2.GetWindowRect(&rc);
 ScreenToClient(&rc);
 pDC->ExcludeClipRect(&rc);

 return CDialog::OnEraseBkgnd(pDC);
}

void CAboutDlg::OnClose()
{
 //release the global objects
 delete twoCam;

 delete binoCular;

 CDialog::OnClose();
}

void CtwoCamCaptureBmpDlg::OnBnClickedBegin()
{
 //in this begin-button handler: first check whether each filter graph
 //is connected, and rebuild it if not; then set up the display of the
 //video stream so that the captured frames are shown; then run the
 //filter graph; finally recreate the worker threads and signal the
 //event that sets them running
 //
  //for camera 1-----------------
 //
 if (!twoCam->isFilterGraph1Built())
 {
  twoCam->rebuildFilterGraph1();
  twoCam->setFilterGraph1Build(true);

  //for the camera 1
  //
  HRESULT hr;
  hr = (twoCam->getIVideoWindow1())->put_Owner((OAHWND) m_previewCam1.GetSafeHwnd());
  if (SUCCEEDED(hr))
  {
   // The video window must have the WS_CHILD style
   hr = (twoCam->getIVideoWindow1())->put_WindowStyle(WS_CHILD);
   // Read coordinates of video container window
   RECT rc;
   m_previewCam1.GetClientRect(&rc);
   long width =  rc.right - rc.left;
   long height = rc.bottom - rc.top;
   // Ignore the video's original size and stretch to fit bounding rectangle
   hr = (twoCam->getIVideoWindow1())->SetWindowPosition(rc.left, rc.top, width, height);
   (twoCam->getIVideoWindow1())->put_Visible(OATRUE);
  }
 }
 if (!twoCam->isFilterGraph1Run())
 {
  twoCam->getIMediaControl1()->Run();
  twoCam->setFilterGraph1Run(true);
 }

 //for camera 2---------------------------------
 //
 HRESULT hr;
 if (!twoCam->isFilterGraph2Built())
 {
  twoCam->rebuildFilterGraph2();
  twoCam->setFilterGraph2Build(true);

  //for the camera 2
  //
  hr = (twoCam->getIVideoWindow2())->put_Owner((OAHWND) m_previewCam2.GetSafeHwnd());
  if (SUCCEEDED(hr))
  {
   // The video window must have the WS_CHILD style
   hr = (twoCam->getIVideoWindow2())->put_WindowStyle(WS_CHILD);
   // Read coordinates of video container window
   RECT rc;
   m_previewCam2.GetClientRect(&rc);
   long width =  rc.right - rc.left;
   long height = rc.bottom - rc.top;
   // Ignore the video's original size and stretch to fit bounding rectangle
   hr = (twoCam->getIVideoWindow2())->SetWindowPosition(rc.left, rc.top, width, height);
   (twoCam->getIVideoWindow2())->put_Visible(OATRUE);
  }
 }
 if (!twoCam->isFilterGraph2Run())
 {
  twoCam->getIMediaControl2()->Run();
  twoCam->setFilterGraph2Run(true);
 }

  
 // Create the worker threads; the global thread functions drive the
 // capture/processing cycle.
 //
 hGetData = CreateThread(NULL, 0, g_getData, 0, 0, NULL);
 hProcessData1 = CreateThread(NULL, 0, g_processData1, 0, 0, NULL);
 hProcessData2 = CreateThread(NULL, 0, g_processData2, 0, 0, NULL);
 hGetResult = CreateThread(NULL, 0, g_getResult, 0, 0, NULL);

 //Fire hEventGetData so the capture/processing cycle starts
 //
 SetEvent(hEventGetData);
}

void CtwoCamCaptureBmpDlg::OnBnClickedSavegrf()
{
 // TODO: Add your control notification handler code here
 HRESULT hr;
 CFileDialog dlg(TRUE);

 if (dlg.DoModal()==IDOK)
 {
  WCHAR wFileName[MAX_PATH];
    MultiByteToWideChar(CP_ACP, 0, dlg.GetPathName(), -1, wFileName, MAX_PATH);

  IStorage* pStorage=NULL;

  // First, create a document file that will hold the GRF file
  hr = ::StgCreateDocfile(
  wFileName,
  STGM_CREATE|STGM_TRANSACTED|STGM_READWRITE|STGM_SHARE_EXCLUSIVE,
  0, &pStorage);
  if (FAILED(hr))
  {
    AfxMessageBox(TEXT("Cannot create the document file"));
   return;
  }

   // Next, create a stream inside it to store the graph.
  WCHAR wszStreamName[] = L"ActiveMovieGraph";
  IStream *pStream;
  hr = pStorage->CreateStream(
    wszStreamName,
    STGM_WRITE|STGM_CREATE|STGM_SHARE_EXCLUSIVE,
    0, 0, &pStream);
  if(FAILED(hr))
  {
    AfxMessageBox(TEXT("Cannot create the stream"));
   pStorage->Release();
   return;
  }

  // IPersistStream::Save writes the graph into the stream as a
  // persistent object.
  IPersistStream *pPersist = NULL;
  hr = (twoCam->getIGraphBuilder1())->QueryInterface(IID_IPersistStream,
   reinterpret_cast<void**>(&pPersist));
  if (FAILED(hr))
  {
   pStream->Release();
   pStorage->Release();
   return;
  }
  hr = pPersist->Save(pStream, TRUE);
  pStream->Release();
  pPersist->Release();

  if(SUCCEEDED(hr))
  {
   hr = pStorage->Commit(STGC_DEFAULT);
   if (FAILED(hr))
   {
     AfxMessageBox(TEXT("Cannot commit the storage"));
   }
  }
  pStorage->Release();

 }

}

void CtwoCamCaptureBmpDlg::OnBnClickedSetpincam1()
{
 // TODO: Add your control notification handler code here
 HWND hwndParent = ::GetActiveWindow();

 HRESULT hr;
 IAMStreamConfig *pSC;

 if (twoCam->isFilterGraph1Run())
 {
  twoCam->getIMediaControl1()->Stop();
  twoCam->setFilterGraph1Run(false);
 }

 if (twoCam->isFilterGraph1Built())
 {
  twoCam->setFilterGraph1Build(false);
  twoCam->teardownFilterGraph1();   // a built graph could prevent the dialog from working
 }

 hr = (twoCam->getICaptureGraphBuilder21())->FindInterface(&PIN_CATEGORY_CAPTURE,
                        &MEDIATYPE_Video, twoCam->getCaptureFilter1(),
                        IID_IAMStreamConfig, (void **)&pSC);
 if (FAILED(hr))
  return;

 ISpecifyPropertyPages *pSpec;
 CAUUID cauuid;
 hr = pSC->QueryInterface(IID_ISpecifyPropertyPages, (void **)&pSpec);
 if (hr == S_OK)
 {
  hr = pSpec->GetPages(&cauuid);
  // Show the pin property page
  hr = OleCreatePropertyFrame(hwndParent, 30, 30, NULL, 1,
   (IUnknown **)&pSC, cauuid.cElems,
   (GUID *)cauuid.pElems, 0, 0, NULL);

  // !!! What if changing output formats couldn't reconnect
  // and the graph is broken?  Shouldn't be possible...
  CoTaskMemFree(cauuid.pElems);
  pSpec->Release();
 }
 pSC->Release();
}

void CtwoCamCaptureBmpDlg::OnBnClickedSetfiltercam1()
{
 // TODO: Add your control notification handler code here
 setCaptureFilter(twoCam->getCaptureFilter1());
}

void CtwoCamCaptureBmpDlg::OnBnClickedSetpincam2()
{
 // TODO: Add your control notification handler code here
 HWND hwndParent = ::GetActiveWindow();

 HRESULT hr;
 IAMStreamConfig *pSC;

 if (twoCam->isFilterGraph2Run())
 {
  twoCam->getIMediaControl2()->Stop();
  twoCam->setFilterGraph2Run(false);
 }

 if (twoCam->isFilterGraph2Built())
 {
  twoCam->setFilterGraph2Build(false);
  twoCam->teardownFilterGraph2();   // a built graph could prevent the dialog from working
 }

 hr = (twoCam->getICaptureGraphBuilder22())->FindInterface(&PIN_CATEGORY_CAPTURE,
                        &MEDIATYPE_Video, twoCam->getCaptureFilter2(),
                        IID_IAMStreamConfig, (void **)&pSC);
 if (FAILED(hr))
  return;

 ISpecifyPropertyPages *pSpec;
 CAUUID cauuid;
 hr = pSC->QueryInterface(IID_ISpecifyPropertyPages, (void **)&pSpec);
 if (hr == S_OK)
 {
  hr = pSpec->GetPages(&cauuid);
  // Show the pin property page
  hr = OleCreatePropertyFrame(hwndParent, 30, 30, NULL, 1,
   (IUnknown **)&pSC, cauuid.cElems,
   (GUID *)cauuid.pElems, 0, 0, NULL);

  // !!! What if changing output formats couldn't reconnect
  // and the graph is broken?  Shouldn't be possible...
  CoTaskMemFree(cauuid.pElems);
  pSpec->Release();
 }
 pSC->Release();
}

void CtwoCamCaptureBmpDlg::OnBnClickedSetfiltercam2()
{
 // TODO: Add your control notification handler code here
 setCaptureFilter(twoCam->getCaptureFilter2());
}

void CtwoCamCaptureBmpDlg::OnBnClickedStop()
{
 // TODO: Add your control notification handler code here
 // Terminate all the worker threads. These threads only do the frame
 // grabbing, so killing them has no effect on the video streams, which
 // keep running.
 //
 DWORD dwExitCode = 0;
 BOOL bSuccess;
 bSuccess = TerminateThread(hGetData, dwExitCode);
 bSuccess = TerminateThread(hProcessData1, dwExitCode);
 bSuccess = TerminateThread(hProcessData2, dwExitCode);
 bSuccess = TerminateThread(hGetResult, dwExitCode);

}

void CtwoCamCaptureBmpDlg::showResult(void)
{
 char buffer[50];

 _itoa(binoCular->count,buffer,10);
 SetDlgItemText(IDC_EDIT1,buffer);

 _itoa(cb1.cnt,buffer,10);
 SetDlgItemText(IDC_EDIT2,buffer);
 


}

void CtwoCamCaptureBmpDlg::OnBnClickedStopcap()
{
 // TODO: Add your control notification handler code here
 // For debugging: freeze the pipeline so the frames we need can be
 // displayed and handed to the image processing code.
 // 1. Kill the worker threads.
 // 2. Stop the filter graphs.
 // 3. Draw the last captured frames into the dialog.
 //
 //step1: kill the threads-----------------------------
 //
 DWORD dwExitCode = 0;
 BOOL bSuccess;
 bSuccess = TerminateThread(hGetData, dwExitCode);
 bSuccess = TerminateThread(hProcessData1, dwExitCode);
 bSuccess = TerminateThread(hProcessData2, dwExitCode);
 bSuccess = TerminateThread(hGetResult, dwExitCode);

 //step2:stop the filter graph---------------------------
 //
 //for camera 1:
 if (twoCam->isFilterGraph1Run())
 {
  twoCam->getIMediaControl1()->Stop();
  twoCam->setFilterGraph1Run(false);
 }
 //for camera 2:
 if (twoCam->isFilterGraph2Run())
 {
  twoCam->getIMediaControl2()->Stop();
  twoCam->setFilterGraph2Run(false);
 }

 //step3:show the last bitmap we have gotten----------------
 //
 //for camera 1
 //
 BITMAPINFOHEADER bih1;
 memset(&bih1,0,sizeof(BITMAPINFOHEADER));
 bih1.biSize=sizeof(BITMAPINFOHEADER);
 bih1.biWidth=cb1.lWidth;
 bih1.biHeight=cb1.lHeight;
 bih1.biPlanes=1;
 bih1.biBitCount=24;

 CWnd* pWndCam1=GetDlgItem(IDC_WINDOWCAM1);
 CDC* theDCCam1=pWndCam1->GetDC();
 CRect rectCam1;
 pWndCam1->GetClientRect(&rectCam1);

 StretchDIBits(theDCCam1->m_hDC,
     rectCam1.left, rectCam1.top,
     rectCam1.right-rectCam1.left,
     rectCam1.bottom-rectCam1.top,
     0, 0,
     bih1.biWidth,
     bih1.biHeight,
     cb1.pBuffer,
     (LPBITMAPINFO)&bih1,
     DIB_RGB_COLORS,
     SRCCOPY);
 pWndCam1->ReleaseDC(theDCCam1);   // GetDC must be paired with ReleaseDC

 //for camera 2
 //
 BITMAPINFOHEADER bih2;
 memset(&bih2, 0, sizeof(BITMAPINFOHEADER));
 bih2.biSize = sizeof(BITMAPINFOHEADER);
 bih2.biWidth = cb2.lWidth;    // was cb1: use camera 2's own dimensions
 bih2.biHeight = cb2.lHeight;
 bih2.biPlanes = 1;
 bih2.biBitCount = 24;

 CWnd* pWndCam2=GetDlgItem(IDC_WINDOWCAM2);
 CDC* theDCCam2=pWndCam2->GetDC();
 CRect rectCam2;
 pWndCam2->GetClientRect(&rectCam2);

 StretchDIBits(theDCCam2->m_hDC,
     rectCam2.left, rectCam2.top,
     rectCam2.right-rectCam2.left,
     rectCam2.bottom-rectCam2.top,
     0, 0,
     bih2.biWidth,
     bih2.biHeight,
     cb2.pBuffer,
     (LPBITMAPINFO)&bih2,
     DIB_RGB_COLORS,
     SRCCOPY);
 pWndCam2->ReleaseDC(theDCCam2);   // GetDC must be paired with ReleaseDC


 //step4: store the bitmaps into memory
 //
 binoCular->copyBitmap1();
 binoCular->copyBitmap2();

}

void CtwoCamCaptureBmpDlg::OnBnClickedCamval1()
{
 // TODO: Add your control notification handler code here
 CCamValue1 dlg;

 if(dlg.DoModal()!=IDOK)
 {
  return;
 }

}

void CtwoCamCaptureBmpDlg::OnBnClickedCamval2()
{
 // TODO: Add your control notification handler code here
 CCamValue2 dlg;

 if(dlg.DoModal()!=IDOK)
 {
  return;
 }
}

void CtwoCamCaptureBmpDlg::OnBnClickedProcessbmp()
{
 // TODO: Add your control notification handler code here

 //step1:------------------------------------
 //for cam1
 //
 binoCular->processImg1();

 //for cam2
 //
 binoCular->processImg2();

 //step2:show the bitmap changed---------------
 //
 //for camera 1
 //
 BITMAPINFOHEADER bih1;
 memset(&bih1,0,sizeof(BITMAPINFOHEADER));
 bih1.biSize=sizeof(BITMAPINFOHEADER);
 bih1.biWidth=cb1.lWidth;
 bih1.biHeight=cb1.lHeight;
 bih1.biPlanes=1;
 bih1.biBitCount=24;

 CWnd* pWndCam1=GetDlgItem(IDC_WINDOWCAM1);
 CDC* theDCCam1=pWndCam1->GetDC();
 CRect rectCam1;
 pWndCam1->GetClientRect(&rectCam1);

 StretchDIBits(theDCCam1->m_hDC,
     rectCam1.left, rectCam1.top,
     rectCam1.right-rectCam1.left,
     rectCam1.bottom-rectCam1.top,
     0, 0,
     bih1.biWidth,
     bih1.biHeight,
     cb1.pBuffer,
     (LPBITMAPINFO)&bih1,
     DIB_RGB_COLORS,
     SRCCOPY);
 pWndCam1->ReleaseDC(theDCCam1);   // GetDC must be paired with ReleaseDC

 //for camera 2
 //
 BITMAPINFOHEADER bih2;
 memset(&bih2, 0, sizeof(BITMAPINFOHEADER));
 bih2.biSize = sizeof(BITMAPINFOHEADER);
 bih2.biWidth = cb2.lWidth;    // was cb1: use camera 2's own dimensions
 bih2.biHeight = cb2.lHeight;
 bih2.biPlanes = 1;
 bih2.biBitCount = 24;

 CWnd* pWndCam2=GetDlgItem(IDC_WINDOWCAM2);
 CDC* theDCCam2=pWndCam2->GetDC();
 CRect rectCam2;
 pWndCam2->GetClientRect(&rectCam2);

 StretchDIBits(theDCCam2->m_hDC,
     rectCam2.left, rectCam2.top,
     rectCam2.right-rectCam2.left,
     rectCam2.bottom-rectCam2.top,
     0, 0,
     bih2.biWidth,
     bih2.biHeight,
     cb2.pBuffer,
     (LPBITMAPINFO)&bih2,
     DIB_RGB_COLORS,
     SRCCOPY);
 pWndCam2->ReleaseDC(theDCCam2);   // GetDC must be paired with ReleaseDC
 

 //step3:----------------------------------------------
 //
 binoCular->binoCularVision();

 //step4:show the result out-----------
 //
 CString str;
 str.Format(TEXT("x=%f\ny=%f\nz=%f"),   // was "/n": use real newlines
           binoCular->getBall().inWorld.x,
     binoCular->getBall().inWorld.y,
     binoCular->getBall().inWorld.z);
 MessageBox(str);

 }

The code above covers most of this stereo vision framework, and quite a few places still need refinement. The camera-parameter settings are handled by two separate dialogs, but that code is pure grunt work, so it is not reproduced here. Part of the video capture code follows the samples that ship with DirectShow.


