This is a record of putting together a secondary-development demo on top of a camera vendor's SDK.
Industrial cameras ship with their own vendor software, which already covers the hardware configuration for every model that vendor makes, so we do not need to worry about that layer; we only need to know what it does for us. The programming model and workflow of industrial cameras are described here:
https://blog.csdn.net/wenzhou1219/article/details/45874779
Once the programming principles and workflow are clear, the next step is to find the SDK the camera vendor provides.
Take Basler as an example: go to the Software section of the official Basler website and download the version matching your operating system.
After downloading, run the installer. One of the steps lets you choose the Development installation mode; select it and follow the remaining prompts.
A pylon installed in Development mode has one extra folder compared with a normal installation. Find it under the installation path (normally C:\Program Files\Basler\pylon 6\Development); the folder is named Development.
It contains the include and lib directories Basler provides, the corresponding Documentation, and a set of samples to play with.
Open the Samples folder and you will see it is a Visual Studio solution. Loading it in VS usually fails at first, because VS is missing the components that the Basler developers' projects depend on. Install those components as prompted.
Tip: the component installation becomes very slow near the end; from what I could find, it is reportedly setting up a system restore point. Better slow than breaking the existing system just to add a component.
...Thousand Years Later...
Take the pylon grab workflow as an example.
Overall flow:
1. Load the camera object
2. Load the stream grabber object
3. Grab images continuously or one frame at a time
4. Unload the stream grabber object
5. Unload the camera object
In general, steps 1 and 5 are the same for any program written against this SDK, and event objects follow the same pattern: load the event object, do the work, unload the event object.
Each step is analyzed below, with a minimal code sketch after each one:
1. Load the camera object
1.1 Initialize the dynamic system resources
PylonInitialize //sets up the internal structures and reserves space for some global state
1.2 Detect the number of connected cameras
PylonEnumerateDevices //enumerate the devices
1.3 Create the camera object
PylonCreateDeviceByIndex //create a device object for the camera at the chosen index
1.4 Open the camera object
PylonDeviceOpen
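For reference, here is a minimal sketch of step 1 against the pylon C API (pylonc), following the pattern of Basler's C samples. The CHECK macro is my own shorthand and the access-mode flags are taken from those samples; treat this as a sketch rather than a drop-in implementation.

#include <pylonc/PylonC.h>
#include <cstdio>
#include <cstdlib>

//abort on any pylon error; real code would report the GENAPIC_RESULT instead
#define CHECK(call) do { if ((call) != GENAPI_E_OK) { std::fprintf(stderr, "pylon call failed\n"); std::exit(1); } } while (0)

int main()
{
    PYLON_DEVICE_HANDLE hDev;
    size_t numDevices = 0;

    CHECK(PylonInitialize());                      //1.1 initialize the runtime
    CHECK(PylonEnumerateDevices(&numDevices));     //1.2 count the connected cameras
    if (numDevices == 0)
    {
        std::fprintf(stderr, "no camera found\n");
        PylonTerminate();
        return 1;
    }
    CHECK(PylonCreateDeviceByIndex(0, &hDev));     //1.3 create a device object for camera 0
    CHECK(PylonDeviceOpen(hDev, PYLONC_ACCESS_MODE_CONTROL | PYLONC_ACCESS_MODE_STREAM)); //1.4 open it

    //...steps 2-5 continue in the sketches below...
    return 0;
}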
2. Load the stream grabber object
2.1 Detect the number of stream grab channels
PylonDeviceGetNumStreamGrabberChannels
2.2 Get the stream grabber object
PylonDeviceGetStreamGrabber
2.3 Open the object
PylonStreamGrabberOpen
2.4 Get the synchronization (wait) object
PylonStreamGrabberGetWaitObject
2.5 Set the grab parameters
PylonStreamGrabberSetMaxNumBuffer //number of buffers
PylonStreamGrabberSetMaxBufferSize //size of each buffer
2.6 Allocate the memory for each buffer
new
2.7 Allocate the resources needed for grabbing
PylonStreamGrabberPrepareGrab //the allocated resources cover four parts:
//1. dynamic memory allocation
//2. driver data memory allocation
//3. xxx camera synchronization channel
//4. xxx camera synchronization bandwidth
//(after this call, every parameter that affects resource allocation becomes read-only)
2.8 Register the buffers with the grabber object
PylonStreamGrabberRegisterBuffer
2.9 Queue each buffer into the grabber's input queue
PylonStreamGrabberQueueBuffer
Within step 2, 2.1-2.4 open the stream grabber object, while 2.5-2.9 allocate the resources needed for grabbing; a sketch follows.
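Continuing the sketch for step 2, reusing hDev and CHECK from above. The buffer count of 5 is arbitrary, and reading PayloadSize this way follows the pylon C samples.

#define NUM_BUFFERS 5

PYLON_STREAMGRABBER_HANDLE hGrabber;
PYLON_WAITOBJECT_HANDLE hWait;
PYLON_STREAMBUFFER_HANDLE bufHandles[NUM_BUFFERS];
unsigned char* buffers[NUM_BUFFERS];
size_t numStreams = 0;
int32_t payloadSize = 0;

CHECK(PylonDeviceGetIntegerFeatureInt32(hDev, "PayloadSize", &payloadSize)); //size of one frame in bytes
CHECK(PylonDeviceGetNumStreamGrabberChannels(hDev, &numStreams));    //2.1 number of stream channels
CHECK(PylonDeviceGetStreamGrabber(hDev, 0, &hGrabber));              //2.2 grabber for channel 0
CHECK(PylonStreamGrabberOpen(hGrabber));                             //2.3 open it
CHECK(PylonStreamGrabberGetWaitObject(hGrabber, &hWait));            //2.4 wait object signalled when a buffer is filled
CHECK(PylonStreamGrabberSetMaxNumBuffer(hGrabber, NUM_BUFFERS));     //2.5 number of buffers
CHECK(PylonStreamGrabberSetMaxBufferSize(hGrabber, payloadSize));    //    size of each buffer
for (int i = 0; i < NUM_BUFFERS; ++i)
    buffers[i] = new unsigned char[payloadSize];                     //2.6 allocate the buffer memory
CHECK(PylonStreamGrabberPrepareGrab(hGrabber));                      //2.7 allocate the grab resources
for (int i = 0; i < NUM_BUFFERS; ++i)
{
    CHECK(PylonStreamGrabberRegisterBuffer(hGrabber, buffers[i], payloadSize, &bufHandles[i])); //2.8 register
    CHECK(PylonStreamGrabberQueueBuffer(hGrabber, bufHandles[i], NULL));                        //2.9 queue for grabbing
}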
3. Single-frame or continuous grabbing
3.1 Set the grab mode to continuous and start grabbing
PylonDeviceFeatureFromString //set continuous mode
PylonDeviceExecuteCommandFeature //issue the start-acquisition command
3.2 Wait for the next buffer to be filled
PylonWaitObjectWait
3.3 Retrieve the grab result and display it
PylonStreamGrabberRetrieveResult
3.4 Put the buffer back into the grabber's input queue
PylonStreamGrabberQueueBuffer
Repeating steps 3.2-3.4 once or many times gives single-frame or continuous grabbing; a sketch of the loop follows.
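A sketch of the grab loop for step 3, reusing hDev, hGrabber and hWait from above; the feature names come from the GenICam SFNC as used in the pylon C samples, and the 100-frame limit is just for illustration.

CHECK(PylonDeviceFeatureFromString(hDev, "AcquisitionMode", "Continuous")); //3.1 continuous mode
CHECK(PylonDeviceExecuteCommandFeature(hDev, "AcquisitionStart"));          //    start grabbing

for (int frame = 0; frame < 100; ++frame)
{
    _Bool ready = 0;                  //boolean type used by the pylon C API
    PylonGrabResult_t result;

    CHECK(PylonWaitObjectWait(hWait, 1000, &ready));                    //3.2 wait up to 1 s for a filled buffer
    if (!ready)
        break;                                                          //    timeout: no image arrived
    CHECK(PylonStreamGrabberRetrieveResult(hGrabber, &result, &ready)); //3.3 fetch the grab result
    if (ready && result.Status == Grabbed)
    {
        //result.pBuffer points to the image data: display or process it here
    }
    CHECK(PylonStreamGrabberQueueBuffer(hGrabber, result.hBuffer, NULL)); //3.4 put the buffer back into the input queue
}
CHECK(PylonDeviceExecuteCommandFeature(hDev, "AcquisitionStop"));         //stop acquisition when done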
4. Unload the stream grabber object
4.1 Move all buffers onto the stream grabber's output queue
PylonStreamGrabberCancelGrab
4.2 Take everything remaining off the output queue
PylonStreamGrabberRetrieveResult
4.3 Deregister the allocated buffers
PylonStreamGrabberDeregisterBuffer
4.4 Free the buffer memory
delete
4.5 Release the related allocated resources
PylonStreamGrabberFinishGrab
4.6 Close the stream grabber object
PylonStreamGrabberClose
Steps 4.1-4.5 all release grab resources; a sketch follows.
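And the matching teardown for step 4, again continuing from the sketches above:

CHECK(PylonStreamGrabberCancelGrab(hGrabber));                //4.1 move all pending buffers to the output queue
for (;;)                                                      //4.2 drain the output queue
{
    _Bool ready = 0;
    PylonGrabResult_t result;
    CHECK(PylonStreamGrabberRetrieveResult(hGrabber, &result, &ready));
    if (!ready)
        break;
}
for (int i = 0; i < NUM_BUFFERS; ++i)
{
    CHECK(PylonStreamGrabberDeregisterBuffer(hGrabber, bufHandles[i])); //4.3 deregister the buffer
    delete[] buffers[i];                                                //4.4 free its memory
}
CHECK(PylonStreamGrabberFinishGrab(hGrabber));                //4.5 release the grab resources
CHECK(PylonStreamGrabberClose(hGrabber));                     //4.6 close the stream grabber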
5. Unload the camera object
5.1 Close the camera object
PylonDeviceClose
5.2 Destroy the camera object
PylonDestroyDevice
5.3 Release the dynamic system resources
PylonTerminate //frees the internal structures set up by PylonInitialize
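Finally, step 5 mirrors step 1 in reverse:

CHECK(PylonDeviceClose(hDev));    //5.1 close the camera
CHECK(PylonDestroyDevice(hDev));  //5.2 destroy the device object
PylonTerminate();                 //5.3 release the resources set up by PylonInitialize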
Some of the provided samples are worth looking at first:

Recommended sample | Description
---|---
ParametrizeCamera_LoadAndSave.cpp | Demonstrates how to save the features of a camera to a file and load them back.
GUI_ImageWindow.cpp | Illustrates how to show images using the CPylonImageWindow class. Images are grabbed, split into multiple tiles, and each tile is shown in a separate image window.
Grab_CameraEvents.cpp | Shows how to register event handlers that indicate the arrival of events sent by the camera. For demonstration purposes, several different handlers are registered for the same event.
Grab_Strategies.cpp | Shows the use of the Instant Camera grab strategies.
Grab.cpp | The basic grab flow.
That covers the grab workflow for a Basler camera. The plan was to wrap it into a dynamic library and call it from a UI written in Qt.
Possibly because my PC is encrypted by the company's encryption software, the pylon camera library failed to compile; the error came from the enum-parameter header EnumParameterT.h, at:
virtual CEnumParameterT<EnumT>& operator=(EnumT value){}
After looking for a solution in several places without success, I gave up on Basler and switched to HiK (Hikvision):
The principle of operating a camera is the same everywhere. The Hikvision SDK offers two ways to grab images:
The first is to register an image-grab callback and then call StartGrabbing on the camera; grabbed frames are delivered through the callback.
The second is to register no callback and instead have the application loop over GetOneFrame to fetch frame data in the requested pixel format; in that case the application has to pace its calls to match the frame rate.
I went with the second approach, because my attempt at the first one failed: after registering the image callback I could no longer read the exposure time, exposure frequency, frame rate and so on (possibly also related to the company's encryption software), so I used the second approach directly. For reference, a minimal sketch of the callback approach is shown below.
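This is only a sketch of what approach 1 would look like, following the pattern of the MVS SDK samples; the callback name is my own, and handle is assumed to be a camera handle that has already been created and opened exactly as in on_OpenCam_clicked() below.

//frame callback: the SDK calls this on its own thread every time a frame arrives
static void __stdcall ImageCallBackEx(unsigned char* pData, MV_FRAME_OUT_INFO_EX* pFrameInfo, void* pUser)
{
    if (pFrameInfo)
    {
        printf("Got frame: Width[%d] Height[%d] FrameNum[%d]\n",
               pFrameInfo->nWidth, pFrameInfo->nHeight, pFrameInfo->nFrameNum);
    }
}

//register the callback before starting the stream; frames are then pushed to the
//callback instead of being polled with MV_CC_GetOneFrameTimeout
int nRet = MV_CC_RegisterImageCallBackEx(handle, ImageCallBackEx, NULL);
if (MV_OK == nRet)
    nRet = MV_CC_StartGrabbing(handle);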
I did not spin up a separate thread to read the images, so efficiency is not great; this demo is only meant to illustrate the principle. Image handling uses OpenCV's cv::Mat; alternatively, Halcon's HObject could be used to display, process and store the images.
The slot functions are pasted below, with comments:
QtGrabImage::QtGrabImage(QWidget *parent)
: QMainWindow(parent)
{
ui.setupUi(this);
nRet = MV_OK;
handle = NULL;
pData = NULL;
pDataForRGB = NULL;
g_bExit = false;
g_nPayloadSize = 0;
nIndex = 0;
nPacketSize = 0;
//connect(ui.EnumCam, &QPushButton::clicked, this, &QtGrabImage::on_EnumCam_clicked);
//connect(ui.OpenCam, &QPushButton::clicked, this, &QtGrabImage::on_OpenCam_clicked);
//connect(ui.StartCam, &QPushButton::clicked, this, &QtGrabImage::on_StartCam_clicked);
//connect(ui.StopCam, &QPushButton::clicked, this, &QtGrabImage::on_StopCam_clicked);
//connect(ui.CloseCam, &QPushButton::clicked, this, &QtGrabImage::on_CloseCam_clicked);
//connect(ui.SaveImg, &QPushButton::clicked, this, &QtGrabImage::on_SaveImg_clicked);
//connect(ui.FrameModel, &QRadioButton::toggled, this, &QtGrabImage::on_FrameModel_clicked);//not needed if a separate push button triggers the grab
//connect(ui.ConstantModel, &QRadioButton::toggled, this, &QtGrabImage::on_ConstantModel_clicked);//just check ui.RadioButton->isChecked() instead
connect(ui.comboBox, SIGNAL(currentIndexChanged(int)), this, SLOT(comboxSlots()));
}
bool QtGrabImage::PrintDeviceInfo (MV_CC_DEVICE_INFO* pstMVDevInfo)
{
if (NULL == pstMVDevInfo)
{
QMessageBox::information(this, tr("Error"), QString::fromLocal8Bit("The Pointer of pstMVDevInfo is NULL!"));
//printf("The Pointer of pstMVDevInfo is NULL!\n");
return false;
}
if (pstMVDevInfo->nTLayerType == MV_GIGE_DEVICE)
{
int nIp1 = ((pstMVDevInfo->SpecialInfo.stGigEInfo.nCurrentIp & 0xff000000) >> 24);
int nIp2 = ((pstMVDevInfo->SpecialInfo.stGigEInfo.nCurrentIp & 0x00ff0000) >> 16);
int nIp3 = ((pstMVDevInfo->SpecialInfo.stGigEInfo.nCurrentIp & 0x0000ff00) >> 8);
int nIp4 = (pstMVDevInfo->SpecialInfo.stGigEInfo.nCurrentIp & 0x000000ff);
//show the camera's current IP and user-defined name
QString strIp = QString::number(nIp1) + "." + QString::number(nIp2) + "." + QString::number(nIp3) + "." +
QString::number(nIp4);
unsigned char* cusrname = pstMVDevInfo->SpecialInfo.stGigEInfo.chUserDefinedName; //the user-defined name is stored as unsigned char
std::string cstr = std::string((char *)cusrname);
QString strName = QString::fromStdString(cstr);
QString strMesg = "Current User: " + strName + " , Current Ip: " + strIp;
ui.statusBar->showMessage(strMesg);
ui.comboBox->addItem(strMesg);//the currentIndexChanged signal is disconnected before this is called (see on_EnumCam_clicked)
}
else if (pstMVDevInfo->nTLayerType == MV_USB_DEVICE)
{
QMessageBox::information(this, tr("Sorry : "), QString::fromLocal8Bit("Not USB Device!"));
}
else
{
QMessageBox::information(this, tr("Error : "), QString::fromLocal8Bit("Not support!"));
//printf("Not support.\n");
}
return true;
}
//WorkThread: worker-thread routine for reading the camera's image stream (left disabled)
//unsigned int _stdcall QtGrabImage::WorkThread(void *pUser)
//{
// int nRet = MV_OK;
// stImageInfo = { 0 };
// memset(&stImageInfo, 0, sizeof(MV_FRAME_OUT_INFO_EX));
// unsigned char * pData = (unsigned char *)malloc(sizeof(unsigned char) * (g_nPayloadSize));
// if (pData == NULL)
// {
// return 0;
// }
// unsigned int nDataSize = g_nPayloadSize;
// while (1)
// {
// nRet = MV_CC_GetOneFrameTimeout(pUser, pData, nDataSize, &stImageInfo, 1000);
// if (nRet == MV_OK)
// {
// printf("Get one frame : Width:[%d], Height[%d], nFrameNum[%d]\n",
// stImageInfo.nWidth, stImageInfo.nHeight, stImageInfo.nFrameNum);
// }
// else
// {
// printf("Get frame : No data. nRet = [0x%x]\n", nRet);
// }
// if (g_bExit)
// {
// break;
// }
// }
// free(pData);
// return 0;
//}
void QtGrabImage::on_EnumCam_clicked()
{
memset(&stDeviceList, 0, sizeof(MV_CC_DEVICE_INFO_LIST));
int nRet = MV_CC_EnumDevices(MV_GIGE_DEVICE | MV_USB_DEVICE, &stDeviceList);
if (MV_OK != nRet)
{
QString qstrerror = "Enum Devices fail! nRet:" + QString::number(nRet, 16);
QMessageBox::information(this, tr("Error : "), qstrerror);
//printf("Enum Devices fail! nRet [0x%x]\n", nRet);
return;
}
QObject::disconnect(ui.comboBox, SIGNAL(currentIndexChanged(int)), this, SLOT(comboxSlots()));
if (stDeviceList.nDeviceNum > 0)
{
for (unsigned int i = 0; i < stDeviceList.nDeviceNum; i++)
{
//printf("[device %d]:\n", i);
MV_CC_DEVICE_INFO* pDeviceInfo = stDeviceList.pDeviceInfo[i];
if (NULL == pDeviceInfo)
{
break;
}
PrintDeviceInfo(pDeviceInfo);
}
}
else
{
QMessageBox::information(this, tr("Error: "), QString::fromLocal8Bit("Find No Device!"));
//printf("Find No Devices!\n");
return;
}
//printf("Please Intput camera index:");
//scanf_s("%d", &nIndex);
//if (nIndex >= stDeviceList.nDeviceNum)
//{
// printf("Intput error!\n");
// return;
//}
}
void QtGrabImage::comboxSlots()
{
nIndex = ui.comboBox->currentIndex();
}
void QtGrabImage::on_OpenCam_clicked()
{
int nRet = MV_CC_CreateHandle(&handle, stDeviceList.pDeviceInfo[nIndex]);
if (MV_OK != nRet)
{
QString strerr = "Create Handle Fail! nRet :" + QString(nRet, 16);
QMessageBox::information(this, tr("Error: "), strerr);
//printf("Creat Handle Failed! nRet = [0x%x]\n", nRet);
return;
}
nRet = MV_CC_OpenDevice(handle);
if (MV_OK != nRet)
{
QString strerr = "Open Device Fail! nRet:" + QString::number(nRet, 16);
QMessageBox::information(this, tr("Error "), strerr);
//printf("Open Device Failed! nRet = [0x%x]\n", nRet);
return;
}
//probe the optimal network packet size (only valid for GigE cameras)
if (stDeviceList.pDeviceInfo[nIndex]->nTLayerType == MV_GIGE_DEVICE)
{
nPacketSize = MV_CC_GetOptimalPacketSize(handle);
if (nPacketSize > 0)
{
nRet = MV_CC_SetIntValue(handle, "GevSCPSPacketSize", nPacketSize);
if (nRet != MV_OK)
{
QString strerr = "Set Packet Size fail nRet:" + QString(nRet, 16);
QMessageBox::information(this, tr("Warning:"), strerr);
//printf("Warning: Set Packet Size fail nRet [0x%x]!", nRet);
}
}
else
{
QString strerr = "Get Packet Size fail nRet:" + QString(nRet, 16);
QMessageBox::information(this, tr("Warning:"), strerr);
//printf("Warning: Get Packet Size fail nRet [0x%x]!", nRet);
}
}
//get the payload size
memset(&stParam, 0, sizeof(MVCC_INTVALUE));
nRet = MV_CC_GetIntValue(handle, "PayloadSize", &stParam);
if (MV_OK != nRet)
{
QString strerr = "Get PayloadSize fail! nRet:" + QString(nRet, 16);
QMessageBox::information(this, tr("Warning:"), strerr);
//printf("Get PayloadSize fail! nRet [0x%x]\n", nRet);
return;
}
g_nPayloadSize = stParam.nCurValue;
}
void QtGrabImage::on_StartCam_clicked()
{
//set trigger mode to off
int nRet = MV_CC_SetEnumValue(handle, "TriggerMode", 0);
if (MV_OK != nRet)
{
QString strerr = "Set Trigger Model Fail! nRet:" + QString(nRet, 16);
QMessageBox::information(this, tr("Error"), strerr);
//printf("Set Trigger Mode fail! nRet [0x%x]\n", nRet);
return;
}
//start grabbing
nRet = MV_CC_StartGrabbing(handle);
if (nRet != MV_OK)
{
QString strerr = "Start Grabbing Fail! nRet:" + QString(nRet, 16);
QMessageBox::information(this, tr("Error"), strerr);
//printf("Start Grabbing Failed. nRet = [0x%x]\n", nRet);
return;
}
//create a worker thread to handle the stream (left disabled here)
//unsigned int nThreadID = 0;
//void *hThreadHandle = (void *)_beginthreadex(NULL, 0, WorkThread, handle, 0, &nThreadID);
//if (NULL == hThreadHandle)
//{
// return;
//}
stImageInfo = { 0 };
memset(&stImageInfo, 0, sizeof(MV_FRAME_OUT_INFO_EX));
pData = (unsigned char *)malloc(sizeof(unsigned char) * (g_nPayloadSize));
if (pData == NULL)
{
QMessageBox::information(this, tr("Error"), QString::fromLocal8Bit("malloc pData fail!"));
return;
}
unsigned nDataSize = g_nPayloadSize;
DispGrabImgFrame();
if (ui.label->pixmap()) //pixmap() is null until a frame has actually been shown
ui.label->resize(ui.label->pixmap()->size()); //keep the label the same size as the image
return;
}
void QtGrabImage::on_CamGrab_clicked()
{
//check which radio button is selected and run the corresponding grab mode
if (ui.FrameModel->isChecked())
{
DispGrabImgFrame();
return;
}
else if (ui.ConstantModel->isChecked())
{
DispGrabImgConstant();
return;
}
else
{
QMessageBox::information(this, tr("Error"), QString::fromLocal8Bit("Please choose a Model first!"));
return;
}
}
void QtGrabImage::on_StopCam_clicked()
{
nRet = MV_CC_StopGrabbing(handle);
if (MV_OK != nRet)
{
QString strerr = "Stop Grabbing fail! \n nRet:" + QString::number(nRet, 16);
QMessageBox::information(this, tr("Error"), strerr);
return;
}
}
void QtGrabImage::on_SaveImg_clicked()
{
QString saveImgname = QFileDialog::getSaveFileName(this, tr("Save Image"), " ", tr("Image files (*.jpg)"));
if (!saveImgname.isNull())
{
const QPixmap *pix = ui.label->pixmap(); //may be null if no frame has been shown yet
if (pix == NULL || pix->isNull())
{
QMessageBox::information(this, tr("Error"), QString::fromLocal8Bit("There is no Image to save!"));
return;
}
else
{
QImage qimg = pix->toImage();
qimg.save(saveImgname, "JPG", 100);
QMessageBox::information(this, tr(" "), QString::fromLocal8Bit("Image saved successfully!"));
}
}
else
{
QMessageBox::information(this, tr(" "), QString::fromLocal8Bit("Save Canceled!"));
return;
}
}
void QtGrabImage::on_CloseCam_clicked()
{
//close the device
nRet = MV_CC_CloseDevice(handle);
if (MV_OK != nRet)
{
QString strerr = "CloseDevice fail! \n nRet:" + QString::number(nRet, 16);
QMessageBox::information(this, tr("Error"), strerr);
return;
}
//destroy the handle
nRet = MV_CC_DestroyHandle(handle);
if (MV_OK != nRet)
{
QString strerr = "Destroy handle fail! \n nRet:" + QString::number(nRet, 16);
QMessageBox::information(this, tr("Error"), strerr);
return;
}
}
void QtGrabImage::DispGrabImgFrame()
{
unsigned nDataSize = g_nPayloadSize;
nRet = MV_CC_GetOneFrameTimeout(handle, pData, nDataSize, &stImageInfo, 1000);
if (nRet == MV_OK)
{
QString strInfo = "Get one frame: Width:" + QString::number(stImageInfo.nWidth) +
" Height:" + QString::number(stImageInfo.nHeight) + " nFrameNum:" + QString::number(stImageInfo.nFrameNum);
ui.statusBar->showMessage(strInfo); //show the frame info instead of leaving strInfo unused
pDataForRGB = (unsigned char *)malloc(stImageInfo.nWidth * stImageInfo.nHeight * 3);
if (pDataForRGB == NULL)
{
QMessageBox::information(this, tr("Error"), QString::fromLocal8Bit("pDataForRGB malloc fail!"));
return;
}
unsigned nDataSizeForRGB = stImageInfo.nWidth * stImageInfo.nHeight * 3;
//pixel format conversion
MV_CC_PIXEL_CONVERT_PARAM stConvertParam = { 0 };
memset(&stConvertParam, 0, sizeof(MV_CC_PIXEL_CONVERT_PARAM));
stConvertParam.nWidth = stImageInfo.nWidth; //image width
stConvertParam.nHeight = stImageInfo.nHeight; //image height
stConvertParam.pSrcData = pData; //input data buffer
stConvertParam.nSrcDataLen = stImageInfo.nFrameLen; //input data size
stConvertParam.enSrcPixelType = stImageInfo.enPixelType; //source pixel type
stConvertParam.enDstPixelType = PixelType_Gvsp_RGB8_Packed; //destination pixel type
stConvertParam.pDstBuffer = pDataForRGB; //output buffer address
stConvertParam.nDstBufferSize = nDataSizeForRGB; //output buffer size
nRet = MV_CC_ConvertPixelType(handle, &stConvertParam);
if (nRet != MV_OK)
{
QMessageBox::information(this, tr("Error"), QString::fromLocal8Bit("Image Pixel convert fail!"));
return;
}
/*bool Convert2Mat(MV_FRAME_OUT_INFO_EX* pstImageInfo, unsigned char * pData, int i)*/
int cvheight = stImageInfo.nHeight;
int cvwidth = stImageInfo.nWidth;
cv::Mat cvImg(cvheight, cvwidth, CV_8UC3, pDataForRGB);
//convert cvImg to a QImage (the converted buffer is RGB8_Packed, so if the colors come out swapped this BGR->RGB step may be unnecessary)
cv::cvtColor(cvImg, cvImg, CV_BGR2RGB);
QImage qImg = QImage((const unsigned char*)(cvImg.data), cvImg.cols, cvImg.rows,
cvImg.cols*cvImg.channels(), QImage::Format_RGB888);
//display
ui.label->clear();
ui.label->setPixmap(QPixmap::fromImage(qImg.scaled(ui.label->size()))); //scale the image to the label size
Sleep(1000); //crude frame pacing on the UI thread; a worker thread would be the proper fix
free(pDataForRGB); //the pixmap holds its own copy by now, so the RGB buffer can be released
pDataForRGB = NULL;
//ui.label->resize(ui.label->size());
}
}
void QtGrabImage::DispGrabImgConstant()
{
//crude continuous mode: grab and show a fixed number of frames in a loop on the UI thread;
//a worker thread (see the commented-out WorkThread above) would be the proper solution
for (int i = 0; i < 1000; i++)
{
DispGrabImgFrame();
QCoreApplication::processEvents(); //let the label repaint between frames
}
return;
}
The header file, for completeness:
#pragma once
#include <Windows.h>
#include <stdio.h>
#include <string>
#include <conio.h>
#include <process.h>
#include "MvCameraControl.h"
//#include "TlFactory.h"
//#include "MvGigEDevice.h"
#include <qfiledialog.h>
#include <QtWidgets/QMainWindow>
#include <qmessagebox.h>
#include <qcoreapplication.h> //for QCoreApplication::processEvents() in DispGrabImgConstant
#include "ui_qtgrabimage.h"
#include <iostream>
#include <opencv2\opencv.hpp>
using namespace cv;
//using namespace MvCamCtrl;
class QtGrabImage : public QMainWindow
{
Q_OBJECT
public:
QtGrabImage(QWidget *parent = Q_NULLPTR);
bool PrintDeviceInfo(MV_CC_DEVICE_INFO* pstMVDevInfo);
//WorkThread: worker-thread routine that reads images from the camera (left disabled)
unsigned int _stdcall WorkThread(void *pUser); //to be passed to _beginthreadex this would need to be static or a free function
int nRet;
void *handle;
unsigned char* pData;
unsigned char* pDataForRGB;
bool g_bExit;
unsigned int g_nPayloadSize;
MV_CC_DEVICE_INFO_LIST stDeviceList;
unsigned int nIndex; //index into the enumerated device list
int nPacketSize; //network packet size
MVCC_INTVALUE stParam; //used to read the PayloadSize value
MV_FRAME_OUT_INFO_EX stImageInfo; //frame info for a single grabbed image
void DispGrabImgFrame();
void DispGrabImgConstant();
//MvCamCtrl::CTlFactory& tlFactory; //would use MvCamCtrl functionality to switch single-frame/continuous mode
private:
Ui::QtGrabImageClass ui;
signals:
void currentIndexChanged(int i);
private slots:
void on_EnumCam_clicked(); //fills the drop-down list with the enumerated devices
void on_OpenCam_clicked();
void on_StartCam_clicked();
//void on_FrameModel_clicked(bool);
//void on_ConstantModel_clicked(bool);
void on_CamGrab_clicked();
void on_StopCam_clicked();
void on_CloseCam_clicked();
void on_SaveImg_clicked();
void comboxSlots();
};
The result looks like this (I am using an industrial camera with an FA lens, so this is what a normal capture looks like; the bright area in the image is a lamp):