I'm working on an OpenGL course project that needs to render a video inside OpenGL. The methods I found online all deal with raw YUV or RGB streams, while I need to play container formats such as MP4 and AVI. In the end I settled on OpenCV + OpenGL to solve the problem.
Building OpenCV with OpenGL support:
First download OpenCV; the version I used is opencv2.4.13.6.
On Windows:
- Install cmake-gui
- In cmake-gui, point the source-code path at the source folder inside the OpenCV install directory, and point the build path at a freshly created build2 folder (the OpenCV package already ships its own build folder)
- Configure and generate: the first time you press Configure, pick the plain generator, not the x64 one; I chose Visual Studio 15 2017. Make sure the WITH_OPENGL option is checked before generating, otherwise the resulting libraries will be built without OpenGL support
- In the build directory open OpenCV.sln, find the ALL_BUILD project, right-click and build it (Debug and Release must be built separately). When the build finishes without errors, the OpenCV link libraries have been generated successfully.
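The GUI steps above can also be done from a Developer Command Prompt. This is only a sketch with placeholder paths (adjust them to your own layout); the key point is that `WITH_OPENGL` is OFF by default and must be enabled explicitly:

```shell
cd F:\openCV\opencv2.4.13.6\opencv
mkdir build2 && cd build2
cmake -G "Visual Studio 15 2017" -D WITH_OPENGL=ON ..\sources
cmake --build . --config Debug
cmake --build . --config Release
```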
Setting the OpenCV environment variable:
Add the bin\Debug folder of the new build directory to the system Path variable; for example mine is F:\openCV\opencv2.4.13.6\opencv\mybuild3\bin\Debug
Note: if Path points at the bin folder of the prebuilt (non-OpenGL) OpenCV instead, the program will fail at runtime with: OpenCV Error: No OpenGL support (Library was built without OpenGL support) in cvNamedWindow
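Before wiring up the full project, it can be worth checking that the DLLs on Path really were built with OpenGL. A minimal sketch, assuming the 2.4-era highgui API: requesting an OpenGL-backed window triggers exactly the error above when support is missing, so we catch it:

```cpp
#include <opencv2/opencv.hpp>
#include <iostream>

int main() {
    try {
        // CV_WINDOW_OPENGL throws the "No OpenGL support" error
        // when the loaded highgui was built without it.
        cv::namedWindow("gl-check", CV_WINDOW_OPENGL);
        cv::destroyWindow("gl-check");
        std::cout << "OpenGL support is available" << std::endl;
    } catch (const cv::Exception& e) {
        std::cout << "No OpenGL support: " << e.what() << std::endl;
    }
    return 0;
}
```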
Open Visual Studio and create a new empty project. Right-click the project, open the property pages, and set the include directories to the include path of the original (pre-build, non-OpenGL) OpenCV.
Set the library directories to the lib\Debug folder of the newly built OpenCV-with-OpenGL build folder (mine is build3).
Then open Linker - Input - Additional Dependencies and add:
opencv_videostab2413d.lib
opencv_ts2413d.lib
opencv_superres2413d.lib
opencv_stitching2413d.lib
opencv_contrib2413d.lib
opengl32.lib
glu32.lib
opencv_nonfree2413d.lib
opencv_ocl2413d.lib
opencv_gpu2413d.lib
opencv_photo2413d.lib
opencv_objdetect2413d.lib
opencv_legacy2413d.lib
opencv_video2413d.lib
opencv_ml2413d.lib
opencv_calib3d2413d.lib
opencv_features2d2413d.lib
opencv_highgui2413d.lib
opencv_imgproc2413d.lib
opencv_flann2413d.lib
opencv_core2413d.lib
kernel32.lib
user32.lib
gdi32.lib
winspool.lib
shell32.lib
ole32.lib
oleaut32.lib
uuid.lib
comdlg32.lib
advapi32.lib
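As an alternative to the property page, MSVC can take the link requests straight from source code via `#pragma comment`. A sketch with a few of the libraries above (the trailing `d` marks the Debug builds; drop it for Release):

```cpp
// MSVC-specific: each directive asks the linker to pull in one library.
#pragma comment(lib, "opencv_core2413d.lib")
#pragma comment(lib, "opencv_highgui2413d.lib")
#pragma comment(lib, "opencv_imgproc2413d.lib")
#pragma comment(lib, "opengl32.lib")
#pragma comment(lib, "glu32.lib")
```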
The implementation itself is simple: VideoCapture reads the video, and OpenGL renders each frame as a texture.
```cpp
#include <gl/glut.h>
#include <vector>
#include <string>
#include <iostream>
#include "opencv2/opencv.hpp"
using namespace cv;
using namespace std;

GLuint myTex = 0;
Mat myVide;
VideoCapture cap;
Mat frame;
int flags = 0;
long currentFrame = 0;
long totalFrameNumber = 0;

void display() {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    int w = myVide.cols;
    int h = myVide.rows;
    if (myTex == 0)                         // create the texture object only once,
        glGenTextures(1, &myTex);           // not on every frame (avoids leaking textures)
    glBindTexture(GL_TEXTURE_2D, myTex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);  // uploading, so UNPACK (not PACK) alignment
    if (myVide.channels() == 3)
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, w, h, 0, GL_BGR_EXT, GL_UNSIGNED_BYTE, myVide.data);
    else if (myVide.channels() == 4)
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, w, h, 0, GL_BGRA_EXT, GL_UNSIGNED_BYTE, myVide.data);
    else if (myVide.channels() == 1)
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, w, h, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, myVide.data);
    glEnable(GL_TEXTURE_2D);                // enable texturing
    glBegin(GL_QUADS);
    glNormal3f(0.0f, 0.0f, -1.0f);          // normal pointing away from viewer
    glTexCoord2f(0.0f, 1.0f); glVertex3f(-1.0f, -1.0f, -1.0f); // point 1 (back)
    glTexCoord2f(0.0f, 0.0f); glVertex3f(-1.0f,  1.0f, -1.0f); // point 2 (back)
    glTexCoord2f(1.0f, 0.0f); glVertex3f( 1.0f,  1.0f, -1.0f); // point 3 (back)
    glTexCoord2f(1.0f, 1.0f); glVertex3f( 1.0f, -1.0f, -1.0f); // point 4 (back)
    glEnd();
    glDisable(GL_TEXTURE_2D);
    glutSwapBuffers();
}

void stepDisplay() {
    if (flags == 0) {
        // grab the next frame; stop when the stream ends
        if (!cap.read(frame) || frame.empty()) {
            flags = 1;
            return;
        }
        // take every n-th frame (n == 1 here, i.e. every frame)
        if (currentFrame % 1 == 0) {
            myVide = frame;
            if (currentFrame >= totalFrameNumber) {
                flags = 1;
            }
            currentFrame++;
        }
        glutPostRedisplay();  // schedule a redraw instead of calling display() directly
    }
}

int main(int argc, char* argv[])
{
    myVide = imread("timg.jpg");  // placeholder image shown before the first frame
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
    glutInitWindowSize(500, 400);
    glutCreateWindow("example");
    cap.open("C:\\Users\\admin\\Videos\\myVideo.mp4");
    // total number of frames in the video
    totalFrameNumber = (long)cap.get(CV_CAP_PROP_FRAME_COUNT);
    cout << "total frames: " << totalFrameNumber << endl;
    glutDisplayFunc(display);
    glutIdleFunc(stepDisplay);
    glutMainLoop();  // never returns
    return 0;
}
```
The result: