OSG real-time texture extraction: off-screen rendering, output to OpenCV, reverse rendering, flipping the OSG camera, animated OSG models, live webcam rendering

Result

This post implements OSG off-screen rendering: the texture of the OSG render result is extracted into OpenCV, converted to a cv::Mat and displayed, which makes further processing straightforward. The model in the scene is animated. I think it is fairly useful work.

OSG off-screen rendering

In the video above (Bilibili link here), the left half is what viewer.frame() shows automatically while rendering. It is upside down, but that does not matter, since it is not what we are after. The right half is the texture extracted into an OpenCV Mat and displayed with imshow; that is the image we can process further in OpenCV.

It is actually not trivial to get right; the detailed explanation and approach follow below.

Main difficulties

I went through quite a few references, but most of them can only save the rendered texture straight to disk. They follow two main approaches:

  • call attach() to connect the OSG camera to an osg::Image; the image can then export the texture to disk (a minimal sketch follows this list);
  • read the viewer's frame buffer and convert it into an image.
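
For reference, here is a minimal sketch of the first approach; the function name saveFrame and the file name snapshot.png are just illustrative, and the viewer, width and height are assumed to come from your own setup:

#include <osg/Image>
#include <osgViewer/Viewer>
#include <osgDB/WriteFile>

// Render one frame and save the camera's colour buffer to disk (approach 1).
void saveFrame(osgViewer::Viewer& viewer, int width, int height)
{
	osg::ref_ptr<osg::Image> shot = new osg::Image;
	shot->allocateImage(width, height, 1, GL_RGB, GL_UNSIGNED_BYTE);
	viewer.getCamera()->attach(osg::Camera::COLOR_BUFFER, shot.get());
	viewer.frame();                               // render one frame into the attached image
	osgDB::writeImageFile(*shot, "snapshot.png"); // fine for saving, not for real-time extraction
}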

However, OSG has a quirk: the image the camera sees is stored upside down, and the memory holding that image is refreshed in real time. This causes a lot of trouble for off-screen rendering. Both methods above work fine for saving to disk, but if you extract the texture in real time, what you get is upside down.

If you then flip the extracted texture vertically yourself, you will be surprised to find that it keeps flickering. This is because the buffer is refreshed continuously, so part of it is always being overwritten by the next (still upside-down) frame.
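
That naive per-frame flip is essentially the flipVertical() call left commented out in rend() further below, roughly:

viewer.frame();
vCamera.image->flipVertical();           // flip the captured frame in place
uint8_t* pixels = vCamera.image->data(); // may already hold rows of the next (still flipped) frame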

Solution

The solution is reverse rendering: flip the background vertically before rendering, and render the model inverted as well, so the flipped buffer you read back comes out the right way up.
There is also a simpler trick: do not upload the background image during rendering, render only the model onto the default backdrop, and then swap that backdrop for our background image afterwards. That version is easier to write; this article takes the more involved route.
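
For completeness, here is a minimal sketch of what the OpenCV side of that shortcut could look like, assuming a hypothetical modelOnly Mat that holds the OSG output rendered over a solid green clear colour (the key colour and the function name compositeModel are arbitrary illustrative choices; this is not the route the code below takes):

#include <opencv2/core.hpp>

// Composite the model (rendered over a green key colour) onto the webcam frame.
cv::Mat compositeModel(const cv::Mat& modelOnly, const cv::Mat& frame)
{
	cv::Mat mask;
	// Pixels that still show the key colour belong to the backdrop, not the model.
	cv::inRange(modelOnly, cv::Scalar(0, 255, 0), cv::Scalar(0, 255, 0), mask);
	cv::bitwise_not(mask, mask);     // keep everything that is NOT the key colour
	cv::Mat out = frame.clone();     // the webcam frame is the real background
	modelOnly.copyTo(out, mask);     // paste the model pixels on top
	return out;
}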

Code

Environment: Windows 10, VS2015, OpenCV 3.4.13, OSG 3.6.5
The project consists of three source files: modelrend.h, modelrend.cpp and main.cpp.
modelrend.h:

#pragma once

#include <windows.h>
#include <osg/Camera>
#include <osg/PolygonMode>
#include <osg/Texture2D>
#include <osg/Geode>
#include <osg/Geometry>

#include <osgViewer/Viewer>
#include <osgDB/ReadFile>
#include <osgDB/WriteFile>
#include <osg/PositionAttitudeTransform>
#include <osgAnimation/BasicAnimationManager>         

class VirtualCamera {
public:
	void createVirtualCamera(osg::ref_ptr<osg::Camera> cam, int width, int height);
	void updatePosition(double r, double p, double h, double x, double y, double z);

	double angle;
	osg::Matrix rotation;
	osg::Matrix translation;
	osg::ref_ptr<osg::Camera> camera;
	osg::ref_ptr<osg::Image> image;
};

class BackgroundCamera {
public:
	BackgroundCamera();
	void update(uint8_t* data, int cols, int rows);
	osg::Geode* createCameraPlane(int textureWidth, int textureHeight);
	osg::Camera* createCamera(int textureWidth, int textureHeight);

	osg::ref_ptr<osg::Image> img;
};

class Modelrender {
public:
	osgViewer::Viewer viewer;
	BackgroundCamera bgCamera;
	VirtualCamera vCamera;
	double angleRoll;
	int width, height;

	Modelrender(int cols, int rows);
	uint8_t* rend(uint8_t* inputimage);
};

modelrend.cpp:

#include "modelrend.h"

#include <windows.h>
#include <osgViewer/Viewer>
#include <osgDB/ReadFile>
#include <osgDB/WriteFile>
#include <osg/PositionAttitudeTransform>
#include <osgAnimation/BasicAnimationManager>

#include <osgViewer/GraphicsWindow>
#include <osg/Node>
#include <osg/Geode>
#include <osg/Group>
#include <osg/Camera>
#include <osg/Image>
#include <osg/BufferObject>
#include <osgUtil/Optimizer>
#include <osgGA/GUIEventHandler>
#include <osgGA/TrackballManipulator>

osg::ref_ptr<osg::Image> _image;

void VirtualCamera::createVirtualCamera(osg::ref_ptr<osg::Camera> cam, int width, int height)
{
	camera = cam;
	// Initial Values
	camera->setProjectionMatrixAsPerspective(320, 1., 1., 100.); // FOV set to 320 degrees; any angle above 180 renders the model inverted (the reverse-rendering trick)
	camera->setRenderTargetImplementation(osg::Camera::FRAME_BUFFER);
	image = new osg::Image;
	image->allocateImage(width, height, 1, GL_BGR, GL_UNSIGNED_BYTE);
	camera->attach(osg::Camera::COLOR_BUFFER, image.get());
}

void VirtualCamera::updatePosition(double r, double p, double h, double x, double y, double z)
{ // updates the virtual camera pose, which drives the model's apparent rotation and translation
	osg::Matrixd myCameraMatrix;

	// Update Rotation
	rotation.makeRotate(
		osg::DegreesToRadians(r), osg::Vec3(0, 1, 0), // roll
		osg::DegreesToRadians(p), osg::Vec3(1, 0, 0), // pitch
		osg::DegreesToRadians(h), osg::Vec3(0, 0, 1)); // heading

	// Update Translation
	translation.makeTranslate(x, y, z);
	myCameraMatrix = rotation * translation;
	osg::Matrixd i = myCameraMatrix.inverse(myCameraMatrix);
	camera->setViewMatrix(i*osg::Matrix::rotate(-(osg::PI_2), 1, -0, 0));
}



BackgroundCamera::BackgroundCamera() {
	// Create OSG Image from CV Mat
	img = new osg::Image;
}

void flipImageV(unsigned char* top, unsigned char* bottom, unsigned int rowSize, unsigned int rowStep)
{ // flips the incoming background image vertically, in place
	while (top<bottom)
	{
		unsigned char* t = top;
		unsigned char* b = bottom;
		for (unsigned int i = 0; i<rowSize; ++i, ++t, ++b)
		{
			unsigned char temp = *t;
			*t = *b;
			*b = temp;
		}
		top += rowStep;
		bottom -= rowStep;
	}
}

void BackgroundCamera::update(uint8_t* data, int width, int height)
{ // receive the input background image and flip it vertically before uploading it as the texture
	// img->setImage(width, height, 3,
	// 	GL_RGB, GL_BGR, GL_UNSIGNED_BYTE,
	// 	data,
	// 	osg::Image::AllocationMode::NO_DELETE, 1);
	// img->dirty();
	unsigned char* top = data;
	unsigned char* bottom = top + (height - 1)*3*width;

	flipImageV(top, bottom, width*3, width*3);

	img->setImage(width, height, 3,
		GL_RGB, GL_BGR, GL_UNSIGNED_BYTE,
		data,
		osg::Image::AllocationMode::NO_DELETE, 1);
	img->dirty();
}

osg::Geode* BackgroundCamera::createCameraPlane(int textureWidth, int textureHeight)
{
	// CREATE PLANE TO DRAW TEXTURE
	osg::ref_ptr<osg::Geometry> quadGeometry = osg::createTexturedQuadGeometry(osg::Vec3(0.0f, 0.0f, 0.0f),
		osg::Vec3(textureWidth, 0.0f, 0.0f),
		osg::Vec3(0.0, textureHeight, 0.0),
		0.0f,
		1.0f,
		1.0f,
		0.0f);
	// PUT PLANE INTO NODE
	osg::ref_ptr<osg::Geode> quad = new osg::Geode;
	quad->addDrawable(quadGeometry);
	// DISABLE SHADOW / LIGHTNING EFFECTS
	int values = osg::StateAttribute::OFF | osg::StateAttribute::PROTECTED;
	quad->getOrCreateStateSet()->setAttribute(new osg::PolygonMode(osg::PolygonMode::FRONT_AND_BACK, osg::PolygonMode::FILL), values);
	quad->getOrCreateStateSet()->setMode(GL_LIGHTING, values);

	osg::ref_ptr<osg::Texture2D> texture = new osg::Texture2D;
	texture->setTextureSize(textureWidth, textureHeight);
	texture->setFilter(osg::Texture::MIN_FILTER, osg::Texture::LINEAR);
	texture->setFilter(osg::Texture::MAG_FILTER, osg::Texture::LINEAR);
	texture->setWrap(osg::Texture::WRAP_S, osg::Texture::REPEAT);
	texture->setWrap(osg::Texture::WRAP_T, osg::Texture::REPEAT);
	texture->setResizeNonPowerOfTwoHint(false);

	texture->setImage(img);

	// Apply texture to quad
	osg::ref_ptr<osg::StateSet> stateSet = quadGeometry->getOrCreateStateSet();
	stateSet->setTextureAttributeAndModes(0, texture, osg::StateAttribute::ON);

	return quad.release();
}

osg::Camera* BackgroundCamera::createCamera(int textureWidth, int textureHeight)
{
	osg::ref_ptr<osg::Geode> quad = createCameraPlane(textureWidth, textureHeight);
	//Bind texture to the quadGeometry, then use the following camera:
	osg::Camera* camera = new osg::Camera;
	// CAMERA SETUP
	camera->setReferenceFrame(osg::Camera::ABSOLUTE_RF);
	// use identity view matrix so that children do not get (view) transformed
	camera->setViewMatrix(osg::Matrix::identity());
	camera->setClearMask(GL_DEPTH_BUFFER_BIT);
	camera->setClearColor(osg::Vec4(0.f, 0.f, 0.f, 1.0));
	camera->setProjectionMatrixAsOrtho(0.f, textureWidth, 0.f, textureHeight, 1.0, 500.f);
	// set resize policy to fixed
	camera->setProjectionResizePolicy(osg::Camera::ProjectionResizePolicy::FIXED);
	// we don't want the camera to grab event focus from the viewers main camera(s).
	camera->setAllowEventFocus(false);
	// only clear the depth buffer
	camera->setClearMask(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
	//camera->setViewport( 0, 0, screenWidth, screenHeight );
	camera->setRenderOrder(osg::Camera::NESTED_RENDER);
	camera->addChild(quad);
	return camera;
}

Modelrender::Modelrender(int cols, int rows)
{
	width = cols;
	height = rows;
	// OSG STUFF
	// Create viewer
	viewer.setUpViewInWindow(50, 50, width, height);

	// Main Camera
	osg::ref_ptr<osg::Camera>  camera = viewer.getCamera();
	vCamera.createVirtualCamera(camera, width, height);

	// Background-Camera (OpenCV Feed)
	osg::Camera* backgroundCamera = bgCamera.createCamera(width, height);

	// Load Truck Model as Example Scene
	osg::ref_ptr<osg::Node> truckModel = osgDB::readNodeFile("avatar.osg"); // spaceship.osgt; models with animation: nathan.osg, avatar.osg, bignathan.osg
	osg::Group* truckGroup = new osg::Group();
	// Position of truck
	osg::PositionAttitudeTransform* position = new osg::PositionAttitudeTransform();

	osgAnimation::BasicAnimationManager* anim =
		dynamic_cast<osgAnimation::BasicAnimationManager*>(truckModel->getUpdateCallback());
	if (anim && !anim->getAnimationList().empty())
		anim->playAnimation(anim->getAnimationList().front().get()); // start the model's animation

	truckGroup->addChild(position);
	position->addChild(truckModel);

	// Set Position of Model
	osg::Vec3 modelPosition(0, 100, 0);
	position->setPosition(modelPosition);

	// Create new group node
	osg::ref_ptr<osg::Group> group = new osg::Group;
	osg::Node* background = backgroundCamera;
	osg::Node* foreground = truckGroup;
	background->getOrCreateStateSet()->setRenderBinDetails(1, "RenderBin");
	foreground->getOrCreateStateSet()->setRenderBinDetails(2, "RenderBin");
	group->addChild(background);
	group->addChild(foreground);
	background->getOrCreateStateSet()->setMode(GL_DEPTH_TEST, osg::StateAttribute::OFF);
	foreground->getOrCreateStateSet()->setMode(GL_DEPTH_TEST, osg::StateAttribute::ON);

	// Add the group to the viewer
	// _image = new osg::Image();
	viewer.setSceneData(group.get());

	angleRoll = 0.0;
}

uint8_t* Modelrender::rend(uint8_t * inputimage)
{
	bgCamera.update(inputimage, width, height);

	//angleRoll += 0.5;

	// Position Parameters: Roll, Pitch, Heading, X, Y, Z
	vCamera.updatePosition(angleRoll, 0, 0, 0, 0, 0);
	viewer.frame();
	//vCamera.image->flipVertical();
	return vCamera.image->data();
}

main.cpp:

#include <iostream>
#include <algorithm>

#include "modelrend.h"

#include "opencv2/highgui/highgui.hpp"
#include "opencv2/core/core.hpp"

using namespace std;

int main(int argc, char** argv)
{
	//cv::VideoCapture cap("movie.mkv");
	cv::VideoCapture cap(0); // open the default webcam

	if (!cap.isOpened())
	{
		std::cout << "Webcam cannot open!\n";
		return 0;
	}
	int frameH = (int)cap.get(cv::CAP_PROP_FRAME_HEIGHT);
	int frameW = (int)cap.get(cv::CAP_PROP_FRAME_WIDTH);
	int fps = (int)cap.get(cv::CAP_PROP_FPS);
	int numFrames = (int)cap.get(cv::CAP_PROP_FRAME_COUNT);
	printf("height=%d, width=%d, fps=%d, totalframes=%d\n", frameH, frameW, fps, numFrames);

	Modelrender render(frameW, frameH);

	while (1)
	{
		// Refresh Background Image
		cv::Mat frame;
		cap >> frame;

		// Render with the webcam frame as the background, then wrap the extracted texture in a Mat (the off-screen rendering result)
		cv::Mat dst1(frame.size(), CV_8UC3, render.rend(frame.data));

		cv::imshow("test", dst1); //将dst1显示出来
		cvWaitKey(5);
	}
	return 0;
}

