Driving OGRE Vertex Animation with Kinect

Before implementing Kinect-driven facial animation, you first need to understand what the blendshape technique is. Blendshape is a widely used expression-animation technique, and both Maya and 3ds Max support it through plugins.

Blendshape can also be described as "weighted vertex animation": when adjusting an expression, a set of basis expressions is linearly combined to produce the desired expression.

[Figure: a blendshape slider panel.] The controls A, U, O, etc., together with Neutral, Happy, etc. on the left, each correspond to the basis-expression weights for different regions of the face.

The formula is as follows:

$$E = E_0 + \sum_{i=1}^{N_{bs}} s_i\, e_i$$

where $E$ denotes any given input facial expression, $E_0$ the mean (neutral) face, $e_i$ the expression bases, $s_i$ the weight of each basis, $N_{bs}$ the number of expression bases, and $N_v$ the number of vertex positions in the face model (so $E$, $E_0$, and the $e_i$ are vectors in $\mathbb{R}^{3N_v}$).

With blendshape in place, the whole facial-expression control problem reduces to: how do we approximate a given input expression with a weighted combination of expression bases?
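To make the formula concrete, here is a minimal sketch of the blend computation (illustrative code, not from the original post; it assumes each basis is stored as per-vertex offsets from the neutral face):

#include <vector>

struct Vec3 { float x, y, z; };

// neutral : the Nv vertices of the neutral face E0
// bases   : Nbs expression bases e_i, each stored as Nv offsets from E0
// weights : the Nbs weights s_i
std::vector<Vec3> BlendExpression(const std::vector<Vec3>& neutral,
                                  const std::vector<std::vector<Vec3>>& bases,
                                  const std::vector<float>& weights)
{
    std::vector<Vec3> out = neutral;              // start from E0
    for (size_t i = 0; i < bases.size(); ++i)     // E = E0 + sum_i s_i * e_i
        for (size_t v = 0; v < out.size(); ++v)
        {
            out[v].x += weights[i] * bases[i][v].x;
            out[v].y += weights[i] * bases[i][v].y;
            out[v].z += weights[i] * bases[i][v].z;
        }
    return out;
}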

Since the main purpose of this post is to show how to drive OGRE facial animation with Kinect, I won't go into the other aspects of expression animation here. A good reference paper on facial expression animation is Geometry-Driven Photorealistic Facial Expression Synthesis (2006); much of the later work on face animation, and on speech-driven face animation, builds on it.

Before driving OGRE's expression animation with Kinect, the preparatory work is to extract the expression-tracking and device-initialization parts from the Face Tracking Visualization sample. Exactly how depends on your own needs, so I won't elaborate; here is the code…

KinectSensor.h   

//------------------------------------------------------------------------------
// <copyright file="KinectSensor.h" company="Microsoft">
//     Copyright (c) Microsoft Corporation.  All rights reserved.
// </copyright>
//------------------------------------------------------------------------------

#pragma once

#include <windows.h>
#include <Shellapi.h>

// C RunTime Header Files
#include <stdio.h>
#include <stdlib.h>
#include <malloc.h>
#include <memory.h>
#include <crtdbg.h>

#include <FaceTrackLib.h>
#include <NuiApi.h>

class KinectSensor                // Initializes the Kinect device, sets basic parameters, handles device events, and provides color/depth frames
{
public:
	KinectSensor();
	~KinectSensor();

	// Initialization
	HRESULT Init(NUI_IMAGE_TYPE depthType, NUI_IMAGE_RESOLUTION depthRes, BOOL bNearMode, BOOL bFallbackToDefault, NUI_IMAGE_TYPE colorType, NUI_IMAGE_RESOLUTION colorRes, BOOL bSeatedSkeletonMode);
	void Release();

	HRESULT     GetVideoConfiguration(FT_CAMERA_CONFIG* videoConfig);    // Get the color camera configuration
	HRESULT     GetDepthConfiguration(FT_CAMERA_CONFIG* depthConfig);    // Get the depth camera configuration

	IFTImage*   GetVideoBuffer(){ return(m_VideoBuffer); };              // Get the video frame buffer
	IFTImage*   GetDepthBuffer(){ return(m_DepthBuffer); };              // Get the depth frame buffer
	float       GetZoomFactor() { return(m_ZoomFactor); };               // Get the video zoom factor
	POINT*      GetViewOffSet() { return(&m_ViewOffset); };              // Get the view offset
	HRESULT     GetClosestHint(FT_VECTOR3D* pHint3D);                    // Get neck/head hint points for the closest tracked skeleton

	bool        IsTracked(UINT skeletonId) { return(m_SkeletonTracked[skeletonId]);};     // Is this skeleton tracked?
	FT_VECTOR3D NeckPoint(UINT skeletonId) { return(m_NeckPoint[skeletonId]);};           // Neck position
	FT_VECTOR3D HeadPoint(UINT skeletonId) { return(m_HeadPoint[skeletonId]);};           // Head position

private:
	IFTImage*   m_VideoBuffer;                                                            // Video frame buffer
	IFTImage*   m_DepthBuffer;                                                            // Depth frame buffer
	FT_VECTOR3D m_NeckPoint[NUI_SKELETON_COUNT];
	FT_VECTOR3D m_HeadPoint[NUI_SKELETON_COUNT];
	bool        m_SkeletonTracked[NUI_SKELETON_COUNT];
	FLOAT       m_ZoomFactor;   // video frame zoom factor (it is 1.0f if there is no zoom)
	POINT       m_ViewOffset;   // Offset of the view from the top left corner.

	HANDLE      m_hNextDepthFrameEvent;
	HANDLE      m_hNextVideoFrameEvent;
	HANDLE      m_hNextSkeletonEvent;
	HANDLE      m_pDepthStreamHandle;
	HANDLE      m_pVideoStreamHandle;
	HANDLE      m_hThNuiProcess;
	HANDLE      m_hEvNuiProcessStop;

	bool        m_bNuiInitialized; 
	int         m_FramesTotal;
	int         m_SkeletonTotal;

	static DWORD WINAPI ProcessThread(PVOID pParam);                                         // Worker thread that pumps Kinect events
	void GotVideoAlert();
	void GotDepthAlert();
	void GotSkeletonAlert();
};

 

KinectSensor.cpp

//------------------------------------------------------------------------------
// <copyright file="KinectSensor.cpp" company="Microsoft">
//     Copyright (c) Microsoft Corporation.  All rights reserved.
// </copyright>
//------------------------------------------------------------------------------

#include "KinectSensor.h"
#include <math.h>


KinectSensor::KinectSensor()
{
	m_hNextDepthFrameEvent = NULL;    
	m_hNextVideoFrameEvent = NULL;
	m_hNextSkeletonEvent = NULL;
	m_pDepthStreamHandle = NULL;
	m_pVideoStreamHandle = NULL;
	m_hThNuiProcess=NULL;
	m_hEvNuiProcessStop=NULL;
	m_bNuiInitialized = false;
	m_FramesTotal = 0;
	m_SkeletonTotal = 0;
	m_VideoBuffer = NULL;
	m_DepthBuffer = NULL;
	m_ZoomFactor = 1.0f;
	m_ViewOffset.x = 0;
	m_ViewOffset.y = 0;
}


KinectSensor::~KinectSensor()
{
	Release();
}


HRESULT KinectSensor::GetVideoConfiguration(FT_CAMERA_CONFIG* videoConfig)
{
	if (!videoConfig)
	{
		return E_POINTER;
	}

	UINT width = m_VideoBuffer ? m_VideoBuffer->GetWidth() : 0;
	UINT height =  m_VideoBuffer ? m_VideoBuffer->GetHeight() : 0;
	FLOAT focalLength = 0.f;

	if(width == 640 && height == 480)
	{
		focalLength = NUI_CAMERA_COLOR_NOMINAL_FOCAL_LENGTH_IN_PIXELS;
	}
	else if(width == 1280 && height == 960)
	{
		focalLength = NUI_CAMERA_COLOR_NOMINAL_FOCAL_LENGTH_IN_PIXELS * 2.f;
	}

	if(focalLength == 0.f)
	{
		return E_UNEXPECTED;
	}


	videoConfig->FocalLength = focalLength;
	videoConfig->Width = width;
	videoConfig->Height = height;
	return(S_OK);
}

HRESULT KinectSensor::GetDepthConfiguration(FT_CAMERA_CONFIG* depthConfig)
{
	if (!depthConfig)
	{
		return E_POINTER;
	}

	UINT width = m_DepthBuffer ? m_DepthBuffer->GetWidth() : 0;
	UINT height =  m_DepthBuffer ? m_DepthBuffer->GetHeight() : 0;
	FLOAT focalLength = 0.f;

	if(width == 80 && height == 60)
	{
		focalLength = NUI_CAMERA_DEPTH_NOMINAL_FOCAL_LENGTH_IN_PIXELS / 4.f;
	}
	else if(width == 320 && height == 240)
	{
		focalLength = NUI_CAMERA_DEPTH_NOMINAL_FOCAL_LENGTH_IN_PIXELS;
	}
	else if(width == 640 && height == 480)
	{
		focalLength = NUI_CAMERA_DEPTH_NOMINAL_FOCAL_LENGTH_IN_PIXELS * 2.f;
	}

	if(focalLength == 0.f)
	{
		return E_UNEXPECTED;
	}

	depthConfig->FocalLength = focalLength;
	depthConfig->Width = width;
	depthConfig->Height = height;

	return S_OK;
}

HRESULT KinectSensor::Init(NUI_IMAGE_TYPE depthType, NUI_IMAGE_RESOLUTION depthRes, BOOL bNearMode, BOOL bFallbackToDefault, NUI_IMAGE_TYPE colorType, NUI_IMAGE_RESOLUTION colorRes, BOOL bSeatedSkeletonMode)
{
	HRESULT hr = E_UNEXPECTED;

	Release(); // Deal with double initializations.

	//do not support NUI_IMAGE_TYPE_COLOR_RAW_YUV for now
	if(colorType != NUI_IMAGE_TYPE_COLOR && colorType != NUI_IMAGE_TYPE_COLOR_YUV
		|| depthType != NUI_IMAGE_TYPE_DEPTH_AND_PLAYER_INDEX && depthType != NUI_IMAGE_TYPE_DEPTH)
	{
		return E_INVALIDARG;
	}

	m_VideoBuffer = FTCreateImage();
	if (!m_VideoBuffer)
	{
		return E_OUTOFMEMORY;
	}

	DWORD width = 0;
	DWORD height = 0;

	NuiImageResolutionToSize(colorRes, width, height);

	hr = m_VideoBuffer->Allocate(width, height, FTIMAGEFORMAT_UINT8_B8G8R8X8);
	if (FAILED(hr))
	{
		return hr;
	}

	m_DepthBuffer = FTCreateImage();
	if (!m_DepthBuffer)
	{
		return E_OUTOFMEMORY;
	}

	NuiImageResolutionToSize(depthRes, width, height);

	hr = m_DepthBuffer->Allocate(width, height, FTIMAGEFORMAT_UINT16_D13P3);
	if (FAILED(hr))
	{
		return hr;
	}

	m_FramesTotal = 0;
	m_SkeletonTotal = 0;

	for (int i = 0; i < NUI_SKELETON_COUNT; ++i)
	{
		m_HeadPoint[i] = m_NeckPoint[i] = FT_VECTOR3D(0, 0, 0);
		m_SkeletonTracked[i] = false;
	}

	m_hNextDepthFrameEvent = CreateEvent(NULL, TRUE, FALSE, NULL);
	m_hNextVideoFrameEvent = CreateEvent(NULL, TRUE, FALSE, NULL);
	m_hNextSkeletonEvent = CreateEvent(NULL, TRUE, FALSE, NULL);

	DWORD dwNuiInitDepthFlag = (depthType == NUI_IMAGE_TYPE_DEPTH)? NUI_INITIALIZE_FLAG_USES_DEPTH : NUI_INITIALIZE_FLAG_USES_DEPTH_AND_PLAYER_INDEX;

	hr = NuiInitialize(dwNuiInitDepthFlag | NUI_INITIALIZE_FLAG_USES_SKELETON | NUI_INITIALIZE_FLAG_USES_COLOR);
	if (FAILED(hr))
	{
		return hr;
	}
	m_bNuiInitialized = true;

	DWORD dwSkeletonFlags = NUI_SKELETON_TRACKING_FLAG_ENABLE_IN_NEAR_RANGE;
	if (bSeatedSkeletonMode)
	{
		dwSkeletonFlags |= NUI_SKELETON_TRACKING_FLAG_ENABLE_SEATED_SUPPORT;
	}
	hr = NuiSkeletonTrackingEnable( m_hNextSkeletonEvent, dwSkeletonFlags );
	if (FAILED(hr))
	{
		return hr;
	}

	hr = NuiImageStreamOpen(
		colorType,
		colorRes,
		0,
		2,
		m_hNextVideoFrameEvent,
		&m_pVideoStreamHandle );
	if (FAILED(hr))
	{
		return hr;
	}

	hr = NuiImageStreamOpen(
		depthType,
		depthRes,
		(bNearMode)? NUI_IMAGE_STREAM_FLAG_ENABLE_NEAR_MODE : 0,
		2,
		m_hNextDepthFrameEvent,
		&m_pDepthStreamHandle );
	if (FAILED(hr))
	{
		if(bNearMode && bFallbackToDefault)
		{
			hr = NuiImageStreamOpen(
				depthType,
				depthRes,
				0,
				2,
				m_hNextDepthFrameEvent,
				&m_pDepthStreamHandle );
		}

		if(FAILED(hr))
		{
			return hr;
		}
	}

	// Start the Nui processing thread
	m_hEvNuiProcessStop=CreateEvent(NULL,TRUE,FALSE,NULL);
	m_hThNuiProcess=CreateThread(NULL,0,ProcessThread,this,0,NULL);

	return hr;
}

void KinectSensor::Release()
{
	// Stop the Nui processing thread
	if(m_hEvNuiProcessStop!=NULL)
	{
		// Signal the thread
		SetEvent(m_hEvNuiProcessStop);

		// Wait for thread to stop
		if(m_hThNuiProcess!=NULL)
		{
			WaitForSingleObject(m_hThNuiProcess,INFINITE);
			CloseHandle(m_hThNuiProcess);
			m_hThNuiProcess = NULL;
		}
		CloseHandle(m_hEvNuiProcessStop);
		m_hEvNuiProcessStop = NULL;
	}

	if (m_bNuiInitialized)
	{
		NuiShutdown();
	}
	m_bNuiInitialized = false;

	if (m_hNextSkeletonEvent && m_hNextSkeletonEvent != INVALID_HANDLE_VALUE)
	{
		CloseHandle(m_hNextSkeletonEvent);
		m_hNextSkeletonEvent = NULL;
	}
	if (m_hNextDepthFrameEvent && m_hNextDepthFrameEvent != INVALID_HANDLE_VALUE)
	{
		CloseHandle(m_hNextDepthFrameEvent);
		m_hNextDepthFrameEvent = NULL;
	}
	if (m_hNextVideoFrameEvent && m_hNextVideoFrameEvent != INVALID_HANDLE_VALUE)
	{
		CloseHandle(m_hNextVideoFrameEvent);
		m_hNextVideoFrameEvent = NULL;
	}
	if (m_VideoBuffer)
	{
		m_VideoBuffer->Release();
		m_VideoBuffer = NULL;
	}
	if (m_DepthBuffer)
	{
		m_DepthBuffer->Release();
		m_DepthBuffer = NULL;
	}
}

DWORD WINAPI KinectSensor::ProcessThread(LPVOID pParam)
{
	KinectSensor*  pthis=(KinectSensor *) pParam;
	HANDLE          hEvents[4];

	// Configure events to be listened on
	hEvents[0]=pthis->m_hEvNuiProcessStop;
	hEvents[1]=pthis->m_hNextDepthFrameEvent;
	hEvents[2]=pthis->m_hNextVideoFrameEvent;
	hEvents[3]=pthis->m_hNextSkeletonEvent;

	// Main thread loop
	while (true)
	{
		// Wait for an event to be signaled
		WaitForMultipleObjects(sizeof(hEvents)/sizeof(hEvents[0]),hEvents,FALSE,100);

		// If the stop event is set, stop looping and exit
		if (WAIT_OBJECT_0 == WaitForSingleObject(pthis->m_hEvNuiProcessStop, 0))
		{
			break;
		}

		// Process signal events
		if (WAIT_OBJECT_0 == WaitForSingleObject(pthis->m_hNextDepthFrameEvent, 0))
		{
			pthis->GotDepthAlert();
			pthis->m_FramesTotal++;
		}
		if (WAIT_OBJECT_0 == WaitForSingleObject(pthis->m_hNextVideoFrameEvent, 0))
		{
			pthis->GotVideoAlert();
		}
		if (WAIT_OBJECT_0 == WaitForSingleObject(pthis->m_hNextSkeletonEvent, 0))
		{
			pthis->GotSkeletonAlert();
			pthis->m_SkeletonTotal++;
		}
	}

	return 0;
}

void KinectSensor::GotVideoAlert( )
{
	const NUI_IMAGE_FRAME* pImageFrame = NULL;

	HRESULT hr = NuiImageStreamGetNextFrame(m_pVideoStreamHandle, 0, &pImageFrame);
	if (FAILED(hr))
	{
		return;
	}

	INuiFrameTexture* pTexture = pImageFrame->pFrameTexture;
	NUI_LOCKED_RECT LockedRect;
	pTexture->LockRect(0, &LockedRect, NULL, 0);
	if (LockedRect.Pitch)
	{   // Copy video frame to face tracking
		memcpy(m_VideoBuffer->GetBuffer(), PBYTE(LockedRect.pBits), min(m_VideoBuffer->GetBufferSize(), UINT(pTexture->BufferLen())));
	}
	else
	{
		OutputDebugString("Buffer length of received texture is bogus\r\n");
	}

	hr = NuiImageStreamReleaseFrame(m_pVideoStreamHandle, pImageFrame);
}


void KinectSensor::GotDepthAlert( )
{
	const NUI_IMAGE_FRAME* pImageFrame = NULL;

	HRESULT hr = NuiImageStreamGetNextFrame(m_pDepthStreamHandle, 0, &pImageFrame);

	if (FAILED(hr))
	{
		return;
	}

	INuiFrameTexture* pTexture = pImageFrame->pFrameTexture;
	NUI_LOCKED_RECT LockedRect;
	pTexture->LockRect(0, &LockedRect, NULL, 0);
	if (LockedRect.Pitch)
	{   // Copy depth frame to face tracking
		memcpy(m_DepthBuffer->GetBuffer(), PBYTE(LockedRect.pBits), min(m_DepthBuffer->GetBufferSize(), UINT(pTexture->BufferLen())));
	}
	else
	{
		OutputDebugString( "Buffer length of received depth texture is bogus\r\n" );
	}

	hr = NuiImageStreamReleaseFrame(m_pDepthStreamHandle, pImageFrame);
}

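// Cache head and neck positions for every tracked skeleton; GetClosestHint() uses them as face-tracking hints.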
void KinectSensor::GotSkeletonAlert()
{
	NUI_SKELETON_FRAME SkeletonFrame = {0};

	HRESULT hr = NuiSkeletonGetNextFrame(0, &SkeletonFrame);
	if(FAILED(hr))
	{
		return;
	}

	for( int i = 0 ; i < NUI_SKELETON_COUNT ; i++ )
	{
		if( SkeletonFrame.SkeletonData[i].eTrackingState == NUI_SKELETON_TRACKED &&
			NUI_SKELETON_POSITION_TRACKED == SkeletonFrame.SkeletonData[i].eSkeletonPositionTrackingState[NUI_SKELETON_POSITION_HEAD] &&
			NUI_SKELETON_POSITION_TRACKED == SkeletonFrame.SkeletonData[i].eSkeletonPositionTrackingState[NUI_SKELETON_POSITION_SHOULDER_CENTER])
		{
			m_SkeletonTracked[i] = true;
			m_HeadPoint[i].x = SkeletonFrame.SkeletonData[i].SkeletonPositions[NUI_SKELETON_POSITION_HEAD].x;
			m_HeadPoint[i].y = SkeletonFrame.SkeletonData[i].SkeletonPositions[NUI_SKELETON_POSITION_HEAD].y;
			m_HeadPoint[i].z = SkeletonFrame.SkeletonData[i].SkeletonPositions[NUI_SKELETON_POSITION_HEAD].z;
			m_NeckPoint[i].x = SkeletonFrame.SkeletonData[i].SkeletonPositions[NUI_SKELETON_POSITION_SHOULDER_CENTER].x;
			m_NeckPoint[i].y = SkeletonFrame.SkeletonData[i].SkeletonPositions[NUI_SKELETON_POSITION_SHOULDER_CENTER].y;
			m_NeckPoint[i].z = SkeletonFrame.SkeletonData[i].SkeletonPositions[NUI_SKELETON_POSITION_SHOULDER_CENTER].z;
		}
		else
		{
			m_HeadPoint[i] = m_NeckPoint[i] = FT_VECTOR3D(0, 0, 0);
			m_SkeletonTracked[i] = false;
		}
	}
}

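// Pick a skeleton and return its neck/head points: if no previous hint is available
// (pHint3D[1] is zero), choose the skeleton closest to the camera; otherwise choose
// the one closest to the previous head position.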
HRESULT KinectSensor::GetClosestHint(FT_VECTOR3D* pHint3D)
{
	int selectedSkeleton = -1;
	float smallestDistance = 0;

	if (!pHint3D)
	{
		return(E_POINTER);
	}

	if (pHint3D[1].x == 0 && pHint3D[1].y == 0 && pHint3D[1].z == 0)
	{
		// Get the skeleton closest to the camera
		for (int i = 0 ; i < NUI_SKELETON_COUNT ; i++ )
		{
			if (m_SkeletonTracked[i] && (smallestDistance == 0 || m_HeadPoint[i].z < smallestDistance))
			{
				smallestDistance = m_HeadPoint[i].z;
				selectedSkeleton = i;
			}
		}
	}
	else
	{   // Get the skeleton closest to the previous position
		for (int i = 0 ; i < NUI_SKELETON_COUNT ; i++ )
		{
			if (m_SkeletonTracked[i])
			{
				float d = abs(m_HeadPoint[i].x - pHint3D[1].x) +
					abs(m_HeadPoint[i].y - pHint3D[1].y) +
					abs(m_HeadPoint[i].z - pHint3D[1].z);
				if (smallestDistance == 0 || d < smallestDistance)
				{
					smallestDistance = d;
					selectedSkeleton = i;
				}
			}
		}
	}
	if (selectedSkeleton == -1)
	{
		return E_FAIL;
	}

	pHint3D[0] = m_NeckPoint[selectedSkeleton];
	pHint3D[1] = m_HeadPoint[selectedSkeleton];

	return S_OK;
}

Once the Kinect initialization and face-tracking steps are in place (see the Face Tracking sample for these; they can be copied over almost directly), driving the animation comes down to two calls.
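For reference, here is a minimal setup sketch wiring the KinectSensor class above to the Face Tracking SDK. It assumes the standard FaceTrackLib calls (FTCreateFaceTracker, Initialize, CreateFTResult); the resolutions are just examples and error handling is trimmed:

// Minimal setup sketch (standard FaceTrackLib usage; not the post's exact code)
KinectSensor sensor;
HRESULT hr = sensor.Init(NUI_IMAGE_TYPE_DEPTH_AND_PLAYER_INDEX, NUI_IMAGE_RESOLUTION_320x240,
                         TRUE  /*near mode*/, TRUE /*fall back to default*/,
                         NUI_IMAGE_TYPE_COLOR, NUI_IMAGE_RESOLUTION_640x480,
                         FALSE /*seated skeleton*/);

IFTFaceTracker* pFaceTracker = FTCreateFaceTracker();   // create the tracker

FT_CAMERA_CONFIG videoConfig, depthConfig;              // feed it our camera setup
sensor.GetVideoConfiguration(&videoConfig);
sensor.GetDepthConfiguration(&depthConfig);
hr = pFaceTracker->Initialize(&videoConfig, &depthConfig, NULL, NULL);

IFTResult* pFTResult = NULL;                            // holds per-frame results
hr = pFaceTracker->CreateFTResult(&pFTResult);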

Use the Face Tracking API method GetAUCoefficients(FLOAT** ppCoefficients, UINT* pAUCount) to obtain the weight of each basis expression (the animation units, or AUs),

and use the Get3DPose() method to obtain the head's rotation.

Note that this rotation differs slightly from the usual convention (a coordinate-system orientation issue).
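Putting the two calls together, a per-frame sketch might look like this (the lastTrackSucceeded flag and hint handling are illustrative; the SDK calls themselves are standard):

// Per-frame tracking sketch; 'lastTrackSucceeded' should persist across frames.
FT_SENSOR_DATA sensorData(sensor.GetVideoBuffer(), sensor.GetDepthBuffer(),
                          sensor.GetZoomFactor(), sensor.GetViewOffSet());

// hint: [0]=neck, [1]=head; zero-initialized here, so GetClosestHint picks
// the skeleton closest to the camera.
FT_VECTOR3D hint[2] = { FT_VECTOR3D(0, 0, 0), FT_VECTOR3D(0, 0, 0) };
bool haveHint = SUCCEEDED(sensor.GetClosestHint(hint));

HRESULT hr;
if (!lastTrackSucceeded)        // (re)acquire the face -- expensive
    hr = pFaceTracker->StartTracking(&sensorData, NULL, haveHint ? hint : NULL, pFTResult);
else                            // cheap frame-to-frame tracking
    hr = pFaceTracker->ContinueTracking(&sensorData, haveHint ? hint : NULL, pFTResult);

lastTrackSucceeded = SUCCEEDED(hr) && SUCCEEDED(pFTResult->GetStatus());
if (lastTrackSucceeded)
{
    FLOAT* pAU = NULL;          // the 1.x SDK exposes 6 animation units
    UINT auCount = 0;
    pFTResult->GetAUCoefficients(&pAU, &auCount);

    FLOAT scale, rotationXYZ[3], translationXYZ[3];
    pFTResult->Get3DPose(&scale, rotationXYZ, translationXYZ);
    // rotationXYZ holds pitch/yaw/roll in degrees in the Kinect camera frame;
    // as noted above, signs/axes may need flipping for your engine's convention.
}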

Finally, read out the values in the pAU array and assign them in real time to the corresponding animation sliders in OGRE, and the whole Kinect-to-OGRE expression-animation pipeline is done. It's really quite simple once it's spelled out. When I was working on this program I couldn't find a good face model, and the people I begged for one flatly refused; looking back, they were probably just afraid others would see through their little tricks. My OGRE-side code is implemented rather clumsily, so I'm embarrassed to post it. Ask me if you need it, but it isn't hard to implement yourself.

The rough idea is to write your own class, placed in FacialAnimation, that initializes Kinect and tracks the face, and in real time reads out the animation-unit data and rotation information obtained from the GetAUCoefficients and Get3DPose methods. These are then passed to OGRE's updatePoseReference(pose index of the animation unit, weight value) to refresh the animation frame, followed by a call to _notifyDirty() to tell the engine to update.
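On the OGRE side, the pattern follows OGRE's own FacialAnimation sample, which this approach mirrors. A sketch under that assumption; the animation name "manual", the track handle 4, and the auToPose[] mapping table are illustrative, not the post's actual values:

using namespace Ogre;

// One-time setup: enable the manually driven pose animation and grab its
// keyframe (assumes pose references were already added, as in the sample).
AnimationState* manualAnimState = entity->getAnimationState("manual");
manualAnimState->setTimePosition(0);
manualAnimState->setEnabled(true);

VertexAnimationTrack* track =
    entity->getMesh()->getAnimation("manual")->getVertexTrack(4);
VertexPoseKeyFrame* keyFrame = track->getVertexPoseKeyFrame(0);

// Each frame: copy the Kinect AU coefficients onto the pose sliders.
// auToPose[] is a hypothetical AU-index -> pose-index table for your model;
// AUs live roughly in [-1, 1] while pose influences expect [0, 1], so clamp.
for (UINT i = 0; i < auCount; ++i)
{
    float w = pAU[i] < 0.0f ? 0.0f : (pAU[i] > 1.0f ? 1.0f : pAU[i]);
    keyFrame->updatePoseReference(auToPose[i], w);
}
manualAnimState->getParent()->_notifyDirty();   // tell OGRE the keyframe changed

// Head pose: Get3DPose reports degrees; the axis order/signs below are a guess
// you will likely need to adjust for your scene's coordinate system.
headNode->setOrientation(Quaternion(Degree(rotationXYZ[1]), Vector3::UNIT_Y) *
                         Quaternion(Degree(rotationXYZ[0]), Vector3::UNIT_X) *
                         Quaternion(Degree(rotationXYZ[2]), Vector3::UNIT_Z));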
 

Here are two experimental results. Because of the limitations of the model's own blendshape targets, the animation is not very pronounced: the left image shows a frown, the right an open mouth, and the head rotation closely matches my real head rotation. With an artist-built model and well-tuned blendshapes, the results would look much better.

 //This post may not be reproduced without permission; violators will be held accountable!
