KinectFusion: 3D Reconstruction and AR with Kinect

http://kheresy.wordpress.com/2011/08/27/kinectfusion%EF%BC%9A%E4%BD%BF%E7%94%A8-kinect-%E7%9A%84-3d-%E9%87%8D%E5%BB%BA%E4%BB%A5%E5%8F%8A-ar/


This is something Microsoft Research showed at this year's Siggraph, called "KinectFusion". As the name suggests, it uses the Kinect (introduced here before) to process the real-world scene it captures and fuse it with virtual objects. The official video and description page is at http://research.microsoft.com/apps/video/default.aspx?id=152815; below is the demo video on YouTube:

Basically, the program takes the depth data it is currently capturing and reconstructs a 3D model from it. As the Kinect moves, it continuously tracks the camera's pose (full 6DOF, and without needing feature points) and merges the new data into the current model; all of this is done in real time using GPGPU techniques. Also, according to the official description, it seems to store the scene data in a volumetric representation, which is quite interesting. Judging from the results in the video, both the continuous tracking of the Kinect and the 3D reconstruction look very well done!
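To make the "volumetric" part more concrete, below is a minimal sketch of how TSDF-style (truncated signed distance function) depth integration can work. The volume resolution, truncation distance, pinhole camera model, and function names are all illustrative assumptions on my part, not the actual KinectFusion implementation, which runs entirely on the GPU:

import numpy as np

VOXELS = 64           # voxels per axis (illustrative)
VOXEL_SIZE = 0.02     # metres per voxel (illustrative)
TRUNC = 0.06          # truncation distance in metres (illustrative)

tsdf = np.ones((VOXELS,) * 3, dtype=np.float32)     # signed distance field
weight = np.zeros((VOXELS,) * 3, dtype=np.float32)  # per-voxel weight

def integrate(depth, K, pose):
    """Fuse one depth frame (metres) into the volume.
    K is a 3x3 pinhole intrinsic matrix; pose maps world -> camera."""
    # World coordinates of every voxel centre.
    idx = np.indices((VOXELS,) * 3).reshape(3, -1).T
    pts = (idx + 0.5) * VOXEL_SIZE
    # Transform voxel centres into the camera frame.
    cam = pts @ pose[:3, :3].T + pose[:3, 3]
    z = cam[:, 2]
    zs = np.where(z > 1e-6, z, np.inf)  # guard against divide-by-zero
    # Project into the depth image.
    u = np.round(cam[:, 0] * K[0, 0] / zs + K[0, 2]).astype(int)
    v = np.round(cam[:, 1] * K[1, 1] / zs + K[1, 2]).astype(int)
    h, w = depth.shape
    ok = (z > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    d = np.zeros_like(z)
    d[ok] = depth[v[ok], u[ok]]
    # Signed distance along the viewing ray; skip voxels far behind
    # the observed surface, truncate the rest to [-1, 1].
    ok &= (d > 0) & (d - z > -TRUNC)
    sdf = np.clip((d - z) / TRUNC, -1.0, 1.0)
    # Weighted running average: each new frame refines the model.
    i, j, k = idx[ok].T
    w0 = weight[i, j, k]
    tsdf[i, j, k] = (tsdf[i, j, k] * w0 + sdf[ok]) / (w0 + 1.0)
    weight[i, j, k] = w0 + 1.0

Each depth frame just nudges a per-voxel running average, which is why new views keep refining the same single model instead of being stitched together as separate meshes.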

In the middle of the video (3:50), they also demonstrate adding virtual objects and running a physics simulation against the real environment, and the results look quite good! Further on (7:00), there is also a demo of doodling in the virtual environment with a finger; the alignment doesn't look perfectly accurate, but it's still pretty impressive. If this were used for AR games, it could be quite a breakthrough, though the hardware requirements would probably be very high. ^^"

Finally, the official description:

We present KinectFusion, a system that takes live depth data from a moving depth camera and in real-time creates high-quality 3D models. The system allows the user to scan a whole room and its contents within seconds. As the space is explored, new views of the scene and objects are revealed and these are fused into a single 3D model. The system continually tracks the 6DOF pose of the camera and rapidly builds a volumetric representation of arbitrary scenes.

Our technique for tracking is directly suited to the point-based depth data of Kinect, and requires no feature extraction or feature tracking. Once the 3D pose of the camera is known, each depth measurement from the sensor can be integrated into a volumetric representation. We describe the benefits of this representation over mesh-based approaches. In particular, the representation implicitly encodes predictions of the geometry of surfaces within a scene, which can be extracted readily from the volume. As the camera moves through the scene, new depth data can be added or removed from this volumetric representation, continually refining the 3D model acquired. We describe novel GPU-based implementations for both camera tracking and surface reconstruction. These take two well-understood methods from the computer vision and graphics literature as a starting point, defining new instantiations designed specifically for parallelizable GPGPU hardware. This allows for interactive real-time rates that have not previously been demonstrated.

We demonstrate the interactive possibilities enabled when high-quality 3D models can be acquired in real-time, including: extending multi-touch interactions to arbitrary surfaces; advanced features for augmented reality; real-time physics simulations of the dynamic model; novel methods for segmentation and tracking of scanned objects
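As a side note, the "no feature extraction" tracking the abstract mentions is an ICP-style alignment of the raw depth points against the model. Below is a rough sketch of a single point-to-plane ICP step under the usual small-angle linearisation; the correspondence search is omitted (the inputs are assumed to be already-matched point pairs), so this only illustrates the idea, not the paper's GPU implementation:

import numpy as np

def icp_step(src_pts, dst_pts, dst_normals):
    """One point-to-plane ICP step: returns a small 6DOF update
    (rx, ry, rz, tx, ty, tz) minimising sum(((R p + t - q) . n)^2)
    after linearising the rotation for small angles."""
    A = np.zeros((len(src_pts), 6))
    b = np.zeros(len(src_pts))
    for row, (p, q, n) in enumerate(zip(src_pts, dst_pts, dst_normals)):
        A[row, :3] = np.cross(p, n)   # rotational part of the Jacobian
        A[row, 3:] = n                # translational part
        b[row] = np.dot(n, q - p)     # point-to-plane residual
    # Solve the small least-squares system for the pose increment.
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

Every row of A and b depends on one point pair only, so they can all be built independently, which is part of what makes this kind of tracking such a good fit for parallel GPGPU hardware.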


Basically, they have already done one of the things Heresy wanted to do with Kinect, and done it better than Heresy expected... In a way, that was probably inevitable... Still, it's a bit depressing when you think about it: one consequence of globalization is that unless you are a top-tier talent, anything you can think of has probably already been done by someone else, and done better than you could... orz

