Kinect RGB Demo v0.5.0

Following up on the calibration from the previous post; honestly, I still haven't fully figured out what this is.

Source: http://nicolas.burrus.name/index.php/Research/KinectRgbDemoV5

Kinect RGB Demo v0.5.0

Demo software to visualize, calibrate and process Kinect camera output

This software was partly developed at the RoboticsLab and aims to provide a simple toolkit to start playing with Kinect data and to develop standalone computer vision programs without the hassle of integrating existing libraries. The project is divided into a library called nestk and some demo programs using it. The library itself is easy to integrate into an existing project using CMake: just copy the nestk folder as a subfolder of your project and you should be able to start working with Kinect data. You can get more information on the nestk page.
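For a sense of what a minimal nestk program looks like, here is a grab-and-display sketch. The header and class names (ntk::KinectGrabber, ntk::RGBDImage, copyImageTo) are assumptions based on this release's source layout, not a documented API; check the nestk page before relying on them.

// Minimal nestk sketch: grab Kinect frames and show the color image.
// Assumption: header/class names follow the nestk 0.5 source layout.
#include <ntk/camera/kinect_grabber.h>
#include <ntk/camera/rgbd_image.h>
#include <opencv2/highgui/highgui.hpp>

int main()
{
    ntk::KinectGrabber grabber;     // libfreenect-based grabber (assumed name)
    grabber.initialize();
    grabber.start();

    ntk::RGBDImage image;
    while (true)
    {
        grabber.waitForNextFrame();
        grabber.copyImageTo(image); // copy of the latest RGB + depth frame
        cv::imshow("color", image.rgb());
        if (cv::waitKey(10) == 27)  // quit on ESC
            break;
    }
    return 0;
}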

Current features include:

  • Grab Kinect images and visualize / replay them
  • Support for libfreenect and OpenNI/Nite backends
  • Extract skeleton data / hand point position (Nite backend)
  • Integration with OpenCV and PCL
  • Calibrate the camera to get point clouds in metric space (libfreenect)
  • Export to Meshlab/Blender using .ply files
  • Demo of 3D scene reconstruction using a freehand Kinect
  • Demo of people detection and localization
  • Demo of gesture recognition and skeleton tracking using Nite
  • Linux, MacOSX and Windows support

Support

Please send your questions, patches, … to rgbdemo@googlegroups.com.

Download

New features since v0.4.0

  • OpenNI / Nite backend support. No more fun with the chessboard-based calibration, sorry. Thanks to Diederick/Roxlu for the initial CMake integration.
  • Basic skeleton / gesture support using Nite.
  • Much improved 3D freehand reconstruction with optional ICP refinement. Thanks to Cristobal Belles.
  • PCL integration.

You can have a look at the new 3D freehand reconstruction on the following video:

http://www.youtube.com/watch?v=Cldf7UdFq1k&feature=player_embedded#at=18

And at the people detection feature on the following video:

http://www.youtube.com/watch?v=nnCDOKLuu0g&feature=player_embedded

 

And a snapshot of the skeleton and hand point tracking here:

Running test programs from binaries

Mac binaries

You can get Mac OS X (Intel-only) binaries here: RGBDemo-0.5.0-Darwin.dmg (LGPL License).

Windows binaries

You can get Win32 binaries here: RGBDemo-0.5.0-Win32.zip (LGPL License).

  • You will have to install the OpenNI/Nite drivers. You can download them from the OpenNI website, or use the copy provided in the Drivers directory.
  • Important: you need to install OpenNI first, then SensorKinect, then Nite, in this order. You can find the license key for Nite on the same website. The free license for Kinect devices is 0KOIk2JeIBYClPWVnMoRKn5cdY4=

Compiling from source

Compilation on Linux (Ubuntu)
  • The source includes a copy of OpenCV since the Ubuntu packages are buggy. If you want to use an external OpenCV installation (>= 2.2), enable the USE_EXTERNAL_OPENCV flag in CMake or directly use the ./linux_configure_external_opencv.sh script.
  • Install required packages, e.g. on Ubuntu 10.10:
sudo apt-get install libboost-all-dev libusb-1.0-0-dev libqt4-dev libgtk2.0-dev cmake libglew1.5-dev libgsl0-dev libglut3-dev libxmu-dev

  • Untar the source, use the provided scripts to launch cmake and compile:
tar xvfz rgbdemo-0.5.0-Source.tar.gz
cd rgbdemo-0.5.0-Source
./linux_configure.sh
./linux_build.sh

Compilation on Mac

You will need:

  • An installation of Qt
  • Note: as of version 0.5.0, libusb is included in the source, so there is no need to install it.

Then run the following commands:

tar xvfz rgbdemo-0.5.0-Source.tar.gz
cd rgbdemo-0.5.0-Source
./macosx_configure.sh
./macosx_build.sh

The configure script might ask for libusb installation. Say yes if you don’t have it installed.

If you still experience some issues with libusb, or have a custom install, you can try:

cmake -DLIBUSB_1_INCLUDE_DIR=$HOME/libusb/include -DLIBUSB_1_LIBRARY=$HOME/libusb/lib/libusb-1.0.dylib build

assuming that you have it installed in $HOME/libusb.

Compilation on Windows

It has been tested with MinGW and Visual Studio 2010 so far. Note that the OpenNI backend is NOT available for MinGW.

You cannot use both the libfreenect and OpenNI backends on Windows. You have to choose one of them. By default, the OpenNI backend will be compiled.

If you want to compile with the libfreenect backend, you will first need to install the libfreenect drivers from OpenKinect Windows.
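In that case you would also point CMake at the libfreenect backend by turning off the NESTK_USE_OPENNI variable mentioned below, along these lines (a sketch; <path-to-source> stands for your rgbdemo source directory):

cmake -DNESTK_USE_OPENNI=0 <path-to-source>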

If you want to compile using Visual Studio 2008:

  • Install the Qt binaries for MSVC 2008.
  • Install OpenNI, SensorKinect, and Nite (in this order).
  • Add the Qt bin path to the Path environment variable, or specify the QMAKE path in CMake.
  • Run CMake.
  • Open the generated solution in Visual Studio.

If you want to compile using Visual Studio 2010:

  • Recompile Qt for MSVC 2010. The binaries provided for MSVC 2008 unfortunately do not work with VS 2010 (runtime error).
  • Install OpenNI, SensorKinect, and Nite (in this order).
  • Add the Qt bin path to the Path environment variable, or specify the QMAKE path in CMake.
  • Run CMake.
  • Open the generated solution in MSVC 2010.

Here is a step-by-step procedure for MinGW, in case you want to use libfreenect:

  • Install the Qt open source package for Windows. This will also install MinGW.
  • Add C:/Qt/2010.05/mingw/bin to the Path environment variable
  • Install and run CMake on rgbdemo
  • Disable the NESTK_USE_OPENNI CMake variable
  • Open the CMakeLists.txt in Qt Creator or compile manually using mingw32-make.

Running the viewer

  • Binaries are in the build/bin/ directory; you can give it a try without calibration using:
build/bin/rgbd-viewer

If you get an error such as:

libusb couldn't open USB device /dev/bus/usb/001/087: Permission denied.
libusb requires write access to USB device nodes.
FATAL failure: freenect_open_device() failed

Give access rights to your user with:

sudo chmod 666 /dev/bus/usb/001/087

Or install the udev rules provided by libfreenect.
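The libfreenect udev rules make this permanent by granting access per device ID. A minimal sketch of such a rule file is shown below; the file actually shipped by libfreenect may differ, but 045e is the Microsoft USB vendor ID and 02ae/02ad/02b0 are the Kinect camera, audio, and motor product IDs:

# /etc/udev/rules.d/51-kinect.rules (sketch; libfreenect's own file may differ)
SUBSYSTEM=="usb", ATTR{idVendor}=="045e", ATTR{idProduct}=="02ae", MODE="0666"   # camera
SUBSYSTEM=="usb", ATTR{idVendor}=="045e", ATTR{idProduct}=="02ad", MODE="0666"   # audio
SUBSYSTEM=="usb", ATTR{idVendor}=="045e", ATTR{idProduct}=="02b0", MODE="0666"   # motor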

Switching between backends

There are two supported backends for Kinect devices, libfreenect and OpenNI/Nite. By default, if the NESTK_USE_OPENNI CMake variable is enabled, demo programs will choose the OpenNI backend. If you want to switch to the libfreenect backend, you can use the --freenect command-line option:

build/bin/rgbd-viewer --freenect

High resolution mode

When using the OpenNI backend, you can enable the high-resolution RGB mode to get 1280×1024 color images at 10 Hz with the --highres option:

build/bin/rgbd-viewer --highres

Calibrating your Kinect (libfreenect backend)

Note: this is only necessary if you want to use the libfreenect backend.

A sample calibration file is provided in data/kinect_calibration.yml . However, you should be able to get a more accurate mapping by estimating new parameters for each Kinect. Below is the procedure I follow.

1. Build a calibration pattern as shown in KinectCalibration. You can use the Chessboard_A4.pdf or Chessboard_A3.pdf file in the data/ directory for this. I recommend printing the chessboard on a sheet of paper and gluing it onto a piece of cardboard. It is no longer necessary to cut the cardboard around the paper.

2. Grab some images of your chessboard using the viewer (File / Grab frame or Ctrl-G). WARNING: you need to grab images in Dual IR/RGB mode (enable it in the Capture menu). By default it will save them into directories grab1/view???? . These directories contain the raw files raw/color.png, raw/depth.yml, and raw/intensity.png, which correspond to the color image, the depth image (in meters), and the IR image normalized to grayscale. You will also get an additional raw/depth.png, which is the depth image normalized to grayscale.

To get an optimal calibration, grabbed images should ensure the following:

  • Cover as much of the image area as possible. In particular, check for coverage of the image corners.
  • Try to get the chessboard as close as possible to the camera to get better precision.
  • For depth calibration, you will need some images with IR and depth. But for stereo calibration, the depth information is not required, so feel free to cover the IR projector and get very close to the camera to better estimate IR intrinsics and stereo parameters. The calibration algorithm will automatically determine which grabbed images can be used for depth calibration.
  • Move the chessboard with various angles.
  • I usually grab a set of 30 images to average the errors.
  • Typical reprojection error is < 1 pixel. If you get significantly higher values, it means the calibration failed.

3. Run the calibration program:

build/bin/calibrate_kinect_ir --pattern-size 0.025 grab1

The pattern size corresponds to the size in meters of one chessboard square. It should be 0.025 (25 mm) for the A4 model.

This will generate the kinect_calibration.yml file storing the parameters for the viewer, and two files calibration_rgb.yaml and calibration_depth.yaml for ROS compatibility.
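If you want to use these parameters in your own programs, the yml file can be read back with OpenCV's cv::FileStorage. A minimal sketch, assuming the matrices are stored under keys such as rgb_intrinsics and depth_intrinsics (open your generated file to check the exact names):

// Read kinect_calibration.yml with OpenCV (the key names are assumptions).
#include <opencv2/core/core.hpp>
#include <cstdio>

int main()
{
    cv::FileStorage fs("kinect_calibration.yml", cv::FileStorage::READ);
    if (!fs.isOpened()) { std::printf("cannot open calibration file\n"); return 1; }

    cv::Mat rgb_intrinsics, depth_intrinsics, R, T;
    fs["rgb_intrinsics"] >> rgb_intrinsics;     // 3x3 color camera matrix
    fs["depth_intrinsics"] >> depth_intrinsics; // 3x3 IR/depth camera matrix
    fs["R"] >> R;                               // depth-to-color rotation
    fs["T"] >> T;                               // depth-to-color translation

    std::printf("rgb fx=%.1f fy=%.1f\n",
                rgb_intrinsics.at<double>(0, 0),
                rgb_intrinsics.at<double>(1, 1));
    return 0;
}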

Note for the Mac binaries: if there is a grab1 directory in the current directory, it will be loaded automatically.

Running the viewer with calibration
  • Just give it the path to the calibration file:
build/bin/rgbd-viewer --calibration kinect_calibration.yml

New since RGB Demo v0.4.0: if there is a kinect_calibration.yml file in the current directory, it will be loaded automatically.

New since RGB Demo v0.5.0: if you are using the OpenNI backend, the calibration parameters will be determined automatically.

  • You should get a window similar to this:
  • The main frame is the color-encoded depth image. By moving the mouse, you can see the distance in meters to a particular pixel. Images are now undistorted.
  • You can filter out some values and normalize the depth color range with the filter window (Show / Filters). The Edge filter is recommended.
  • You can get a very simple depth-threshold-based segmentation with Show / Object Detector.
  • You can get a 3D view in Show / 3D Window.
  • By default you get a grayscale point cloud. You can activate color:
  • And finally textured triangles:
  • You can also save the mesh using the Save current mesh button; it will be stored into a current_mesh.ply file that you can open with Meshlab:
  • The associated texture is written into a current_mesh.ply.texture.png file and can be loaded into the UV editor in Blender.

Getting Infrared Images

  • You can activate the IR mode in the capture menu. There is also a dual RGB/IR mode alternating between the two modes.

Note: this is currently only available with the libfreenect backend.

Moving the Tilt motor

This is only possible with the libfreenect backend. Open the Filters window and you can set the Kinect tilt on the bottom slider.

Replay mode

  • You can grab RGBD images using the File/Grab Frame command. This stores the files into viewXXXX directories (see the Calibration section), which can be replayed later using the fake image grabber. This can be activated using the --image option:
build/bin/rgbd-viewer --calibration kinect_calibration.yml --image grab1/view0000
  • You can also replay a sequence of images stored in a directory with the --directory option:
build/bin/rgbd-viewer --calibration kinect_calibration.yml --directory grab1

This will cycle through the set of viewXXXX images inside the grab1 directory.

Note: you will also need a calibration file if you used the OpenNI backend to grab the images. You can get one by running the viewer and selecting File/Save calibration parameters.

Interactive scene reconstruction

  • You can try an experimental interactive scene reconstruction mode using the build/bin/rgbd-reconstructor program. This is similar to the interactive mapping of Intel RGBD, but still at a preliminary stage. The relative pose between image captures is estimated using feature-point matching and least-squares minimization.

In this mode, point clouds are progressively aggregated into a single reference frame using a surfel representation.

  • Note: as of version 0.5.0, you can enable ICP refinement if NESTK_USE_PCL is enabled (on by default on Linux) by using the --icp option.
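To give an idea of what the ICP refinement stage does, here is a generic PCL sketch that aligns a new cloud against the reference cloud. This is plain pcl::IterativeClosestPoint usage, not the actual rgbd-reconstructor code:

// Generic PCL ICP refinement sketch (illustrative, not the rgbd-reconstructor code).
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/registration/icp.h>
#include <iostream>

int main()
{
    pcl::PointCloud<pcl::PointXYZ>::Ptr source(new pcl::PointCloud<pcl::PointXYZ>);
    pcl::PointCloud<pcl::PointXYZ>::Ptr target(new pcl::PointCloud<pcl::PointXYZ>);
    // ... fill source/target from two Kinect frames ...

    pcl::IterativeClosestPoint<pcl::PointXYZ, pcl::PointXYZ> icp;
    icp.setInputSource(source);   // cloud to align
    icp.setInputTarget(target);   // reference frame
    icp.setMaximumIterations(50);

    pcl::PointCloud<pcl::PointXYZ> aligned;
    icp.align(aligned);           // refine the initial feature-based pose estimate
    if (icp.hasConverged())
        std::cout << "refined pose:\n" << icp.getFinalTransformation() << std::endl;
    return 0;
}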

People detection

  • Launch rgbd-people-tracker. You need to specify a configuration file. Here is an example of a full command line:
build/bin/rgbd-people-tracker --calibration kinect_calibration.yml --config data/tracker_config.yml

Calibration and config files will be loaded automatically if they are in the current directory.

Body tracking and gesture recognition

  • Launch rgbd-skeletor.

If you make the calibration pose, you should be able to see your joints. If you are interested in a minimal body-tracking example, you can have a look at nestk/tests/test-nite.cpp, or at the sketch below. Enable the NESTK_BUILD_TESTS CMake variable to compile it.
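For reference, a stripped-down polling loop with the OpenNI 1.x C++ wrapper looks roughly like this. It assumes a user has already been detected and calibrated; the pose-detection and calibration callbacks, which test-nite.cpp handles properly, are omitted here:

// Minimal OpenNI 1.x skeleton polling sketch (calibration handshake omitted;
// see nestk/tests/test-nite.cpp for the complete flow).
#include <XnCppWrapper.h>
#include <cstdio>

int main()
{
    xn::Context context;
    if (context.Init() != XN_STATUS_OK) return 1;

    xn::UserGenerator users;
    if (users.Create(context) != XN_STATUS_OK) return 1;
    users.GetSkeletonCap().SetSkeletonProfile(XN_SKEL_PROFILE_ALL);

    context.StartGeneratingAll();
    for (;;)
    {
        context.WaitAndUpdateAll();
        XnUserID ids[8];
        XnUInt16 n = 8;
        users.GetUsers(ids, n);
        for (XnUInt16 i = 0; i < n; ++i)
        {
            if (!users.GetSkeletonCap().IsTracking(ids[i]))
                continue;
            XnSkeletonJointPosition head;
            users.GetSkeletonCap().GetSkeletonJointPosition(ids[i], XN_SKEL_HEAD, head);
            std::printf("user %u head at %.0f %.0f %.0f (mm)\n", ids[i],
                        head.position.X, head.position.Y, head.position.Z);
        }
    }
}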
