kl3m3n's blog


Using 2 Kinect sensors with calibrated Point Cloud in Labview

Klemen

This is an add-on to blog post #2, "Using Microsoft Kinect to visualize 3D objects with texture in LabVIEW in real-time".


The idea is basically the same, except that two Kinect sensors are used in LabVIEW simultaneously. Acquisition from both sensors is again based on the OpenNI and PCL libraries, but instead of a single cloud callback, two cloud callbacks are registered (one per sensor). This way, the calibrated X, Y, Z coordinates can be obtained from two different viewpoints.
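For illustration, here is a minimal C++ sketch of such a two-grabber setup using PCL's OpenNIGrabber. This is not necessarily the exact code behind this post; the device IDs ("#1", "#2"), the sensor numbering and the callback body are placeholders, and the hand-off of the data to LabVIEW is omitted:

#include <pcl/io/openni_grabber.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <boost/function.hpp>
#include <boost/bind.hpp>
#include <boost/thread/thread.hpp>

typedef pcl::PointCloud<pcl::PointXYZRGBA> Cloud;

// One callback per sensor; 'id' identifies which Kinect the cloud came from.
// What happens to the cloud here (e.g. copying X/Y/Z into buffers that the
// LabVIEW side reads) is application-specific and omitted.
void cloud_cb (const Cloud::ConstPtr &cloud, int id)
{
  // cloud->points[i].x/.y/.z are the calibrated coordinates (in meters),
  // expressed in the coordinate frame of sensor 'id'.
}

int main ()
{
  // "#1" and "#2" select the first and second OpenNI device on the bus.
  pcl::OpenNIGrabber grabber1 ("#1", pcl::OpenNIGrabber::OpenNI_QVGA_30Hz);
  pcl::OpenNIGrabber grabber2 ("#2", pcl::OpenNIGrabber::OpenNI_QVGA_30Hz);

  boost::function<void (const Cloud::ConstPtr&)> f1 = boost::bind (cloud_cb, _1, 1);
  boost::function<void (const Cloud::ConstPtr&)> f2 = boost::bind (cloud_cb, _1, 2);

  grabber1.registerCallback (f1);
  grabber2.registerCallback (f2);

  grabber1.start ();
  grabber2.start ();

  // Acquire for a while; each grabber runs its own acquisition thread,
  // so the two callbacks fire independently at roughly 30 fps each.
  boost::this_thread::sleep (boost::posix_time::seconds (30));

  grabber2.stop ();
  grabber1.stop ();
  return 0;
}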


Video 1 shows the calibrated depth images (Z coordinates) from two Kinect sensors at different viewpoints (QVGA resolution). Some interference is visible, since the projection patterns of the two sensors overlap. This can be effectively remedied, for example, with the approach described in:


Maimone, A. and Fuchs, H., "Reducing interference between multiple structured light depth sensors using motion," IEEE Virtual Reality, 2012, pp. 51-54.

Here is the YouTube link to the video: http://www.youtube.com/watch?v=-g7emWGKzHU


Video 1. Calibrated depth image from two Kinect sensors simultaneously.

(Image: merged.bmp)

Figure 1. Merged data from two Kinect sensors (this was a quick merge, so I didn't pay too much attention to it).
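For completeness, here is a minimal sketch of such a merge, assuming the extrinsic transform between the two sensors is known from calibration. The name T_21 (mapping points from sensor 2's frame into sensor 1's frame) is hypothetical, and the merge is a plain concatenation, matching the "quick merge" in Figure 1:

#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/common/transforms.h>
#include <Eigen/Geometry>

typedef pcl::PointCloud<pcl::PointXYZRGBA> Cloud;

// Bring the second sensor's cloud into the first sensor's coordinate frame
// and concatenate. T_21 is the (assumed known) extrinsic transform that maps
// points from sensor 2's frame to sensor 1's frame.
Cloud::Ptr merge_clouds (const Cloud::ConstPtr &cloud1,
                         const Cloud::ConstPtr &cloud2,
                         const Eigen::Affine3f &T_21)
{
  Cloud::Ptr cloud2_in_1 (new Cloud);
  pcl::transformPointCloud (*cloud2, *cloud2_in_1, T_21);

  Cloud::Ptr merged (new Cloud (*cloud1));
  *merged += *cloud2_in_1;  // plain concatenation, no blending or resampling
  return merged;
}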

P.S.: I have also tested this at VGA resolution, and both Kinects still easily achieve 30 fps.

Thanks for reading. Until next time…

Be creative.


https://decibel.ni.com/content/blogs/kl3m3n



"Kudos: Users may give one another Kudos on the forums for posts that they found particularly helpful or insightful."