Wednesday, January 26, 2011


RGB-D: Techniques and Usages for Kinect-Style Depth Cameras

The RGB-D project is a joint research effort between Intel Labs Seattle and the University of Washington Department of Computer Science & Engineering. The goal of this project is to develop techniques that enable future use cases of depth cameras. Using the PrimeSense* depth cameras underlying the Kinect* technology, we've been working on areas ranging from 3D modeling of indoor environments to interactive projection systems, object recognition, and robotic manipulation and interaction.
Below you will find a list of videos illustrating our work. More detailed technical background can be found in our research areas and at the UW Robotics and State Estimation Lab. Enjoy!

3D Modeling of Indoor Environments

Depth cameras provide a stream of color images along with a depth value for every pixel. They can be used to generate 3D maps of indoor environments. Here we show some 3D maps built in our lab. What you see is not the raw data collected by the camera, but a walkthrough of the model generated by our mapping technique. The maps are not complete; they were generated by simply carrying a depth camera through the lab and aligning the data into a globally consistent model using statistical estimation techniques.
3D indoor models could be used to automatically generate architectural drawings, enable virtual flythroughs for real estate, or support remodeling and furniture shopping by inserting 3D furniture models into the map.
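The starting point for any such map is turning each depth image into a cloud of 3D points. Below is a minimal sketch of that step using the standard pinhole camera model; the intrinsic parameters (fx, fy, cx, cy) are hypothetical values roughly in the range reported for Kinect-style 640x480 depth sensors, not figures from the project itself.

```python
import numpy as np

def backproject(depth, fx, fy, cx, cy):
    """Convert a depth image (in meters) into an H x W x 3 array of 3D points.

    Pinhole model: x = (u - cx) * z / fx, y = (v - cy) * z / fy, z = depth.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1)

# Hypothetical intrinsics for a 640x480 depth sensor.
depth = np.full((480, 640), 2.0)  # a flat wall 2 m in front of the camera
points = backproject(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
print(points[239, 319])  # center pixel maps to a point near (0, 0, 2)
```

Point clouds from consecutive frames can then be aligned against each other and fused into the globally consistent model described above.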

Interactive Flythrough

Here we show interactive navigation through a 3D model. The visualization can be done in stereoscopic 3D using shutter glasses (just like in the movie Avatar). The system uses a depth camera to control the navigation.

3D Mapping

This video demonstrates the mapping process. Shown is a top view of the 3D map generated by walking with the depth camera through the lab. The system automatically estimates the motion of the camera and detects loop closures, which help it to globally align the camera frames. No external information or sensor is used.
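To give a feel for why loop closures matter, here is a toy 1-D pose-graph sketch: frame-to-frame motion estimates accumulate drift, and a single loop-closure constraint pulls the trajectory back into global consistency via least squares. The measurements and weights are made up for illustration; this is not the project's actual estimator, which operates on full 6-DOF camera poses.

```python
import numpy as np

# Five camera poses along a corridor. Odometry (frame-to-frame motion)
# drifts; a loop closure says pose 4 should coincide with pose 0.
odometry = [1.1, 1.0, 0.9, 1.1]  # measured steps between consecutive poses
loop = (0, 4, 0.0)               # measured offset from pose 0 to pose 4

n = 5
rows, rhs = [], []
for i, m in enumerate(odometry):      # constraint: x[i+1] - x[i] = m
    r = np.zeros(n); r[i], r[i + 1] = -1.0, 1.0
    rows.append(r); rhs.append(m)
i, j, m = loop                        # constraint: x[j] - x[i] = m
r = np.zeros(n); r[i], r[j] = -1.0, 1.0
rows.append(r); rhs.append(m)
r = np.zeros(n); r[0] = 1.0           # anchor the first pose at the origin
rows.append(r); rhs.append(0.0)

A, b = np.array(rows), np.array(rhs)
x, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.round(x, 3))  # pose 4 is pulled back toward pose 0
```

With odometry alone, pose 4 would end up at 4.1 m despite the loop; the closure constraint distributes that error across all the steps, which is the same principle the full 3D system uses to globally align camera frames.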

Interactive Mapping

We want to enable novice users to build 3D maps with depth cameras. Here you see our interactive mapping system. The system processes the depth camera data in real time and warns the user when the collected data is not suitable for a map. The approach also suggests areas that have not yet been modeled appropriately.
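A minimal sketch of the kind of per-frame check such a system might run is shown below: warn when a depth frame has too few valid measurements to align reliably. The threshold and the validity convention (0 meaning "no reading", as in Kinect-style sensors) are assumptions for illustration, not details of the project's actual criteria.

```python
import numpy as np

def frame_ok(depth, min_valid_fraction=0.5):
    """Return True if enough pixels carry a valid (nonzero) depth reading."""
    valid = depth > 0
    return bool(valid.mean() >= min_valid_fraction)

good = np.full((480, 640), 1.5)              # a wall of valid readings
bad = np.zeros((480, 640)); bad[:100] = 1.5  # mostly missing depth
print(frame_ok(good), frame_ok(bad))         # prints: True False
```

A real system would combine several such signals (valid-depth coverage, texture, overlap with the existing map) before warning the user.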

To learn more: Intel Labs RGBD
