Tuesday, March 30, 2010

Stereo vision: algorithms and applications

If you want an excellent review of the formulation of stereo vision and depth computation, with a survey of the state of the art, you should read this.

  "Stereo vision: algorithms and applications"

Source : Website of Stefano Mattoccia
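The review covers dense correspondence search at length. As a concrete illustration (mine, not taken from the review), here is a minimal sum-of-absolute-differences block matcher on a rectified pair, written in plain NumPy:

```python
import numpy as np

def block_match_disparity(left, right, block=5, max_disp=16):
    """Naive dense stereo: for each pixel of the rectified left image,
    find the horizontal shift of the right-image window that minimizes
    the sum of absolute differences (SAD)."""
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y - half:y + half + 1,
                         x - half:x + half + 1].astype(float)
            best_cost, best_d = np.inf, 0
            for d in range(max_disp):
                cand = right[y - half:y + half + 1,
                             x - d - half:x - d + half + 1].astype(float)
                cost = np.abs(patch - cand).sum()
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp

# Synthetic rectified pair: the right view is the left view shifted
# 4 pixels, so the true disparity is 4 everywhere.
rng = np.random.default_rng(0)
left = rng.integers(0, 255, size=(32, 64)).astype(np.uint8)
right = np.roll(left, -4, axis=1)
disp = block_match_disparity(left, right)
```

Real implementations replace the exhaustive window search with aggregation schemes and global optimization, which is exactly the taxonomy the review walks through.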

Tuesday, March 23, 2010

Photosculpt (From a stereo pair to dense Depth map)


I want to introduce PhotoSculpt, a piece of software that produces depth/normal/color maps from two images (a stereo pair). The process seems to rely on a multi-resolution approach (something like a dynamic-programming scheme, or ...). No more information is available about how it works.
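Since nothing official is published, this is pure speculation, but multi-resolution stereo usually means building an image pyramid and refining disparity from coarse to fine. A minimal pyramid sketch (an assumed generic scheme, not PhotoSculpt's actual method):

```python
import numpy as np

def downsample(img):
    """Halve the resolution with a 2x2 box filter (one pyramid level down)."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    img = img[:h, :w].astype(float)
    return (img[0::2, 0::2] + img[1::2, 0::2] +
            img[0::2, 1::2] + img[1::2, 1::2]) / 4.0

def pyramid(img, levels=3):
    """Image pyramid, finest level first."""
    out = [np.asarray(img, dtype=float)]
    for _ in range(levels - 1):
        out.append(downsample(out[-1]))
    return out

# A coarse-to-fine matcher estimates disparity on the coarsest level,
# then upsamples and refines it at each finer level, so the search
# range stays small even at full resolution.
img = np.arange(64 * 64, dtype=float).reshape(64, 64)
pyr = pyramid(img)
```

The attraction of such a scheme is speed and robustness: large displacements are resolved cheaply at low resolution, and each finer level only has to correct small residuals.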

Obviously the computed depth is not metric. The results look good, but the method does not work on every stereo pair: the two viewpoints seem to require an angular separation of 10 to 20°.

On others it works quite well (it's a pity that the software gets confused on small blobby details).



Source : PhotoSculpt and 3DVF


Friday, March 19, 2010

ICE Is Now 1.3.3 and Synthy

Microsoft ICE (Image Composite Editor) is Microsoft's free solution for computing panoramic images.

What's new in version 1.3.3 (a lot of stuff!):

  1. Accelerated stitching on multiple CPU cores
  2. Ability to publish, view, and share panoramas on the Photosynth web site
  3. Support for "structured panoramas" — panoramas consisting of hundreds of photos taken in a rectangular grid of rows and columns (usually by a robotic device like the GigaPan tripod heads)
  4. No image size limitation — stitch gigapixel panoramas
  5. Support for input images with 8 or 16 bits per component

Additional features

  1. State-of-the-art stitching engine
  2. Automatic exposure blending
  3. Choice of planar, cylindrical, or spherical projection
  4. Orientation tool for adjusting panorama rotation
  5. Automatic cropping to maximum image area
  6. Native support for 64-bit operating systems
  7. Wide range of output formats, including JPEG, TIFF, BMP, PNG, HD Photo, and Silverlight

So what's missing:

  1. Fisheye images are not supported
  2. 32-bit HDR
  3. Mac/Linux builds

Thursday, March 18, 2010

PS Move (the Wii-like sensor for the PS3)

Many names have circulated on the web for this object: Arc, Gem, Sphere. The official name is PlayStation Move. This accessory enables motion detection on the PS3 and was officially shown at GDC 2010. It is Sony's answer to Microsoft's Project Natal and the Nintendo Wii Remote.



Here are some videos showing the device at work:

The guy playing the FPS seems to have a rather boring experience: he barely moves at all, with no immersion in the action. What does it offer beyond a classic gamepad?


The following video is clearly more fun: real interaction is visible (thanks to the ball detection and sensor reactivity).

More interestingly, the players smile... which suggests a good user experience.

In some videos on the web we can see a player holding two "Move" pads, which shows that the camera can track several light balls, and therefore several "Move" devices, at once.


Wednesday, March 17, 2010

VisualSize comes back with PhotoModel3D

VisualSize shares its latest results, "PhotoModel3D", and compares them to the Bundler alternative.

They compare runtime and the number of 3D points computed.

They do not test the accuracy of the method; a lot of spurious 3D points seem to be estimated.

It looks like a Bundler + PMVS pipeline.

But it's cool to see alternatives to the Photosynth machine...



How Photosynth computes FOV

It's not so easy to get information about the core of Photosynth (how all the steps are done).

We all have ideas, but nothing verified.

The most interesting information can be found at http://getsatisfaction.com/livelabs, a forum where Microsoft people answer customer questions.

So, on the question of how Photosynth computes FOV (field of view) values: it comes from EXIF data, but with a little help. If the camera is known, they use an internal database of CCD sensor widths. A year ago they claimed to have about 600 camera references in their database (I can't imagine how many they have today).

From this information and some testing, we can deduce that Photosynth is not an auto-calibration process (if they cannot find the FOV, they start from a default value).

Your turn: can you find more information?

Source : GetSatisfaction

"We have a database with sensor sizes for about 600 cameras in the product now. We combine the sensor size with the focal length in the EXIF to compute the Field of View (FOV). Why do we need this? Imaging taking two photos of the same scene that look identical, one taken very near with wide angle and one taken further away with telephoto. By computing the FOV from the lens/camera combination we have a more accurate estimate of distance which improves the overall output of the synth. If we don't know the FOV we use a default starting point and the synther actually figures out what the FOV was as it pieces things together. This usually works fine but loosely connected synths do benefit from having this knowledge ahead of time.

Right now the sensor sizes are baked into the synther."
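The computation the quote describes is the standard pinhole relation between sensor width and focal length. A minimal sketch (the camera values below are hypothetical examples, not from Photosynth's database):

```python
import math

def horizontal_fov_deg(sensor_width_mm, focal_length_mm):
    """Pinhole relation: FOV = 2 * atan(sensor_width / (2 * focal_length)).
    The focal length comes from EXIF; the sensor width comes from a
    per-camera lookup table."""
    return math.degrees(2.0 * math.atan(sensor_width_mm / (2.0 * focal_length_mm)))

# Hypothetical example: a 36 mm wide (full-frame) sensor with a 36 mm lens.
fov = horizontal_fov_deg(36.0, 36.0)  # about 53.1 degrees
```

This is why the database matters: the EXIF focal length alone is ambiguous without knowing the physical sensor size, exactly as in the wide-angle-near versus telephoto-far example in the quote.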

An innovative curve-drawing tool

Curver allows you to draw curve-based shapes.

Instead of working with control points and tangents, the user draws curves and then manipulates them with tools (pushing them around, smoothing them, cutting and attaching them, modifying their thickness, painting masking weights, etc.).

I definitely love the approach. Being able to modify the thickness directly with a brush tool instead of a local thickness control point feels much more like real drawing.

Source : Gamedev and the author's blog


Monday, March 15, 2010

Natal effectiveness overview

Do you want to see how Natal handles quite complex moves?

It works well considering the low light. It seems to be "quite" real-time... but we clearly see some latency (especially when the guy turns around).

Source : Project Natal

The eyeWriter

The EyeWriter project is an ongoing collaborative research effort to empower people who are suffering from ALS (Amyotrophic lateral sclerosis) with creative technologies.

It is a low-cost eye-tracking apparatus & custom software that allows graffiti writers and artists with paralysis resulting from Amyotrophic lateral sclerosis to draw using only their eyes.

The Eyewriter from Evan Roth on Vimeo.

Source : eyewriter

Surface on your desk ! aka Mobile Surface

Between Natal and Microsoft Surface, Microsoft Research seems to be exploring new ways of interacting.

It's pretty cool to see this concept, based on a camera and a pico projector, that allows you to use any non-reflective surface as a virtual desk.


It seems to already work as a concept:

The result is quite nice and looks promising. Imagine modifying a schedule directly with your colleagues in the meeting room by viewing the overlap of each person's calendar... It could be seen as an ultra-portable way of sharing information and recording things.

Friday, March 5, 2010

3D live streaming using two web cameras

Will the future be full of 3D video streams... and will your computer have two webcam eyes?

Quoted text :

"NanoStream 3D live video encoder is a video software for 3D stereoscopic encoding from multiple camera views. The encoder is based on a modular architecture with direct show filters and codecs. It is available as development kit for integrating video encoding and streaming functionality into custom applications. It’s well worth noting that high performance video coding solutions for real time streaming and broadcast software can be installed on any PC."

Source : Fudzilla and NanoCosmos

But as long as the software does not offer camera calibration... we will only get a perception of depth, not a metric depth.
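The reason calibration matters can be stated in one line: with a calibrated rig, disparity converts to metric depth via the standard relation Z = f·B/d. A minimal sketch (the focal length, baseline, and disparity values below are hypothetical):

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Metric depth from a calibrated rectified rig: Z = f * B / d,
    with the focal length f in pixels, the baseline B in meters,
    and the disparity d in pixels."""
    return focal_px * baseline_m / disparity_px

# Hypothetical numbers: 700 px focal length, 6 cm baseline, 20 px disparity.
z = depth_from_disparity(20.0, focal_px=700.0, baseline_m=0.06)  # about 2.1 m
```

Without the focal length and baseline, the same disparities only give relative depth ordering, which is why an uncalibrated two-webcam stream yields a 3D impression but no measurable geometry.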


Wednesday, March 3, 2010

Towards Internet-scale Multi-view Stereo

Yasutaka Furukawa is back.

After PBA, PMVS, and PMVS2, he and his team show very good results on large image collections.


In previous approaches we noticed a lot of parasite points (easily visible in Photosynth: 3D points that result from bad correspondences...), but in this recent work they all seem to be filtered out. The only downside is that some planar surfaces were not reconstructed (windows, stained glass). The parts of the ground that are not reconstructed are, I think, due to detected moving people. So it's curious that, given the number of images used, the method cannot use a few shots to fill the remaining holes. On the other hand, the texture looks clean and detailed!

It looks like Daniel Martinec's round patches, but it's clearly more than that (see the thumbnail).


Source : Yasutaka Furukawa