A bunch of news about Computer vision, Computer Graphics, GPGPU or the mix of the three....
If you want an excellent review of the formulation of stereo vision / depth computation, together with a state-of-the-art overview, you should read this.
"Stereo vision: algorithms and applications"
Source : Website of Stefano Mattoccia
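As a quick illustration of the kind of local block-matching baseline covered in such tutorials, here is a minimal disparity-map sketch with OpenCV (the file names and parameters below are placeholders, not taken from the tutorial):

import cv2

# Rectified stereo pair (placeholder file names)
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Classic local block matching: 64 disparity levels, 15x15 window
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right)  # int16, disparity values scaled by 16

# Rescale for visualization and save
vis = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX).astype("uint8")
cv2.imwrite("disparity.png", vis)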
Microsoft ICE is Microsoft's free solution for computing panoramic images.
What's new in version 1.3.3 (a lot of stuff!)
Additional features
So what is missing:
Many names have been seen on the web for this object: Arc, Gem, Sphere. But the official name is PlayStation Move. This accessory brings motion detection to the PS3 and was officially shown at GDC 2010. It is Sony's response to Microsoft's Project Natal and the Nintendo Wii Remote.
Here are some videos showing the device at work:
The guy playing the FPS seems to have a rather boring experience: he barely moves at all, there is no immersion in the action. What does it bring beyond a classic gamepad?
The following video is clearly more fun: a real interaction is visible (beyond the ball detection and sensor reactivity).
More interestingly, the player smiles, which suggests a good user experience.
In some videos on the web we can see two "Move" controllers in the player's hands, which shows that the camera can track differently colored light balls, and therefore several "Move" devices.
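Just to illustrate the general idea of tracking a glowing colored ball with a camera (this is not Sony's actual algorithm, and the HSV color range below is a guess), a minimal OpenCV sketch could look like this:

import cv2

cap = cv2.VideoCapture(0)  # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # Keep only saturated, bright pink-ish pixels (hypothetical range for one ball color)
    mask = cv2.inRange(hsv, (140, 100, 150), (170, 255, 255))
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
    if contours:
        # Fit a circle around the biggest blob; its radius also hints at the distance
        (x, y), radius = cv2.minEnclosingCircle(max(contours, key=cv2.contourArea))
        cv2.circle(frame, (int(x), int(y)), int(radius), (0, 255, 0), 2)
    cv2.imshow("ball tracking sketch", frame)
    if cv2.waitKey(1) == 27:  # Esc to quit
        break
cap.release()

Tracking a second ball would simply mean adding a second color range, which fits what the videos suggest about the camera distinguishing several devices.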
VisualSize shares their latest results, "PhotoModel3D", and compares them to the Bundler alternative.
They compare runtime and the number of 3D points computed.
They cannot test the accuracy of the method. A lot of bad 3D points seem to be estimated.
It looks like a Bundler + PMVS combination.
But it's cool to see an alternative to the PhotoSynth machine...
It's not so easy to get information about the core of Photosynth (how all the steps are done).
We all have ideas, but without real certainty.
The most interesting information can be found on http://getsatisfaction.com/livelabs, a forum where Microsoft people answer customer questions.
So, on the question of how Photosynth computes FOV values (Field of View): it certainly comes from EXIF data, but with a little help. If the camera is known, they use an internal database of CCD sensor widths. A year ago they claimed to have 600 camera references in their database (I can only imagine how many they have today).
From this information and these tests, we can deduce that Photosynth is not an auto-calibration process (if they do not find the FOV they use an arbitrary value).
Your turn: can you find more information?
Source : GetSatisfaction
"We have a database with sensor sizes for about 600 cameras in the product now. We combine the sensor size with the focal length in the EXIF to compute the Field of View (FOV). Why do we need this? Imaging taking two photos of the same scene that look identical, one taken very near with wide angle and one taken further away with telephoto. By computing the FOV from the lens/camera combination we have a more accurate estimate of distance which improves the overall output of the synth. If we don't know the FOV we use a default starting point and the synther actually figures out what the FOV was as it pieces things together. This usually works fine but loosely connected synths do benefit from having this knowledge ahead of time.
Right now the sensor sizes are baked into the synther."
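In other words, a minimal sketch of the computation described in the quote could look like this (the sensor-width entry is just an example value, not an excerpt from Microsoft's database):

import math

SENSOR_WIDTH_MM = {"Canon PowerShot G9": 7.6}  # example entry, hypothetical table

def horizontal_fov_degrees(camera_model, focal_length_mm, default_fov=50.0):
    width = SENSOR_WIDTH_MM.get(camera_model)
    if width is None:
        return default_fov  # unknown camera: fall back to an arbitrary starting value
    return math.degrees(2.0 * math.atan(width / (2.0 * focal_length_mm)))

print(horizontal_fov_degrees("Canon PowerShot G9", 7.4))  # about 54 degrees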
Curver allows you to draw curve-based shapes.
Instead of working with control points and tangents, the user draws curves and then manipulates them with tools (pushing them around, smoothing them, cutting and attaching them, modifying their thickness, painting masking weights, etc.).
I definitely love the approach. Being able to modify the thickness directly with a brush tool, instead of through a local thickness control point, makes it feel more like real drawing.
Source : Gamedev The Author Blog
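This is not Curver's actual code, but the "push the curve around and smooth it" style of editing can be sketched with something as simple as Laplacian smoothing of the drawn polyline:

def smooth_stroke(points, strength=0.5, passes=5):
    # Pull each interior point toward the average of its two neighbours
    pts = [list(p) for p in points]
    for _ in range(passes):
        for i in range(1, len(pts) - 1):   # keep the endpoints fixed
            for d in (0, 1):               # x and y coordinates
                avg = 0.5 * (pts[i - 1][d] + pts[i + 1][d])
                pts[i][d] += strength * (avg - pts[i][d])
    return [tuple(p) for p in pts]

print(smooth_stroke([(0, 0), (1, 3), (2, 0), (3, 3), (4, 0)]))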
Do you want to see Natal working on quite complex moves:
It works well considering the low light. It seems to be "quite" real time... but we clearly see some latency (especially when the guy turns around).
Source : Project Natal
The EyeWriter project is an ongoing collaborative research effort to empower people who are suffering from ALS (Amyotrophic lateral sclerosis) with creative technologies.
It is a low-cost eye-tracking apparatus & custom software that allows graffiti writers and artists with paralysis resulting from ALS to draw using only their eyes.
The Eyewriter from Evan Roth on Vimeo.
Source : eyewriter
Between Natal and Microsoft Surface, Microsoft Research seems to be exploring new ways of interaction.
It's pretty cool to see these concepts, based on a camera and a pico projector, that let you use any non-reflective surface as a virtual desk.
It seems to already work as a concept:
The result is quite nice and looks promising. Imagine being able to modify a schedule directly with your colleague in the meeting room by viewing the overlap of each person's calendar... It could be seen as an ultra-portable way of sharing information and recording things.
Will the future be full of 3D video streams? And will your computer have two webcam eyes?
Quoted text :
"NanoStream 3D live video encoder is a video software for 3D stereoscopic encoding from multiple camera views. The encoder is based on a modular architecture with direct show filters and codecs. It is available as development kit for integrating video encoding and streaming functionality into custom applications. It’s well worth noting that high performance video coding solutions for real time streaming and broadcast software can be installed on any PC."
Source : Fudzilla and NanoCosmos
But as long as the software does not offer camera calibration... we will only have a perception of depth, not a metric depth.
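That is why calibration matters: with a calibrated and rectified stereo pair, the classic relation Z = f * B / d turns a pixel disparity d into a metric depth Z; without the focal length f (in pixels) and the baseline B (in metres), disparity alone only gives relative depth. The values below are purely illustrative:

def depth_from_disparity(disparity_px, focal_px=700.0, baseline_m=0.06):
    if disparity_px <= 0:
        return float("inf")  # zero disparity: point at infinity (or an invalid match)
    return focal_px * baseline_m / disparity_px

print(depth_from_disparity(35.0))  # 700 * 0.06 / 35 = 1.2 metres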
Yasutaka Furukawa is back.
After PBA, PMVS and PMVS2, he and his team show very good results on large image collections.
In previous approaches we noticed a lot of spurious points (easily visible in Photosynth: 3D points that result from bad correspondences), but in this recent work they all seem to be filtered out. The only downside is that some planar surfaces were not reconstructed (windows, stained glass). The parts of the ground that are not reconstructed are, I think, due to detected moving people. So it is curious that, with the number of images used, the method is not able to use a few shots to fill the remaining holes. On the other hand, the texture looks clean and detailed!
It looks like Daniel Martinec's round patches, but it's clearly more than that (see the thumbnail).
Source : Yasutaka Furukawa