This guide by Hany Farid gives you a mathematical viewpoint on the common operations used in image processing.
It's a good introduction.
Learn about the fundamentals of signal and image processing built upon a unifying linear algebraic framework.
http://www.cs.dartmouth.edu/farid/tutorials/fip.pdf
Other short guides are available here: http://www.cs.dartmouth.edu/farid/tutorials/
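To give a flavor of that linear-algebraic view (my own toy example, not taken from the tutorial): 1-D convolution is a linear operation, so it can be written as multiplication by a Toeplitz matrix built from the kernel.

```python
import numpy as np

# 1-D convolution as a matrix-vector product: column j of the matrix
# holds the kernel shifted down by j rows, so C @ signal equals the
# "full" convolution of signal and kernel.
signal = np.array([1.0, 2.0, 3.0, 4.0])
kernel = np.array([0.25, 0.5, 0.25])   # a simple blur kernel

n, k = len(signal), len(kernel)
C = np.zeros((n + k - 1, n))
for j in range(n):
    C[j:j + k, j] = kernel

# Sanity check against numpy's direct convolution.
assert np.allclose(C @ signal, np.convolve(signal, kernel))
print(C @ signal)
```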
A bunch of news about Computer Vision, Computer Graphics, GPGPU, or a mix of the three...
Thursday, December 30, 2010
Tuesday, December 28, 2010
Sixth Sense interface
I really like the concept at the 3-minute mark, where a white page with a logo is used as a tracking pattern (a generic sketch of such pattern tracking follows the quote below).
It's like reverse motion capture (a computer vision gyroscope!).
SixthSense from Fluid Interfaces on Vimeo.
Found here: http://vimeo.com/17567999
"'SixthSense' is a wearable gestural interface that augments the physical world around us with digital information and lets us use natural hand gestures to interact with that information. By using a camera and a tiny projector mounted in a pendant like wearable device, 'SixthSense' sees what you see and visually augments any surfaces or objects we are interacting with. It projects information onto surfaces, walls, and physical objects around us, and lets us interact with the projected information through natural hand gestures, arm movements, or our interaction with the object itself. 'SixthSense' attempts to free information from its confines by seamlessly integrating it with reality, and thus making the entire world your computer."
Friday, December 17, 2010
Bing Panorama generator
The Microsoft Bing mobile application now enables phones to generate panoramas, with a live sphere preview!
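For a taste of how panoramas are generated in general, OpenCV's high-level stitcher works as below. This is just an accessible stand-in, certainly not Bing's actual pipeline, and the image file names are placeholders.

```python
import cv2

# Minimal panorama stitching sketch with OpenCV's high-level API.
# 'img1.jpg' etc. are placeholder names for overlapping photos.
names = ["img1.jpg", "img2.jpg", "img3.jpg"]
images = [img for img in (cv2.imread(n) for n in names) if img is not None]

stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
status, pano = stitcher.stitch(images)

if status == cv2.Stitcher_OK:
    cv2.imwrite("panorama.jpg", pano)
else:
    print("stitching failed, status:", status)
```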
Thursday, December 16, 2010
Kinect body gesture recognition
As everyone knows, Microsoft does not provide its tech for recognizing body movement.
But open SDKs already exist and can be used!
I suggest everyone look at OpenNI's skeleton detection.
And a demo of the skeleton tracking output!
And Kinect with skeletal tracking from OpenNI
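Once a tracker like OpenNI reports 3-D joint positions, simple gestures reduce to geometry on those joints. A toy sketch with made-up coordinates (not the OpenNI API itself):

```python
import numpy as np

# Toy gesture test on 3-D joint positions, the kind of data a skeleton
# tracker such as OpenNI reports. The coordinates below are made up.
joints = {
    "head":       np.array([0.00, 1.70, 2.0]),   # meters; y points up
    "right_hand": np.array([0.30, 1.85, 2.0]),
}

def hand_raised(joints, margin=0.05):
    """True if the right hand is above the head by at least `margin` m."""
    return joints["right_hand"][1] > joints["head"][1] + margin

print("hand raised:", hand_raised(joints))
```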
Wednesday, December 15, 2010
Visual SLAM with Kinect
This video shows visual SLAM with the Kinect camera: a kind of pose registration, with scene completion from each new 3D point cloud.
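The pose-registration step boils down to estimating a rigid transform between point clouds. With known correspondences, the least-squares solution is the classic SVD-based Kabsch method, which is also the inner step of ICP. A minimal numpy sketch (not the code from the video):

```python
import numpy as np

def rigid_transform(P, Q):
    """Least-squares rigid transform (R, t) mapping points P onto Q,
    assuming row i of P corresponds to row i of Q (Kabsch method)."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                 # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t

# Self-check: recover a known rotation and translation.
rng = np.random.default_rng(0)
P = rng.normal(size=(100, 3))
a = 0.3
Rz = np.array([[np.cos(a), -np.sin(a), 0.0],
               [np.sin(a),  np.cos(a), 0.0],
               [0.0,        0.0,       1.0]])
Q = P @ Rz.T + np.array([0.1, -0.2, 0.5])
R, t = rigid_transform(P, Q)
assert np.allclose(R, Rz) and np.allclose(t, [0.1, -0.2, 0.5])
```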
Tuesday, December 14, 2010
3D Reconstruction with Kinect
This video shows the first results for 3D object reconstruction using the depth images from the Microsoft Kinect camera.
Data acquisition is as simple as moving the Kinect around the object of interest.
From there, the raw data is processed in a two-step algorithm (a rough sketch of step 1 follows the list):
1. Superresolution images are computed from several consecutive captured frames.
2. The superresolution images are aligned using an improved version of the global alignment technique from "3D Shape Scanning with a Time-of-Flight Camera", Yan Cui et al., CVPR 2010.
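As rough intuition for step 1 only (the paper's actual superresolution method is more involved): averaging N registered depth frames of a static scene already cuts per-pixel sensor noise by about √N.

```python
import numpy as np

# Crude stand-in for step 1: fuse several consecutive depth frames of a
# (nearly) static scene by averaging. Real superresolution also registers
# the frames and reconstructs on a finer grid.
rng = np.random.default_rng(1)
true_depth = np.full((480, 640), 1.5)          # a flat wall at 1.5 m
frames = [true_depth + rng.normal(0.0, 0.01, true_depth.shape)
          for _ in range(8)]                   # 8 noisy Kinect-like frames

fused = np.mean(frames, axis=0)
print("single-frame noise:", np.std(frames[0] - true_depth))
print("fused noise:       ", np.std(fused - true_depth))   # ~1/sqrt(8) lower
```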
Laboratory : Lab
Wednesday, December 8, 2010
gDebugger is now FREE
gDEBugger GL is an advanced OpenGL debugger, profiler, and graphics memory analyzer, which traces application activity on top of the OpenGL API to provide the information you need to find bugs and to optimize OpenGL application performance. And it is now available for free!
Download
Free licence here
Source : ozone3D
Tuesday, December 7, 2010
Virtual Plastic Surgery
Using an innovative VECTRA® 3-dimensional camera and Sculptor™ software, developed by Canfield Imaging Systems (Fairfield, NJ), Dr. Simoni can provide plastic surgery patients with life-like simulations of their faces in 3-dimensional space.
Photogrammetry techniques, for better or for worse...
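The core of such photogrammetry is triangulation: recovering a 3-D point from its projections in two calibrated cameras. A minimal linear (DLT) sketch, unrelated to Canfield's actual software:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3-D point from two views.
    P1, P2 are 3x4 projection matrices; x1, x2 are 2-D image points."""
    A = np.vstack([x1[0] * P1[2] - P1[0],
                   x1[1] * P1[2] - P1[1],
                   x2[0] * P2[2] - P2[0],
                   x2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                     # null vector of A
    return X[:3] / X[3]            # dehomogenize

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Self-check with two toy cameras one meter apart.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.2, -0.1, 3.0])
X_est = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
assert np.allclose(X_est, X_true)
```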
Source : here
Similar software may one day exist for breasts as well; remember the breast implant simulation software: http://smallideasforabigworld.blogspot.com/2009/11/breast-implant-simulator.html
Labels:
Computer Graphics,
computer vision,
Tech demo
Kinect on a quadrotor (UAV)
This video shows the use of a Kinect as a radar-like range sensor on an autonomous quadrotor UAV.
More info here :
Hybrid Systems Laboratory http://hybrid.eecs.berkeley.edu/
This work is part of the STARMAC Project in the Hybrid Systems Lab at UC Berkeley (EECS department). http://hybrid.eecs.berkeley.edu/
Researcher: Patrick Bouffard
PI: Prof. Claire Tomlin
Our lab's Ascending Technologies [1] Pelican quadrotor, flying autonomously and avoiding obstacles.
The attached Microsoft Kinect [2] delivers a point cloud to the onboard computer via the ROS [3] kinect driver, which uses the OpenKinect/Freenect [4] project's driver for hardware access. A sample consensus algorithm [5] fits a planar model to the points on the floor, and this planar model is fed into the controller as the sensed altitude. All processing is done on the on-board 1.6 GHz Intel Atom based computer, running Linux (Ubuntu 10.04). (A toy sketch of this kind of plane fit appears after the reference list below.)
A VICON [6] motion capture system is used to provide the other necessary degrees of freedom (lateral and yaw) and acts as a safety backup to the Kinect altitude--in case of a dropout in the altitude reading from the Kinect data, the VICON based reading is used instead. In this video however, the safety backup was not needed.
[1] http://www.asctec.de
[2] http://www.microsoft.com
[3] http://www.ros.org/wiki/kinect
[4] http://openkinect.org
[5] http://www.ros.org/wiki/pcl
[6] http://www.vicon.com
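For intuition, the sample-consensus plane fit [5] is essentially RANSAC: repeatedly pick three points, fit a plane through them, and keep the plane supported by the most points; with a unit normal, the plane offset is directly the sensor's altitude. A toy numpy sketch of the idea, not the PCL implementation:

```python
import numpy as np

def ransac_plane(points, iters=200, thresh=0.02, seed=2):
    """RANSAC plane fit on an Nx3 cloud.
    Returns a unit normal n and offset d with n.x + d = 0 on the plane."""
    rng = np.random.default_rng(seed)
    best_count, best = 0, None
    for _ in range(iters):
        p1, p2, p3 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p2 - p1, p3 - p1)
        norm = np.linalg.norm(n)
        if norm < 1e-9:                    # degenerate (collinear) sample
            continue
        n = n / norm
        d = -n @ p1
        count = np.sum(np.abs(points @ n + d) < thresh)   # inlier count
        if count > best_count:
            best_count, best = count, (n, d)
    return best

# Toy scene: a floor 0.8 m below the sensor, plus off-plane clutter.
rng = np.random.default_rng(3)
floor = np.column_stack([rng.uniform(-1, 1, (500, 2)),
                         rng.normal(-0.8, 0.005, 500)])
clutter = rng.uniform(-1, 1, (100, 3))
n, d = ransac_plane(np.vstack([floor, clutter]))
# |d| is the distance from the origin (the sensor) to the fitted plane,
# i.e. the sensed altitude.
print("altitude estimate:", abs(d))        # ~0.8
```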
Thursday, December 2, 2010
Omni stereo camera
ViewPlus, a Japanese company, offers fun omnistereo cameras:
I'll let you admire the workmanship and camera placement, and the three models: Jupiter, Saturn, and Venus.
This cam has 60 eyes!
They are built from a base component like this one:
If you want to know more: ViewPlus
Labels:
computer vision,
Image processing,
Tech demo
Multitouch on a bent surface
The German university of Aachen is currently working on an original multitouch interface: a bent multitouch screen, built using IR LEDs and a video projector.
Such a project is interesting, as more and more people are used to working with multi-screen displays.
I really like the idea that the most strongly bent band is used as a thumbnail view for photo sorting.
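A generic pipeline for this kind of IR-based multitouch (my guess at the approach, not the Aachen team's code) is: grab the IR camera frame, threshold it, and treat each bright blob as a touch point. An OpenCV sketch:

```python
import cv2

# Generic IR multitouch blob detection: threshold the camera frame and
# report the center of each bright connected component as a touch.
cap = cv2.VideoCapture(0)             # stand-in for the IR camera
ok, frame = cap.read()
if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    blur = cv2.GaussianBlur(gray, (5, 5), 0)
    _, mask = cv2.threshold(blur, 200, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    touches = [cv2.boundingRect(c) for c in contours
               if cv2.contourArea(c) > 30]    # drop tiny noise blobs
    print("touch points:", [(x + w // 2, y + h // 2)
                            for x, y, w, h in touches])
cap.release()
```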
Labels:
Human Machine Interface,
Multitouch,
Tech demo
Wednesday, December 1, 2010
EC2 Instance Type - The Cluster GPU Instance
Amazon now offers GPU clusters...
http://aws.typepad.com/aws/2010/11/new-ec2-instance-type-the-cluster-gpu-instance.html
Each instance has the following configuration (a hedged launch sketch follows the list):
- A pair of NVIDIA Tesla M2050 "Fermi" GPUs.
- A pair of quad-core Intel "Nehalem" X5570 processors offering 33.5 ECUs (EC2 Compute Units).
- 22 GB of RAM.
- 1690 GB of local instance storage.
- 10 Gbps Ethernet, with the ability to create low latency, full bisection bandwidth HPC clusters.
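If memory serves, the instance type is cg1.4xlarge. A hedged launch sketch using the boto3 library (which postdates this announcement, so treat it as purely illustrative; the AMI ID is a placeholder):

```python
import boto3

# Illustrative launch of a Cluster GPU instance. 'ami-xxxxxxxx' is a
# placeholder: substitute a real HVM AMI ID for your region.
ec2 = boto3.client("ec2", region_name="us-east-1")
response = ec2.run_instances(
    ImageId="ami-xxxxxxxx",
    InstanceType="cg1.4xlarge",       # the Cluster GPU instance type
    MinCount=1,
    MaxCount=2,                       # ask for up to two cluster nodes
)
for inst in response["Instances"]:
    print(inst["InstanceId"], inst["State"]["Name"])
```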
It's cool to see that good GPUs are included in clusters. But must such computations be launched on the cloud, or could the cloud be replaced by a single machine with a network interface?...