The world has shrunk. Distances have dissolved. Communication with countless systems has become feasible. However, this technological overhaul has remained peripheral to the human body itself; researchers and innovators have long grappled with bridging the gaps that limit human contact with the environment. Well, it looks as though we may finally have stumbled upon an answer to that quagmire.
Pranav Mistry, a student at the Media Lab of the Massachusetts Institute of Technology (MIT), has developed a gestural interface device that enriches the physical world with digital information and allows a person to use natural hand motions to interact with that information. The device, tentatively named SixthSense, is a wearable system that enables hitherto unexplored interactions between the real world and the digital sphere of data.

It is built from commonly available components that are intrinsic to its functioning: a camera, a portable battery-powered projection system coupled with a mirror, and a cell phone. All of these components communicate with the cell phone, which acts as the computation and communication hub. The entire hardware apparatus is packaged into a pendant-shaped wearable device. In essence, the camera recognises individuals, images, pictures and the gestures one makes with one's hands, while the projector displays information on whatever surface is present in front of the wearer. The mirror is significant because the projector dangles from the neck pointing downwards, so its beam must be redirected forward.

In the demo video that showcased the prototype to the world, Mistry wears coloured caps on his fingertips so that the software can more easily distinguish the fingers and map them to different applications. The software analyses the video data captured by the camera and tracks the locations of the coloured markers using simple computer-vision techniques. The system can accommodate any number of hand gestures and movements, so long as each is reliably identified and distinguished, preferably through unique and varied fiducials; it is this marker tracking that allows the SixthSense device to support multi-touch and multi-user interaction.
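The marker-tracking step described above can be sketched very simply: isolate the pixels in a frame whose colour is close to a known fingertip cap, then take their centroid as the fingertip's position. The following is a minimal illustration in plain Python under assumed conventions (a frame as rows of RGB tuples, an arbitrary distance threshold); a real system like SixthSense would use a vision library and more robust segmentation, and this is not Mistry's actual code.

```python
# Sketch of colour-marker tracking: find the centroid of pixels whose
# RGB value lies within a threshold distance of a known cap colour.
# The frame format and threshold are illustrative assumptions.

def track_marker(frame, marker_rgb, threshold=60):
    """Return the (x, y) centroid of pixels near marker_rgb, or None."""
    mr, mg, mb = marker_rgb
    xs, ys = [], []
    for y, row in enumerate(frame):
        for x, (r, g, b) in enumerate(row):
            # Squared Euclidean distance in RGB space
            if (r - mr) ** 2 + (g - mg) ** 2 + (b - mb) ** 2 <= threshold ** 2:
                xs.append(x)
                ys.append(y)
    if not xs:
        return None  # marker not visible in this frame
    return (sum(xs) / len(xs), sum(ys) / len(ys))

# Tiny 3x3 "frame": one red-capped fingertip pixel on a black background
frame = [[(0, 0, 0)] * 3 for _ in range(3)]
frame[1][1] = (250, 10, 10)
print(track_marker(frame, marker_rgb=(255, 0, 0)))  # -> (1.0, 1.0)
```

Running the tracker on successive frames yields a trail of fingertip positions; interpreting those trails as gestures is the next stage of such a pipeline. With several distinctly coloured caps, the same routine can be run once per colour, which is what makes multi-finger (and hence multi-touch) input possible.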