Toshiba's Cell Technology Recognizes Human Face
posted 2005-10-30 20:45:10, last updated 2006-05-02 14:39:43
computer science At the "CEATEC JAPAN 2005" event, which opened on October 4, 2005, Toshiba Corp. demonstrated an image recognition technology dubbed "Digital Kagami (mirror) F-type" running on the "Cell" next-generation microprocessor.
The technology recognizes a human face in real time, superimposes various virtual makeup styles and hairstyles on the image, and displays the result on screen. With real-time motion tracking, it looked like a virtual mirror (a 3.6 MB video file is available).
The demonstration consisted of two key applications: "3D makeup simulation," which varies virtual makeup styles, and "3D hairstyle simulation," which changes virtual hairstyles. Both applications detect key features of a user's face and estimate a 3D map of the face using the same processing pipeline. First, the applications capture the user's face with a camera and locate the positions of key facial features, including the eyes, nose and mouth, using image recognition technology. By matching the 2D positions of these features against a 3D face model, the applications estimate which direction the user is facing and the 3D positions of 500 feature points on the face. The camera, mounted at the front bottom of the display, shoots the face through a half mirror placed in front of the display. As a result, the displayed image looks more like a reflection in a mirror, because the camera's viewpoint coincides with the user's line of sight.
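Toshiba has not published its pose-estimation algorithm, but the idea of inferring facing direction from the 2D layout of detected landmarks can be illustrated with a deliberately simplified sketch. The function below (a hypothetical helper, not Toshiba's code) estimates head yaw from the horizontal asymmetry between the nose tip and the two eyes, assuming a roughly frontal, weak-perspective view; a real system would fit all landmarks to a full 3D face model.

```python
import math

def estimate_yaw(left_eye, right_eye, nose):
    """Rough head-yaw estimate (degrees) from three 2D landmarks.

    A simplified illustration only: when the face is frontal, the nose tip
    sits midway between the eyes horizontally; as the head turns, the nose
    shifts toward one eye, and the asymmetry maps to a yaw angle.
    """
    d_left = abs(nose[0] - left_eye[0])    # horizontal gap: nose to left eye
    d_right = abs(right_eye[0] - nose[0])  # horizontal gap: nose to right eye
    ratio = d_left / (d_left + d_right)    # 0.5 for a frontal face
    # Map the asymmetry to an angle (heuristic weak-perspective model);
    # the sign convention here is arbitrary.
    return math.degrees(math.asin(2.0 * ratio - 1.0))
```

For example, a frontal face (nose centered between the eyes) yields a yaw of 0 degrees, while a nose shifted toward one eye yields a proportionally larger angle.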
Among facial features, outlines such as the chin are said to be among the most difficult to detect. Typical methods fit facial outline models called "SNAKE" (active contours) rather than working directly on the real image. In the demo, however, a Toshiba operator manually designated the outline just before the demonstration instead of using such existing methods.
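For readers unfamiliar with the technique, an active contour ("snake") is a closed curve of points that is iteratively deformed by two competing forces: an internal smoothness force pulling each point toward its neighbors, and an external image force pulling it toward edges. The sketch below shows one update step of a textbook snake; it is a generic illustration, not the method Toshiba skipped, and the `gradient` callback standing in for the image force is a placeholder.

```python
def snake_step(points, gradient, alpha=0.5, step=1.0):
    """One iteration of a minimal active-contour update.

    points   -- closed contour as a list of (x, y) tuples
    gradient -- callable (x, y) -> (gx, gy), the external image force
    alpha    -- weight of the internal (smoothness) force
    step     -- weight of the external (image) force
    """
    n = len(points)
    new_points = []
    for i, (x, y) in enumerate(points):
        px, py = points[(i - 1) % n]   # previous neighbor on the contour
        nx, ny = points[(i + 1) % n]   # next neighbor on the contour
        # Internal force: pull toward the midpoint of the two neighbors.
        sx = (px + nx) / 2.0 - x
        sy = (py + ny) / 2.0 - y
        # External force: pull toward image edges via the gradient field.
        gx, gy = gradient(x, y)
        new_points.append((x + alpha * sx + step * gx,
                           y + alpha * sy + step * gy))
    return new_points
```

With no image force, repeated steps simply smooth and shrink the contour; in practice the gradient term anchors it to the chin line or other edges.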
After computing the 3D map of 500 facial feature points, the 3D makeup simulation maps a makeup texture, which looks like the stage makeup of Kabuki actors, onto the 3D model and displays the result on the screen in front of the user. Using alpha blending, real camera imagery such as the eyeballs and mouth is passed through unmodified in areas with no makeup. "The application detects facial features so finely that observers can even distinguish the user's facial expressions, but it passes through the real image in areas with subtle shades and shadows, which the technology cannot render," said a company spokesperson.
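Alpha blending itself is standard: each output pixel is a weighted average of the rendered makeup layer and the live camera image, with the weight (alpha) set to 0 wherever the camera image should show through untouched. A minimal sketch, using plain nested lists of RGB tuples rather than a real image library:

```python
def alpha_blend(makeup, camera, alpha):
    """Per-pixel blend of a rendered makeup layer over a camera image.

    makeup, camera -- 2D lists of (r, g, b) tuples of the same shape
    alpha          -- 2D list of floats in [0, 1]; 1.0 shows the makeup
                      layer, 0.0 passes the camera image through unchanged
    """
    return [
        [tuple(int(a * m + (1.0 - a) * c) for m, c in zip(m_px, c_px))
         for m_px, c_px, a in zip(m_row, c_row, a_row)]
        for m_row, c_row, a_row in zip(makeup, camera, alpha)
    ]
```

In the demo's terms, alpha would be 1 inside painted regions of the makeup texture and fall to 0 over the eyes and mouth, so those areas remain the raw camera feed.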
Unlike the makeup simulation, the 3D hairstyle simulation employs a method called "image-based rendering," because it is difficult to render each individual hair with computer graphics in real time. In this method, the application keeps a database of 50 to 100 images of a hairstyle shot from different angles, then selects and composites the appropriate image on demand, according to the position and direction of the user's face.
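The core of such a view-selection scheme is a nearest-neighbor lookup over the stored camera angles. The following sketch (hypothetical record layout, not Toshiba's data format) picks the pre-shot hairstyle image whose yaw/pitch angles are closest to the user's current head pose; a production system would then warp and composite that image onto the live video.

```python
def select_view(database, yaw, pitch):
    """Pick the stored hairstyle image shot from the angle nearest the
    current head pose.

    database -- list of (yaw_deg, pitch_deg, image) records, one per
                pre-shot view of the hairstyle
    """
    # Nearest neighbor by squared angular distance in (yaw, pitch).
    best = min(database,
               key=lambda rec: (rec[0] - yaw) ** 2 + (rec[1] - pitch) ** 2)
    return best[2]
```

With 50 to 100 stored views, a linear scan like this is trivially fast per frame; the expensive part of image-based rendering is the capture and compositing, not the lookup.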