Makeup can mask facial recognition

As advancements in facial recognition are made, many people have become increasingly worried about protecting or maintaining their privacy. And while there are ways to hide or obscure a face, many have assumed that makeup alone wasn’t enough to fool the cameras.

However, researchers in Michigan and West Virginia have set out to disprove that idea, demonstrating how makeup actually can change the appearance of an individual. While the way someone’s head is held, the expressions he or she may make, and the lighting don’t confuse computers, things such as natural aging or face-altering methods like plastic surgery can. Now makeup can be added to that list.

This is because makeup can change the apparent shape and texture of a face by playing natural contours of the face up or down, changing the apparent quality and size of certain features, and even camouflaging identifying marks, including scars, birthmarks, moles, or tattoos. Of course, a simple application of makeup is not enough to do the trick, but heavy layers of makeup can be.

To find out more about this study and its aims, refer to an article that describes it in further detail.

Facial recognition software used to track presidential candidates’ emotions during debates

Image courtesy of wlfi.com

Purdue University professor Chris Kowal is using facial recognition software to track the emotions of President Barack Obama and Governor Mitt Romney in real time during the debates. Dr. Kowal wants to see if there is a clear relationship between his findings and voters’ perceptions.

Universal emotions like happiness, fear, surprise, and others are easily detected by the facial recognition software as it maps the muscles of the face and their movement.
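As a rough illustration of the idea, the sketch below maps a handful of facial-landmark positions to a coarse emotion label using simple geometric cues. The landmark names, thresholds, and code here are assumptions for illustration only; the software described above models facial muscles in far more detail.

```python
# Illustrative sketch only: a toy mapping from facial-landmark geometry to a coarse
# emotion label. The landmark layout and thresholds are assumptions, not the actual
# software used in the debate study.
import numpy as np

def emotion_from_landmarks(landmarks):
    """landmarks: dict of named (x, y) points in image coordinates, y growing downward."""
    mouth_left = np.array(landmarks["mouth_corner_left"])
    mouth_right = np.array(landmarks["mouth_corner_right"])
    upper_lip = np.array(landmarks["upper_lip"])
    brow = np.array(landmarks["inner_brow"])
    eye = np.array(landmarks["eye_center"])

    # Smile cue: mouth corners pulled up relative to the upper lip.
    corner_raise = upper_lip[1] - (mouth_left[1] + mouth_right[1]) / 2.0
    # Surprise/fear cue: inner brow lifted away from the eye.
    brow_raise = eye[1] - brow[1]

    if corner_raise > 5:          # pixels; purely illustrative thresholds
        return "happiness"
    if brow_raise > 25:
        return "surprise_or_fear"
    return "neutral"

frame = {
    "mouth_corner_left": (110, 200), "mouth_corner_right": (170, 198),
    "upper_lip": (140, 210), "inner_brow": (140, 90), "eye_center": (140, 120),
}
print(emotion_from_landmarks(frame))  # -> "happiness"
```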

And since an emotional connection is essential to making sales, promoting brands, and winning over undecided voters, the applications for facial recognition software in marketing research are unlimited.

Dr. Kowal suggests that the ability of facial recognition software to track emotions alongside fact checking could be the next fascinating area of research.

This blog is sponsored by ImageGraphicsVideo, a company offering ComputerVision Software Development Services.

Computers trained to recognize emotions

While humans have always been better at detecting and responding to emotions than computers, new research done at MIT is showing that, in some cases, computers are taking the lead over their human counterparts.

The study focuses on the act of smiling, homing in on the different reasons people smile, whether out of happiness and delight or out of pure frustration. Using results gathered from a large sample of people, the researchers fed the data to computers, which turned out to be better than their human counterparts at telling the different types of smiles apart.

Experiments involved asking participants to act out expressions associated with specific emotions, which were recorded by a webcam. They then had to fill out a purposely frustrating form or watch a video made to evoke feelings of delight, and their reactions were recorded as well.

One of the most interesting findings was that the vast majority of those asked to feign frustration did not smile in their forced attempts, but upon experiencing frustration in an unprompted situation, they did. Additionally, there is a difference in the way people smile; those who are delighted tend to have a gradual build-up to the smile, whereas frustrated smiles are quick and fleeting.
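To make the temporal cue concrete, here is a minimal sketch that labels a smile as delighted or frustrated from how quickly its intensity builds and how long it lasts. The intensity signal, frame rate, and thresholds are assumptions for illustration, not the MIT group’s actual method.

```python
# Minimal sketch of the temporal cue described above: delighted smiles tend to build
# gradually and linger, frustrated smiles rise and fade quickly. All thresholds are
# illustrative assumptions.
import numpy as np

def classify_smile(intensity, fps=30):
    """intensity: 1-D array of per-frame smile intensity in [0, 1]."""
    intensity = np.asarray(intensity, dtype=float)
    peak = intensity.argmax()
    active = intensity > 0.2                       # frames where a smile is visible
    duration_s = active.sum() / fps
    onset = np.argmax(active) if active.any() else peak
    rise_time_s = (peak - onset) / fps

    if rise_time_s > 0.5 and duration_s > 1.0:     # slow build-up, sustained smile
        return "delighted"
    return "frustrated"                            # quick, fleeting smile

# Example: a smile that ramps up over about a second and holds vs. a brief flash.
slow = np.concatenate([np.linspace(0, 1, 30), np.ones(30)])
flash = np.concatenate([np.zeros(10), np.linspace(0, 1, 5),
                        np.linspace(1, 0, 5), np.zeros(10)])
print(classify_smile(slow), classify_smile(flash))  # -> delighted frustrated
```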

The main aim of this study is to help unravel the mysteries of emotions. In particular, people affected by autism may have a difficult time interpreting emotions; while a smile is usually viewed as a positive thing, this study demonstrates that isn’t always the case. Additionally, public speakers and other figures in the spotlight might benefit from better understanding the timing of reactions and how the slightest difference in facial expression can be interpreted differently.

Apple to use 3D technology for its iPhone 5

Three-dimensional (3D) technology has been garnering quite a following, most recently by teaming up with Apple.

According to PatentlyApple, the next-generation iPhone 5 has a proposed 3D camera that could become the new standard.


Image courtesy of PatentlyApple

In addition to the expected upgrades in resolution and color, the camera will combine laser, LIDAR, and radar technologies, giving it the ability to create 3D imagery. It will also be capable of both facial recognition and facial gesture recognition.

While cameras that rely on 3D technology are already on the market, the scope of what they can accomplish is still considered limited. But with the sensors Apple plans to implement, it is poised to become the most sophisticated camera on the smartphone market.

Combined technology for facial animation

Facial animation is about to become even more realistic.

By combining two types of technology, 3D scanning for fine spatial detail and a motion-capture system for movement over time, Microsoft is able to capture a face at both high spatial and high temporal resolution. The result is more detailed faces that can be used in video games, movies, and even as avatars for Kinect.
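As a toy illustration of the general idea (not Microsoft’s actual pipeline), the sketch below drives a dense scanned mesh, which supplies the spatial detail, with sparse motion-capture markers, which supply the per-frame motion, by spreading each marker’s displacement to nearby vertices with distance-based weights.

```python
# Illustrative sketch only: blend a detailed static scan with sparse marker motion by
# weighting each marker's displacement by its distance to every mesh vertex.
import numpy as np

def deform_mesh(vertices, markers_rest, markers_now, sigma=0.05):
    """vertices: (V, 3) scanned mesh; markers_*: (M, 3) marker positions at rest / this frame."""
    displacements = markers_now - markers_rest                            # (M, 3)
    # Gaussian weights between every vertex and every marker, from rest-pose distances.
    d2 = ((vertices[:, None, :] - markers_rest[None, :, :]) ** 2).sum(-1)  # (V, M)
    w = np.exp(-d2 / (2 * sigma ** 2))
    w = w / (w.sum(axis=1, keepdims=True) + 1e-8)                         # normalize per vertex
    return vertices + w @ displacements                                   # deformed mesh this frame

# Toy example: three markers move 1 cm upward and pull the nearby surface with them.
verts = np.random.rand(1000, 3) * 0.2                                     # stand-in for a dense scan
rest = np.array([[0.05, 0.05, 0.10], [0.10, 0.10, 0.10], [0.15, 0.05, 0.10]])
now = rest + np.array([0.0, 0.01, 0.0])
frame_mesh = deform_mesh(verts, rest, now)
print(frame_mesh.shape)  # -> (1000, 3)
```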

Watch the following video for more specifics on just how it works and how it can be used in the future:

Facial detection zeroes in on micro-expressions

Computer vision software is moving beyond the simple boundaries of facial detection, taking the technology one step further to identify and decode micro-expressions on faces.

Scientists working with the Machine Vision Group at the University of Oulu in Finland have developed a new system that is able to interpret facial expressions and tell when they betray an individual’s concealed feelings.

Micro-expressions, whose study was featured on the television show “Lie to Me,” are difficult to detect precisely even for the most highly trained humans. This is because the expressions tend to last between 1/25 and 1/3 of a second, and even standard cameras struggle to capture them properly.

However, the Oulu scientists built their system around Local Binary Patterns (LBP), a technique that describes the fine-grained texture of small regions of the face, allowing the algorithm to break down, categorize, and classify these fleeting expressions.
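For readers curious what the basic LBP operator looks like, here is a minimal sketch: each pixel is compared with its eight neighbours to form an 8-bit code, and a histogram of those codes summarizes the local texture of a region. Real micro-expression pipelines extend this idea to the temporal domain; this sketch shows only the core descriptor.

```python
# Minimal Local Binary Pattern (LBP) sketch: compare each pixel with its 8 neighbours,
# pack the comparisons into an 8-bit code, and histogram the codes over a region.
import numpy as np

def lbp_codes(gray):
    """gray: 2-D grayscale array. Returns an 8-bit LBP code for every interior pixel."""
    g = np.asarray(gray, dtype=float)
    c = g[1:-1, 1:-1]                                   # centre pixels
    # Eight neighbours, clockwise from the top-left, each view aligned with the centres.
    neighbours = [g[0:-2, 0:-2], g[0:-2, 1:-1], g[0:-2, 2:],
                  g[1:-1, 2:],   g[2:,   2:],   g[2:,   1:-1],
                  g[2:,   0:-2], g[1:-1, 0:-2]]
    codes = np.zeros(c.shape, dtype=int)
    for bit, n in enumerate(neighbours):
        codes += (n >= c).astype(int) << bit            # set one bit per neighbour comparison
    return codes

def lbp_histogram(gray):
    """Normalized 256-bin histogram of LBP codes, a simple texture descriptor for a region."""
    hist, _ = np.histogram(lbp_codes(gray), bins=256, range=(0, 256))
    return hist / hist.sum()

patch = np.random.randint(0, 256, size=(32, 32))        # stand-in for one face region in one frame
print(lbp_histogram(patch).shape)                       # -> (256,)
```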

The use of micro-expressions can range from interviewing suspects who could be untruthful to identifying potential terrorists at an airport, but that’s only the beginning. What other ways can you foresee this technology being used?