Determining familial matches with Facial Recognition

Photo courtesy of UCF

Last month, researchers at the University of Central Florida presented a new facial recognition tool at the IEEE Computer Vision and Pattern Recognition conference in Columbus, Ohio. 

While there is no shortage of facial recognition tools used by companies and governments the world over, this one is unique in that its aim is to unite or reunite children with their biological parents.

The university’s Center for Research in Computer Vision began by building a database of more than 10,000 images of famous people, such as politicians and celebrities, and their children.

The tool uses a specially designed algorithm that breaks the face down into sections and compares individual facial parts; candidate matches are then ranked according to which are the most likely.
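The compare-then-rank step described above can be sketched in a few lines. This is a hypothetical illustration, not UCF’s actual algorithm: the per-region feature vectors are assumed to come from some upstream extractor, and the similarity measure is a simple stand-in.

```python
# Hypothetical sketch of part-based face comparison for kinship matching.
# Feature vectors per facial region are plain tuples here; a real system
# would use learned descriptors extracted from the image.
import math

REGIONS = ["eyes", "nose", "mouth", "jaw"]

def region_distance(a, b):
    """Euclidean distance between two per-region feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def kinship_score(child, candidate):
    """Average similarity across facial regions (higher = more alike)."""
    dists = [region_distance(child[r], candidate[r]) for r in REGIONS]
    return 1.0 / (1.0 + sum(dists) / len(dists))

def rank_candidates(child, candidates):
    """Sort (name, features) candidate parents, most likely match first."""
    return sorted(candidates, key=lambda c: -kinship_score(child, c[1]))
```

In practice each region would be represented by a learned descriptor rather than a raw tuple, but the structure — compare parts, then rank matches — is the same as described above.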

Though software for this purpose already exists, the new tool performed anywhere from 3 to 10 percent better than those programs, and it naturally surpasses the recognition capabilities of humans, who judge by overall appearance rather than by measurable features. The study also reaffirmed that sons resemble their fathers more than their mothers, and daughters resemble their mothers more than their fathers.

What other ways could this tool be useful?

Computer Vision steps up soldiers’ game

Photo by Bill Jamieson

Computer Vision has long been of interest to, and utilized by, the United States government and armed forces, but now it appears the Army is using the technology to help transform soldiers into expert marksmen.

TrackingPoint, a Texas-based startup that specializes in precision-guided firearms, has sold a number of “scope and trigger” kits for use on XM2010 sniper rifles. The technology allows a shooter to pinpoint and “tag” a target, then uses object tracking, combined with a variety of variables (temperature, distance, etc.), to determine the most effective place to fire. The trigger remains locked until the person controlling the weapon has lined up the shot correctly, at which point he or she can pull the trigger.
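The lock-until-aligned behavior described above might be caricatured as follows. Every coefficient and threshold here is invented for illustration; the company’s real ballistic solver is far more sophisticated.

```python
# Toy sketch (NOT TrackingPoint's actual algorithm): the scope computes an
# adjusted aim point from environmental variables, then keeps the trigger
# locked until the barrel lines up with that firing solution.

def adjusted_aim(target_x, target_y, distance_m, wind_mps, temp_c):
    """Toy ballistic correction; all coefficients are made up."""
    drop = 0.0001 * distance_m ** 2          # gravity drop grows with range
    drift = 0.002 * wind_mps * distance_m    # wind pushes the round sideways
    temp_adj = 0.0005 * (temp_c - 15.0) * distance_m  # air-density effect
    return (target_x + drift, target_y + drop - temp_adj)

def trigger_unlocked(aim, barrel, tolerance=0.05):
    """Permit firing only when the barrel is within tolerance of the solution."""
    return (abs(aim[0] - barrel[0]) <= tolerance
            and abs(aim[1] - barrel[1]) <= tolerance)
```

The interesting design choice is the inversion of control: the shooter chooses the target, but the system decides when the shot is actually allowed to happen.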

A video accompanying the original post demonstrates this technology and how it is implemented.

VISAPP Computer Vision conference extends submission deadline

Computer Vision is an interesting technology in many ways, but perhaps one of its most notable qualities is how applicable it is, and can be, to our everyday lives. And although it’s not necessarily a “new” field, it is gaining popularity and recognition among “normal” people, meaning those who are not scientists, researchers, or programmers.

At the start of next year, Lisbon, Portugal will play host to a conference on this very topic, highlighting the work being done in the field and the emerging technologies that can help Computer Vision help people. VISAPP 2014, the 9th International Conference on Computer Vision Theory and Applications, is currently accepting paper submissions; the submission deadline has been extended to September 18.

Counting grapes with Computer Vision

Photo courtesy of Carnegie Mellon
It’s no secret that Computer Vision is an asset in the agricultural world, yet it’s still interesting to discover the new ways in which it is being put to use. For example, researchers at Carnegie Mellon University’s Robotics Institute published a study demonstrating how visual counting, one of the elementary Computer Vision concepts, can be used to estimate the yield of a crop of grapes.

Using an HD camera, a special lighting system, and a laser scanner, the setup can count grapes as small as 4 mm in diameter, and its algorithms convert the grape count into an estimated harvest yield. The system’s margin of error is 9.8 percent, compared with roughly 30 percent for human estimators, demonstrating that the Computer Vision system is more efficient and possibly more cost-effective.
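The count-to-yield conversion is, at its core, simple arithmetic. In the sketch below, the per-berry weight is a hypothetical figure and not one from the Carnegie Mellon study; the 9.8 percent error margin is the one reported above.

```python
# Back-of-the-envelope yield estimate from a visual berry count.
# grams_per_berry is an assumed illustrative value, not a study figure.

def estimate_yield_kg(berry_count, grams_per_berry=1.5):
    """Convert a visual berry count into an estimated harvest weight (kg)."""
    return berry_count * grams_per_berry / 1000.0

def with_error_band(estimate_kg, margin=0.098):
    """Bracket the estimate with the system's 9.8 percent error margin."""
    return (estimate_kg * (1 - margin), estimate_kg * (1 + margin))
```

The point of the comparison in the article is that even with a nearly 10 percent band, the machine’s estimate is far tighter than the roughly 30 percent error of a human count.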

Computer Vision aids endangered species conservation efforts

Photo by Dr. Paddy Ryan/The National Heritage Collection

In an effort to help protect and conserve endangered species, scientists have been tracking and tagging them for years. However, there are some species that are either too large in population or too sensitive to tagging, and researchers have been working on another way to track them.

Now, thanks to SLOOP, a new computer vision software program from MIT, identifying animals has never been easier. A human sorting through 10,000 images would likely take years to identify every animal properly, but the program cuts down the manpower and works much faster. Using pattern-recognition algorithms, it matches the stripes and spots on an animal and returns the 20 images that are the most likely matches, giving researchers a much smaller and more accurate pool to work with. The researchers then turn to crowdsourcing, and with the aid of adept pattern-matchers are able to narrow things down even further, achieving 97 percent accuracy. This allows researchers to spend more practical time in the field working on conservation efforts instead of in front of a computer screen.
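The shortlist step — score every catalogued individual against a new sighting and hand the top 20 to human verifiers — can be sketched like this. The set-of-features representation and the Jaccard similarity are illustrative stand-ins, not SLOOP’s actual pattern descriptors.

```python
# Sketch of SLOOP-style candidate shortlisting: rank catalogued animals by
# pattern similarity to a new sighting, keep the k best for human review.
# Patterns are modeled as plain sets of feature ids for illustration.

def jaccard(a, b):
    """Overlap between two feature sets, from 0 (disjoint) to 1 (identical)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def shortlist(query, catalog, k=20):
    """Return names of the k catalogued individuals most similar to query."""
    scored = sorted(catalog.items(), key=lambda kv: -jaccard(query, kv[1]))
    return [name for name, _ in scored[:k]]
```

The design mirrors what the article describes: the machine does the cheap, exhaustive comparison, and humans only adjudicate a short, high-probability list.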

Computer Vision sees faces in the clouds

Image courtesy of Shinseungback Kimyonghun

Computer Vision has many practical uses, ranging from security enhancement to making our lives easier, but what about art?

A new project by Shinseungback Kimyonghun, a two-person South Korean art collective, uses Computer Vision to find faces in the clouds. It is reminiscent of how children lie on their backs and point out shapes in the sky, but it relies on computer algorithms to spot the faces.

However, while the project appears artistic on the surface, a closer look reveals a study comparing how computers see with how humans see. What the end result will be isn’t yet clear, but it’s an interesting and thoughtful take on the subject nonetheless.

Using text to visually search within Google

Photo courtesy of Google

Google has undergone a number of changes in recent months, including the shutdown of some services and the launch of others. And while the end of Google Reader was announced in an effort to drive more users to Google+, that service has also gained some new features.

One of these features allows logged-in users to search within their own albums on Google using Google Search. The technology relies on Computer Vision algorithms to identify people, places, and things, even if the photos haven’t been sorted or labeled. The goal is to support visual searches through phrases such as “my photos of cats” or “my photos of flowers.” And as is often the case with Computer Vision and machine learning, the more photos you have, the better the technology can refine itself over time.
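Conceptually, the feature reduces to matching a parsed query subject against machine-generated labels on each photo. Everything in this sketch — the label sets, the query grammar, and the naive singularization — is a hypothetical stand-in for Google’s actual pipeline.

```python
# Toy sketch of text-driven photo search over auto-generated labels.
# Real systems use learned classifiers and far richer query parsing.

def parse_query(query):
    """Pull the subject out of a 'my photos of X' style query.
    The trailing-'s' strip is a deliberately naive singularizer."""
    marker = "my photos of "
    return query[len(marker):].rstrip("s") if query.startswith(marker) else query

def search_photos(query, photo_labels):
    """Return ids of photos whose label set contains the query subject."""
    subject = parse_query(query.lower())
    return [pid for pid, labels in photo_labels.items() if subject in labels]
```

The payoff described in the article is that none of this requires the user to have organized anything: the labels come from the vision system, not from manual tagging.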

What are your thoughts on this feature? Does it sound like something you would use?

Robots discovering objects through Computer Vision

While having a personal robot, à la Rosie in the television show The Jetsons, may have been the dream of every child, such a thing is now closer to becoming a reality.

Meet HERB, a Home-Exploring Robot Butler created by researchers at Carnegie Mellon University as part of the Lifelong Robotic Object Discovery project.

The robot, which is armed with a Kinect camera and relies on computer vision, has programming that includes a memory loaded with digital models and images of objects to aid in recognition. The goal is to create a robot that not only recognizes what it has already been taught, but grows that information on its own, without the database being expanded manually. It does this not only through simply seeing things, but also by exploring the environment and interacting with objects in it.

The Kinect camera aids HERB in three-dimensional recognition, and the location of an object is also telling. Additionally, HERB can distinguish between items that move and those that don’t. As it interacts with its environment, it is eventually able to determine whether something is an object, meaning whether it can be lifted.
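The see-locate-nudge reasoning above might be caricatured as a single predicate. The field names and thresholds here are invented; HERB’s actual pipeline works on 3-D point-cloud data from the Kinect rather than hand-filled dictionaries.

```python
# Caricature of HERB-style object discovery: a visual cluster counts as a
# liftable "object" only if it is graspable, sits on a support surface,
# and actually moved when the robot nudged it. Thresholds are invented.

def is_object(cluster):
    """Decide whether a perceived cluster is a liftable object."""
    graspable = cluster["size_cm"] <= 25      # small enough to pick up
    supported = cluster["on_surface"]         # resting on a table/floor
    movable = cluster["moved_when_nudged"]    # responded to interaction
    return graspable and supported and movable
```

The key idea this preserves from the article is that interaction is part of perception: HERB doesn’t just look at a thing, it pokes it and uses the result as evidence.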

This information can later lead to robots that do things for humans, such as bringing them items or helping to clean. And while there is still a ways to go, the possibilities seem to be endless.

Makeup can mask facial recognition

As advancements in facial recognition are made, many people have become increasingly worried about protecting their privacy. And while there are ways to hide or obscure a face, many have assumed that makeup wasn’t enough to fool the cameras.

However, researchers in Michigan and West Virginia have set out to disprove that idea, demonstrating how makeup actually can change the appearance of an individual. While the way someone holds his or her head, facial expressions, and lighting don’t confuse computers, natural aging or face-altering methods like plastic surgery can. Now, makeup can be added to the list.

This is because makeup can change the apparent shape and texture of a face: it can play the natural contours up or down, alter the apparent quality and size of certain features, and even camouflage identifying marks such as scars, birthmarks, moles, or tattoos. Of course, a simple application of makeup isn’t enough to do the trick, but heavy layers can be.

To find out more about this study and its aims, refer to an article on the subject that describes it in further detail.