Last month, researchers at the University of Central Florida presented a new facial recognition tool at the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) in Columbus, Ohio.
While there is no shortage of facial recognition tools used by companies and governments the world over, this one is unique in that its aim is to unite or reunite children with their biological parents.
The university’s Center for Research in Computer Vision started by building a database of more than 10,000 images of famous people, such as politicians and celebrities, and their children.
The tool relies on a specially designed algorithm that breaks the face down into sections and compares individual facial parts; candidate matches are then ranked from most to least likely.
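The paper itself isn’t reproduced here, so the Python sketch below is only a rough illustration of the part-by-part idea, not UCF’s actual algorithm: represent each facial region as a feature vector, score each region separately, and rank candidates by combined similarity. The feature vectors, the cosine-similarity measure, and all names are assumptions.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def kinship_score(child_parts, candidate_parts):
    """Average per-part similarity (eyes, nose, mouth, ...) between two faces."""
    scores = [cosine_similarity(child_parts[p], candidate_parts[p])
              for p in child_parts]
    return sum(scores) / len(scores)

def rank_candidates(child_parts, candidates):
    """Sort candidate parents so the most likely match comes first."""
    return sorted(candidates,
                  key=lambda name: kinship_score(child_parts, candidates[name]),
                  reverse=True)

# Toy feature vectors standing in for real per-region descriptors.
child = {"eyes": [0.9, 0.1], "nose": [0.4, 0.6], "mouth": [0.2, 0.8]}
candidates = {
    "candidate_a": {"eyes": [0.88, 0.12], "nose": [0.42, 0.58], "mouth": [0.25, 0.75]},
    "candidate_b": {"eyes": [0.1, 0.9], "nose": [0.9, 0.1], "mouth": [0.7, 0.3]},
}
print(rank_candidates(child, candidates)[0])  # candidate_a is the closer match
```

A real system would extract those region descriptors from detected facial landmarks rather than hand-coded numbers, but the ranking step works the same way.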
Though software for this purpose already exists, the new tool performed anywhere from 3 to 10 percent better than those programs, and it naturally surpasses the recognition abilities of humans, who judge resemblance by overall appearance rather than by measurable features. It also reaffirmed that sons resemble their fathers more than their mothers, and daughters resemble their mothers more than their fathers.
Computer Vision has long been of interest to, and utilized by, the United States government and armed forces, but now it appears the Army is using this technology to help transform soldiers into expert marksmen.
TrackingPoint, a Texas-based startup that specializes in precision-guided firearms, sold a number of “scope and trigger” kits for use on XM2010 sniper rifles. The technology allows a shooter to pinpoint and “tag” a target, then uses object-tracking technology, combined with variables such as temperature and distance, to determine the most effective place to fire. The trigger remains locked until the shooter has lined up the shot correctly, at which point he or she can pull it.
To learn more about this technology and how it is implemented, watch the following video:
Using an HD camera, a special lighting system, and a laser scanner, the setup can count grapes as small as 4mm in diameter, and its algorithms then convert the grape count into an estimated harvest yield. And while the system’s margin of error is 9.8 percent, for humans it is around 30 percent, suggesting the Computer Vision approach is both more accurate and possibly more cost-effective.
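The count-to-yield conversion itself is simple arithmetic. Here is a minimal sketch, assuming a made-up average grape mass; only the 9.8 percent error figure comes from the article.

```python
def estimate_yield_kg(grape_count, avg_grape_mass_g, margin_of_error=0.098):
    """Convert a grape count into an estimated harvest yield with error bounds.

    avg_grape_mass_g is an assumed per-grape mass; the 9.8% default margin
    of error matches the figure reported for the vision system.
    """
    estimate = grape_count * avg_grape_mass_g / 1000.0  # grams -> kilograms
    low = estimate * (1 - margin_of_error)
    high = estimate * (1 + margin_of_error)
    return estimate, low, high

est, low, high = estimate_yield_kg(grape_count=12000, avg_grape_mass_g=5.0)
print(f"~{est:.0f} kg (between {low:.0f} and {high:.0f} kg)")
```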
Computer Vision has many practical uses, ranging from security enhancement to making our lives easier, but what about art?
A new project by Shinseungback Kimyonghun, a two-person South Korean collective, uses Computer Vision to find faces in the clouds. It works much as children do when they lie on their backs and point out shapes in the sky, but relies on computer algorithms to spot the faces.
However, while the project appears artistic on the surface, a deeper look reveals a study comparing how computers see with how humans see. What the end result will be isn’t yet clear, but it’s an interesting and thoughtful take on the subject nonetheless.
The robot, called HERB (the Home Exploring Robot Butler), is armed with a Kinect camera and relies on computer vision; its memory is loaded with digital models and images of objects to aid recognition. The goal is a robot that not only recognizes what it has already been taught, but grows that knowledge on its own, without the database being expanded manually. It does this not just by seeing things, but also by exploring its environment and interacting with the objects in it.
The Kinect camera aids HERB in three-dimensional recognition, and an object’s location is also telling. Additionally, HERB can distinguish between items that move and those that don’t. As it interacts with its environment, it eventually learns to determine whether something is an object, that is, whether it can be lifted.
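The moved-versus-fixed distinction can be sketched very simply: if an item’s position changes after the robot nudges it, flag it as a liftable object. This is only an illustration of the idea, not HERB’s actual code; the threshold and all names are assumptions.

```python
def displacement(p1, p2):
    """Euclidean distance between two 3-D positions."""
    return sum((a - b) ** 2 for a, b in zip(p1, p2)) ** 0.5

def classify_items(observations, threshold=0.05):
    """Label each item as a movable 'object' or fixed 'scenery'.

    observations maps item name -> (position_before, position_after)
    recorded around an interaction attempt; threshold is in metres.
    """
    labels = {}
    for name, (before, after) in observations.items():
        moved = displacement(before, after) > threshold
        labels[name] = "object" if moved else "scenery"
    return labels

obs = {
    "mug":   ((1.0, 0.5, 0.9), (1.2, 0.5, 0.9)),  # shifted when nudged
    "table": ((0.0, 0.0, 0.0), (0.0, 0.0, 0.0)),  # did not budge
}
print(classify_items(obs))  # mug -> object, table -> scenery
```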
This information can later lead to robots that do things for humans, such as fetching items or helping to clean. And while there is still a long way to go, the possibilities seem endless.
As advancements in facial recognition are made, many people have become increasingly worried about protecting or maintaining their privacy. And while there are ways to hide or obscure a face, it has long been thought that makeup wasn’t enough to fool the cameras.
However, researchers in Michigan and West Virginia have set out to disprove such an idea, demonstrating how makeup actually can change the appearance of an individual. While the way someone’s head is held, the expressions he or she may make, and the lighting don’t confuse computers, things such as natural aging or face-altering methods like plastic surgery can. Now, makeup can be added to the list.
This is because makeup can change the shape and texture of a face: playing the natural contours up or down, altering the apparent quality and size of certain features, and even camouflaging identifying marks such as scars, birthmarks, moles, or tattoos. Of course, a simple application of makeup is not enough to do the trick, but heavy layers can be.
Add cancer to the list of medical problems computer vision can be used in diagnosing and treating.
At the Lawrence Berkeley National Laboratory in California, researchers have created a program that analyzes images of tumors (of which there are thousands, stored in the database of The Cancer Genome Atlas project). This program relies on an algorithm that sorts through image sets and helps identify tumor subtypes – a process which is not so easy considering no two tumors are alike.
After sorting through the images, the program categorizes them by subtype and organizational structure, then matches those categories against clinical data that indicate how a patient with a given tumor is likely to respond to treatment.
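The press release doesn’t spell out the algorithm, so the sketch below only illustrates the general shape of the pipeline: extract a feature vector per tumor image, then cluster the vectors into putative subtypes (a plain k-means stand-in here; the features, counts, and initial centroids are all invented for the example).

```python
def dist2(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(points, centroids, iters=10):
    """Plain k-means: assign each vector to its nearest centroid,
    then recompute centroids, for a fixed number of iterations."""
    for _ in range(iters):
        clusters = [[] for _ in centroids]
        for p in points:
            nearest = min(range(len(centroids)), key=lambda i: dist2(p, centroids[i]))
            clusters[nearest].append(p)
        centroids = [
            [sum(c) / len(cl) for c in zip(*cl)] if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    return centroids, clusters

# Toy per-image feature vectors (e.g. cell density, nucleus size):
images = [[0.1, 0.2], [0.15, 0.25], [0.9, 0.8], [0.85, 0.75]]
centroids, clusters = kmeans(images, centroids=[[0.0, 0.0], [1.0, 1.0]])
print(len(clusters[0]), len(clusters[1]))  # two images per putative subtype
```

Each resulting cluster could then be joined against clinical records to see how patients in that subtype responded to treatment, which is the step the researchers describe.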
For more on what this program can do and how it can be used, see the press release here.
As we enter 2013, prognostications abound regarding object recognition technology and its likely impact on the economy, jobs and the human condition.
Some paint a grim picture of human obsolescence and slow growth. Others paint a utopian image of humans and machines extending each other’s capabilities, unlocking new economic vistas for the benefit of all.
In the less sanguine camp is Nobel Prize-winning economist Paul Krugman, who takes issue with the Congressional Budget Office’s (CBO) seemingly pat assumption that long-term growth will continue at about the same rates we’ve seen for the past few decades.
On the more optimistic side is Bianca Bosker, Executive Tech Editor for the Huffington Post. She does a masterful job synthesizing a wide array of sources to make a balanced case.
Writing in the New York Times, Krugman points to Robert Gordon of Northwestern University and his contention that the age of growth that began in the late 1700s may be drawing to a close. He sees Gordon’s reasoning as a useful basis for doubting the CBO’s projections, though Krugman himself does not agree with Gordon.
Gordon contends that growth has occurred unevenly, driven by several discrete industrial revolutions, each of which took us to the next level of major growth. The first was the steam engine; the second, the internal combustion engine, electrification, and chemical engineering. The third is the information age and the Internet, in which smart machines deliver their payoff to fewer people than was the case in the second revolution.
Krugman posits that machines with ever-improving artificial intelligence and object recognition capabilities will likely fuel higher productivity and economic growth. He even states it would be “all too easy” to fear that smart machines will bring about the mass obsolescence of American workers.
If so, is Krugman saying the CBO’s long term projections are too conservative? Could this be a silver lining of sorts? He then asks the more unsettling question, “Who will benefit from this growth?”
Krugman promises in a future column to take up why the conventional wisdom underpinning long-run budget projections is “all wrong.” When he does, we should get a clearer view of his take on the roles object recognition, machine learning, and human beings will play in the economy of tomorrow.
Bosker, in striking a balance between human obsolescence and human empowerment, seconds Kevin Kelly’s prediction in Wired that “robo surgeons” and “nannybots” will surely take over human jobs.
She then explores Google’s Project Glass as an example of wearable computers soon to arrive that observe and record our surroundings like an add-on brain.
Bosker quotes AI researcher Rod Furlan who speculates that Google Glass could soon help us find misplaced car keys. She predicts facial recognition will help us remember people’s names as soon as they come into view and bypass a potentially awkward encounter. And that object recognition could encourage us to skip an indulgent food we’d best not eat.
Attention shoppers with smartphone in hand: snap a photo of what someone else is wearing and find it on sale now!
L’Atelier of Paris reports on a startup from Sweden called OCULUSai whose Android app, Productify, lets you do just that (iOS version coming soon).
Productify recognizes the shoes, clothing, or accessories worn by someone in your gaze and responds with a list of sites where you can buy them. It also displays information on related products.
OCULUSai is focusing its efforts on the world of fashion and apparel and presented its solution at LeWeb in Paris in early December.
The technologies woven together include: computer vision, object recognition, image scanning, database marketing, and social media integration.
When you take a picture of an object, the app queries a database pre-populated with extensive fashion and apparel listings and related data about their visual qualities. In turn, the server, Oculus Brain, lists e-commerce sites where the item is for sale. You can share the product on Facebook and Twitter and recommend it to a friend.
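L’Atelier doesn’t describe OCULUSai’s internals, so the following is only a hypothetical sketch of that lookup step: match the photo’s visual descriptor against a pre-populated catalog, return the best item’s listings, and surface its closest relatives as related products. The catalog, descriptors, and shop URLs are all invented.

```python
def dist(a, b):
    """Euclidean distance between two visual descriptors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

CATALOG = {
    # item -> (visual descriptor, e-commerce listings); all invented data
    "red_heels":   ([0.9, 0.1, 0.3], ["shopA.example/red-heels"]),
    "blue_scarf":  ([0.1, 0.8, 0.5], ["shopB.example/blue-scarf"]),
    "red_handbag": ([0.85, 0.15, 0.4], ["shopC.example/red-handbag"]),
}

def lookup(photo_descriptor, related=1):
    """Return the best-matching catalog item plus its closest relatives."""
    ranked = sorted(CATALOG, key=lambda k: dist(photo_descriptor, CATALOG[k][0]))
    best = ranked[0]
    return {"match": best,
            "buy_at": CATALOG[best][1],
            "related": ranked[1:1 + related]}

result = lookup([0.88, 0.12, 0.32])
print(result["match"], result["related"])  # red_heels, with red_handbag related
```

In a production system the descriptors would come from a trained image model and the search would use an index rather than a linear scan, but the shape of the query is the same.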
I guess this means the days of people guessing how much you spent on that outfit are over!
So, how do you see augmented reality combining with e-commerce to transform your industry?
The Herald Sun of Australia reports that the Queensland Police Service (QPS) is in negotiations to begin using special facial recognition software as the state prepares to host the G20 Summit in 2014.
Police would be able to compare images taken from CCTV or with a phone against mugshots in a database and get back high-probability matches in under one second. They view the technology as essential for combating criminal and terrorist networks.
Meanwhile, civil libertarians have raised concerns that the databases will be scraped from Facebook and other sources with no framework in place to prevent abuse. They are calling for a national data commissioner with the power to investigate and prosecute to address misuse or loss of personal data.
The database currently holds between 700,000 and 1,000,000 mugshots. QPS will begin mobile data trials in 2013, using iPhones and iPads to capture faces for identification.
Given the difficulty of obtaining fingerprints, DNA, and other biometrics, faces are seen as vital to tracking terrorist threats. The system is expected to handle any variation in facial expression, and it can return matches even when a person looks away, turns their head sideways by 60 degrees, or tilts it up or down by 20 degrees.
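The article only describes the system’s behavior, not its implementation, but the two reported constraints, a pose-tolerance envelope and high-probability matches, can be sketched like this. The similarity score and all thresholds besides the 60- and 20-degree limits are assumptions.

```python
def within_pose_tolerance(yaw_deg, pitch_deg, max_yaw=60, max_pitch=20):
    """The reported system tolerates up to 60 degrees of sideways head turn
    and 20 degrees of up/down tilt; reject captures outside that envelope."""
    return abs(yaw_deg) <= max_yaw and abs(pitch_deg) <= max_pitch

def match_mugshots(probe, database, min_score=0.9):
    """Return (id, score) pairs above a confidence threshold, best first.

    Similarity here is a toy inverse-distance score over feature vectors;
    a real system would use a trained face embedding.
    """
    results = []
    for person_id, features in database.items():
        d = sum((a - b) ** 2 for a, b in zip(probe, features)) ** 0.5
        score = 1.0 / (1.0 + d)
        if score >= min_score:
            results.append((person_id, round(score, 3)))
    return sorted(results, key=lambda r: r[1], reverse=True)

db = {"mugshot_001": [0.2, 0.7, 0.5], "mugshot_002": [0.9, 0.1, 0.3]}
if within_pose_tolerance(yaw_deg=45, pitch_deg=10):
    print(match_mugshots([0.21, 0.69, 0.52], db))  # only mugshot_001 clears 0.9
```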
With this kind of robust capability, what would a facial recognition system make possible in your business? Go ahead and leave your comment below.
This blog is sponsored by ImageGraphicsVideo, a company offering custom Computer Vision software development services.