Determining familial matches with facial recognition

Photo courtesy of UCF

Last month, researchers at the University of Central Florida presented a new facial recognition tool at the IEEE Conference on Computer Vision and Pattern Recognition in Columbus, Ohio.

While there is no shortage of facial recognition tools used by companies and governments the world over, this one is unique in that its aim is to unite or reunite children with their biological parents.

The university’s Center for Research in Computer Vision began by building a database of more than 10,000 images of famous people, such as politicians and celebrities, and their children.

The tool relies on a specially designed algorithm that breaks a face down into sections and compares individual facial parts; candidate matches are then ranked from most to least likely.
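
The paper’s actual features and similarity measure aren’t described here, so the following is only a hypothetical sketch of the part-based idea: score each facial region separately, then rank candidate parents by their combined similarity. The part names, feature vectors, and metric are all invented for illustration.

```python
import numpy as np

# Hypothetical sketch of part-based kinship matching: each face is
# represented by one feature vector per facial region. The real UCF
# algorithm's features and weights are not public here, so the parts,
# metric, and weights below are illustrative assumptions.

PARTS = ["eyes", "nose", "mouth", "jawline"]

def part_similarity(child_feats, parent_feats):
    """Average cosine similarity across facial parts."""
    sims = []
    for part in PARTS:
        a, b = child_feats[part], parent_feats[part]
        sims.append(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return float(np.mean(sims))

def rank_candidates(child_feats, candidates):
    """Sort candidate parents from most to least likely match."""
    scored = [(name, part_similarity(child_feats, feats))
              for name, feats in candidates.items()]
    return sorted(scored, key=lambda s: s[1], reverse=True)

# Toy usage with random vectors standing in for real descriptors.
rng = np.random.default_rng(0)
fake = lambda: {p: rng.normal(size=64) for p in PARTS}
child = fake()
print(rank_candidates(child, {"candidate_a": fake(), "candidate_b": fake()}))
```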

Though software for this purpose already exists, the new tool performed anywhere from 3 to 10 percent better than those programs, and it naturally surpasses the recognition abilities of humans, who judge resemblance by overall appearance rather than by measurable features. The research also reaffirmed that sons resemble their fathers more than their mothers, and daughters resemble their mothers more than their fathers.

What other ways could this tool be useful?


Grammar-like algorithm identifies actions in video

Photo courtesy of http://www.freeimages.co.uk/

Body language is a powerful thing, allowing us to gauge the tone and intention of a person, often without accompanying words. But is this a skill that is unique to humans, or are computers also capable of being intuitive?

To date, picking up on the subtext of a person’s movements is still not something machines can do. However, researchers at MIT and UC Irvine have developed an algorithm that can observe small actions in videos and string them together into an idea of what is occurring. Much as grammar helps connect ideas into complete thoughts, the algorithm can not only analyze what actions are taking place but also guess what movements will come next.
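
The MIT/UC Irvine algorithm is considerably more sophisticated, but a minimal stand-in for the “grammar over actions” idea is a model that learns which action tends to follow which, then predicts the next movement in a sequence. The action labels and training data below are invented for illustration.

```python
from collections import Counter, defaultdict

# Minimal stand-in for the "grammar over actions" idea: learn which
# action tends to follow which (a first-order model) and use that to
# guess the next movement in a sequence. The real algorithm is richer.

def learn_transitions(sequences):
    """Count how often each action is followed by each other action."""
    counts = defaultdict(Counter)
    for seq in sequences:
        for current, nxt in zip(seq, seq[1:]):
            counts[current][nxt] += 1
    return counts

def predict_next(counts, current_action):
    """Return the most likely next action, or None if unseen."""
    following = counts.get(current_action)
    return following.most_common(1)[0][0] if following else None

# Toy training data: annotated action sequences from hypothetical videos.
demos = [
    ["approach", "reach", "grasp", "lift"],
    ["approach", "reach", "grasp", "pour"],
    ["approach", "reach", "grasp", "lift"],
]
model = learn_transitions(demos)
print(predict_next(model, "grasp"))  # -> "lift" (seen twice vs. once)
```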

There are a handful of ways this technology could benefit humans. For example, it could help an athlete practicing his or her form and technique. Researchers also posit that it could be useful in a future where humans and robots share the same workspace and perform similar tasks.

But with any technological advancement comes the question of cost–not money, but privacy. In this case, would the positives outweigh the negatives? In what ways can you envision this tool being helpful for your everyday tasks?


Roboray: the next step toward human-like robots

Photo courtesy of the University of Bristol and Samsung

Many people have imagined a day when robots would be advanced enough to help with simple and complex tasks alike, and researchers at the University of Bristol, working with Samsung, have now taken steps toward accomplishing that.

The robot, named Roboray, relies on cameras, real-time 3D visual maps, and computer vision algorithms to move around, “remembering” where it has been before. This allows it to navigate autonomously even when GPS information is not available. The same technology lets Roboray walk in a more human-like way, with gravity helping drive each step; this not only requires less energy but also gives the robot a more natural, human-like gait.
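
Roboray’s real pipeline builds dense 3D maps, but the core “remembering where it has been” idea can be sketched, very loosely, as a lookup of the current view against stored keyframes. The feature vectors, poses, and distance threshold below are invented for illustration.

```python
import numpy as np

# Hypothetical sketch of GPS-free place memory: the robot keeps a map
# of visual keyframes (here, plain feature vectors plus the pose where
# each was captured) and localizes itself by finding the closest
# stored view. This is only the core lookup idea, not Roboray's system.

class PlaceMemory:
    def __init__(self):
        self.descriptors = []  # one feature vector per remembered view
        self.poses = []        # (x, y, heading) where each view was seen

    def remember(self, descriptor, pose):
        self.descriptors.append(np.asarray(descriptor, dtype=float))
        self.poses.append(pose)

    def localize(self, descriptor, max_distance=1.0):
        """Return the stored pose of the most similar view, if any."""
        if not self.descriptors:
            return None
        query = np.asarray(descriptor, dtype=float)
        dists = [np.linalg.norm(query - d) for d in self.descriptors]
        best = int(np.argmin(dists))
        return self.poses[best] if dists[best] <= max_distance else None

memory = PlaceMemory()
memory.remember([0.9, 0.1, 0.4], pose=(2.0, 5.0, 90.0))   # hallway view
memory.remember([0.1, 0.8, 0.2], pose=(7.0, 1.0, 180.0))  # kitchen view
print(memory.localize([0.85, 0.15, 0.38]))  # -> hallway pose
```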

Would you consider purchasing a robot like Roboray? What kinds of tasks would you find the robot most helpful in assisting you with?

Robots discovering objects through computer vision

While having a personal robot à la Rosie in the television show The Jetsons may have been the dream of every child, such a thing is now closer to becoming a reality.

Meet HERB, the Home-Exploring Robot Butler created by researchers at Carnegie Mellon University as part of the Lifelong Robotic Object Discovery project.

The robot, which is armed with a Kinect camera and relies on computer vision, carries a memory loaded with digital models and images of objects to aid recognition. The goal is to create a robot that not only recognizes what it has already been taught but also grows that knowledge on its own, without its database being expanded manually. It does this not simply by seeing things, but by exploring its environment and interacting with the objects in it.

The Kinect camera aids HERB in three-dimensional recognition, and an object’s location is also telling. Additionally, HERB can distinguish between items that move and those that don’t. As it interacts with its environment, it eventually determines whether something counts as an object, that is, whether it can be lifted.
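
The CMU team’s actual decision rules aren’t spelled out in this post, so the sketch below is only a hypothetical illustration of the discovery idea: fuse a few simple cues about a candidate segment (did it move between visits? could the arm lift it?) and promote it to the object database when the cues agree. All names and cues are invented.

```python
from dataclasses import dataclass, field

# Illustrative sketch of HERB-style object discovery: none of these
# cue names or rules come from the CMU system itself. The idea is to
# fuse simple cues and, when confident, add the new object to the
# robot's database without manual labeling.

@dataclass
class Candidate:
    name: str
    moved_between_visits: bool  # seen at different spots over time
    liftable: bool              # manipulation test succeeded

@dataclass
class ObjectDatabase:
    known: dict = field(default_factory=dict)

    def consider(self, candidate: Candidate) -> bool:
        """Promote a candidate to a known object if the cues agree."""
        if candidate.moved_between_visits and candidate.liftable:
            self.known[candidate.name] = candidate
            return True
        return False

db = ObjectDatabase()
print(db.consider(Candidate("mug", moved_between_visits=True, liftable=True)))    # True
print(db.consider(Candidate("table", moved_between_visits=False, liftable=False)))  # False
```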

This work could eventually lead to robots that do things for humans, such as fetching items or helping to clean. And while there is still a long way to go, the possibilities seem endless.

Image recognition used with Instagram, other social media sites

Instagram users threw a fit late last year when the popular photo app announced its new terms of service, many of which users felt were a violation of privacy.

The main thing users took issue with was the ownership of photos: whether Instagram would be allowed to take photos from its users and re-appropriate them as the company sees fit.

But what many people don’t realize is that their photos are already being used in the marketing and advertising worlds. Just consider gazeMetrix, a startup that uses computer vision and machine learning to sort through photos on social media platforms and recognize the brand logos and trademarks being photographed.

By finding where and how these logos appear, companies can promote their brands more effectively: targeting ads at the right markets, seeing how their products are actually being used, and communicating directly with users of those products.
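
gazeMetrix’s pipeline is proprietary, so the sketch below only shows the shape of the task: scan a stream of social photos for logos and tally brand appearances. The `detect_logos` stub stands in for a real trained detector, and the photo data is invented.

```python
from collections import Counter

# Hedged sketch of brand monitoring over a photo stream. A real system
# would run a trained computer-vision model per image; here a stub
# simply reads pre-attached labels so the aggregation logic is runnable.

def detect_logos(photo):
    """Stub detector: a real system would run a CNN or feature matcher."""
    return photo.get("logos", [])

def tally_brand_appearances(photos):
    """Count how often each brand's logo shows up across the stream."""
    counts = Counter()
    for photo in photos:
        for brand in detect_logos(photo):
            counts[brand] += 1
    return counts

stream = [
    {"user": "a", "logos": ["starbucks"]},
    {"user": "b", "logos": ["starbucks", "nike"]},
    {"user": "c", "logos": []},
]
print(tally_brand_appearances(stream))  # Counter({'starbucks': 2, 'nike': 1})
```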

An article on Forbes.com provides examples of the many ways this data can be used. What are some other potential uses?

Computer vision algorithm helps identify tumors

Image courtesy of Berkeley Lab

Add cancer to the list of medical problems that computer vision can help diagnose and treat.

At the Lawrence Berkeley National Laboratory in California, researchers have created a program that analyzes images of tumors, thousands of which are stored in the database of The Cancer Genome Atlas project. The program relies on an algorithm that sorts through image sets and helps identify tumor subtypes, a process that is far from easy given that no two tumors are alike.

After sorting through the images, the program categorizes them by subtype and organizational structure, then matches those findings against clinical data to indicate how a patient with a particular tumor is likely to respond to treatment.
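
The Berkeley Lab program is far richer than this, but the basic flow it describes can be sketched as: reduce each tumor image to a feature vector, cluster the vectors into candidate subtypes, and join the subtypes against clinical outcomes. Everything below (the features, the cluster count, the outcomes) is invented for illustration.

```python
import numpy as np

# Hedged sketch of subtype discovery: fake image features are clustered
# with a bare-bones k-means, then each cluster (candidate subtype) is
# joined against toy clinical outcomes. Not the Berkeley Lab algorithm.

def kmeans(features, k, iters=20, seed=0):
    """Minimal k-means returning a cluster label per sample."""
    rng = np.random.default_rng(seed)
    centers = features[rng.choice(len(features), size=k, replace=False)]
    for _ in range(iters):
        # assign each sample to its nearest center
        labels = np.argmin(
            np.linalg.norm(features[:, None] - centers[None], axis=2), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = features[labels == j].mean(axis=0)
    return labels

rng = np.random.default_rng(1)
image_features = rng.normal(size=(12, 8))  # 12 tumor images, 8-d features
subtype = kmeans(image_features, k=3)

# Join subtype labels with (toy) clinical outcomes per patient.
outcomes = rng.choice(["responded", "did not respond"], size=12)
for s in range(3):
    group = outcomes[subtype == s]
    if group.size:
        print(f"subtype {s}: {np.mean(group == 'responded'):.0%} responded")
```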

For more on what this program can do and how it can be used, see the press release here.


Will Object Recognition Drive Growth or Slash Jobs?

As we enter 2013, prognostications abound regarding object recognition technology and its likely impact on the economy, jobs and the human condition.

Some paint a grim picture of human obsolescence and slow growth. Others paint a utopian image of humans and machines extending each other’s capabilities, unlocking new economic vistas for the benefit of all.

In the less sanguine camp is Nobel Prize-winning economist Paul Krugman, who takes issue with the Congressional Budget Office’s (CBO) seemingly pat assumption that long-term growth will occur at about the same rates we’ve seen for the past few decades.

Google Glass, courtesy of Google, Inc.

On the more optimistic side is Bianca Bosker, Executive Tech Editor for the Huffington Post. She does a masterful job synthesizing a wide array of sources to make a balanced case.

Writing in The New York Times, Krugman points to Robert Gordon of Northwestern University and his contention that the age of growth that began in the late 1700s may be drawing to a close. Krugman sees Gordon’s reasoning as a useful basis for doubting the CBO’s projections, even though he does not actually agree with Gordon.

Gordon contends that growth has occurred unevenly, driven by several discrete industrial revolutions, each of which took us to the next level of major growth. The first was powered by the steam engine; the second by the internal combustion engine, electrification, and chemical engineering. The third is the information age and the Internet, in which smart machines deliver their payoff to fewer people than was the case in the second revolution.

Krugman posits that machines with ever-improving artificial intelligence and object recognition capabilities will likely fuel higher productivity and economic growth. He even states it would be “all too easy” to fear that smart machines will bring about the mass obsolescence of American workers.

If so, is Krugman saying the CBO’s long-term projections are too conservative? Could this be a silver lining of sorts? He then asks the more unsettling question, “Who will benefit from this growth?”

Krugman promises in a future column to take up why the conventional wisdom underpinning long-run budget projections is “all wrong.” When he does, we should get a clearer view of his take on the roles that object recognition, machine learning, and human beings will play in the economy of tomorrow.

Bosker, in striking a balance between human obsolescence and human empowerment, seconds Kevin Kelly’s prediction in Wired that “robo surgeons” and “nannybots” will surely take over human jobs.

She then explores Google’s Project Glass as an example of wearable computers soon to arrive that observe and record our surroundings like an add-on brain.

Bosker quotes AI researcher Rod Furlan, who speculates that Google Glass could soon help us find misplaced car keys. She predicts that facial recognition will help us remember people’s names as soon as they come into view, letting us bypass a potentially awkward encounter, and that object recognition could encourage us to skip an indulgent food we’d best not eat.
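
To make the name-recall idea concrete, here is a purely speculative sketch, not Google’s implementation: a wearable matches the face currently in view against stored embeddings of people you know. The embeddings, threshold, and contact list are all invented for illustration.

```python
import numpy as np

# Speculative sketch of name recall on a wearable: embed the face in
# view and look it up against stored embeddings of known contacts.
# Real pipelines would use a trained face-embedding network; these
# vectors are placeholders.

contacts = {
    "Alice": np.array([0.9, 0.1, 0.3]),
    "Bob":   np.array([0.2, 0.8, 0.5]),
}

def whisper_name(face_embedding, threshold=0.25):
    """Return the closest known name, or None for a stranger."""
    best_name, best_dist = None, float("inf")
    for name, stored in contacts.items():
        dist = np.linalg.norm(face_embedding - stored)
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist <= threshold else None

print(whisper_name(np.array([0.88, 0.12, 0.33])))  # -> "Alice"
print(whisper_name(np.array([0.5, 0.5, 0.5])))     # -> None (unknown face)
```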

Ultimately, Bosker holds that big data married with gut instinct offers us an opportunity to stake out new professions, as laid out by New York Times writer Steve Lohr in his story, “Sure, Big Data Is Great. But So Is Intuition.”

So what’s your gut telling you? As wearable computers become more affordable, how do you see them revolutionizing your business?