Last month, researchers at the University of Central Florida presented a new facial recognition tool at the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) in Columbus, Ohio.
While there is no shortage of facial recognition tools used by companies and governments the world over, this one is unique in that its aim is to unite or reunite children with their biological parents.
The university’s Center for Research in Computer Vision initially got to work by creating a database of more than 10,000 images of famous people, such as politicians and celebrities, and their children.
The tool uses a specially designed algorithm that breaks each face down into sections and compares individual facial parts; candidate matches are then ranked from most to least likely.
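The part-by-part ranking described above can be sketched roughly as follows, assuming each facial region has already been reduced to a numeric feature vector. The region names, feature extraction, and averaged cosine-similarity scoring here are illustrative assumptions, not UCF’s published method:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def rank_candidates(child_parts, candidate_parents):
    """Score each candidate parent by averaging per-part similarities,
    then sort so the most likely matches come first.

    child_parts / each candidate: dict mapping a facial region name
    (e.g. "eyes", "nose", "mouth") to a feature vector.
    """
    scores = []
    for name, parent_parts in candidate_parents.items():
        sims = [cosine_similarity(child_parts[p], parent_parts[p])
                for p in child_parts]
        scores.append((name, sum(sims) / len(sims)))
    return sorted(scores, key=lambda s: s[1], reverse=True)
```

Comparing regions separately, rather than whole faces, lets a strong resemblance in one feature (say, the eyes) contribute to the score even when other features differ.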
Though software for this purpose already exists, this tool performed anywhere from 3 to 10 percent better than those programs, and it naturally surpasses the recognition capabilities of humans, who base their judgments on overall appearance rather than measurable features. It also reaffirmed the finding that sons resemble their fathers more than their mothers, and daughters resemble their mothers more than their fathers.
In an effort to help protect and conserve endangered species, scientists have been tracking and tagging them for years. However, there are some species that are either too large in population or too sensitive to tagging, and researchers have been working on another way to track them.
Now, thanks to SLOOP, a new computer vision software program from MIT, identifying animals has never been easier. A human sorting through 10,000 images would likely take years to properly identify animals, but this computer program cuts down on the manpower required and works much more quickly. Through the use of pattern-recognition algorithms, the program matches up the stripes and spots on an animal and returns 20 images that are likely matches, giving researchers a much smaller and more accurate pool to work with. The researchers then turn to crowdsourcing, and with the aid of adept pattern-matchers are able to narrow things down even further, reaching 97% accuracy. This allows researchers to spend more practical time in the field working on conservation efforts instead of sitting in front of a computer screen.
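The narrowing step can be sketched as a simple nearest-neighbor search, assuming each photo has already been reduced to a descriptor vector encoding its stripe or spot pattern. The descriptor itself and the choice of Euclidean distance are assumptions here; SLOOP’s actual matching algorithms are more involved:

```python
import numpy as np

def top_matches(query_desc, catalog, k=20):
    """Return the k catalog entries whose descriptors are closest to
    the query, best first. `catalog` maps an animal ID to its
    descriptor vector; researchers (or crowdsourced helpers) then
    review only this short list instead of the full image set."""
    dists = [(animal_id, float(np.linalg.norm(query_desc - desc)))
             for animal_id, desc in catalog.items()]
    dists.sort(key=lambda d: d[1])
    return dists[:k]
```

The point of the design is that the computer does not need to be right on the first guess; it only needs to put the correct animal somewhere in the top 20, where a human reviewer will spot it.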
As advancements in facial recognition are made, many people have become increasingly worried about protecting their privacy. And while there are ways to hide or obscure a face, many assumed that makeup wasn’t enough to fool the cameras.
However, researchers in Michigan and West Virginia have set out to disprove that idea, demonstrating how makeup actually can change the apparent identity of an individual. While head pose, facial expressions, and lighting don’t confuse computers, natural aging and face-altering methods like plastic surgery can. Now, makeup can be added to the list.
This is because makeup can change the apparent shape and texture of a face: playing the natural contours up or down, altering the apparent size and quality of certain features, and even camouflaging identifying marks such as scars, birthmarks, moles, or tattoos. A simple application of makeup isn’t enough to do the trick, of course, but heavy layers can be.
In a world with new computer vision-related software being introduced regularly, it’s no surprise that many consumers feel as though there is nothing they can do to protect themselves against an unwanted invasion of privacy.
However, just as companies come out with new facial recognition technology and algorithm-based programs, there are other companies that are helping customers gain a bit more control over how much of a presence they have on the web.
One example is VersusMedia, a Los Angeles-based company that launched Scramble Face in March, a product targeted at users who want to locate and remove pictures of themselves that have been posted or indexed across the Internet.
The premise is that in a world where potential employers and educators research applicants ahead of time, users should have some control over the content that appears on the Internet, be it something they posted or something that someone else uploaded.
Users who register with Scramble Face will upload pictures, and then pay for a 90-day period. During this time, the program continually scans the Internet for photos matching the individual, and provides a website name and number of pictures matched on each particular site.
What remains to be seen is whether the site helps with the removal process of identified photos, or simply makes users aware of images that exist, leaving them to deal with it on their own.
Instagram users threw a fit late last year when the popular photo app announced its new terms of service, many of which users felt were a violation of privacy.
The main thing users took issue with was the ownership of photos: that is, whether Instagram would be allowed to take photos from its users and re-appropriate them as the company sees fit.
But what many people don’t realize is that their photos are already being used in the marketing and advertising worlds. Just consider gazeMetrix, a startup that uses computer vision and machine learning when sorting through photos on social media platforms, in order to recognize brand logos and trademarks being photographed.
By finding where and how these logos appear, companies can promote their brands more effectively: targeting ads to the proper markets, seeing how their products are being used, and communicating with the users of those specific products.
Add cancer to the list of medical problems computer vision can help diagnose and treat.
At the Lawrence Berkeley National Laboratory in California, researchers have created a program that analyzes images of tumors (of which there are thousands, stored in the database of The Cancer Genome Atlas project). This program relies on an algorithm that sorts through image sets and helps identify tumor subtypes – a process which is not so easy considering no two tumors are alike.
After sorting through the images, the program categorizes them by subtype and organizational structure, then matches those categories against clinical data that indicate how a patient with a given tumor is likely to respond to treatment.
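At its core, grouping images into candidate subtypes is a clustering problem. A minimal sketch, assuming each tumor image has already been summarized as a feature vector; the feature extraction and the use of k-means here are illustrative assumptions, not the lab’s actual algorithm:

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def group_into_subtypes(features, n_subtypes, seed=0):
    """Cluster per-image feature vectors into candidate subtypes.
    Returns one cluster label per image; each cluster can then be
    cross-referenced with clinical outcome data."""
    np.random.seed(seed)  # make the clustering reproducible
    _, labels = kmeans2(features, n_subtypes, minit='++')
    return labels
```

Once images carry subtype labels, each cluster can be checked against patient records to see whether the members of a cluster respond to treatment in similar ways.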
For more on what this program can do and how it can be used, see the press release here.
Computer Vision can seem like a daunting field to those not familiar with it. However, the University of Canterbury in New Zealand has created an online, interactive “textbook” geared toward teaching high school students about this creative, emerging field.
Just watch this video to see how easy explaining Computer Vision can be and how applicable it is in our everyday lives.
Whale sharks boast a unique pattern of white spots on dark skin, which lends itself to the kind of “blob extraction” that astrophysicists use to identify stars and other bodies in space.
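A minimal version of that blob-extraction step, assuming a grayscale photo in which the spots are brighter than the surrounding skin; the fixed threshold is a simplification of the real pipeline:

```python
import numpy as np
from scipy import ndimage

def extract_spot_centroids(image, threshold=0.5):
    """Threshold bright spots on a dark background, label each
    connected component, and return the centroid of every spot.
    The resulting spot constellation can then be compared between
    photos, much like matching star patterns between sky images."""
    mask = image > threshold
    labels, n_spots = ndimage.label(mask)
    return ndimage.center_of_mass(mask, labels, range(1, n_spots + 1))
```

Because the spot pattern is stable over an animal’s life, the same constellation of centroids can identify the same shark in photos taken years apart.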
Once this technology paved the way, it opened the doors for other types of identification, this time for dolphins. Using manual photo identification, researchers were able to identify dolphins by the marks on their dorsal fins. Yet even this process was too time-consuming.
Recently, however, a computer science professor at Eckerd College, with the help of her students, created the program DARWIN (Digital Analysis and Recognition of Whale Images on a Network). It speeds things up by using a combination of computer vision and signal processing techniques to automate what was a manual process.
After an outline of a bottlenose dolphin’s fin is traced, the system builds up a database and, using computer vision algorithms, matches unknown fins against those already identified. The images are then displayed in a ranked list, from highly probable matches down to less likely ones.
This is an interesting development for sea animals, because the identification process is faster and more reliable. But what practical applications might be involved? What benefits will researchers of marine life have from programs like this? And what applications can this have in other realms of computer vision and identification?
Those concerned about the privacy implications of facial recognition databases, eye scans, or fingerprint matching might find ear biometrics a much more appealing way of identifying and matching individuals.
It is standard fare in action movies involving secrets and spies to see the protagonist trick iris recognition scanners into allowing access to off-limits vaults or restricted areas. Unfortunately, fooling these machines isn’t as unlikely as it might seem, given the recently uncovered fallibility of such security measures.
A research team at the University of Notre Dame in Indiana discovered this when matching up more than 20,000 images of 644 different irises taken over a three-year period. The result? While photos taken a month apart matched up reliably, those separated by the three-year gap showed a 153% increase in the false non-match rate. In other words, irises change over time.
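The statistic being measured here is the false non-match rate: the fraction of genuine same-iris comparisons that a system wrongly rejects. A minimal sketch, where the scores and threshold are illustrative (real iris matchers typically compare Hamming distances between iris codes):

```python
def false_non_match_rate(genuine_scores, threshold):
    """Fraction of genuine (same-iris) comparisons whose match score
    falls below the accept threshold, i.e. wrongly rejected. A 153%
    increase means the three-year rate is roughly 2.5 times the
    one-month rate."""
    rejected = sum(1 for score in genuine_scores if score < threshold)
    return rejected / len(genuine_scores)
```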
According to the team, this is something that could become worse if current technology isn’t improved or updated, and could result in either legitimate persons being locked out of a system or others tricking security at checkpoints.
Right now, these findings point to the idea that images of irises should be regularly updated. Additionally, new technology should be created to take these changing irises into account.