Instagram users threw a fit late last year when the popular photo app announced its new terms of service, many of which users felt violated their privacy.
The main thing users took issue with was the ownership of photos: whether Instagram would be allowed to take its users’ photos and repurpose them as the company sees fit.
But what many people don’t realize is that their photos are already being used in the marketing and advertising worlds. Just consider gazeMetrix, a startup that uses computer vision and machine learning to sort through photos on social media platforms and recognize the brand logos and trademarks that appear in them.
By tracking where and how these logos appear, companies can target ads to the right markets, see how their products are being used, and communicate directly with the users of specific products.
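Production systems like gazeMetrix rely on trained detectors, but the core idea – scanning a photo for a region that closely resembles a known logo – can be sketched with plain normalized cross-correlation. This is a toy illustration, not gazeMetrix’s actual method:

```python
import numpy as np

def find_logo(image, logo, threshold=0.9):
    """Slide the logo over the image; return the (row, col) offsets where the
    normalized cross-correlation with the underlying patch exceeds threshold."""
    ih, iw = image.shape
    lh, lw = logo.shape
    logo_z = (logo - logo.mean()) / (logo.std() + 1e-12)
    hits = []
    for r in range(ih - lh + 1):
        for c in range(iw - lw + 1):
            patch = image[r:r + lh, c:c + lw]
            patch_z = (patch - patch.mean()) / (patch.std() + 1e-12)
            if (logo_z * patch_z).mean() > threshold:
                hits.append((r, c))
    return hits

# Plant a 6x6 "logo" inside a noisy 40x40 grayscale "photo", then find it again.
rng = np.random.default_rng(0)
photo = rng.random((40, 40))
logo = rng.random((6, 6))
photo[5:11, 7:13] = logo
print(find_logo(photo, logo))
```

A real system would use learned features rather than exact correlation, so it could survive rotation, lighting changes, and partial occlusion – but the matching intuition is the same.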
Add cancer to the list of medical problems computer vision can be used in diagnosing and treating.
At the Lawrence Berkeley National Laboratory in California, researchers have created a program that analyzes images of tumors (of which there are thousands, stored in the database of The Cancer Genome Atlas project). This program relies on an algorithm that sorts through image sets and helps identify tumor subtypes – a process which is not so easy considering no two tumors are alike.
After sorting through the images, the program categorizes them by subtype and organizational structure, then matches them against clinical data that indicate how a patient with a given tumor is likely to respond to treatment.
For more on what this program can do and how it can be used, see the press release here.
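The sorting step described above is essentially clustering: grouping tumor images whose measured features look alike. Here is a minimal sketch using plain k-means, assuming each image has already been reduced to a numeric feature vector – the lab’s real pipeline is far more sophisticated:

```python
import numpy as np

def kmeans(features, k, iters=20):
    """Plain Lloyd's k-means with a deterministic, spread-out initialization."""
    idx = np.linspace(0, len(features) - 1, k).astype(int)
    centers = features[idx].astype(float)
    for _ in range(iters):
        # Assign each sample to its nearest center.
        labels = np.argmin(((features[:, None, :] - centers) ** 2).sum(-1), axis=1)
        # Move each center to the mean of its assigned samples.
        for j in range(k):
            if (labels == j).any():
                centers[j] = features[labels == j].mean(axis=0)
    return labels

# Two synthetic "subtypes": feature vectors clustered around distinct points.
rng = np.random.default_rng(1)
subtype_a = rng.normal((0.0, 0.0), 0.5, (20, 2))
subtype_b = rng.normal((10.0, 10.0), 0.5, (20, 2))
labels = kmeans(np.vstack([subtype_a, subtype_b]), k=2)
```

With well-separated feature clusters, the two synthetic subtypes come back with two distinct labels – the hard part in practice is extracting features in which real subtypes separate this cleanly.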
Computer Vision can seem like a daunting field to those unfamiliar with it. However, the University of Canterbury in New Zealand has created an online, interactive “textbook” geared toward teaching high school students about this creative, emerging field.
Just watch this video to see how easy explaining Computer Vision can be and how applicable it is in our everyday lives.
According to recently published findings, worms are among the many tiny multicellular organisms that make effective test subjects for genetics research.
Using artificial intelligence combined with advanced image processing, scientists can inspect and process these worms – known as Caenorhabditis elegans – more quickly and efficiently than in the past. With a camera that records 3D images of worms and compares them against a model of abnormal worms, the machine can not only tell the difference but also learn from it, teaching itself as it goes.
By picking out distinguishing features better than humans can on their own, this technology highlights genetic differences between the worms – a potential key to unlocking further advances in genetic research and, in years to come, testing in humans.
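The compare-against-a-model step can be sketched as nearest-model classification plus a running-mean update – a toy stand-in for the researchers’ actual machine learning, with made-up feature names and units:

```python
import numpy as np

def classify(features, normal_model, abnormal_model):
    """Label a worm by whichever model its feature vector lies closer to."""
    d_n = np.linalg.norm(features - normal_model)
    d_a = np.linalg.norm(features - abnormal_model)
    return "abnormal" if d_a < d_n else "normal"

def update_model(model, features, n_seen):
    """'Teach itself as it goes': fold a newly labeled worm into its model
    as a running mean over the n_seen worms absorbed so far."""
    return model + (features - model) / (n_seen + 1)

normal = np.array([1.0, 0.2])    # e.g. (body length, curvature) - invented values
abnormal = np.array([0.6, 0.9])
worm = np.array([0.65, 0.8])
label = classify(worm, normal, abnormal)
abnormal = update_model(abnormal, worm, n_seen=10)
```

Each classified worm nudges its model slightly, so the system’s notion of “abnormal” sharpens as it processes more specimens.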
It is standard fare in action movies about secrets and spies for the protagonist to trick an iris recognition scanner into granting access to an off-limits vault or restricted area. Unfortunately, fooling these machines may not be so unlikely after all, considering the recently uncovered fallibility of such security measures.
A research team at the University of Notre Dame in Indiana discovered this by matching up more than 20,000 images of 644 different irises taken over a three-year period. The result? While photos taken a month apart matched reliably, those separated by the three-year gap showed a 153% increase in the false non-match rate. In other words, irises change over time.
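To make the statistic concrete: a 153% increase means the false non-match rate is multiplied by 2.53. The baseline below is hypothetical, chosen only for illustration – it is not the study’s actual figure:

```python
def increased_rate(base_rate, percent_increase):
    """A rate that rises by p percent is multiplied by (1 + p / 100)."""
    return base_rate * (1 + percent_increase / 100)

# Hypothetical baseline: if 15 of every 10,000 genuine iris pairs failed to
# match after one month, a 153% increase would mean roughly 38 failures per
# 10,000 pairs after three years.
month_fnmr = 15 / 10_000
three_year_fnmr = increased_rate(month_fnmr, 153)
print(three_year_fnmr * 10_000)  # ~37.95 failures per 10,000 comparisons
```

Small as those numbers look, at the scale of an airport checkpoint even a fraction of a percent of false non-matches means many legitimate travelers flagged every day.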
According to the team, the problem could worsen if current technology isn’t improved or updated, resulting in either legitimate users being locked out of a system or impostors slipping past security checkpoints.
Right now, these findings point to the idea that images of irises should be regularly updated. Additionally, new technology should be created to take these changing irises into account.
The idea for the project was inspired by police and forensic television shows, in which investigators use computer vision technology to recognize and track down the faces of unidentified persons. Currently, this kind of technology measures what are known as key features, such as the distance between the eyes or between the mouth and nose.
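The key-feature idea can be sketched in a few lines: measure distances between facial landmarks and divide by a reference distance, so the signature doesn’t depend on how large the face appears in the image. The landmark names and coordinates below are invented for illustration:

```python
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def feature_ratios(lm):
    """A scale-invariant face signature: key distances divided by the
    inter-eye distance, so the same face at any photo size matches itself."""
    eye = dist(lm["left_eye"], lm["right_eye"])
    return (dist(lm["nose"], lm["mouth"]) / eye,
            dist(lm["left_eye"], lm["mouth"]) / eye)

# Hypothetical landmarks for one face, and the same face shot at twice the size.
face = {"left_eye": (30, 40), "right_eye": (70, 40),
        "nose": (50, 60), "mouth": (50, 80)}
closer = {k: (2 * x, 2 * y) for k, (x, y) in face.items()}
assert all(abs(a - b) < 1e-9
           for a, b in zip(feature_ratios(face), feature_ratios(closer)))
```

Comparing two faces then reduces to comparing their ratio tuples – the same trick the death-mask project must pull off across the far messier gap between a cast of a real face and an artist’s rendering of it.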
For the new project, which begins next month, the selected team will examine the death masks of people whose identities are known and compare them to artworks such as portraits and sculpted busts. If the technique works on known individuals and their corresponding faces in art, it can then be applied to unknown art subjects, in hopes that some of them will match faces in databases outside the art realm.
One thing to consider is that some subjects may have aged between portraits, but just as there is technology to imagine how a kidnapped child’s face might develop and change over the years, the same kind of algorithm can be applied to art.
Of course, plenty of obstacles stand in the way. The most obvious is that most art is merely two-dimensional. Additionally, many artists are known to render their subjects inaccurately, whether out of artistic interpretation or an attempt to flatter the sitter. Even so, this project is a first step toward cracking the code of who the subjects of some of the most famous works of art truly are.
Computer vision applications perform a variety of services, from helping with scientific and medical advancements to assisting in facial recognition for law enforcement and even those in the field of marketing. But the use of this kind of technology also extends to the hobbies and free time of individuals.
An example is the iPad application Visipedia, a field guide for bird watchers created – and still under active development – by researchers at the Jacobs School of Engineering. Users take and submit photos of birds, and computer vision algorithms return pictures and information about the closest-matching candidates. While this kind of recognition has already been achieved for well-known landmarks and sites, recognizing objects such as animals and plants still lags far behind.
It works by suggesting likely matches for the submitted image, drawing on a database of male, female, old, and young images of more than 500 species of birds in North America. Users can then label various parts of the bird and provide additional information, such as color or size, to help inform the program. The program applies that feedback to improve the algorithms already in place.
Image courtesy of Jacobs School of Engineering at UC San Diego
The app functions much like a game of 20 questions: it asks users a series of identification questions, and each answer narrows the pool of candidates. Currently, the app averages five or six questions before correctly identifying a bird, but the goal is to reduce that number to two or three.
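The narrowing itself is simple filtering: each answer discards the candidates it contradicts. A toy sketch with invented attribute data (the real app weighs probabilistic matches across hundreds of species and attributes):

```python
# A toy candidate pool; the real app draws on 500+ North American species.
birds = [
    {"name": "Northern Cardinal",  "color": "red",    "size": "medium"},
    {"name": "Blue Jay",           "color": "blue",   "size": "medium"},
    {"name": "American Goldfinch", "color": "yellow", "size": "small"},
    {"name": "Summer Tanager",     "color": "red",    "size": "small"},
]

def narrow(pool, attribute, answer):
    """Keep only the candidates consistent with the user's latest answer."""
    return [b for b in pool if b[attribute] == answer]

pool = narrow(birds, "color", "red")    # two candidates remain
pool = narrow(pool, "size", "medium")   # one candidate remains
print([b["name"] for b in pool])
```

Getting from five or six questions down to two or three is then a matter of asking the questions that split the remaining pool most decisively.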
The product, known as Croppola, is a free service that crops images for users, eliminating the tedious manual work required to turn OK shots into excellent ones. Users can upload multiple images at once and choose an aspect ratio for each individual shot. It also includes “custom” cropping, which sizes photos to the dimensions accepted by Instagram or Facebook.
Image courtesy of Croppola
While Croppola may not always hit the nail on the head, it seems to get it right most of the time. More than that, it’s another exciting example of how computer vision makes it easier for average technology users to do so much more.
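The geometry behind aspect-ratio cropping is straightforward: keep the largest centered window with the target proportions. This sketch shows only that basic geometry – not Croppola’s actual algorithm, which also decides where to place the crop:

```python
def crop_to_aspect(width, height, target_w, target_h):
    """Return (x, y, w, h) of the largest centered crop with the target
    aspect ratio inside a width-by-height image."""
    target = target_w / target_h
    if width / height > target:       # image too wide: trim the sides
        w = round(height * target)
        h = height
    else:                             # image too tall: trim top and bottom
        w = width
        h = round(width / target)
    return ((width - w) // 2, (height - h) // 2, w, h)

# A 4000x3000 photo cropped for a square (1:1) Instagram post:
print(crop_to_aspect(4000, 3000, 1, 1))  # (500, 0, 3000, 3000)
```

Croppola’s value is choosing *which* window of that size to keep; the centered version above is the naive fallback.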
It works by recognizing when the cat is carrying something in his mouth – such as a rodent or bird picked up outside – and denying him access to the house if he tries to bring it in. Ideally, it will also keep other animals – whether different cats or non-feline creatures – from entering. The idea is nothing new; a similar program was built back in 2004.
While products like Microsoft’s Kinect have made it easier for people to harness the power of computer vision, there is also a free, open-source library, OpenCV, that can detect, recognize, and track objects. Starting from code someone else wrote to recognize humans, Forster is reworking it to recognize his cat instead.
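In OpenCV, the detection step would typically use a trained classifier such as cv2.CascadeClassifier, but the gatekeeping logic itself can be sketched far more crudely: compare the mouth region of a new snapshot against a reference shot of the cat with an empty mouth. This is a toy stand-in for Forster’s approach, with invented image data and threshold:

```python
import numpy as np

def carrying_prey(snapshot, reference, threshold=25.0):
    """Compare the mouth region of a new snapshot against a reference image
    of an empty mouth; a large mean pixel difference suggests prey."""
    diff = np.abs(snapshot.astype(float) - reference.astype(float))
    return diff.mean() > threshold

# Simulated 8-bit grayscale mouth regions: empty vs. holding something bright.
empty_mouth = np.full((32, 32), 40, dtype=np.uint8)
with_prey = empty_mouth.copy()
with_prey[8:24, 8:24] = 200   # a bright blob where the prey would be
print(carrying_prey(with_prey, empty_mouth))  # True: entry denied
```

A raw pixel difference like this would trip over lighting changes and head pose, which is exactly why Forster is training a proper recognizer instead.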
As promising as Forster’s story is, however, computer vision is still complicated for the average user, and has a long way to go before just anyone can use it in the manner it was designed for.