Google has undergone a number of changes in recent months, shutting down some services while launching others. And while the end of Google Reader was announced in an effort to drive more users to Google+, that service has also gained some new features.
Google is taking computer vision into its own hands and may soon be transferring it into yours.
The California-based Internet and software corporation recently revealed that it is working on a new project and product, entitled Project Glass. The project is based around a pair of glasses that see, analyze and interpret the world around their wearer. It combines computer vision, eye-motion tracking, voice recognition, object recognition and more to take everyday stimuli and turn them into information that is immediately accessible.
How this will change the way people interact is a major open question. Many also feel that our brains are already overstimulated by the endless amounts of information available to us. Will this make things even worse? Or do these glasses have the potential to streamline the way we go about our daily lives?
Photo applications and related technologies already use facial recognition in their software. But a new line of Canon cameras takes that a step further, with more detailed and personalized facial recognition.
For example, up to 12 predefined users can be programmed into the camera, along with information such as their names, birthdays and pictures. Whenever a photo that includes one of these people is taken, the camera uses the stored information to focus clearly on the matching individuals, helping the photographer avoid zooming in or focusing on someone in the background of the scene.
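Conceptually, the feature can be thought of as matching the faces detected in a frame against the stored registry and preferring a registered person as the focus target. The sketch below is purely illustrative: the names, feature vectors and threshold are invented, and Canon has not published how Face ID actually works.

```python
import math

# Hypothetical sketch of how a camera might prioritize registered people.
# All names, vectors and thresholds are invented for illustration.

# Registry of predefined users (up to 12 on the real camera), each with a
# stored face "signature" (a tiny vector standing in for a real embedding).
REGISTRY = {
    "Alice": [0.9, 0.1, 0.3],
    "Bob":   [0.2, 0.8, 0.5],
}

def distance(a, b):
    """Euclidean distance between two face signatures."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def pick_focus_target(detected_faces, threshold=0.5):
    """Given faces detected in the frame (face id -> signature), return the
    one that best matches a registered user, so the camera can focus on
    that person rather than a stranger in the background."""
    best = None  # (distance, face_id, registered_name)
    for face_id, sig in detected_faces.items():
        for name, ref in REGISTRY.items():
            d = distance(sig, ref)
            if d < threshold and (best is None or d < best[0]):
                best = (d, face_id, name)
    return best

# A frame containing an unknown passer-by and a near-match for Alice:
frame = {
    "face_0": [0.5, 0.5, 0.9],    # unknown person
    "face_1": [0.85, 0.15, 0.3],  # close to Alice's stored signature
}
target = pick_focus_target(frame)
```

With this toy data, `face_1` wins because it falls within the match threshold of Alice's stored signature while the stranger matches no one.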
Other attributes of Face ID include differentiating between children and adults and adjusting the camera settings accordingly. It will be interesting to see whether the software is also compatible with photo applications: will iPhoto or Picasa be able to take the programmed information and apply it to photos a user has added to his or her library or even uploaded?
Anyone who has typed an address into Google Street View has probably seen the result show the right street but not always the exact building at the specified address. This is because computer-vision software hasn’t been advanced enough to zero in on street numbers: addresses are displayed on buildings in a variety of colors, sizes and fonts, making it difficult for computers to pinpoint, recognize and extract them.
Photo courtesy of Netzer et al.
However, researchers at Stanford University have teamed up with Google to improve the technology by creating an algorithm that can more accurately identify street numbers. In “Reading Digits in Natural Images with Unsupervised Feature Learning,” the teams explain how they trained the system to recognize these numbers by combining computer-vision algorithms with machine learning that discovers useful visual features from the image data itself, rather than relying on hand-designed rules.
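The paper's core idea, learning visual features from unlabeled data before any classification happens, can be sketched in miniature. Everything below (the image sizes, patch sizes, random data and tiny k-means loop) is a toy illustration of the general pipeline, not the system described in the paper.

```python
import numpy as np

# Toy sketch of an unsupervised-feature-learning pipeline: learn a small
# dictionary of image patches with k-means, then encode whole images as
# activations against that dictionary. All sizes and data are invented.

rng = np.random.default_rng(0)

# Stand-in "street number" images: 16x16 grayscale arrays.
images = rng.random((20, 16, 16))

def extract_patches(img, size=4, step=4):
    """Cut an image into non-overlapping size x size patches (flattened)."""
    return np.array([img[r:r + size, c:c + size].ravel()
                     for r in range(0, img.shape[0] - size + 1, step)
                     for c in range(0, img.shape[1] - size + 1, step)])

# 1) Unsupervised stage: pool patches from all images, run a few rounds
#    of k-means to learn a dictionary of recurring patch patterns.
patches = np.vstack([extract_patches(img) for img in images])
k = 8
centroids = patches[rng.choice(len(patches), k, replace=False)]
for _ in range(10):
    dists = np.linalg.norm(patches[:, None, :] - centroids[None], axis=2)
    labels = dists.argmin(axis=1)
    for j in range(k):
        members = patches[labels == j]
        if len(members):
            centroids[j] = members.mean(axis=0)

# 2) Encoding stage: represent each image by how strongly its patches
#    activate each learned centroid (negative distance, max-pooled).
def encode(img):
    p = extract_patches(img)
    d = np.linalg.norm(p[:, None, :] - centroids[None], axis=2)
    return (-d).max(axis=0)  # one activation per centroid

features = np.array([encode(img) for img in images])
# 'features' (20 images x 8 learned features) would then feed a
# supervised digit classifier in a real system.
```

The point of the two-stage design is that the dictionary is learned without any digit labels; only the final classifier needs labeled examples.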
Google hopes that this kind of work will lead to a better Street View system, in addition to more accurate maps and navigation services.
Image-recognition search engines are leading the way today as a new method for the curious public to discover information on the go. Once confined to text-based searching, consumers can now rely on software powered by computer-vision technology to connect isolated images with the world at large.
Google is one of the larger companies utilizing computer vision to provide a service to its customers. Its Google Goggles application, made available last year on both the Android and iPhone platforms, allows users to take a picture of a product or a landmark and returns results about the picture in question and any related products or information. In the future, Google hopes to expand its results to living objects, proposing the identification of plant and animal species based upon pictures.
Another company, Pongr, is using similar technology but in a different way. Instead of providing a purely informational service, Pongr bridges the gap between consumer and advertiser. According to a recent press release from the company, users can send in photos of products they like and Pongr will return to the customer with information, links and special offers on and from that brand, while the companies themselves learn valuable information about their consumers and target audiences.
The most recent example of this is Pongr’s collaboration with Pepsi and The X Factor, which asks consumers to send in pictures of Pepsi products advertising The X Factor. In turn, they will be sent links to exclusive content pertaining to the brands and entered into a contest to win – all through the use of computer-vision technology.
The following graphic is Pongr’s explanation of how it works:
Image courtesy of Pongr
The technology is nothing new, but Pongr puts a new spin on it by capitalizing on something that is already part of the status quo: cell phone users taking pictures. It then caters to this demographic by connecting the consumer’s everyday world with the online world. At first glance, this appears to be a win-win situation for all parties involved, but it does raise the question of what other new and interesting uses computer-vision technology can be put to.
Technology that powers reverse image searching is one of the latest rollouts from Google, as the company’s search function has expanded in recent months.
The feature, which was first introduced to a select audience in June, is now available to the general public. According to the Google Images Help page, Google makes use of computer vision technologies to match an image to similar ones in the Google index and associated databases, and returns “best guess” results, which include both text and other image results.
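Google has not disclosed how its matcher works, but one classic technique for finding similar images in an index is perceptual hashing. The sketch below uses average hashing on tiny 8x8 grayscale grids (assumed to already be downscaled from full photos); the pixel data and candidate names are invented for illustration.

```python
# Illustrative stand-in for image matching via perceptual hashing.
# Google's actual reverse-image-search algorithm is not public.

def average_hash(pixels):
    """Return a 64-bit hash of an 8x8 grayscale grid: each bit records
    whether a pixel is brighter than the image's mean brightness."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(h1, h2):
    """Number of differing bits between two hashes; a small distance
    suggests the images are near-duplicates."""
    return bin(h1 ^ h2).count("1")

# A query image and two indexed candidates (fake pixel data).
query = [[10 * r + c for c in range(8)] for r in range(8)]
near_dup = [[10 * r + c + 1 for c in range(8)] for r in range(8)]   # slightly brightened copy
unrelated = [[(r * c * 37) % 255 for c in range(8)] for r in range(8)]

q = average_hash(query)
best_guess = min(
    [("near_dup", near_dup), ("unrelated", unrelated)],
    key=lambda item: hamming(q, average_hash(item[1])),
)[0]
```

Because the hash depends only on each pixel's brightness relative to the mean, the uniformly brightened copy hashes identically to the query and wins as the "best guess," while the unrelated image lands many bits away.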
The rollout comes alongside news from six weeks ago that Google has purchased a facial recognition company, which for many raises concerns about what Google will be capable of once it combines these technologies.
However, such concerns are likely premature, if not entirely unwarranted: the technology in question searches for generic objects, and returning results as specific as faces is unrealistic – at least at this point.
Although what Google is doing operates on a large, corporate scale, smaller companies, such as ImageGraphicsVideo, are creating comparable software that businesses can implement for industry-specific tasks. And understanding the capabilities of Google’s image search is merely the first step in unraveling what computer-vision software can do in other realms.