Honeybees informing computer vision

A project conducted by RMIT's School of Media and Communication has found that the brains of honey bees are capable of tackling complex visual problems, and of creating and applying rules to adapt to specific scenarios.

The findings were published earlier this month in the Proceedings of the National Academy of Sciences (PNAS), with the explanation that “the miniature brains of honey bees rapidly learn to master two abstract concepts simultaneously, one based on spatial relationships (above/below and right/left) and another based on the perception of difference.”


In humans, an analogous ability lets us encounter a situation, such as approaching an intersection, and act accordingly. Doing so involves a range of observations and responses: watching the traffic light, gauging the speed of vehicles, and looking out for pedestrians or cyclists who might also obstruct the flow of traffic. Based on the information being fed to our brains, we make split-second decisions, something computers aren't yet fully capable of. That's because it requires processing more than one kind of complex task at once, and in the “mind” of a computer those tasks don't appear to have anything in common.

However, that's not to say that computers can't eventually learn this skill, too. By studying the brains of honey bees, researchers hope to understand how this rule-learning works and then apply the same principles to computers, allowing them to process visual input efficiently and effectively.
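To make the idea concrete, here is a toy sketch (my own construction, not the researchers' protocol) of a machine learning the same kind of dual rule the bees mastered: a spatial rule (vertical versus horizontal arrangement) and a difference rule (the two items must not be identical). A small decision tree recovers the conjunction from roughly the twenty trials the bees needed, though only because the abstract features are handed to it:

```python
# Toy sketch of the bees' dual-rule task: NOT the study's protocol,
# just an illustration of learning two abstract concepts at once.
import random
from sklearn.tree import DecisionTreeClassifier

random.seed(0)
SHAPES = ["circle", "square", "triangle", "star"]

def make_stimulus():
    """A stimulus: two items plus their spatial arrangement."""
    return (random.choice(SHAPES), random.choice(SHAPES),
            random.choice(["vertical", "horizontal"]))

def encode(s):
    """Hand the learner the two abstract features up front:
    is the arrangement vertical? do the two items differ?"""
    a, b, arrangement = s
    return [int(arrangement == "vertical"), int(a != b)]

def label(s):
    """Ground-truth reward rule: vertical AND non-identical."""
    a, b, arrangement = s
    return int(arrangement == "vertical" and a != b)

# Roughly the number of trials the bees needed.
train = [make_stimulus() for _ in range(20)]
clf = DecisionTreeClassifier().fit(
    [encode(s) for s in train], [label(s) for s in train])

test = [make_stimulus() for _ in range(1000)]
preds = clf.predict([encode(s) for s in test])
acc = sum(p == label(s) for p, s in zip(preds, test)) / len(test)
print(f"accuracy on unseen stimuli: {acc:.2f}")
```

The catch, of course, is that the hard part for a real vision system is discovering features like “the two items differ” from raw pixels in the first place, which is precisely what the bee brain appears to do so cheaply.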

Computer Vision and the New Aesthetic

Computer vision isn't just technology. Now, it seems, with the rise of a movement known as the New Aesthetic, computer vision is an art form, a way of experiencing life. A recent article in The Atlantic takes on this new way of seeing: viewing the world through human eyes while simultaneously imagining it from the perspective of computer vision, and somehow melding the two together. It is, simply put, “an eruption of the digital into the physical.”

What are your thoughts? How concerned should we be about this movement, and how seriously should it be taken?

Augmented reality right before our eyes

Google is taking computer vision into its own hands and may soon be transferring it into yours.

The California-based Internet and software corporation recently revealed that it is working on a new project and product, Project Glass: a pair of glasses that sees, analyzes, and interprets the world around its wearer. It combines computer vision, eye motion, voice recognition, object recognition, and more to turn everyday stimuli into information that is immediately accessible.

According to an article in today’s edition of the New York Times, “the glasses can stream information to the lenses and allow the wearer to send and receive messages through voice commands. There is also a built-in camera to record video and take pictures.”
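It's easy to sketch the shape of that loop, if not Google's actual software. The fragment below assumes nothing about Project Glass beyond the description above; it simply uses OpenCV's stock face detector to run a basic see-analyze-overlay cycle on a webcam feed, standing in for the glasses' camera:

```python
# A minimal see -> analyze -> annotate loop, in the spirit of a
# heads-up display. This is emphatically not Google's code, just
# OpenCV's bundled face detector drawing labels on a live feed.
import cv2

# Haar cascade shipped with OpenCV for frontal-face detection.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)  # default webcam stands in for the glasses
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.3, 5):
        # Overlay: box the detection and attach a label, the way a
        # wearable display might surface information about the scene.
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(frame, "face", (x, y - 8),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
    cv2.imshow("overlay", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
        break
cap.release()
cv2.destroyAllWindows()
```

Everything Project Glass promises, voice commands, messaging, richer recognition, would layer on top of a loop shaped like this one.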

How this will change the way people interact is a major open question. Many also feel that our brains are already overstimulated by the endless stream of information available to us. Will this make things worse? Or do these glasses have the potential to streamline the way we go about our daily lives?

An inferiority complex for computer vision?

It's undeniable that the rise of computer vision technology has aided our society in many ways, making complex and time-consuming tasks easier and faster to complete. Yet in spite of the many advances made in the field, particularly over the past few years, the technology still can't rival the capabilities of humans, at least not yet.

According to an article entitled “Comparing machines and humans on a visual categorization test,” published this month in the Proceedings of the National Academy of Sciences (PNAS), computer vision software may be quick and efficient at recognizing predefined objects, completing projects, and solving problems, but it still falls short of what humans are capable of.

In an experiment conducted with people and machines, the test subjects had to recognize and classify abstract images. Again and again, human subjects proved able to “learn”, and to apply what they had learned, to decrease their rate of error in recognizing recurring patterns. After viewing fewer than 20 images, most human participants picked up on the pattern. Meanwhile, computers that normally fare well within a limited data set required thousands of examples to produce correct answers, demonstrating that tasks which rely on more abstract identification and reasoning remain a weakness.
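The paper's benchmark isn't reproduced here, but the sample-efficiency gap is easy to demonstrate with a toy learning curve of my own construction, using mirror symmetry of short binary strings as the abstract concept. A human would likely name the rule after a handful of examples; a standard off-the-shelf classifier hovers near chance at 20 examples and needs orders of magnitude more:

```python
# Toy learning curve, NOT the PNAS benchmark: the abstract concept is
# mirror symmetry of a short binary string, and the question is how
# many labeled examples an off-the-shelf classifier needs to get it.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def make_example(symmetric, length=10):
    """One binary string, mirror-symmetric (palindromic) or not."""
    if symmetric:
        half = rng.integers(0, 2, size=length // 2)
        return np.concatenate([half, half[::-1]])
    x = rng.integers(0, 2, size=length)
    if (x == x[::-1]).all():  # break accidental symmetry
        x[0] ^= 1
    return x

def make_data(n):
    """A balanced set: even indices symmetric, odd ones not."""
    X = np.stack([make_example(i % 2 == 0) for i in range(n)])
    y = np.array([i % 2 == 0 for i in range(n)], dtype=int)
    return X, y

X_test, y_test = make_data(2000)
for n_train in (20, 200, 2000):  # humans reportedly needed < 20 images
    X_tr, y_tr = make_data(n_train)
    acc = SVC().fit(X_tr, y_tr).score(X_test, y_test)
    print(f"{n_train:5d} training examples -> test accuracy {acc:.2f}")
```

Notably, even the eventual success at 2,000 examples comes largely from blanketing the small input space rather than grasping “symmetry” as a rule, which is one way to see the gap the study describes.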

The study refers to this shortcoming of computer technology as a “semantic gap.” Of course, the pertinent question isn't necessarily whether the reasoning abilities of computers will ever parallel those of humans. Perhaps we should instead be asking when they will.