Honeybees informing Computer Vision

A project conducted by RMIT's School of Media and Communication has found that the brains of honey bees are capable of tackling complex visual problems, as well as creating and applying rules to adapt to specific scenarios.

The findings were published earlier this month in the Proceedings of the National Academy of Sciences (PNAS), with the explanation that “the miniature brains of honey bees rapidly learn to master two abstract concepts simultaneously, one based on spatial relationships (above/below and right/left) and another based on the perception of difference.”


Image courtesy of iStockphoto/Florin Tirlea

An example of this in humans is the ability to encounter a situation, such as coming up to an intersection, and act accordingly. This involves a range of observations and responses: watching the traffic light, gauging the speed of vehicles, and looking out for pedestrians or bicyclists who might also obstruct the flow of traffic. Based on the information being fed to our brains, we are able to make split-second decisions, something computers aren’t yet fully capable of doing. This is because it involves processing more than one kind of complex task at once, and, in the “mind” of a computer, those tasks don’t appear to have anything in common.

However, that’s not to say that computers can’t eventually learn this skill, too. By studying the brains of honey bees, researchers hope to learn how this processing works and then apply the same principles to computers, allowing them to handle visual inputs efficiently and effectively.
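To make the idea concrete, here is a minimal sketch, in the spirit of the study rather than a reproduction of the RMIT protocol, of a tiny learner that must master two abstract cues at once: a spatial relationship (is the target above or below a reference?) and a judgement of difference (are the two items the same or different?). The scene encoding, reward rule, and perceptron-style learner are all hypothetical illustrations.

```python
# Minimal sketch (hypothetical, not the RMIT protocol): a tiny learner must
# master two abstract cues at once, a spatial relationship (above vs. below)
# and a judgement of difference (same vs. different items).
import random

def make_scene():
    """Encode a scene as (vertical_offset, items_differ) plus its reward label."""
    offset = random.uniform(-1.0, 1.0)   # > 0 means the target sits above the reference
    differ = random.choice([0, 1])       # 1 if the two items are different
    # Hypothetical reward rule: rewarded only when the target is above
    # the reference AND the two items are different.
    rewarded = 1 if (offset > 0 and differ == 1) else 0
    return (offset, differ), rewarded

def features(scene):
    offset, differ = scene
    # Two cues plus their interaction, so a linear rule can express "above AND different".
    return [offset, float(differ), offset * differ]

weights = [0.0, 0.0, 0.0]
bias = 0.0
lr = 0.1

def predict(scene):
    score = bias + sum(w * x for w, x in zip(weights, features(scene)))
    return 1 if score > 0 else 0

# Train on a stream of trials, the way a bee encounters one scene at a time.
for _ in range(5000):
    scene, rewarded = make_scene()
    error = rewarded - predict(scene)
    if error != 0:
        for i, x in enumerate(features(scene)):
            weights[i] += lr * error * x
        bias += lr * error

# Check generalization on scenes the learner has never seen.
tests = [make_scene() for _ in range(1000)]
accuracy = sum(predict(s) == r for s, r in tests) / len(tests)
print(f"accuracy on unseen scenes: {accuracy:.2f}")
```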

Computer Vision and the New Aesthetic

Computer Vision isn’t just technology. Now, it seems, with the rise of a movement known as the New Aesthetic, Computer Vision is an art form, a way of experiencing life. A recent article in The Atlantic takes on this new way of seeing the world through human eyes while simultaneously viewing it from the perspective of Computer Vision, and somehow melding the two together. It is, simply put, “an eruption of the digital into the physical.”

What are your thoughts? Just how concerned should we be with this movement, and how seriously should it be taken?

Mind’s Eye fuses Computer Vision with the military

Computer Vision has been prominent in the headlines for its use by law enforcement, but now the government is also taking an interest in discovering how Computer Vision technology might aid the military.

The Defense Advanced Research Projects Agency, otherwise known as DARPA, has set to work on a program it has titled Mind’s Eye.

The aim of the project is to help reconcile what computers and humans take away from the same scene. This is because, while computers are getting better and better at recognizing individuals or objects and responding accordingly, there is still a large margin of error.

Meanwhile, humans are still better at a variety of tasks, but might not be able to do them as quickly as a computer. The goal of Mind’s Eye is to close the gap between the two by using technology to combine the quickness and efficiency of a computer with the perception and logic of a human.

Computer Vision might help the military by detecting enemy troops in combat or by relaying other pertinent information directly to personnel in the field, in real time. The eventual goal is for these machines or computers to be able to take in a scene, interpret what is happening, and communicate that information back to soldiers in both words and pictures.
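As a rough illustration of that last "scene to sentence" step, here is a minimal sketch in which hard-coded detections stand in for the output of a real recognition system; the Detection fields, vocabulary, and report format are hypothetical choices of mine, not anything published by DARPA.

```python
# Minimal sketch of turning structured detections into a plain-language report.
# The detections are hard-coded stand-ins for a real recognition system's output.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str            # what the recognizer thinks it saw
    bearing_deg: float    # direction relative to the sensor
    range_m: float        # estimated distance
    moving_toward: bool   # crude activity cue

def describe(detections):
    """Translate detections into a short verbal report."""
    if not detections:
        return "No activity observed."
    lines = []
    for d in detections:
        action = "approaching" if d.moving_toward else "holding position"
        lines.append(f"{d.label} at bearing {d.bearing_deg:.0f}, ~{d.range_m:.0f} m, {action}.")
    return " ".join(lines)

# Example frame, as an upstream vision system might have produced it.
frame = [
    Detection("dismounted group (3)", bearing_deg=40, range_m=350, moving_toward=True),
    Detection("parked vehicle", bearing_deg=310, range_m=600, moving_toward=False),
]
print(describe(frame))
```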


Image courtesy of DARPA

It is interesting to consider the ways in which technology like this could change the way militaries are run and conflicts are fought.

Artificial vision focuses on potatoes

In the past six months, Computer Vision has made headlines for its work in the agricultural industry: first with oranges, then with strawberries, and now with potatoes.

In collaboration with the British Potato Council, the Centre for Vision and Robotics Research at the University of Lincoln has created a machine that is able to detect defective potatoes.


Image courtesy of the USDA

Currently, standard protocol requires that potatoes be sorted and classified by people, who have nothing to rely on except their eyes and their hands. The new system works by drawing on information supplied to it from sample batches of bad potatoes, and it has the ability to store that information and learn from it.
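To sketch what that learning step could look like, here is a minimal, hypothetical example in which each potato is reduced to two made-up measurements (a blemish fraction and a greening score) and a nearest-centroid rule stands in for whatever the Lincoln machine actually does.

```python
# Minimal sketch (hypothetical features and method, not the Lincoln system):
# learn one average feature vector per class from labelled sample batches,
# then flag new potatoes by whichever stored average they sit closest to.

def train(samples):
    """samples: list of ((blemish_fraction, greening_score), label) pairs."""
    sums, counts = {}, {}
    for (blemish, greening), label in samples:
        s = sums.setdefault(label, [0.0, 0.0])
        s[0] += blemish
        s[1] += greening
        counts[label] = counts.get(label, 0) + 1
    # One stored "centroid" per class; retraining with extra batches is how
    # the system would keep accumulating and learning from new information.
    return {label: (s[0] / counts[label], s[1] / counts[label])
            for label, s in sums.items()}

def classify(model, feats):
    """Assign the class whose centroid is closest to the new measurements."""
    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    return min(model, key=lambda label: dist2(model[label], feats))

# Hypothetical sample batches supplied to the system.
batches = [
    ((0.02, 0.05), "good"), ((0.01, 0.10), "good"), ((0.03, 0.02), "good"),
    ((0.35, 0.20), "defective"), ((0.50, 0.60), "defective"), ((0.40, 0.15), "defective"),
]
model = train(batches)
print(classify(model, (0.04, 0.08)))   # expected: good
print(classify(model, (0.45, 0.30)))   # expected: defective
```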

The potato prototype was built from off-the-shelf components; no special parts had to be ordered for its construction, which demonstrates just how much can be accomplished with modest materials and little money. Naturally, the team is convinced that more expensive, specialized hardware could produce an even better model in the future.

An inferiority complex for Computer Vision?

It’s undeniable that the rise of Computer Vision technology has aided our society in many ways, making the completion of complex and time-consuming tasks easier and faster. Yet in spite of the many advances made in the field, particularly over the past few years, the technology still isn’t able to rival the capabilities of humans – at least not yet.

According to an article entitled “Comparing machines and humans on a visual categorization test,” published this month in the Proceedings of the National Academy of Sciences (PNAS), the ability of Computer Vision software to recognize pre-defined objects, complete projects, and solve problems may be quick and efficient, but it still falls short of what humans are capable of.

In an experiment conducted with both people and machines, the test subjects had to recognize and classify abstract images. Again and again, the human test subjects proved that they could “learn” – and apply what they had learned – to decrease their rate of error in recognizing recurring patterns. After viewing fewer than 20 images, most human participants were able to pick up on the pattern. Meanwhile, computers that normally fare well when working within a limited set of data required thousands of examples to produce correct answers, demonstrating that tasks which rely upon more abstract identification and reasoning remain a weakness.
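For a feel of what that gap looks like in practice, here is a minimal sketch, not a reproduction of the PNAS experiment: two toy learners face the same abstract "same versus different" task, one already equipped with the concept of sameness and one that only memorises raw feature vectors (a simple nearest-neighbour rule). The task encoding and both learners are hypothetical.

```python
# Minimal sketch of the sample-efficiency gap (hypothetical task and learners).
import random

DIM = 8  # length of each item's binary feature vector

def make_pair():
    """A trial shows two items; the label says whether they are the same."""
    v = [random.randint(0, 1) for _ in range(DIM)]
    if random.random() < 0.5:
        return v + v, "same"
    w = [random.randint(0, 1) for _ in range(DIM)]
    while w == v:
        w = [random.randint(0, 1) for _ in range(DIM)]
    return v + w, "different"

def abstract_predict(x):
    # A learner that already has the abstract concept simply compares the halves.
    return "same" if x[:DIM] == x[DIM:] else "different"

def nn_predict(train, x):
    # Raw-feature learner: label of the closest memorised example (Hamming distance).
    def dist(example):
        return sum(a != b for a, b in zip(example[0], x))
    return min(train, key=dist)[1]

test = [make_pair() for _ in range(500)]

for n in (20, 200, 2000):
    train = [make_pair() for _ in range(n)]
    nn_acc = sum(nn_predict(train, x) == y for x, y in test) / len(test)
    ab_acc = sum(abstract_predict(x) == y for x, y in test) / len(test)
    print(f"train size {n:>4}:  raw memoriser {nn_acc:.2f}   abstract rule {ab_acc:.2f}")
```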

The study refers to this shortcoming of computer technology as a “semantic gap.” Of course, the pertinent question isn’t necessarily whether or not the reasoning abilities of computers will ever be able to parallel those of humans. Instead, perhaps we should be asking when they will be able to.