Smartphone app designed to alert drivers to dangers on the road

Image courtesy of Carlos S. Pereyra

Researchers at Dartmouth have developed a new smartphone app that detects dangerous driving behavior in an effort to make roads safer. An article on NewScientist featured the app, CarSafe, explaining how it uses the phone's two cameras to watch both the drivers on the road and the driver of the vehicle it is mounted in.

Once the phone is mounted in the vehicle, computer-vision software processes information from both cameras in real time. It can detect whether the driver is becoming drowsy or distracted, and whether the vehicle or the vehicles around it are swerving, drifting across lanes, or coming too close to other cars.

If any of these behaviors is detected, the phone sounds an alarm that is both audible and visible.

What makes this app particularly notable is that smartphones cannot use both cameras at once. CarSafe works around this limitation by constantly switching back and forth between the two cameras, analyzing the scenes at a rate of eight frames per second. The switching introduces a delay in real-time processing, but it is the closest anyone has come to this kind of dual-camera monitoring so far.
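For readers curious what that alternation might look like in code, here is a minimal sketch of the idea rather than CarSafe's actual implementation. It assumes two OpenCV capture devices stand in for the phone's driver-facing and road-facing cameras, and the detection functions are hypothetical placeholders for the real computer-vision models.

```python
# Minimal sketch of the camera-alternation idea (not CarSafe's actual code).
# Assumes two OpenCV capture devices stand in for the phone's driver-facing
# and road-facing cameras; the detection functions are hypothetical placeholders.
import time
import cv2

FRAME_INTERVAL = 1.0 / 8.0  # roughly eight analyzed frames per second, shared between cameras

def detect_drowsiness(frame):
    # Placeholder: a real system would track eye closure, blink rate, and head pose.
    return False

def detect_lane_drift(frame):
    # Placeholder: a real system would find lane markings and measure headway.
    return False

def monitor(driver_cam_index=0, road_cam_index=1):
    cams = [cv2.VideoCapture(driver_cam_index), cv2.VideoCapture(road_cam_index)]
    checks = [detect_drowsiness, detect_lane_drift]
    active = 0  # index of the camera currently being read
    try:
        while True:
            ok, frame = cams[active].read()
            if ok and checks[active](frame):
                print("ALERT: dangerous driving detected")  # stands in for the audible/visual alarm
            active = 1 - active          # switch to the other camera for the next frame
            time.sleep(FRAME_INTERVAL)   # pace the loop at about eight frames per second
    finally:
        for cam in cams:
            cam.release()

if __name__ == "__main__":
    monitor()
```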


This blog is sponsored by ImageGraphicsVideo, a company offering ComputerVision Software Development Services.

 

Honeybees informing ComputerVision

A project conducted by RMIT's School of Media and Communication has found that the brains of honeybees can tackle complex visual problems, and can create and apply rules to adapt to those specific scenarios.

The findings were published earlier this month in the Proceedings of the National Academy of Sciences (PNAS), with the explanation that “the miniature brains of honey bees rapidly learn to master two abstract concepts simultaneously, one based on spatial relationships (above/below and right/left) and another based on the perception of difference.”


Image courtesy of iStockphoto/Florin Tirlea

A human example is coming up to an intersection and acting accordingly. This involves a range of observations and responses, such as watching the traffic light, gauging the speed of other vehicles, and looking out for pedestrians or bicyclists who might also obstruct the flow of traffic. Based on the information being fed to our brains, we are able to make split-second decisions, something computers aren't yet fully capable of doing. That is because the situation involves processing more than one kind of complex task at once, and, in the “mind” of a computer, those tasks don't appear to have anything in common.

However, that's not to say that computers can't eventually learn this skill too. By studying the brains of honeybees, researchers hope to work out how bees manage it and then apply the same principles to computers, allowing them to process visual inputs efficiently and effectively.

Mind’s Eye fuses ComputerVision with the military

ComputerVision has been prominent in the headlines for its use by law enforcement, but the government is now also taking an interest in how the technology might aid the military.

The Defense Advanced Research Projects Agency, otherwise known as DARPA, has set to work on a program it has titled Mind’s Eye.

The aim of the project is to help reconcile what computers and humans take away from the same scene. While computers are becoming better and better at recognizing individuals or objects and acting on what they see, there is still a large margin of error.

Meanwhile, humans are still better at a variety of tasks, but may not be able to do them as quickly as a computer. The goal of Mind's Eye is to close the gap between the two by combining the quickness and efficiency of a computer with the perception and logic of a human.

In the military, ComputerVision might help by detecting enemy troops in combat or relaying other pertinent information directly to personnel in the field in real time. The eventual goal is for these machines to take in a scene, interpret what is happening, and communicate that information back to soldiers in both words and pictures.


Image courtesy of DARPA

It is interesting to think about the ways in which technology like this could change how militaries are run and conflicts are fought.

An inferiority complex for ComputerVision?

It's undeniable that the rise of ComputerVision technology has aided our society in many ways, making complex and time-consuming tasks easier and faster to complete. Yet in spite of the many advances made in the field, particularly over the past few years, the technology still cannot rival the capabilities of humans, at least not yet.

According to an article titled “Comparing machines and humans on a visual categorization test,” published this month in the Proceedings of the National Academy of Sciences (PNAS), ComputerVision software may be quick and efficient at recognizing pre-defined objects, completing tasks, and solving problems, but it still falls short of what humans are capable of.

In an experiment conducted with both people and machines, the test subjects had to recognize and classify abstract images. Again and again, the human subjects showed that they could “learn”, and apply what they had learned, to reduce their error rate in recognizing recurring patterns. After viewing fewer than 20 images, most human participants were able to pick up on the pattern. Meanwhile, computers that normally fare well when working within a limited data set required thousands of examples to produce correct answers, demonstrating that tasks which rely on more abstract identification and reasoning remain a weakness.
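As a rough, toy-scale illustration of the machine side of that comparison, and not the PNAS experiment itself, the sketch below trains a simple nearest-centroid classifier on synthetic “abstract” binary patterns and reports how its accuracy depends on the number of training examples; the patterns, parameters, and classifier here are all invented for illustration.

```python
# Toy illustration of sample efficiency (not the PNAS study): a simple
# nearest-centroid classifier on synthetic binary "abstract" patterns,
# evaluated with increasing numbers of training examples.
import numpy as np

rng = np.random.default_rng(0)
DIM = 100          # length of each binary pattern
NOISE = 0.45       # probability of flipping each bit away from its class prototype
PROTOTYPES = rng.integers(0, 2, size=(2, DIM))  # one hidden prototype per class

def sample(n_per_class):
    """Draw noisy examples of both class prototypes."""
    X, y = [], []
    for label, proto in enumerate(PROTOTYPES):
        flips = rng.random((n_per_class, DIM)) < NOISE
        X.append(np.where(flips, 1 - proto, proto))
        y.append(np.full(n_per_class, label))
    return np.vstack(X), np.concatenate(y)

def accuracy(n_train_per_class, n_test_per_class=500):
    X_train, y_train = sample(n_train_per_class)
    X_test, y_test = sample(n_test_per_class)
    # Estimate one centroid per class from the training data, then classify
    # each test pattern by its nearest centroid.
    centroids = np.array([X_train[y_train == c].mean(axis=0) for c in (0, 1)])
    dists = np.linalg.norm(X_test[:, None, :] - centroids[None, :, :], axis=2)
    return float((dists.argmin(axis=1) == y_test).mean())

# Accuracy typically improves only as the training set grows, illustrating
# how data-hungry even a simple machine learner is compared with a person
# who picks up the rule after a handful of examples.
for n in (5, 20, 100, 1000):
    print(f"{n:>5} training examples per class -> accuracy {accuracy(n):.2f}")
```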

The study refers to this shortcoming of computer technology as a “semantic gap.” Of course, the pertinent question isn't necessarily whether the reasoning abilities of computers will ever parallel those of humans. Perhaps we should instead be asking when they will.