Roboray: the next step toward human-like robots

Photo courtesy of University of Bristol and Samsung
Many people have imagined a day when robots would be advanced enough to help with simple and complex tasks alike, and now researchers at the University of Bristol have joined forces with Samsung to take a step toward that goal.

Roboray is the name of the robot, which relies on cameras, real-time 3D visual maps, and computer vision algorithms to move around, “remembering” where it has been before. This allows the robot to navigate autonomously even when GPS information is unavailable. The same technology lets Roboray walk in a more human-like manner, with gravity helping to swing each leg forward. This not only requires less energy but also gives the robot a more natural, human-like gait.
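
The article doesn’t spell out Roboray’s navigation algorithms, but a common ingredient of camera-based “remembering” is matching distinctive image features in the current view against views stored in a map. Here is a minimal sketch of that idea in Python using OpenCV; the filenames and match threshold are illustrative assumptions, not details from the robot itself.

```python
import cv2

# ORB detects and describes distinctive keypoints in an image;
# matching them against stored views is one simple way a robot
# can "remember" a place it has seen before.
orb = cv2.ORB_create(nfeatures=500)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def describe(path):
    """Load an image and return its ORB feature descriptors."""
    image = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, descriptors = orb.detectAndCompute(image, None)
    return descriptors

# Hypothetical filenames standing in for a stored map view and
# the robot's current camera frame.
stored = describe("map_view.png")
current = describe("camera_frame.png")

matches = matcher.match(stored, current)
good = [m for m in matches if m.distance < 40]  # strict Hamming cutoff

# Many consistent matches suggest the robot is revisiting a known place.
print(f"{len(good)} strong matches out of {len(matches)}")
```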

Would you consider purchasing a robot like Roboray? What kinds of tasks would you find the robot most helpful in assisting you with?

Drones use computer vision and 3D mapping to aid humans

Photo by U.S. Navy Photographer’s Mate 2nd Class Daniel J. McLain

Drones have been getting a lot of attention in the news lately, mostly on the negative end of the spectrum. However, drones can also be used for good, and some companies are doing exactly that.

South Africa’s SteadiDrone and Switzerland’s SenseFly are two examples of companies producing drones that can aid humans. These machines fly over an area and capture a full picture, literally, of what is going on below by mapping out the terrain.
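
Neither company’s software stack is described here, but turning overlapping aerial frames into one terrain picture is essentially image stitching. As a rough illustration, OpenCV’s high-level stitcher can compose such a mosaic; the filenames below are placeholders.

```python
import cv2

# Overlapping frames captured as the drone sweeps the area
# (placeholder filenames).
frames = [cv2.imread(f"pass_{i}.jpg") for i in range(4)]

# The stitcher finds features shared between frames and warps
# them into a single composite: a literal full picture of the
# terrain below.
stitcher = cv2.Stitcher_create()
status, mosaic = stitcher.stitch(frames)

if status == cv2.Stitcher_OK:
    cv2.imwrite("terrain_mosaic.jpg", mosaic)
else:
    print("Stitching failed; the frames may not overlap enough")
```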

Military units and space programs have been using similar devices for years; sending in a machine beforehand gives the user eyes to view, assess, and gauge a situation before being exposed to potential dangers. What other ways might drones be used for good, both in the professional and personal realms?


New Toyota car concept makes use of facial recognition

Photo courtesy of Yoshikazu Tsuno

Kinect has been used in a variety of products and inventions since the system first became available to the public last year. But would you trust a car that relies on Kinect software to perform many of its basic functions?

That’s exactly what the Smart INSECT (which stands for Information Network Social Electric City Transporter), Toyota’s newest electric-powered concept car, does. Using facial recognition and motion sensors, the car can recognize its owner and predict and analyze the driver’s movements. For example, it displays a greeting message when the owner approaches and opens the doors when the driver needs to enter or exit the vehicle. It can also be accessed remotely from a smartphone to lock or unlock the doors and start the air conditioning.
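
Toyota hasn’t published the INSECT’s recognition pipeline, but the greet-on-approach behavior can be sketched with OpenCV’s stock face detector. Everything below (the camera index, the greeting, the lack of identity verification) is an assumption for illustration only.

```python
import cv2

# Haar cascade face detector bundled with OpenCV.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

camera = cv2.VideoCapture(0)  # assumed: a camera facing the approach path

while True:
    ok, frame = camera.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) > 0:
        # A real car would verify *whose* face this is before reacting;
        # this sketch only detects that a face is present.
        print("Welcome back!")  # stand-in for the car's greeting display
        break

camera.release()
```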

Would you buy a car like this if it were available?

This blog is sponsored by ImageGraphicsVideo, a company offering ComputerVision Software Development Services.

Computer vision and brainwaves merge in threat-detection binoculars

Image courtesy of DARPA

While soldiers are trained to detect and recognize threats while on duty, certain kinds of attacks, ambushes, and other dangers are nearly impossible to recognize before it’s too late. In fact, according to an article on Forbes.com, soldiers surveying scenes while on duty regularly miss 47 percent of potential dangers.

However, the Defense Advanced Research Projects Agency (DARPA), a branch of the United States Department of Defense, has spent the better part of the last decade working on technology to assist soldiers.

The end result? CT2WS. This system, known officially as the Cognitive Technology Threat Warning System, was first proposed by the Pentagon in 2007. Essentially, it is a pair of binoculars that not only sees far and wide but also interacts with the soldier’s brain, and in testing it has detected 91 percent of threats.

How does it work? The science is complicated, but essentially there are two parts: a video camera with a wide field of view that captures around 10 images per second, and an electroencephalogram (EEG) cap on the soldier’s head that monitors and processes brain waves. Combined, the two technologies flag threats long before the soldier consciously picks up on them.
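
DARPA hasn’t released CT2WS internals, but the published idea (flag the frames whose EEG response resembles the brain’s involuntary reaction to a salient target, the P300) can be sketched with NumPy. The signal values, window, and threshold below are fabricated for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated EEG epochs: one 300-sample window per displayed frame,
# each time-locked to one of the ~10 images shown per second.
epochs = rng.normal(0.0, 1.0, size=(100, 300))
epochs[17, 140:180] += 3.0  # inject a P300-like bump into frame 17

def p300_score(epoch, window=slice(140, 180)):
    """Mean amplitude in the window where a P300 response would land
    (roughly 300 ms after the image appears)."""
    return epoch[window].mean()

scores = np.array([p300_score(e) for e in epochs])
threshold = scores.mean() + 3 * scores.std()  # arbitrary cutoff for the sketch

flagged = np.nonzero(scores > threshold)[0]
print("Frames flagged for human review:", flagged)
```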


Genetic screening in worms made easier

Researchers at Georgia Tech have developed technology that can detect and determine differences between worms used in genetic research.

According to the recently published findings, worms are among the many tiny multi-cellular organisms that make effective test subjects for genetics research.

Using artificial intelligence combined with advanced image processing, scientists can inspect and process these worms, known as Caenorhabditis elegans, more quickly and efficiently than in the past. With a camera that records 3D images of the worms and compares them against a model of abnormal worms, the machine not only tells the difference but learns from it, teaching itself as it goes.
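
The Georgia Tech pipeline itself isn’t reproduced in the article, but the compare-and-keep-learning loop it describes can be sketched with scikit-learn’s online learning API. The feature extraction and labels below are stand-ins; real inputs would be measurements from the 3D worm images.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(1)

# Stand-in features: in the real system these would be shape and
# texture measurements extracted from 3D images of each worm.
def worm_features(mutant):
    base = rng.normal(0.0, 1.0, size=8)
    return base + (1.5 if mutant else 0.0)

# An online classifier mirrors the "teaching itself as it goes" idea:
# each newly screened worm updates the model via partial_fit.
model = SGDClassifier()
classes = np.array([0, 1])  # 0 = normal, 1 = abnormal

for _ in range(200):
    is_mutant = bool(rng.integers(0, 2))
    features = worm_features(is_mutant).reshape(1, -1)
    model.partial_fit(features, [int(is_mutant)], classes=classes)

# Score a fresh worm the model has never seen.
sample = worm_features(True).reshape(1, -1)
print("Predicted abnormal?", bool(model.predict(sample)[0]))
```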

By picking out distinguishing features better than humans can on their own, this technology highlights genetic mutations between the worms, which could be a key to unlocking further advances in genetic research and, in years to come, testing in humans.

Robots take over manufacturing jobs

As computer vision technology continues making advances in a variety of fields, it has now extended to global industry. According to an article in the New York Times, robots are now being used in manufacturing plants and factories. This may come as no surprise to many, but the difference is that a new wave of robots is performing complex tasks, plural: whereas in the past many robots had one set skill or function, robots in the new Tesla factory in Fremont, Calif., are performing as many as four specific skills. And Tesla is not the only place, as other manufacturers are catching on that tasks can be done more efficiently, and for less money.

This is all made possible by computer vision cameras. Years ago they were expensive to build and implement, and limited in their scope. But with the advent of technology such as Microsoft’s Kinect, robots can see and do far more than before, and at a low cost.

While this is no doubt changing the face of manufacturing, some people can’t help but wonder how this will affect the economy, as it eliminates jobs, only increasing the unemployment rate in locations where manufacturing is an integral component of the local economy.
What do you think? Will human factory workers one day become obsolete? Is this a good or a bad change? What are the advantages and disadvantages?

Video filter detects what the human eye cannot

One of the most accepted notions about the world around us is that just because we can’t see something doesn’t mean it isn’t there. Researchers at MIT are working on a technology that makes the unseeable seeable.

In a paper entitled “Eulerian Video Magnification for Revealing Subtle Changes in the World,” the researchers describe what they’ve created: a special filter that functions much like a magnifying glass for videos.

This technology was designed with the health field in mind; it can detect body functions that the human eye can’t see on its own, such as a baby’s breathing pattern or a person’s pulse. But the implications of what it can do reach far beyond medicine.
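
The MIT method first decomposes each frame spatially, but its central trick (band-pass filtering each pixel’s intensity over time and adding the amplified result back) can be sketched in a few lines. The frame array here is random stand-in data and the amplification factor is arbitrary.

```python
import numpy as np
from scipy.signal import butter, filtfilt

# Video as a (time, height, width) float array; random data stands in
# for real footage captured at 30 frames per second.
fps = 30.0
frames = np.random.default_rng(2).normal(0.5, 0.01, size=(300, 64, 64))

# Band-pass around typical human pulse rates (0.8-3 Hz, i.e. 48-180 bpm).
low, high = 0.8, 3.0
b, a = butter(2, [low / (fps / 2), high / (fps / 2)], btype="band")

# Filter each pixel's intensity over time, then amplify the tiny
# periodic variation and add it back, making the change visible.
amplification = 50.0
pulse_band = filtfilt(b, a, frames, axis=0)
magnified = np.clip(frames + amplification * pulse_band, 0.0, 1.0)
```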

For more information on the technical specifics, see the video accompanying the researchers’ paper.

Computers trained to recognize emotions

While humans have always been better at detecting and responding to emotions than computers, new research done at MIT is showing that, in some cases, computers are taking the lead over their human counterparts.

The study focuses on the act of smiling, homing in on the different reasons people smile, whether out of happiness and delight or pure frustration. Using results gathered from a large sample of people, the researchers fed the data to computers, which turned out to be better than humans at telling the different types of smiles apart.

In the experiments, participants were first asked to act out expressions associated with specific emotions while a webcam recorded them. They then filled out a deliberately frustrating form or watched a video designed to evoke delight, and their natural reactions were recorded as well.

One of the most interesting findings was that the vast majority of those asked to feign frustration did not smile in their forced attempts, but upon experiencing frustration in an unprompted situation, they did. Additionally, there is a difference in the way people smile: delighted smiles tend to build gradually, whereas frustrated smiles are quick and fleeting.
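
The study’s actual classifiers aren’t described in this post, but the timing cue alone suggests a very simple rule: measure how long the smile takes to build. The intensity traces and the half-second threshold below are invented for illustration.

```python
import numpy as np

def onset_time(intensity, fps=30.0):
    """Seconds the smile takes to climb from 10% to 90% of its peak."""
    peak = intensity.max()
    t10 = np.argmax(intensity >= 0.1 * peak)
    t90 = np.argmax(intensity >= 0.9 * peak)
    return (t90 - t10) / fps

def classify(intensity):
    # Invented threshold: slow build-ups read as delight, abrupt
    # spikes as frustration, per the study's observation.
    return "delighted" if onset_time(intensity) > 0.5 else "frustrated"

t = np.linspace(0, 3, 90)             # three seconds at 30 fps
gradual = np.clip(t / 3, 0, 1)        # slow build-up to the peak
abrupt = (t > 1.4).astype(float)      # sudden, fleeting jump

print(classify(gradual))  # delighted
print(classify(abrupt))   # frustrated
```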

The main aim of this study is to help unravel the mysteries of emotions. In particular, those affected by autism may have a difficult time interpreting emotions; while a smile is viewed as a positive thing, this study demonstrates that isn’t always the case. Additionally, those who are public speakers or figures in the spotlight might benefit from better understanding the timing in reactions and how the slightest difference in facial emotions can be interpreted differently.

Honeybees informing computer vision

A project conducted by RMIT’s school of media and communication has found that the brains of honey bees are capable of tackling complex visual problems, as well as creating and applying rules to adapt to specific scenarios.

This information was published earlier this month in the Proceedings of the National Academy of Sciences (PNAS), with the explanation that “the miniature brains of honey bees rapidly learn to master two abstract concepts simultaneously, one based on spatial relationships (above/below and right/left) and another based on the perception of difference.”


Image courtesy of iStockphoto/Florin Tirlea

An example of this in humans is the ability to encounter a situation, such as coming up to an intersection, and acting accordingly. This involves a range of realizations and responses, such as observing the traffic light, gauging the speed of vehicles and looking out for pedestrians or bicyclists that might also obstruct the flow of traffic. Based on the information being fed to our brains, we are able to make split-second decisions, something which computers aren’t yet fully capable of doing. This is because it involves processing more than one kind of complex task, and these tasks don’t appear to have anything in common, in the “mind” of a computer.

However, that’s not to say that computers can’t eventually learn this skill, too. By studying the brains of honey bees, researchers hope to understand how bees manage it and then apply the same principles to computers, allowing them to process visual inputs efficiently and effectively.

Kinect cameras may help detect autism

Stories of innovative new games and programs appear daily thanks to the release of Microsoft’s Kinect. At the Institute of Child Development in Minneapolis, Minnesota, however, the technology is being used to help detect autism.

Researchers have installed Kinect cameras in a nursery which, combined with specific algorithms, are trained to observe children. The cameras identify children by their clothing and size, then compare how active each child is relative to his or her “classmates,” highlighting those who are more or less active than the average, which could be a marker for autism.
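
The Minneapolis system’s algorithms aren’t spelled out, but the flag-the-outliers step it describes reduces to comparing each child’s activity against the group. Here is a minimal sketch with NumPy; the activity scores are fabricated, and flagging is only a prompt for a specialist, not a diagnosis.

```python
import numpy as np

# Hypothetical activity scores: average movement per session, one
# entry per child, as a Kinect tracker might accumulate over a week.
activity = np.array([4.1, 3.8, 4.4, 4.0, 1.2, 4.2, 3.9, 7.6])
names = [f"child_{i}" for i in range(len(activity))]

# Flag children whose activity sits far from the group average;
# unusually low or high movement is a cue for follow-up, nothing more.
z_scores = (activity - activity.mean()) / activity.std()
for name, z in zip(names, z_scores):
    if abs(z) > 1.5:
        print(f"{name}: activity z-score {z:+.1f}, refer for review")
```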

Children who show signs of interacting less socially or of underdeveloped motor skills, both indicators of autism, will then be referred to doctors who can better analyze individual cases. While the system is not meant to diagnose autism outright, the hope is that the program will pinpoint children who may be cause for concern and catch them early.

Additionally, the creators are working to make the program more advanced, so that it can detect whether a child is able to follow an object with his or her eyes, as autistic children often have trouble making eye contact, among other things.

Already, some centers are using Kinect not to detect autism, but to help children with it learn to interact socially with others as well as better their own skills.

How else might Kinect assist in detecting or treating autism? What other medical fields might be able to use Kinect to their advantage?