Grammar-like algorithm identifies actions in video

Photo courtesy of http://www.freeimages.co.uk/

Body language is a powerful thing, allowing us to gauge the tone and intention of a person, often without accompanying words. But is this a skill that is unique to humans, or are computers also capable of being intuitive?

To date, picking up on the subtext of a person’s movements is still not something machines can do. However, researchers at MIT and UC Irvine have developed an algorithm that can observe small actions in videos and string them together, piecing together an idea of what is occurring. Much like grammar helps create and connect ideas into complete thoughts, the algorithm not only analyzes what actions are taking place but also predicts what movements will come next.
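To make the grammar analogy concrete, here is a minimal Python sketch. Every name and rule in it is invented for illustration; the MIT/UC Irvine algorithm is learned from data and far more sophisticated. The idea is simply that each activity acts like a production rule over sub-actions, and matching observed actions against rule prefixes yields a guess at the next movement.

```python
# Minimal sketch of a grammar-like action model (illustrative only).
# Each high-level activity is a "production rule": an ordered sequence
# of smaller sub-actions.
ACTIVITY_RULES = {
    "make_tea":    ["grab_cup", "boil_water", "pour_water", "steep_bag"],
    "make_coffee": ["grab_cup", "boil_water", "pour_water", "add_coffee"],
    "wash_cup":    ["grab_cup", "turn_on_tap", "rinse_cup"],
}

def predict_next(observed):
    """Return {activity: next expected sub-action} for every rule whose
    opening steps match the sub-actions observed so far."""
    predictions = {}
    for activity, steps in ACTIVITY_RULES.items():
        if steps[:len(observed)] == observed and len(observed) < len(steps):
            predictions[activity] = steps[len(observed)]
    return predictions

# After two observed sub-actions, the "grammar" has narrowed down the
# activity and can guess the movement that should come next.
print(predict_next(["grab_cup", "boil_water"]))
# -> {'make_tea': 'pour_water', 'make_coffee': 'pour_water'}
```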

There are a handful of ways this technology could benefit humans. For example, it could help an athlete practice his or her form and technique. Researchers also posit that it could be useful in a future where humans and robots share the same workspace and perform similar tasks.

But with any technological advancement comes the question of cost: not money, but privacy. In this case, would the positives outweigh the negatives? In what ways can you envision this tool being helpful for your everyday tasks?


VISAPP Computer Vision conference extends submission deadline

Computer Vision is an interesting technology in many ways, but perhaps the most notable thing about it is how applicable it is, and can be, to our everyday lives. And although it is not necessarily a “new” field, it is gaining popularity and recognition among “normal” people, meaning those who are not scientists, researchers, or programmers.

At the start of next year, Lisbon, Portugal will play host to a conference on this very topic, highlighting the work being done in the field and the emerging technologies that can help Computer Vision help people. VISAPP 2014, the 9th International Conference on Computer Vision Theory and Applications, is currently accepting paper submissions, and the submission deadline has been extended to September 18.

Roboray: the next step toward human-like robots

Photo courtesy of University of Bristol and Samsung
Many people have imagined a day when robots would be advanced enough to help them with simple and complex tasks alike, and researchers at the University of Bristol, joining forces with Samsung, have now taken steps toward accomplishing that.

Roboray is the name of the robot, which relies on cameras, real-time 3D visual maps, and computer vision algorithms to move around, “remembering” where it has been before. This allows the robot to navigate autonomously even when GPS information is not available. The technology used for Roboray also allows the robot to walk in a more human-like manner, with gravity helping drive each step. This not only requires less energy but also gives the robot a more human-like appearance.
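That “remembering” can be pictured with a small sketch. Suppose each camera view is condensed into a numeric descriptor, a drastic simplification of the visual maps a system like Roboray’s actually builds; a robot can then compare new views against stored ones to recognize places it has already visited, no GPS required. Everything below is illustrative, not Samsung’s code.

```python
import numpy as np

# Toy place memory: store one descriptor per visited place and match
# new views against the stored map. Real systems build full visual SLAM
# maps; this only illustrates the core "have I been here before?" idea.
class PlaceMemory:
    def __init__(self, match_threshold=0.95):
        self.places = []                  # list of (name, descriptor)
        self.match_threshold = match_threshold

    def _similarity(self, a, b):
        # cosine similarity between two image descriptors
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    def observe(self, name, descriptor):
        """Return the remembered place this view matches, or store the
        view as a new place and return None."""
        for known_name, known_desc in self.places:
            if self._similarity(descriptor, known_desc) > self.match_threshold:
                return known_name         # recognized: been here before
        self.places.append((name, descriptor))
        return None                       # new territory

memory = PlaceMemory()
hallway = np.array([0.9, 0.1, 0.3])
memory.observe("hallway", hallway)                # first visit: stored
print(memory.observe("unknown", hallway * 1.01))  # prints 'hallway'
```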

Would you consider purchasing a robot like Roboray? What kinds of tasks would you find the robot most helpful in assisting you with?

Robots discovering objects through Computer Vision

While having a personal robot à la Rosie from the television show The Jetsons may have been the dream of every child, such a thing is now closer to becoming a reality.

Meet HERB, a Home-Exploring Robot Butler created by researchers at Carnegie Mellon University as part of the Lifelong Robotic Object Discovery project.

The robot, which is armed with a Kinect camera and relies on computer vision, comes programmed with a memory loaded with digital models and images of objects to aid in recognition. The goal is to create a robot that not only recognizes what it has already been taught but also grows that knowledge on its own, without its database being expanded manually. It does this not just by seeing things, but by exploring its environment and interacting with the objects in it.

The Kinect camera aids HERB in three-dimensional recognition, and an object’s location provides additional clues. HERB can also distinguish between items that move and those that don’t. As it interacts with its environment, it is eventually able to determine whether something is an object, meaning whether it can be lifted.
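One cue a system like this could use is physical interaction: if a gentle nudge changes an item’s position, the item is probably a discrete, movable object rather than part of the scenery. The sketch below illustrates that single hypothetical cue; it is not HERB’s actual code.

```python
# Hypothetical movability test: compare an item's position (e.g. from
# Kinect depth data) before and after the robot nudges it.
def classify_candidate(before, after, tolerance=0.02):
    """before/after: (x, y, z) positions in metres; tolerance: how much
    displacement counts as 'it moved'."""
    displacement = sum((a - b) ** 2 for a, b in zip(after, before)) ** 0.5
    return "movable object" if displacement > tolerance else "fixed structure"

# A cup slides 8 cm when nudged; a table leg stays put.
print(classify_candidate((0.50, 0.20, 0.75), (0.58, 0.21, 0.75)))  # movable object
print(classify_candidate((1.10, 0.00, 0.40), (1.10, 0.00, 0.40)))  # fixed structure
```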

Capabilities like these could eventually lead to robots that do things for humans, such as bringing them items or helping to clean. And while there is still a ways to go, the possibilities seem endless.

Image recognition used with Instagram, other social media sites

Instagram users threw a fit late last year when the popular photo app announced its new terms of service, many of which users felt were a violation of privacy.

The main thing users took issue with was the ownership of photos, that is, whether Instagram would be allowed to take photos from its users and re-appropriate them as the company sees fit.

But what many people don’t realize is that their photos are already being used in the marketing and advertising worlds. Just consider gazeMetrix, a startup that uses computer vision and machine learning to sort through photos on social media platforms and recognize the brand logos and trademarks being photographed.

By finding where and how these logos appear, companies can promote their brands more effectively: targeting ads to the proper markets, seeing how their items are being used, and communicating directly with the users of their specific products.
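As a rough illustration of the underlying task, here is a toy logo spotter built on OpenCV’s template matching. gazeMetrix’s actual pipeline is proprietary, and a production system would use a learned detector that tolerates scale, rotation, and lighting changes, which plain template matching does not. The file names in the usage comment are hypothetical.

```python
import cv2  # OpenCV

def find_logo(photo_path, logo_path, threshold=0.8):
    """Toy logo spotting: slide a fixed logo template over a photo and
    report the best match if it clears the threshold."""
    photo = cv2.imread(photo_path, cv2.IMREAD_GRAYSCALE)
    logo = cv2.imread(logo_path, cv2.IMREAD_GRAYSCALE)
    scores = cv2.matchTemplate(photo, logo, cv2.TM_CCOEFF_NORMED)
    _, best_score, _, best_location = cv2.minMaxLoc(scores)
    if best_score >= threshold:
        return best_location, best_score   # top-left corner, confidence
    return None

# Hypothetical usage: scan a social-media photo for a coffee-chain logo.
# match = find_logo("latte_photo.jpg", "brand_logo.png")
```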

An article on Forbes.com provides examples of the many ways this data can be used. What are some other potential uses?

Computer Vision and machine learning help farmers kill weeds

Photo courtesy of Blue River Technology

Blue River Technology, a young startup out of Stanford University, kills weeds using Computer Vision and machine learning. In the process, farmers maximize their yield and cut back on the use of herbicides.

Currently used for lettuce crops, the company’s technology learned how to recognize the plant by analyzing close to 1,000,000 photos of lettuce. When a camera relays images of weeds growing among the lettuce plants, the software instructs a mechanical knife to root them out. As a backup, the software can send a signal to a sprayer that douses the weeds with herbicide.
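The loop described above (see, classify, act, with the sprayer as a fallback) can be sketched in a few lines. Every name below is an illustrative stand-in; Blue River’s actual software is proprietary.

```python
# Schematic see-decide-act loop for one camera frame (illustrative only).
def process_frame(plants, classify, cut, spray):
    """plants: list of (position, image_patch) found in the frame.
    classify: a model trained on lettuce photos -> 'lettuce' or 'weed'.
    cut: tries the mechanical knife, returns True on success.
    spray: backup targeted herbicide."""
    for position, patch in plants:
        if classify(patch) == "lettuce":
            continue                      # leave the crop alone
        if not cut(position):             # primary: mechanical removal
            spray(position)               # backup: spot-spray the weed

def knife_cut(position):
    # pretend knife with a one-metre reach
    if position[0] < 1.0:
        print("knife roots out weed at", position)
        return True
    return False

process_frame(
    plants=[((0.3, 0.4), "patch_a"), ((2.0, 0.7), "patch_b")],
    classify=lambda patch: "weed",        # toy classifier for the demo
    cut=knife_cut,
    spray=lambda pos: print("sprayer douses weed at", pos),
)
```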

In a few years’ time, with technology like this, could you see yourself growing the garden of your dreams in your backyard, freed from pulling weeds?

This blog is sponsored by ImageGraphicsVideo, a company offering Computer Vision Software Development Services.