Grammar-like algorithm identifies actions in video

Photo courtesy of http://www.freeimages.co.uk/

Body language is a powerful thing, allowing us to gauge the tone and intention of a person, often without accompanying words. But is this a skill that is unique to humans, or are computers also capable of being intuitive?

To date, picking up on the subtext of a person’s movements is still not something machines can do. However, researchers at MIT and UC Irvine have developed an algorithm that observes small actions in videos and strings them together, piecing together an idea of what is occurring. Much like grammar helps create and connect ideas into complete thoughts, the algorithm not only analyzes what actions are taking place, but also guesses what movements will come next.
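The paper itself isn’t reproduced here, but the “grammar” analogy can be illustrated with a toy model: treat recognized actions like words and learn which action tends to follow which. Everything below (the `ActionPredictor` class, the action labels) is hypothetical and is not the researchers’ actual algorithm:

```python
from collections import Counter, defaultdict

class ActionPredictor:
    """Toy 'grammar': a bigram model over action labels."""

    def __init__(self):
        # transitions[a][b] = how often action b followed action a
        self.transitions = defaultdict(Counter)

    def observe(self, sequence):
        """Record which action followed which in one video."""
        for current, following in zip(sequence, sequence[1:]):
            self.transitions[current][following] += 1

    def predict_next(self, action):
        """Guess the most likely next action, or None if unseen."""
        followers = self.transitions[action]
        return followers.most_common(1)[0][0] if followers else None

predictor = ActionPredictor()
predictor.observe(["reach", "grasp", "lift", "place"])
predictor.observe(["reach", "grasp", "lift", "place"])
predictor.observe(["reach", "grasp", "pull"])
print(predictor.predict_next("grasp"))  # "lift" (seen twice, vs "pull" once)
```

A real system would work on probability distributions coming out of an action classifier rather than clean labels, but the predict-the-next-symbol idea is the same.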

There are a handful of ways this technology could benefit humans. For example, it could help an athlete practicing his or her form and technique. Researchers also posit that it could be useful in a future where humans and robots share the same workspace and perform similar tasks.

But with any technological advancement comes the question of cost: not money, but privacy. In this case, would the positives outweigh the negatives? In what ways can you envision this tool being helpful for your everyday tasks?

Image Recognition allows fish to navigate

There are countless practical applications of Image Recognition technology, but for every helpful use there are plenty of “just because” utilizations of Computer Vision. One such example comes from Studio Diip, a Dutch company that has worked on projects ranging from vegetable recognition to automated card recognition, and that has now used the technology to let fish in a tank navigate a vehicle.

How does it work? In short, a camera trained on the fish watches it swimming in its tank, analyzes this movement to determine the direction it is heading, and then directs a car (mounted to the tank) to drive in that direction. It’s not much of a scientific breakthrough, but it’s a fun idea.
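Studio Diip hasn’t published its code, but the steps described above (track the fish, infer a heading, steer the car) can be sketched in a few lines; the function names and the dead-zone threshold are illustrative assumptions:

```python
import math

def heading_from_positions(prev, curr):
    """Angle of fish motion in degrees (0 = straight ahead along +x),
    computed from two tracked centroid positions in the camera frame."""
    dx, dy = curr[0] - prev[0], curr[1] - prev[1]
    return math.degrees(math.atan2(dy, dx))

def steer(angle_deg, dead_zone=15.0):
    """Map a heading angle to a coarse drive command for the cart.
    The 15-degree dead zone is an arbitrary illustrative choice."""
    if -dead_zone <= angle_deg <= dead_zone:
        return "forward"
    return "left" if angle_deg > 0 else "right"

print(steer(heading_from_positions((0, 0), (10, 1))))  # forward
print(steer(heading_from_positions((0, 0), (2, 9))))   # left
```

The real system also has to segment the fish from the tank background each frame; the sketch assumes that tracking step already produced centroid positions.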

How might this technology be applied in other ways? In what ways can Computer Vision help improve your product?

Computer Vision aids flow cytometry

Photo courtesy of the UC San Diego Jacobs School of Engineering

Engineers at the University of California, San Diego are using Computer Vision as a means of sorting cells, and thus far have been able to do so 38 times faster than before. This process of counting and sorting cells is known as flow cytometry.

The analysis categorizes cells based on their size, shape, and structure, and can also distinguish whether they are benign or malignant, information that could be useful for clinical studies and stem cell characterization.

While this type of research was being done before, it has traditionally taken a lot of time. Now, a camera mounted on a microscope can analyze the information far faster, cutting the time to observe and analyze a single frame from between 0.4 and 10 seconds down to between 11.94 and 151.7 milliseconds.
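As a rough sanity check on those figures, the implied per-frame speedup can be computed directly; the quoted 38x presumably refers to overall throughput, and this back-of-the-envelope range brackets it:

```python
# Per-frame analysis time, before and after, from the figures above.
before_s = (0.4, 10.0)        # seconds per frame, traditional analysis
after_ms = (11.94, 151.7)     # milliseconds per frame, camera-based system

low = before_s[0] / (after_ms[0] / 1000)    # fastest old vs fastest new
high = before_s[1] / (after_ms[1] / 1000)   # slowest old vs slowest new
print(f"speedup roughly {low:.0f}x to {high:.0f}x")  # roughly 34x to 66x
```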

In what ways do you see this technology making advancements in the medical and clinical world? How else can you imagine it benefitting science?

Counting grapes with Computer Vision

Photo courtesy of Carnegie Mellon
It’s no secret that Computer Vision is an asset in the agricultural world, yet it’s still interesting to discover the new ways in which it is being put to use. For example, researchers at Carnegie Mellon University’s Robotics Institute published a study demonstrating how visual counting, one of the elementary Computer Vision concepts, can be used to estimate the yield of a crop of grapes.

Using an HD camera, a special lighting system, and a laser scanner, the setup can count grapes as small as 4mm in diameter and, using algorithms, convert that count into an estimated harvest yield. And while the system’s margin of error is 9.8 percent, human estimates are off by roughly 30 percent, demonstrating that the Computer Vision approach is more accurate and possibly more cost-effective.
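The study’s exact model isn’t given here, but converting a berry count into a yield estimate is simple arithmetic once you assume an average berry mass and a detection rate. Both constants below are hypothetical placeholders, not values from the Carnegie Mellon paper:

```python
def estimate_yield_kg(berry_count, mean_berry_mass_g=1.6, detection_rate=0.9):
    """Turn a visual berry count into an estimated harvest mass (kg).

    mean_berry_mass_g and detection_rate are hypothetical placeholders;
    the detection rate compensates for berries hidden from the camera.
    """
    true_count = berry_count / detection_rate
    return true_count * mean_berry_mass_g / 1000.0

print(round(estimate_yield_kg(12000), 1))  # 21.3 (kg)
```

In practice both constants would be calibrated against hand-harvested reference vines, which is where the 9.8 percent error figure comes from.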

Meet the robot phlebotomist

Almost everyone can recall a time when a doctor or nurse needed to draw blood or give a shot and had trouble finding the proper vein. A company in California is all too familiar with this scenario, and in an effort to make the process of drawing blood more efficient, has created Veebot.

Veebot is essentially a robot phlebotomist. Relying on infrared lighting, an infrared camera, image analysis software, and ultrasound technology, the robot is able to locate the best vein for taking blood. All of this is checked and double-checked in the span of a few seconds in order to ensure that the first needle prick is successful.
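Veebot’s software is proprietary, but the vein-selection step can be imagined as scoring candidate veins on image-derived features and picking the best. The features and weights below are invented for illustration only:

```python
def best_vein(candidates):
    """Pick the most promising vein. Each candidate carries invented
    image-derived features: infrared contrast, estimated width, and
    whether ultrasound confirmed blood flow. Weights are arbitrary."""
    def score(vein):
        flow_bonus = 1.0 if vein["flow_confirmed"] else 0.0
        return vein["ir_contrast"] + 0.5 * vein["width_mm"] + flow_bonus
    return max(candidates, key=score)

veins = [
    {"name": "A", "ir_contrast": 0.6, "width_mm": 2.0, "flow_confirmed": False},
    {"name": "B", "ir_contrast": 0.5, "width_mm": 2.4, "flow_confirmed": True},
]
print(best_vein(veins)["name"])  # B: flow confirmation outweighs A's contrast
```

The double-checking the article describes maps naturally onto this structure: the ultrasound pass only confirms or vetoes candidates the infrared pass already found.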

So far, Veebot has correctly identified the best vein about 83% of the time, which is better than the human average. Once it reaches a 90% success rate, the company hopes to use the machine in clinical trials.


Computer Vision used in game analysis

Image courtesy of Disney Research
Researchers at the Pittsburgh campus of Disney Research are using computer vision to analyze the patterns of field hockey players, in hopes of creating a new way for coaches and commentators to make sense of game data in real time. Furthermore, this technology can be applied not only to field hockey, but to any other team sport with continuous play.

With a focus on player roles, the research zeroes in on the tactics, strategy, and style of players and their teams. Eight high-definition cameras record each match, and the resulting data is analyzed against other matches. The compiled information can give insight into teams’ strengths and weaknesses, as well as solid strategies for better facing their opponents.
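One concrete piece of role-based analysis is deciding which detected player is currently filling which formation role. A minimal sketch (not Disney’s actual method) is to pick the role-to-player matching that minimizes total distance:

```python
import math
from itertools import permutations

def assign_roles(role_positions, player_positions):
    """Match players to formation roles by minimizing total distance
    over all permutations (brute force; fine for a handful of roles)."""
    def cost(perm):
        return sum(math.dist(role_positions[r], player_positions[p])
                   for r, p in enumerate(perm))
    return min(permutations(range(len(player_positions))), key=cost)

roles = [(0.0, 0.0), (10.0, 0.0)]     # e.g. left back, right back
players = [(9.0, 1.0), (1.0, -1.0)]   # detections in field coordinates
print(assign_roles(roles, players))   # (1, 0): role 0 gets player 1
```

Tracking roles rather than individual identities is what makes per-team comparisons across matches possible, since a team’s “left back” is comparable even when the person filling the role changes.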

Roboray: the next step toward human-like robots

Photo courtesy of University of Bristol and Samsung
Many people have imagined a day where robots would be advanced enough to help them with simple and complex tasks alike, and now researchers at the University of Bristol have joined forces with Samsung and taken steps toward accomplishing that.

The robot, named Roboray, relies on cameras, real-time 3D visual maps, and computer vision algorithms to move around, “remembering” where it has been before. This allows the robot to navigate autonomously even when GPS information is not available. The technology also allows Roboray to walk in a more human-like manner, with gravity helping it walk. This not only requires less energy, but gives the robot a more natural, human-like gait.
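The “remembering” behavior can be caricatured in a few lines: keep a list of visited map positions and check whether the current position falls near any of them. Real visual place recognition matches image features rather than raw coordinates, so this `PlaceMemory` class is purely illustrative:

```python
import math

class PlaceMemory:
    """Toy stand-in for place recognition: remember 2D positions and
    report whether the robot has been near the current spot before."""

    def __init__(self, radius=1.0):
        self.places = []
        self.radius = radius

    def visit(self, x, y):
        seen = any(math.hypot(x - px, y - py) <= self.radius
                   for px, py in self.places)
        self.places.append((x, y))
        return seen

memory = PlaceMemory(radius=1.0)
print(memory.visit(0.0, 0.0))  # False: nothing remembered yet
print(memory.visit(5.0, 2.0))  # False
print(memory.visit(0.3, 0.4))  # True: within 1 unit of a remembered spot
```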

Would you consider purchasing a robot like Roboray? What kinds of tasks would you find the robot most helpful in assisting you with?