Grammar-like algorithm identifies actions in video

Photo courtesy of http://www.freeimages.co.uk/

Body language is a powerful thing, allowing us to gauge the tone and intention of a person, often without accompanying words. But is this a skill that is unique to humans, or are computers also capable of being intuitive?

To date, picking up on the subtext of a person’s movements is still not something machines can do, however, researchers at MIT and UC Irvine have developed an algorithm that can observe small actions in videos and string them together, piecing together an idea of what is occurring. Much like grammar helps create and connect ideas into complete thoughts, the algorithm is capable of not only analyzing what actions are taking place, but guessing what movements will come next.
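As a very loose illustration (not the MIT/UC Irvine researchers' actual algorithm), the "grammar" idea can be sketched as learning which small action tends to follow which, then guessing the most likely next action:

```python
from collections import defaultdict, Counter

# Toy sketch: count which action follows which across example sequences,
# then predict the most frequent follower of a given action.
def train(sequences):
    follows = defaultdict(Counter)
    for seq in sequences:
        for current, nxt in zip(seq, seq[1:]):
            follows[current][nxt] += 1
    return follows

def predict_next(follows, action):
    if not follows[action]:
        return None  # never seen anything follow this action
    return follows[action].most_common(1)[0][0]

# Hypothetical demo data, not from the study.
demo = [
    ["reach", "grasp", "lift", "pour"],
    ["reach", "grasp", "lift", "place"],
    ["reach", "grasp", "lift", "pour"],
]
model = train(demo)
print(predict_next(model, "lift"))  # → pour
```

The real system works on video rather than labeled action strings, but the principle of chaining small observed actions into a larger "sentence" is the same.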

There are a handful of ways that this technology could benefit humans. For example, it could help an athlete practicing his or her form and technique. Researchers also posit that it could be useful in a future where humans and robots share the same workspace and do similar tasks.

But with any technological advancement comes the question of cost: not money, but privacy. In this case, would the positives outweigh the negatives? In what ways can you envision this tool being helpful for your everyday tasks?


VISAPP Computer Vision conference extends submission deadline

Computer Vision is an interesting kind of technology in many ways, but perhaps one of the most notable things about it is how applicable it is, and can be, in our everyday lives. And although it’s not necessarily a “new” field, it is gaining popularity and recognition in the lives of “normal” people, meaning those who are not scientists, researchers, programmers, etc.

At the start of next year, Lisbon, Portugal will play host to a conference on this very topic, highlighting the work being done in the field and the emerging technologies that can help Computer Vision help people. VISAPP 2014, the 9th International Conference on Computer Vision Theory and Applications, is currently accepting paper submissions, with the deadline extended to September 18.

Counting grapes with Computer Vision

Photo courtesy of Carnegie Mellon
It’s no secret that Computer Vision is an asset in the agricultural world, yet it’s still interesting to discover the new ways in which it is being put to use. For example, researchers at Carnegie Mellon University’s Robotics Institute published a study demonstrating how visual counting – one of the elementary Computer Vision concepts – can estimate the yield of a crop of grapes.

Using an HD camera, a special lighting system, and a laser scanner, the setup can count grapes as small as 4mm in diameter, and its algorithms convert that grape count into an estimated harvest yield. And while the system’s margin of error is 9.8 percent, the human margin is around 30 percent, suggesting that the Computer Vision system is more accurate and possibly more cost-effective.
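To give a sense of the count-to-yield step, here is a toy sketch of the conversion; the berry weight and visibility figures are made-up placeholders, not numbers from the Carnegie Mellon study:

```python
# Hypothetical calibration values for illustration only.
def estimate_yield_kg(grape_count, avg_berry_weight_g=1.6, visibility=0.7):
    """Convert a visual grape count into an estimated harvest weight.

    visibility: assumed fraction of berries the camera actually sees,
    so the raw count is scaled up to account for occluded fruit.
    """
    true_count = grape_count / visibility
    return true_count * avg_berry_weight_g / 1000.0  # grams -> kilograms

# e.g. 35,000 berries counted -> roughly 80 kg estimated
print(round(estimate_yield_kg(35000), 1))
```

In practice the calibration factors would themselves be fitted from past harvests, which is where most of the 9.8 percent error would come from.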

Computer Vision studies bird flocking behavior

Photo courtesy of Andreas Trepte.
Flocking is a behavior exhibited by birds, similar to how land animals join together in herds. And while there is an intricate pattern to this flocking, it’s difficult to establish exactly how birds communicate to keep this form. Their movements are synchronous, but the question is: how do birds on the outer edges of the flock stay in sync and help guide the group? Luckily, we have computer vision to help answer that question.

Previously, scientists would simulate this behavior and then compare it to what occurs with real birds in an attempt to demonstrate the how and the why. Now, however, computer vision can measure both the position and velocity of objects in a frame, and work by William Bialek at Princeton University is demonstrating that birds match the speed and direction of their neighbors.

Additionally, the concept of a “critical point” helps explain this, showing that the social desires of the birds overwhelm the motivation of each individual bird, as they work toward flying as a collective flock and not as solo birds.
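To get a feel for what velocity matching looks like, here is a minimal alignment sketch in the spirit of the classic boids model (not Bialek's actual analysis): each bird nudges its velocity toward the average velocity of its neighbors, and repeating the step drives the flock toward a common heading.

```python
# Minimal velocity-alignment step. velocities: list of (vx, vy) tuples;
# neighbors[i]: indices of bird i's neighbors; rate: how strongly each
# bird steers toward its neighbors' average velocity.
def align(velocities, neighbors, rate=0.5):
    new_vels = []
    for i, (vx, vy) in enumerate(velocities):
        nbrs = neighbors[i]
        ax = sum(velocities[j][0] for j in nbrs) / len(nbrs)
        ay = sum(velocities[j][1] for j in nbrs) / len(nbrs)
        new_vels.append((vx + rate * (ax - vx), vy + rate * (ay - vy)))
    return new_vels

# Three birds, each seeing the other two; iterate and watch them converge.
vels = [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.5)]
nbrs = [[1, 2], [0, 2], [0, 1]]
for _ in range(30):
    vels = align(vels, nbrs)
print(vels)  # all three velocities are now nearly identical
```

Notice that only local neighbor information is used, which is exactly why the question of how birds on the flock's edge stay in sync is interesting.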

There still remains more to be seen and explored, but check out this study for further reading.

Meet the robot phlebotomist

Most everyone can recall a time when doctors or nurses have needed to draw blood or give shots and had trouble finding the proper veins. A company in California is all too familiar with this scenario, and in an effort to make the process of drawing blood more efficient, has created Veebot.

Veebot is essentially a robot phlebotomist. Relying on infrared lighting, an infrared camera, image analysis software, and ultrasound technology, the robot is able to locate the best vein for taking blood. All of this is checked and double-checked in the span of a few seconds in order to ensure that the first needle prick is successful.
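Veebot's actual software is proprietary, but the basic intuition behind the infrared step can be sketched: blood-filled veins absorb more near-infrared light and show up darker in the camera image, so a crude stand-in is to pick the darkest band of pixels as the candidate vein.

```python
# Illustrative only -- not Veebot's algorithm. ir_image is a 2D list of
# pixel intensities (0-255); the darkest row band is the vein candidate.
def best_vein_row(ir_image):
    means = [sum(row) / len(row) for row in ir_image]
    return min(range(len(means)), key=means.__getitem__)

# Hypothetical 3x4 infrared frame: the middle band is darker.
frame = [
    [200, 198, 205, 201],
    [120, 115, 118, 122],   # darker band: candidate vein
    [199, 202, 197, 204],
]
print(best_vein_row(frame))  # → 1
```

The real system then cross-checks the candidate with ultrasound to confirm blood flow before the needle ever moves.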

So far, Veebot has correctly identified the best vein about 83% of the time, which is better than the human average. Once it reaches a 90% success rate, the company hopes to put the machine into clinical trials.

To see how this machine works, watch the video below:

Computer Vision used in game analysis

Image courtesy of Disney Research
Researchers at the Pittsburgh campus of Disney Research are using computer vision to analyze the patterns of field hockey players, in hopes of creating a new way for coaches and commentators to make sense of game data in real time. Furthermore, this technology can be used not only in field hockey, but in any other kind of team sport with continuous play.

With a focus on player roles, the research zeroes in on the tactics, strategy, and style of players and their teams. Eight high-definition cameras record matches, and the data is analyzed against data from other matches. The compiled information can give insight into teams’ strengths and weaknesses and suggest solid strategies for how to better face their opponents.
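As an illustration only (Disney Research's actual method is more sophisticated), role-based analysis can be imagined as assigning each tracked player to the nearest of a few predefined role positions; the anchors below are hypothetical field coordinates, not values from the research.

```python
# Hypothetical role anchors on a 100x100 field.
ROLE_ANCHORS = {
    "defender":   (20.0, 50.0),
    "midfielder": (50.0, 50.0),
    "forward":    (80.0, 50.0),
}

def assign_role(mean_position):
    """Assign a player's average tracked (x, y) position to the
    role whose anchor is closest (squared Euclidean distance)."""
    px, py = mean_position
    return min(
        ROLE_ANCHORS,
        key=lambda r: (ROLE_ANCHORS[r][0] - px) ** 2
                    + (ROLE_ANCHORS[r][1] - py) ** 2,
    )

print(assign_role((75.0, 40.0)))  # → forward
```

Working with roles rather than individual identities is what lets the analysis compare formations across different matches and even different teams.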

Roboray: the next step toward human-like robots

Photo courtesy of University of Bristol and Samsung
Many people have imagined a day where robots would be advanced enough to help them with simple and complex tasks alike, and now researchers at the University of Bristol have joined forces with Samsung and taken steps toward accomplishing that.

Roboray is the name of the robot, which relies on cameras, real-time 3D visual maps, and computer vision algorithms to move around, “remembering” where it has been before. This allows the robot to navigate autonomously, even when GPS information is not available. The technology used for Roboray also allows the robot to walk in a more human-like manner, with gravity helping it along. This not only requires less energy, but gives the robot a more human-like appearance.

Would you consider purchasing a robot like Roboray? What kinds of tasks would you find the robot most helpful in assisting you with?