Grammar-like algorithm identifies actions in video


Body language is a powerful thing, allowing us to gauge the tone and intention of a person, often without accompanying words. But is this a skill that is unique to humans, or are computers also capable of being intuitive?

To date, picking up on the subtext of a person’s movements is still beyond machines. However, researchers at MIT and UC Irvine have developed an algorithm that can observe small actions in videos and string them together, piecing together an idea of what is occurring. Much like grammar helps create and connect ideas into complete thoughts, the algorithm is capable of not only analyzing what actions are taking place, but also predicting which movements will come next.
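The grammar analogy can be made concrete with a toy sketch. This is not the researchers' actual model: the action names and production rules below are invented purely for illustration. The idea is that hand-written rules describe which sub-actions may legally follow one another, so a program can both check a sequence and guess what comes next.

```python
# Invented "action grammar": for each action, the list of actions that
# may follow it. A real system would learn these rules from video data.
RULES = {
    "reach": ["grasp"],
    "grasp": ["lift", "release"],
    "lift": ["place"],
    "place": ["release"],
    "release": [],
}

def is_valid(sequence):
    """Check that each action can legally follow the previous one."""
    return all(b in RULES[a] for a, b in zip(sequence, sequence[1:]))

def predict_next(sequence):
    """Return the actions the grammar allows after the last observed one."""
    return RULES[sequence[-1]]

observed = ["reach", "grasp", "lift"]
print(is_valid(observed))      # True: each step follows the rules
print(predict_next(observed))  # ['place']
```

Just as a grammar rejects "the ran dog", this sketch rejects an out-of-order sequence like `["reach", "lift"]`, and its predictions narrow as the observed sequence grows.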

There are a handful of ways this technology could benefit humans. For example, it could help an athlete practicing his or her form and technique. Researchers also posit that it could be useful in a future where humans and robots share the same workspace and perform similar tasks.

But with any technological advancement comes a question of cost: not money, but privacy. In this case, would the positives outweigh the negatives? In what ways can you envision this tool helping with your everyday tasks?



Computer Vision recognizes signs of autism in infants

Photo courtesy of MIT Technology Review

The MIT Technology Review reports on the use of a computer vision system that is helping doctors diagnose autism in infants by age 2 or 3 instead of age 5.

Earlier diagnosis makes it possible to teach social and communication skills before other, maladaptive patterns become ingrained in a child’s behavior.

Diagnosing autism in children at younger ages requires a psychologist with expertise in autism to monitor the child closely for long periods of time.

Even when a child or infant’s behavior can be recorded on video, it takes hours of expert analysis, frame by frame, to arrive at a diagnosis.

Now, Jordan Hashemi and a team at the University of Minnesota are using computer vision to identify those at higher risk for autism earlier.

For example, child psychologists have developed several tests that screen for delayed visual tracking in infants with autism, such as following a rattle shaken first on one side of the head and then on the other.

To support this and other tests, the custom-developed computer vision system makes very fine assessments such as monitoring head movement along with the position of the left ear, left eye, and nose. Other behaviors analyzed include changes in limb position and gait in response to stimuli.
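To make the kind of measurement described above concrete, here is a minimal sketch of estimating head turn from the relative horizontal positions of nose and left-ear landmarks over a series of frames. This is not the team's actual system: in practice the coordinates would come from a face tracker, and the per-frame numbers below are invented for illustration.

```python
def head_turn(nose_x, left_ear_x):
    """Crude head-orientation proxy: positive values suggest the head is
    turned toward the left-ear side, negative toward the opposite side."""
    return nose_x - left_ear_x

# Invented per-frame landmark x-coordinates (pixels) while a rattle moves
# from the infant's left side to the right.
frames = [
    {"nose_x": 100, "left_ear_x": 80},   # looking left
    {"nose_x": 95,  "left_ear_x": 82},
    {"nose_x": 88,  "left_ear_x": 90},   # head rotating
    {"nose_x": 80,  "left_ear_x": 100},  # looking right
]

turns = [head_turn(f["nose_x"], f["left_ear_x"]) for f in frames]
print(turns)  # [20, 13, -2, -20]: head steadily following the stimulus

# One crude measure of tracking delay: how many frames pass before the
# head-turn value changes sign after the stimulus switches sides.
delay_frames = next(i for i, t in enumerate(turns) if t < 0)
print(delay_frames)  # 2
```

Automating even a simple measurement like this, frame by frame across hours of video, is precisely the tedium that expert analysts currently handle by hand.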

This blog is sponsored by ImageGraphicsVideo, a company offering ComputerVision Software Development Services.