Researchers at the Pittsburgh campus of Disney Research are using computer vision to analyze the movement patterns of field hockey players, in hopes of creating a new way for coaches and commentators to make sense of game data in real time. The technology could also apply not only to field hockey, but to any other team sport with continuous play.
With a focus on player roles, the research zeroes in on the tactics, strategy, and style of players and their teams. Eight high-definition cameras record each match, and the resulting data are analyzed against other matches. The compiled information can offer insight into teams' strengths and weaknesses and suggest solid strategies for facing their opponents.
Roboray is the name of the robot, which relies on cameras, real-time 3D visual maps, and computer vision algorithms to move around, “remembering” where it has been before. This allows the robot to navigate autonomously even when GPS information is unavailable. The technology also allows Roboray to walk in a more human-like manner, with gravity assisting each step. This not only requires less energy, but gives the robot a more human-like gait.
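The article doesn't describe Roboray's mapping algorithms, but the core "remembering" idea — recognizing a previously visited place by comparing a new view against stored signatures — can be sketched in miniature. The class name, signatures, and threshold below are all invented for illustration:

```python
class PlaceMemory:
    """Toy place-recognition memory: store a compact signature for each
    visited spot and report a revisit when a new view's signature is
    close to a stored one. Real systems build full 3D visual maps; this
    only captures the 'have I been here before?' idea."""

    def __init__(self, tol=0.2):
        self.places = []  # list of (name, signature) pairs
        self.tol = tol

    def _dist(self, a, b):
        # Chebyshev distance between two equal-length signatures
        return max(abs(x - y) for x, y in zip(a, b))

    def observe(self, name, signature):
        for known_name, sig in self.places:
            if self._dist(sig, signature) <= self.tol:
                return known_name  # recognized a previously seen place
        self.places.append((name, signature))
        return None  # new place: remember it for next time
```

A robot using something like this can tell "I've seen this hallway before" without any GPS fix, purely from how the current view compares to stored views.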
Would you consider purchasing a robot like Roboray? What kinds of tasks would you find the robot most helpful in assisting you with?
In an effort to help protect and conserve endangered species, scientists have been tracking and tagging them for years. However, some species are either too numerous or too sensitive to be tagged, so researchers have been working on another way to track them.
Now, thanks to SLOOP, a new computer vision software program from MIT, identifying individual animals has never been easier. A human sorting through 10,000 images would likely take years to identify animals properly, but this program cuts down the manpower required and works much more quickly. Using pattern-recognition algorithms, it matches up the stripes and spots on an animal and returns the 20 images that are the most likely matches, giving researchers a much smaller and more accurate pool to work with. The researchers then turn to crowdsourcing, and with the aid of adept human pattern-matchers, narrow things down even further, reaching 97% accuracy. This will allow researchers to spend more practical time in the field working on conservation efforts instead of wasting time in front of a computer screen.
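SLOOP's actual algorithms aren't detailed here, but the key step — ranking catalogued coat patterns by similarity to a query photo and keeping only the best candidates for humans to review — can be sketched like this. The feature vectors and cosine similarity are stand-ins for whatever descriptors the real system extracts from stripes and spots:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def top_matches(query, database, k=20):
    """Rank every catalogued pattern by similarity to the query and
    return the ids of the k best matches — the small candidate pool
    that gets handed off to human reviewers."""
    ranked = sorted(database.items(),
                    key=lambda item: cosine_similarity(query, item[1]),
                    reverse=True)
    return [animal_id for animal_id, _ in ranked[:k]]
```

The point of the design is the division of labor: the machine shrinks 10,000 candidates to 20, and humans do the final, high-accuracy discrimination.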
For most people, getting a duplicate key made is not so difficult a process. Of course dealerships charge more for car keys, but regular keys to your house or office often cost only a couple of dollars to have made at your local hardware store. However, a new company, Shloosl, is using computer vision to make copies of keys that not only work, but in some cases are superior to the originals.
Keys had been made from photographs before computer vision, but it was a precise and difficult art. Today's cameras and computer vision have made it much easier. Of course, people may ask why this is any better than a hardware store copy, and the answer is precision.
Instead of cutting a copy onto a roughly similar blank by tracing an already-worn key, computer vision can analyze photos to pinpoint exact measurements, so the spacing between the teeth and the depth of each cut are more accurate, resulting in a better overall product.
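To see why precise measurement matters: a key copy ultimately comes down to a "bitting code," the depth of each cut snapped to a standard increment. The increment and depth values below are invented for illustration, not any manufacturer's spec:

```python
def bitting_from_depths(measured_mm, increment_mm=0.6):
    """Snap each measured cut depth (in mm) to the nearest standard
    depth number. A measurement error still lands on the correct code
    as long as it stays under half an increment."""
    return [round(depth / increment_mm) for depth in measured_mm]
```

With a 0.6 mm increment, a cut measured from a photo as 1.25 mm still maps cleanly to depth code 2. Tracing a worn key, by contrast, copies the wear itself, and a cut eroded past the halfway point lands on the wrong code entirely — which is why a measured copy can beat the physical original.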
What are your thoughts? Would a better key make a difference for you? For more on this, read the Shloosl blog.
Computer vision has many practical uses, ranging from security enhancement to making our lives easier, but what about art?
A new project, Shinseungback Kimyounghung, was launched by two South Koreans who are using computer vision to find faces in the clouds. It is similar to how children lie on their backs and point out shapes in the sky, but instead relies on computer algorithms to spot the faces.
However, while the project appears artistic on the surface, a deeper look reveals a study comparing how computers see with how humans see. What the end result will be isn't yet clear, but it's an interesting and thoughtful take on the subject nonetheless.
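Whatever face detector the artists actually use is far more sophisticated, but the pareidolia idea — flagging brightness patterns that merely *resemble* a face — can be shown with a toy template scan over a grayscale grid. Everything here (the 3x3 "two eyes and a mouth" template, the contrast threshold) is invented for illustration:

```python
def find_face_patterns(image, contrast=0.3):
    """Toy pareidolia detector: slide a 3x3 window over a 2D brightness
    grid and flag windows where the two upper-corner 'eye' cells and the
    lower-center 'mouth' cell are darker than the remaining cells."""
    hits = []
    rows, cols = len(image), len(image[0])
    for r in range(rows - 2):
        for c in range(cols - 2):
            # flatten the 3x3 window row-major: indices 0..8
            w = [image[r + i][c + j] for i in range(3) for j in range(3)]
            dark = (w[0] + w[2] + w[7]) / 3          # eyes + mouth
            bright = (w[1] + w[3] + w[4] + w[5] + w[6] + w[8]) / 6
            if bright - dark > contrast:
                hits.append((r, c))
    return hits
```

Pointed at clouds, a detector like this will happily "find" faces that no human put there — which is exactly the gap between machine seeing and human seeing that the project plays with.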
The robot, which is armed with a Kinect camera and relies on computer vision, includes a memory loaded with digital models and images of objects to aid recognition. The goal is to create a robot that not only recognizes what it has already been taught, but grows that knowledge on its own, without its database being expanded manually. It does this not just by seeing things, but by exploring its environment and interacting with the objects in it.
The Kinect camera aids HERB in three-dimensional recognition, and the location of an item is also telling. Additionally, HERB can distinguish between items that move and those that don't. As it interacts with its environment, it eventually determines whether something is an object, meaning whether it can be lifted.
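One way to picture the "things that move are objects" heuristic: track each item's observed positions across sightings and flag the ones that drift. The representation and threshold below are invented for illustration, not HERB's actual method:

```python
def is_movable(positions, tol=0.05):
    """Label an item movable if its observed (x, y) positions drift more
    than tol (in meters, say) between sightings. A cup migrates around a
    table; a wall does not — so the cup is a candidate liftable object."""
    xs = [p[0] for p in positions]
    ys = [p[1] for p in positions]
    return (max(xs) - min(xs) > tol) or (max(ys) - min(ys) > tol)
```

The appeal of this kind of cue is that it costs nothing extra: the robot learns it simply by watching the same scene over time, with no one manually labeling what is or isn't an object.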
This information can later lead to robots that do things for humans, such as bringing them items or helping to clean. And while there is still a ways to go, the possibilities seem to be endless.
As advancements in facial recognition are made, many people have become increasingly worried about protecting or maintaining their privacy. And while there are ways to hide or obscure a face, it has long been thought that makeup wasn't enough to fool the cameras.
However, researchers in Michigan and West Virginia have set out to disprove that idea, demonstrating how makeup actually can change the appearance of an individual. While head position, facial expressions, and lighting don't confuse computers, things such as natural aging or face-altering methods like plastic surgery can. Now makeup can be added to that list.
This is because makeup can change the shape and texture of a face by playing its natural contours up or down, altering the apparent quality and size of certain features, and even camouflaging identifying marks such as scars, birthmarks, moles, or tattoos. Of course, a simple application of makeup isn't enough to do the trick, but heavy layers can be.
While there is a lot of talk about the ways computer vision can save lives, in some instances, it is already doing just that.
Last month, a computer vision drowning detection system, known as Poseidon, saved a man in Australia from drowning after an epileptic seizure caused him to sink to the bottom of a pool.
And he wasn’t the first, either. In total, 25 people have been saved as a result of this system being installed in pools.
Currently, Poseidon is in more than 220 pools across America, Europe, Japan, and now Australia.
According to the company, “the Poseidon system is based on a network of overhead or underwater cameras connected to a computer equipped with the Poseidon software [that] analyzes the trajectories of the swimmers and sends an alert to lifeguards when a swimmer is in trouble.”
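The company doesn't publish its detection logic, but a trajectory-based alert of the kind described can be sketched as: flag a tracked swimmer who stays near the pool bottom and nearly motionless for too long. All thresholds and the trajectory format below are invented for illustration:

```python
def drowning_alert(trajectory, pool_depth=2.0, max_speed=0.1,
                   seconds=10, fps=1):
    """Return True when a tracked swimmer remains near the pool bottom
    and nearly motionless for `seconds`. trajectory is a list of
    (x, y, depth) samples in meters, one per frame."""
    needed = seconds * fps
    still = 0
    for prev, cur in zip(trajectory, trajectory[1:]):
        dx = cur[0] - prev[0]
        dy = cur[1] - prev[1]
        speed = (dx * dx + dy * dy) ** 0.5 * fps  # horizontal m/s
        near_bottom = cur[2] > 0.8 * pool_depth
        if near_bottom and speed < max_speed:
            still += 1  # another consecutive motionless-at-depth frame
            if still >= needed:
                return True  # alert the lifeguards
        else:
            still = 0  # swimmer moved or surfaced: reset the counter
    return False
```

The counter-reset is the important design detail: a swimmer who briefly touches the bottom and pushes off never accumulates enough consecutive frames, while a sinking, motionless body does — matching the seizure case the article describes.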