Computer Vision is an interesting technology in many ways, but perhaps one of the most notable things about it is how applicable it is, and can be, in our everyday lives. And although it’s not necessarily a “new” field, it is gaining popularity and recognition among “normal” people, meaning those who are not scientists, researchers, programmers, and so on.
The robot, HERB, is armed with a Kinect camera and relies on computer vision; its programming includes a memory loaded with digital models and images of objects to aid in recognition. The goal is a robot that not only recognizes what it has already been taught, but grows that knowledge on its own, without its database being expanded manually. It does this not just by seeing things, but by exploring the environment and interacting with the objects in it.
The Kinect camera aids HERB in three-dimensional recognition, while an object’s location is also telling. Additionally, HERB can distinguish between items that move and those that don’t. As it interacts with its environment, it eventually learns to determine whether something is an object – that is, whether it can be lifted.
This information can later lead to robots that do things for humans, such as bringing them items or helping to clean. And while there is still a ways to go, the possibilities seem to be endless.
Technology has already simplified our lives in innumerable ways, but now Sweden-based Tobii Technology is looking to up the ante.
The company specializes in “gaze interaction,” an intuitive technology that lets users navigate and control a computer not with a mouse or trackpad, but with their eyes.
With the announcement of its newest product, the Tobii REX, comes the promise of a limited number of units – 5,000, to be exact – available for purchase in the latter half of the year. One caveat: it is only compatible with machines running Windows 8.
The product, similar in appearance to the Kinect, is a bar that attaches to the computer via a USB connection. While mice and trackpads won’t immediately become obsolete, users can decide for themselves how much of their computer use is controlled by their eyes.
Kinect has been used in a variety of products and inventions since the system first became available to the public last year. But would you trust a car that relies on Kinect software to perform many of its basic functions?
That’s exactly what the Smart INSECT (which stands for Information Network Social Electricity City Transporter), Toyota’s newest electric-powered concept car, does. Using facial recognition and motion sensors, the car is capable of recognizing its owner, as well as predicting and analyzing the driver’s movements. For example, a greeting message is displayed when the owner approaches the car, and the doors open when the driver needs to enter or exit the vehicle. It can also be accessed remotely from a smartphone to lock and unlock the doors and start the air conditioning.
Would you buy a car like this if it were available?
This blog is sponsored by ImageGraphicsVideo, a company offering ComputerVision Software Development Services.
As computer vision technology continues making advances in a variety of fields, it has now extended to global industry. According to an article in the New York Times, robots are now being used in manufacturing plants and factories. This may come as no surprise to many, but the difference is that a new wave of robots is performing complex tasks – yes, that’s plural. Whereas in the past many robots had one set skill or function, robots in the new Tesla factory in Fremont, Calif., are performing as many as four specific skills. And it’s not the only place, as other manufacturers are catching on that tasks can be done more efficiently, and for less money.
This is all made possible by computer vision cameras. Years ago, they were expensive to build and implement and limited in their scope. But with the advent of technology such as Microsoft’s Kinect, robots can do and see far more than before, and at a low cost.
While this is no doubt changing the face of manufacturing, some people can’t help but wonder how it will affect the economy: it eliminates jobs, increasing the unemployment rate in locations where manufacturing is an integral part of the local economy.
What do you think? Will human factory workers one day become obsolete? Is this a good or a bad change? What are the advantages and disadvantages?
Researchers have installed Kinect cameras in a nursery which, combined with specific algorithms, are trained to observe children. The cameras identify children based on their clothing and size, then measure how active each child is compared to his or her “classmates,” highlighting those who are more or less active than average – a possible marker for autism.
Children who show signs of interacting less socially, or who lack fully developed motor skills – both indicators of autism – can then be referred to doctors who can better analyze individual cases. The program is not meant to diagnose autism outright; rather, the hope is that it will pinpoint children who may be cause for concern and catch them early.
Additionally, the creators are working to make the program more advanced, so that it can detect whether a child is capable of visually following an object, as autistic children often have trouble making eye contact, among other things.
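The “more or less active than average” comparison described above is, at its core, a simple outlier check. Here is a minimal sketch of that idea; the threshold and the per-child activity numbers are invented for illustration, and this is not the researchers’ actual algorithm.

```python
# Hedged sketch: flag children whose measured activity level differs
# markedly from the group average, as the nursery system is described
# as doing. Scores and threshold are hypothetical example values.
from statistics import mean, stdev

def flag_activity_outliers(activity, z_threshold=2.0):
    """Return names whose activity z-score exceeds the threshold."""
    values = list(activity.values())
    mu, sigma = mean(values), stdev(values)
    return sorted(
        name for name, a in activity.items()
        if sigma > 0 and abs(a - mu) / sigma > z_threshold
    )

# Hypothetical per-child scores (e.g., average motion detected per minute)
scores = {"A": 10.2, "B": 9.8, "C": 10.5, "D": 2.1, "E": 10.0, "F": 9.7}
print(flag_activity_outliers(scores))  # -> ['D'], far less active than peers
```

A real system would of course track activity over many sessions before flagging anyone, since a single quiet morning means nothing on its own.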
Already, some centers are using Kinect not to detect autism, but to help children with it learn to interact socially with others as well as better their own skills.
How else might Kinect assist in detecting or treating autism? What other medical fields might be able to use Kinect to their advantage?
In yet another example of Kinect software being used for 3D-scanning purposes, Silicon Valley startup Matterport has come out with a prototype product, modeled after the Kinect camera, which is able to scan rooms and provide 3D models in a matter of minutes.
Using depth sensors and an RGB camera, the handheld device scans and renders 20 times faster than similar scanners already on the market, according to the company’s founder. And while existing models are either larger or more expensive, the estimated cost of the Matterport product, once released, will be much lower.
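To give a sense of how a depth sensor’s readings become a 3D model, here is the standard pinhole-camera back-projection that scanners of this kind rely on. This is textbook math, not Matterport’s proprietary pipeline, and the intrinsics (fx, fy, cx, cy) are made-up Kinect-like example values.

```python
# Hedged sketch: map one depth-camera pixel to a 3D point using the
# pinhole model. A scanner repeats this for every pixel in every frame,
# then stitches the resulting point clouds into a model of the room.

def backproject(u, v, depth, fx, fy, cx, cy):
    """Map a pixel (u, v) with depth z (metres) to a 3D point (x, y, z)."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

# Example: hypothetical intrinsics, pixel at the image centre, 2 m away
point = backproject(u=320, v=240, depth=2.0, fx=525.0, fy=525.0, cx=320.0, cy=240.0)
print(point)  # (0.0, 0.0, 2.0) -- a point straight ahead of the camera
```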
The implications of a product like this spread across a variety of interests. Professionals in fields such as architecture, interior design, and real estate can use it in many ways. Law enforcement officials might be interested in the scanner for recreating crime scenes. Video game enthusiasts stand to benefit from games with improved graphics. Even the casual technology user could create a panoramic video of a vacation spot to watch later or show off to friends.
The system works by recognizing when the cat is carrying something in his mouth – such as a rodent or bird he picked up outside – and denying him access to the house if he tries to bring those things in. Ideally, it will also keep other animals – be they different cats or non-feline creatures – from entering the house. The idea is nothing new; a similar program was invented in 2004.
While products like Microsoft’s Kinect have made it easier for people to harness the potential power of computer vision, there is also a free open-source library, OpenCV, which is able to detect, recognize, and follow objects. Starting from code someone wrote for recognizing humans, Forster is reworking it to recognize his cat instead.
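For the curious, here is roughly what an OpenCV detection pipeline of this kind looks like. This is a hedged sketch, not Forster’s actual code: it uses the cat-face Haar cascade that ships with OpenCV (haarcascade_frontalcatface.xml), and the image path is hypothetical.

```python
# Hedged sketch: detect a cat face in an image with a pretrained
# OpenCV Haar cascade, then keep the largest detection.

def largest_detection(rects):
    """Pick the biggest bounding box (w*h) from detectMultiScale output."""
    return max(rects, key=lambda r: r[2] * r[3]) if len(rects) else None

def detect_cat(image_path):
    import cv2  # requires the opencv-python package

    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalcatface.xml"
    )
    gray = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2GRAY)
    # Returns (x, y, w, h) boxes; tuning these parameters per camera
    # and lighting is most of the real work.
    rects = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return largest_detection(rects)
```

Telling “cat with empty mouth” apart from “cat carrying a mouse,” as Forster’s door needs to, would take a classifier trained on his own labeled photos on top of a detector like this.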
As promising as Forster’s story is, however, computer vision is still complicated for the average user, and has a long way to go before just anyone can use it in the manner it was designed for.
Back in elementary school, there was something inherently unhip about kids who had to wear glasses. But now, the geeks are bringing a whole new meaning to the term “four eyes.” With recent advancements in technology, it seems as though glasses are the future, offering access to an entirely new way of seeing things.
CEO Vision is just one of the latest innovations in the HUD (heads-up display) realm. It is a management dashboard that works with the SAP HANA database. Users wear a special pair of glasses which displays and interprets on-the-page reports and other business-related information in real time. The information appears in 3D and functions interactively.
CEO Vision was concocted with the help of two HD cameras, a Microsoft Kinect system, and a head display. It relies upon eye movement and facial tracking, combined with hand gestures, to quickly provide detailed information to its user. This kind of technology is known as a Spatial Operating Environment (SOE), not unlike the technology imagined a decade ago in the 2002 movie “Minority Report.”
For an example of how CEO Vision works in its early stages, check out the following video:
Facial animation is about to become even more realistic.
By combining two types of technology – 3D scanning capabilities as well as a motion-capture system – Microsoft is able to zero in on the temporal and spatial resolutions of a face. The result is more detailed faces that can be used in video games, movies, and even as avatars for Kinect.
Watch the following video for more specifics on just how it works and how it can be used in the future: