Plan to speed up border crossings in Hong Kong

Courtesy South China Morning Post, Photo: Edward Wong

Phila Siu, writing for The South China Morning Post, reports that Hong Kong's Immigration Department is seeking approval to add facial recognition to the fingerprint checks performed on incoming visitors.

The new technology is part of a revamp of systems at control points expected to speed up processing time.

Automated border clearance is a fairly mature technology: Australia, Britain, Germany and Portugal have already adopted similar facial recognition systems.

The revamp anticipates that the proportion of travelers entering Hong Kong on e-travel documents will rise from 60% in 2016 to 90% by 2020.

The department expects the upgrade to relieve the need for additional stations and free up that manpower for other duties. For more info about projected costs and the debate swirling over how this technology should be used, see the original article here.
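For a sense of how such a combined check might work in software, here is a minimal sketch in Python. The score scale, thresholds, and decision rule are all hypothetical; the article does not describe the department's actual matching logic.

```python
# A minimal sketch of a two-factor e-channel decision. Thresholds and
# score scales are hypothetical; the real gate logic is not published.

def clear_traveller(face_score: float, fingerprint_score: float,
                    face_threshold: float = 0.90,
                    fingerprint_threshold: float = 0.95) -> bool:
    """Admit the traveller only if BOTH biometric matches succeed.

    Scores are assumed to be similarity values in [0, 1] produced by
    separate face and fingerprint matchers.
    """
    return (face_score >= face_threshold
            and fingerprint_score >= fingerprint_threshold)

# Example: a strong fingerprint match alone is not enough.
print(clear_traveller(face_score=0.85, fingerprint_score=0.99))  # False
print(clear_traveller(face_score=0.96, fingerprint_score=0.99))  # True
```

Requiring both factors to pass trades a little convenience for a much lower false-accept rate, which is presumably the point of layering face recognition on top of the existing fingerprint check.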

With a facial recognition technology custom designed for your business, what changes in workflow would you envision? Add your comment below.

This blog is sponsored by ImageGraphicsVideo, a company offering ComputerVision Software Development Services.

Computer Vision Designed for Safe & Speedy Landmine Removal

Researchers Roger Achkar and Michel Owayjan at the American University of Science & Technology, Beirut, Lebanon have published the simulation results of a computer vision system they are developing that uses autonomous robots to detect landmines.

Landmines plague over 60 countries. A century ago, 80% of landmine victims were military personnel; today, 90% are civilians. While anti-personnel landmines are planted underground, anti-tank landmines sit above ground and are therefore visible to a computer vision system.

Courtesy of American University of Science & Technology, Beirut, Lebanon

Landmines as stored in an Artificial Neural Network for Machine Learning Purposes
Landmines to be detected and classified

Landmine Models as Extracted in a Computer Vision Simulation
Landmines extracted during simulation

Using an autonomous robot equipped with computer vision overcomes several drawbacks of existing anti-tank minesweeping techniques.

Manual minesweeping, in which humans sweep with metal detectors, is dangerous and slow, and it captures virtually no information to aid future efforts.

Mechanized minesweeping is faster and eliminates the human safety concern; however, it cannot access the variety of locations humans can.

An autonomous robot with computer vision improves on mechanized minesweeping by using image processing and machine learning to continuously inform the next sweep.

The robot scans the terrain to see if a mine exists and uses a camera to capture the scanned area. Next, it processes the image digitally to minimize noise and bring out the features of the landmines.

Once images are enhanced, it accesses an Artificial Neural Network previously trained with images of known landmines. This enables the system to identify, recognize and classify them by make and model with a 90% success rate—even when the images captured show landmines rotated differently or partly hidden.
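To make the two-stage pipeline concrete, here is a minimal Python sketch of the idea: enhance a captured image, then hand it to a trained neural network. The filter choices, image size, and classifier are stand-ins, and the training data below is random noise purely so the sketch runs end to end; the paper does not publish its exact parameters or architecture.

```python
# A sketch of the enhance-then-classify pipeline, with stand-in data.
import cv2
import numpy as np
from sklearn.neural_network import MLPClassifier

def enhance(gray: np.ndarray) -> np.ndarray:
    """Denoise, boost contrast, and flatten to a fixed-size feature vector."""
    denoised = cv2.GaussianBlur(gray, (5, 5), 0)   # suppress sensor noise
    contrasted = cv2.equalizeHist(denoised)        # bring out mine features
    resized = cv2.resize(contrasted, (32, 32))     # fixed input size for the ANN
    return resized.flatten().astype(np.float32) / 255.0

rng = np.random.default_rng(0)

# Stand-in training data: a real system would use the paper's labeled
# images of known landmines, including rotated and partly hidden views.
X_train = np.stack([enhance(rng.integers(0, 256, (240, 320), dtype=np.uint8))
                    for _ in range(30)])
y_train = rng.integers(0, 3, size=30)              # three hypothetical mine models

classifier = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
classifier.fit(X_train, y_train)

capture = rng.integers(0, 256, (240, 320), dtype=np.uint8)  # simulated scan
print("Predicted mine class:", classifier.predict(enhance(capture)[None, :])[0])
```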

See the full article in the September 2012 issue of the International Journal of Artificial Intelligence and Applications (IJAIA) here.

This blog is sponsored by ImageGraphicsVideo, a company offering ComputerVision Software Development Services.

Infrared camera aimed at individuals drunk in public

Photo courtesy of flickr user _sml

Although talk of facial recognition has mostly centered on identifying criminal suspects or recognizing individuals for industry purposes, a new use has been found for this technology: identifying drunk people.

A paper published in the International Journal of Electronic Security and Digital Forensics discusses a new infrared-camera algorithm developed at the University of Patras in Greece. It focuses on the heat dispersion on the faces of people in a crowd, paying attention to where blood vessels dilate at the skin’s surface. Drunk individuals tend to have more heat on their noses and less on their foreheads, information that could be beneficial to law enforcement officers in the field who are trying to detect from afar whether or not someone is under the influence of alcohol.
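As a rough illustration of the idea, the sketch below compares the average warmth of a nose region against a forehead region in a simulated thermal image. The region coordinates, temperatures, and threshold are all hypothetical; the published algorithm models blood-vessel heat dispersion in far more detail.

```python
# A toy version of the nose-vs-forehead comparison, on simulated data.
import numpy as np

def drunk_score(thermal_face: np.ndarray) -> float:
    """Return nose warmth minus forehead warmth (higher suggests intoxication)."""
    h, w = thermal_face.shape
    forehead = thermal_face[0:h // 4, w // 4:3 * w // 4]       # top of the face
    nose = thermal_face[h // 2:3 * h // 4, w // 3:2 * w // 3]  # centre of the face
    return float(nose.mean() - forehead.mean())

rng = np.random.default_rng(1)
face = rng.normal(33.0, 0.5, size=(120, 80))  # simulated skin temperatures, deg C
face[60:90, 26:53] += 1.5                     # artificially warmed nose region

print("Score:", round(drunk_score(face), 2))
print("Flag for inspection:", drunk_score(face) > 1.0)  # hypothetical threshold
```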

This blog is sponsored by ImageGraphicsVideo, a company offering ComputerVision Software Development Services.

ComputerVision and brainwaves merge in threat-detection binoculars

Image courtesy of DARPA

While soldiers are trained to detect and recognize threats while on duty, certain attacks, ambushes, and other dangers are nearly impossible to spot before it's too late. In fact, according to an article on Forbes.com, soldiers surveying scenes while on duty regularly miss 47 percent of potential dangers.

However, the Defense Advanced Research Projects Agency (DARPA), a branch of the United States Department of Defense, has spent the better part of the last decade working on technology to assist soldiers.

The end result? CT2WS. This system, known officially as the Cognitive Technology Threat Warning System, was first proposed by the Pentagon in 2007. Essentially, it is a pair of binoculars that not only sees far and wide but also interacts with the soldier's brain, and it is able to detect 91 percent of threats.

How does it work? The science is complicated, but essentially there are two parts: a video camera with a wide field of view that provides around 10 images per second, and an electroencephalogram (EEG) cap on the soldier's head that monitors and processes brain waves. Combined, the two technologies flag threats faster than conscious human processing, picking up on dangers the soldier's brain has registered before he is consciously aware of them.
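To illustrate the fusion idea, here is a minimal sketch in which the camera proposes salient frames and a spike in the operator's EEG shortly afterwards promotes a candidate to a threat alert. Every signal, latency, and threshold here is a simulated stand-in; DARPA has not published the system's actual fusion algorithm.

```python
# A toy camera+EEG fusion loop on simulated signals.
import numpy as np

FRAME_RATE = 10                # article: about 10 images per second
LATENCY_FRAMES = 3             # assumed ~0.3 s brain-response delay

rng = np.random.default_rng(2)
n = 50
frame_scores = rng.random(n)           # simulated per-frame visual-change scores
eeg = rng.normal(0.0, 1.0, n)          # simulated EEG sample aligned to frames

frame_scores[20] = 0.95                # a salient object appears at t = 2.0 s
eeg[20 + LATENCY_FRAMES] = 6.0         # the operator's brain responds shortly after

for i in range(n - LATENCY_FRAMES):
    camera_flags = frame_scores[i] > 0.9          # camera proposes a candidate
    brain_flags = eeg[i + LATENCY_FRAMES] > 4.0   # EEG shows an evoked response
    if camera_flags and brain_flags:
        print(f"Threat alert at t = {i / FRAME_RATE:.1f} s")
```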

This blog is sponsored by ImageGraphicsVideo, a company offering ComputerVision Software Development Services.

Genetic screening in worms made easier

Researchers at Georgia Tech have developed technology that can detect and determine differences between worms used in genetic research.

According to the recently published findings, worms are one of many tiny multi-cellular living beings that act as effective test subjects for researching genetics.

Using artificial intelligence combined with advanced image processing, scientists can inspect and process these worms, known as Caenorhabditis elegans, more quickly and efficiently than in the past. A camera records 3D images of the worms and compares them against a model of abnormal worms; the machine can not only tell the difference but learns from it, teaching itself as it goes.
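That "teaching itself as it goes" loop can be sketched with an incrementally trained classifier: predict a label for each worm, then fold the confirmed label back into the model. The five-number feature vectors and random data below are stand-ins for the much richer 3D image features the Georgia Tech system actually uses.

```python
# A sketch of an online normal/abnormal classifier, on stand-in features.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(3)
model = SGDClassifier(loss="log_loss", random_state=0)
classes = np.array([0, 1])                    # 0 = normal, 1 = abnormal

# Seed the model with a few labelled examples.
X_seed = rng.normal(size=(10, 5))             # 5 hypothetical shape features
y_seed = np.array([0, 1] * 5)
model.partial_fit(X_seed, y_seed, classes=classes)

for _ in range(100):                          # stream of new worms
    features = rng.normal(size=(1, 5))        # stand-in for 3D image features
    prediction = model.predict(features)[0]
    true_label = rng.integers(0, 2)           # in practice, a confirmed label
    model.partial_fit(features, [true_label]) # the system learns from each worm

print("Last prediction:", prediction)
```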

By picking out distinguishing features better than humans can on their own, this technology highlights genetic mutations between the worms, which could be a key to unlocking further advances in genetic research and, in years to come, testing in humans.

ComputerVision used to identify marine life

Back in 2006, scientists at the Universities Space Research Association found a way to adapt pattern-recognition software used in space research to identify whale sharks.

These sharks boast a unique pattern of white spots on dark skin, which lends itself to the kind of "blob extraction" that astrophysicists use to identify stars and other bodies in space.

That success paved the way for other types of identification, this time for dolphins. Researchers had long identified individual dolphins manually from photographs, based on the marks on their dorsal fins, but even this process was too time consuming.

Recently, however, a computer science professor at Eckerd College has, with the help of her students, created the program DARWIN (Digital Analysis and Recognition of Whale Images on a Network). It automates the work using a combination of computer vision and signal-processing techniques, greatly speeding up the process.

After extracting an outline of a bottlenose dolphin's fin, the system builds up a database and, using computer-vision algorithms, matches unknown fins against identified fins in the database. Candidate matches are then displayed in ranked order, from most to least probable.
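A minimal sketch of DARWIN-style matching might look like the following: reduce each fin outline to a fixed-length signature, then rank catalogued individuals by how closely their signatures match an unknown fin. The outlines here are simulated, and the signature function is a generic shape descriptor assumed for illustration, not the program's actual algorithm.

```python
# Rank catalogued fins against an unknown fin by outline signature.
import numpy as np

def signature(outline: np.ndarray, n_points: int = 64) -> np.ndarray:
    """Resample an outline's edge-to-centroid distances to a fixed length."""
    centroid = outline.mean(axis=0)
    distances = np.linalg.norm(outline - centroid, axis=1)
    resampled = np.interp(np.linspace(0, len(distances) - 1, n_points),
                          np.arange(len(distances)), distances)
    return resampled / resampled.max()        # scale-invariant

rng = np.random.default_rng(4)
database = {f"dolphin_{i}": rng.normal(size=(80, 2)).cumsum(axis=0)
            for i in range(5)}                # stand-in catalogued outlines

# A noisy re-sighting of a known individual.
unknown = database["dolphin_2"] + rng.normal(0, 0.05, size=(80, 2))

ranked = sorted(database,
                key=lambda name: np.linalg.norm(signature(database[name])
                                                - signature(unknown)))
print("Most likely matches, best first:", ranked)
```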

This is an interesting development for sea animals, because the identification process is faster and more reliable. But what practical applications might be involved? What benefits will researchers of marine life have from programs like this? And what applications can this have in other realms of computer vision and identification?

Automated baked-goods identification can benefit businesses

Researchers at the University of Hyogo, alongside Brain Corporation, have created a computer-vision system that identifies individual baked goods in a second.

The system had its first test run at a bakery in Tokyo, where it is already paying off: new employees who haven't yet learned the ropes, and part-timers who don't know the name of every kind of baked good, can still work the cash registers. When lines are long, it also speeds up checkout, making the entire operation run more efficiently and smoothly.

While the system works relatively well, there are still some kinks to work out. For example, baked goods are easily distinguished by their shapes and toppings, but when it comes to sandwiches, the machine has a tougher time telling them apart.

Luckily, there are other companies out there with the technology to build even better versions of this same system. For example, the people at ImageGraphicsVideo can build a similar system with a learning capability: whoever uses the system can input, or "teach," new items to the computer, and can point out when items are incorrectly identified, so the program learns and avoids making the same mistakes in the future.
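Such a teachable classifier could be sketched as a nearest-neighbour model with a teach method for new products and a correct method for fixing mistakes. The two-number feature vectors and item names below are hypothetical stand-ins for whatever a vision front end would actually extract from the tray image.

```python
# A teachable nearest-neighbour checkout classifier, on stand-in features.
import numpy as np

class TeachableRegister:
    def __init__(self):
        self.examples: list[tuple[np.ndarray, str]] = []

    def teach(self, features: np.ndarray, name: str) -> None:
        """Add a labelled example for a new (or existing) product."""
        self.examples.append((features, name))

    def identify(self, features: np.ndarray) -> str:
        """Return the name of the nearest stored example."""
        return min(self.examples,
                   key=lambda ex: np.linalg.norm(ex[0] - features))[1]

    def correct(self, features: np.ndarray, true_name: str) -> None:
        """Learn from a mistake so the same item is recognised next time."""
        self.teach(features, true_name)

register = TeachableRegister()
register.teach(np.array([0.9, 0.1]), "croissant")   # hypothetical features
register.teach(np.array([0.2, 0.8]), "melon pan")

tray_item = np.array([0.3, 0.7])
print(register.identify(tray_item))                 # -> "melon pan"
register.correct(tray_item, "sandwich")             # cashier fixes a mistake
print(register.identify(tray_item))                 # -> "sandwich"
```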

Ears might be the best biometric

Amid research showing that iris scans are fallible, other research suggests that our ears might instead be the best biometric yet.

Certainly, ears aren’t perfect, but unlike other facial features, they aren’t as prone to changing over time.

Image courtesy of MethodShop

Mark S. Nixon, a professor of computer vision at the University of Southampton's School of Electronics and Computer Science, lays out the findings in a paper entitled "A Survey on Ear Biometrics," which details many of the advantages ears have over other biometric identifiers, including being less intrusive to capture.

In fact, those concerned about violation of privacy associated with facial recognition databases, eye scans, or fingerprint matching, might find ear biometrics to be much more appealing as a way of identifying and matching.

Research finds iris scanning unreliable over time

It is standard fare in action movies involving secrets and spies to see the protagonist trick iris recognition scanners into allowing access to off-limits vaults or restricted areas. Unfortunately, the ineffectiveness of these machines may not be so far-fetched, given the recently uncovered fallibility of such security precautions.

A research team at the University of Notre Dame in Indiana discovered this when matching more than 20,000 images of 644 different irises taken over a three-year period. The result? Photos taken a month apart still matched reliably, but across the three-year gap the false non-match rate rose by 153% (so a rate of, say, 1% would climb to roughly 2.5%). In other words, irises change over time.

According to the team, this is something that could become worse if current technology isn’t improved or updated, and could result in either legitimate persons being locked out of a system or others tricking security at checkpoints.

Right now, these findings point to the idea that images of irises should be regularly updated. Additionally, new technology should be created to take these changing irises into account.

Honeybees informing ComputerVision

A project conducted by RMIT's school of media and communication has found that the brains of honey bees are capable of tackling complex visual problems, as well as creating and applying rules to adapt to specific scenarios.

This information was published earlier this month in the Proceedings of the National Academy of Sciences (PNAS), with the explanation that “the miniature brains of honey bees rapidly learn to master two abstract concepts simultaneously, one based on spatial relationships (above/below and right/left) and another based on the perception of difference.”


Image courtesy of iStockphoto/Florin Tirlea

An example of this in humans is the ability to encounter a situation, such as coming up to an intersection, and acting accordingly. This involves a range of realizations and responses, such as observing the traffic light, gauging the speed of vehicles and looking out for pedestrians or bicyclists that might also obstruct the flow of traffic. Based on the information being fed to our brains, we are able to make split-second decisions, something which computers aren’t yet fully capable of doing. This is because it involves processing more than one kind of complex task, and these tasks don’t appear to have anything in common, in the “mind” of a computer.

However, that’s not to say that computers can’t eventually learn this skill, too. By studying the brains of honey bees, researchers hope to learn how this works in them, and then apply those same things to computers, allowing them to process visual inputs efficiently and effectively.