Internet of Things: is there a limit?

A bit about IoT

The Internet of Things is a network of interconnected devices able to exchange data. These “things” are in fact electronic platforms such as Arduino or Raspberry Pi, connected with sensors, actuators and other electronic or mechanical components that enable them to perform the desired tasks. Each “thing” is able to operate within the existing Internet infrastructure. This technology is part of a more general class of cyber-physical systems that includes smart grids, smart homes and virtual power plants.

These “things” are already embedded in current technologies such as smartphones, thermostats, cars, watches and cameras, making it possible for devices to interact with one another and become aware of the environment around them. This enables them to make automated decisions that in the past needed human supervision. The estimated number of these devices is now around 20 billion, and it is expected to reach 30 billion by 2020.

IoT’s current applications

The benefits of using IoT are numerous, as the thousands of people who use it daily can attest. Below are just a few examples of areas where IoT can increase performance.

Agriculture

Integrating data with various IT applications helps farmers make decisions in real time based on multiple factors such as weather predictions, animal behaviour, etc. For example, a model can determine the rise in temperature at a specific time of the year and correlate it with the hatching of insects; the results help farmers take preventive action to reduce crop damage.
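To make the idea concrete, here is a minimal sketch of how such a temperature-to-hatching correlation might be computed, using a simple growing degree-day heuristic. The base temperature, forecast values and hatch threshold are all illustrative placeholders, not values from a real agronomic model.

```python
# Minimal sketch of a degree-day model for anticipating insect hatching.
# Base temperature and hatch threshold are illustrative placeholders.

def growing_degree_days(daily_min_max, base_temp=10.0):
    """Accumulate growing degree days from (min, max) daily temperatures."""
    total = 0.0
    for t_min, t_max in daily_min_max:
        avg = (t_min + t_max) / 2.0
        total += max(0.0, avg - base_temp)
    return total

# Hypothetical forecast: (min, max) daily temperatures in Celsius.
forecast = [(8, 17), (10, 20), (12, 23), (13, 24)]

HATCH_THRESHOLD = 20.0  # accumulated degree days at which eggs hatch
if growing_degree_days(forecast) >= HATCH_THRESHOLD:
    print("Hatching likely soon: schedule preventive treatment.")
```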

There are other applications that can monitor soil moisture in vineyards to manage future grape production, or control humidity and temperature levels in hay, straw, etc., to prevent microbial contamination.

Health Industry

IoT improves the monitoring of conditions of patients inside as well as outside hospitals.

For example, when patients go in for medical procedures at an Orlando-area hospital, they are tagged with a real-time location system that tracks their progress from the operating room to the recovery unit.

In Chicago, a large home-health agency has started using a health analytics program that gives it insights into patients with heart disease. The system provides daily information from each registered patient about blood pressure, weight, glucose measurements and other indicators that can influence their condition.
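The agency's actual platform is proprietary; the sketch below only illustrates the kind of daily threshold check such a program might run. The field names and alert limits are invented for illustration.

```python
# Toy daily-readings check for a remote cardiac-monitoring program.
# Field names and alert thresholds are invented for illustration.

daily_reading = {
    "patient_id": "P-1042",
    "systolic": 158,      # mmHg
    "diastolic": 96,      # mmHg
    "weight_kg": 84.7,
    "glucose_mgdl": 132,
}

ALERT_RULES = {
    "systolic": lambda v: v > 140,
    "diastolic": lambda v: v > 90,
    "glucose_mgdl": lambda v: v > 180,
}

alerts = [name for name, rule in ALERT_RULES.items()
          if rule(daily_reading[name])]
if alerts:
    print(f"Flag {daily_reading['patient_id']} for review: {alerts}")
```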

Industry

One of the biggest impacts IoT will have is in industry, as we are witnessing the fourth industrial revolution.

There are numerous applications that improve the monitoring, diagnosis, control and assessment of every part of the production process.

For example, inside chemical plants, toxic gas and oxygen levels can be monitored and controlled via IoT to ensure the safety of workers and goods.

Inside industrial and medical fridges, temperature and humidity can be controlled with sensors connected to the Internet.
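As a concrete illustration, a fridge sensor might report its readings over MQTT, a lightweight protocol widely used in IoT. This is a minimal sketch assuming the paho-mqtt Python package; the broker address, topic and sensor driver are placeholders.

```python
# Minimal sketch: a fridge sensor publishing temperature and humidity
# over MQTT. Broker address, topic and the sensor-read function are
# placeholders; assumes the paho-mqtt package is installed.
import json
import time

import paho.mqtt.client as mqtt

def read_sensor():
    # Placeholder for a real driver (e.g., a DHT22 or SHT31 sensor).
    return {"temperature_c": 4.2, "humidity_pct": 61.0}

# paho-mqtt 1.x style; 2.x also requires a CallbackAPIVersion argument.
client = mqtt.Client()
client.connect("broker.example.com", 1883)  # hypothetical broker

while True:
    payload = json.dumps({**read_sensor(), "ts": time.time()})
    client.publish("plant/fridge/17/telemetry", payload)
    time.sleep(60)  # report once per minute
```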

Any asset can be tracked indoors using active tags such as ZigBee or passive tags such as RFID.

Machines can perform auto-diagnosis and provide real-time info about their status.

Mines or other hazardous environments can be remotely monitored and data can be gathered to anticipate and react to potential threats.

Lifestyle

Think of any commercial event: IoT can help vendors understand their customers and shape the services they provide to optimize sales.

Cognitive buildings are another IoT delight. They can anticipate our basic needs before we are even aware of them: food supplies can be monitored, routines can be learned and, in a sense, the building will assist us in all things, whether we are physically present in it or not.

Possible future developments

Companies that create products will have to be prepared to update them throughout their lifespan. With such a variety of “things”, developers will launch millions of new apps.

There are areas where devices need to improve, such as longer battery life, lower hardware and operating costs and wider area coverage. In this sense, batteries with lifespans measured in years, mainstream hardware costing under $10 and emerging standards such as Narrowband IoT (NB-IoT) will probably come to dominate this area.

IoT will also need new tactics for data analysis. As data volumes increase exponentially, traditional analytics will be replaced by new tools and algorithms. The main focus of each analysis tool will be to translate masses of data into information that can be converted into concrete actions.
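What “translating masses of data into actionable information” can look like at its simplest: a sketch that reduces a raw stream of device readings to per-device summaries and flags only the ones that require action. The device names, readings and threshold are invented.

```python
# Sketch: reduce a raw reading stream to actionable per-device summaries.
# Device IDs, readings and the action threshold are invented.
from collections import defaultdict
from statistics import mean

stream = [
    ("pump-1", 71.2), ("pump-2", 64.8), ("pump-1", 79.9),
    ("pump-2", 65.1), ("pump-1", 83.4),
]

# Group readings by device over the current window.
window = defaultdict(list)
for device, temp_c in stream:
    window[device].append(temp_c)

# Turn the aggregates into a concrete action only where needed.
for device, temps in window.items():
    avg = mean(temps)
    if avg > 75.0:  # illustrative action threshold
        print(f"{device}: avg {avg:.1f} C over window -> schedule inspection")
```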

Security is also a factor that needs to be considered. As the devices evolve, so will the attacks against them. The need to encrypt communication is fairly obvious, but new challenges also have to be considered, such as impersonation of “things” or attacks meant to drain batteries. One of the key pieces still missing from IoT is standardization around security and privacy.

As soon as we find answers to all the above issues, the limits of what can be achieved through IoT will head for the sky.

Industry 4.0 Impact on Businesses

What is Industry 4.0?

 


As we enter the fourth Industrial Revolution, businesses are headed into the future with the following question: How can we make sure that transition will be smooth and swift?

Before assessing how the new technology will impact our business, we need to answer two questions: what exactly is Industry 4.0 and what needs to be changed in order to keep up with it?

The term “Industry 4.0” originates from a German industrial initiative led by Prof. Henning Kagermann. It refers to combining major digital innovations to transform industrial sectors such as electronics, industrial manufacturing, engineering and construction, healthcare, transportation and energy.

These digital innovations include Cloud Computing, the Internet of Things (IoT), Advanced Robotics, autonomous machines with a focus on artificial intelligence, and Big Data. Using these technologies in combination with large-scale machine production allows the physical and virtual worlds to merge. This integration is currently in its early stages.

Why does it matter?

 

If we look back at the last ten years, we see that the variety of products has doubled while their life span has shrunk to a quarter of what it was. There is clearly a need to rethink the production process and align it with technological progress.

The interconnection between consumers is now at a historic high, with an incredible richness of technologies helping them achieve new things. To understand the next step, i.e. to identify domains where services can be improved, machine learning plays a key role. The volume of collected data will increase exponentially, due to innovations such as quantum computing, which means these data streams will have to be correlated, filtered, grouped and merged; in a word, they will have to make sense. This is where machine learning comes into play.
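As one hedged illustration of the “grouping” step, a clustering algorithm such as k-means can separate raw machine measurements into operating regimes without any labels. The data and cluster count below are invented; the sketch assumes scikit-learn and NumPy are available.

```python
# Sketch: 'grouping' a stream of machine measurements with k-means,
# one of the simplest ways machine learning makes raw data cohere.
# Feature values and the cluster count are illustrative.
import numpy as np
from sklearn.cluster import KMeans

# Each row: (vibration, temperature) for one machine-hour, invented data.
readings = np.array([
    [0.2, 60], [0.3, 62], [0.25, 61],   # normal operation
    [1.9, 85], [2.1, 88], [2.0, 86],    # a distinct regime
])

labels = KMeans(n_clusters=2, n_init=10).fit_predict(readings)
print(labels)  # e.g. [0 0 0 1 1 1]: two operating regimes separated
```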

Analyzing and understanding huge amounts of data increases our capability to transform the services we offer, to optimize the use of our resources and, ultimately, to maintain our competitiveness in the market.

Why not simply ignore it?

 

Staying competitive is one of the main reasons to get involved. Keeping up to date with new technology will undoubtedly help make processes faster and more cost-efficient.

Falling behind in technology could mean future difficulties when trying to integrate with new developments in the manufacturing sector.

 

Where to?

To catch this wave of transformation, companies will have to stay constantly aware of innovations and assess the costs of embedding them into their production processes.

As most operations become fully digitized, bringing new products to market will take less effort. One advantage of this approach is that new offerings can be tested easily without requiring a full launch.

Because of this new interconnection capability, any product can provide insights into the customers who use it: how they operate, what workarounds they find for their problems, what delays they encounter, and so on.

Industry 4.0 will take advantage of mass customization to offer companies personalized products based on previously collected data and experience. They will be able to use technology that has already been trained on other datasets and can easily be applied to their own production line.
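In machine learning this kind of reuse is known as transfer learning. The sketch below, assuming TensorFlow/Keras is available, shows the general pattern: freeze a model pre-trained on another dataset and train only a small new head on one's own production-line images. The class count and dataset are placeholders.

```python
# Sketch: reusing a model pre-trained on other datasets for one's own
# production line (transfer learning). Class count and the dataset are
# illustrative; assumes TensorFlow/Keras is installed.
import tensorflow as tf

# Feature extractor pre-trained on ImageNet; we keep it frozen.
base = tf.keras.applications.MobileNetV2(
    weights="imagenet", include_top=False, pooling="avg")
base.trainable = False

# Only this small new head is trained on the factory's own images.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(3, activation="softmax"),  # e.g. 3 defect classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# model.fit(own_production_images, labels)  # train only the new head
```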

 

Determining familial matches with Facial Recognition

Photo courtesy of UCF

Last month, researchers at the University of Central Florida presented a new facial recognition tool at the IEEE Computer Vision and Pattern Recognition conference in Columbus, Ohio. 

While there is no shortage of facial recognition tools used by companies and governments the world over, this one is unique in that its aim is to unite or reunite children with their biological parents.

The university’s Center for Research in Computer Vision initially got to work by creating a database of more than 10,000 images of famous people, such as politicians and celebrities, and their children.

The tool works by using a specially designed algorithm that breaks the face down into sections and uses the various facial parts as points of comparison; candidate matches are then sorted according to which are the most likely.
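The researchers' exact algorithm is not reproduced here, but the general idea of comparing faces region by region and ranking candidates can be sketched as follows; the feature vectors are invented stand-ins for what a trained model would extract.

```python
# Toy illustration of region-by-region face comparison and ranking.
# Feature vectors are invented; a real system would extract them
# with a trained model rather than hard-coding them like this.
import math

def distance(a, b):
    return math.dist(a, b)  # Euclidean distance between feature vectors

# Per-region features for one child and several candidate parents.
child = {"eyes": [0.1, 0.8], "nose": [0.4, 0.2], "mouth": [0.7, 0.5]}
candidates = {
    "parent_A": {"eyes": [0.12, 0.79], "nose": [0.41, 0.25], "mouth": [0.6, 0.5]},
    "parent_B": {"eyes": [0.9, 0.1], "nose": [0.8, 0.9], "mouth": [0.2, 0.3]},
}

# Sum the per-region distances and rank candidates, best match first.
scores = {
    name: sum(distance(child[r], feats[r]) for r in child)
    for name, feats in candidates.items()
}
for name, score in sorted(scores.items(), key=lambda kv: kv[1]):
    print(f"{name}: total region distance {score:.3f} (lower = more likely)")
```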

Though software for this purpose already exists, this tool proved anywhere from 3 to 10 percent better than those programs, and it naturally surpasses the recognition capabilities of humans, who base their decisions on appearance rather than on the underlying science. It also reaffirmed that sons resemble their fathers more than their mothers, and daughters resemble their mothers more than their fathers.

What other ways could this tool be useful?

Grammar-like algorithm identifies actions in video

Photo courtesy of http://www.freeimages.co.uk/

Body language is a powerful thing, allowing us to gauge the tone and intention of a person, often without accompanying words. But is this a skill that is unique to humans, or are computers also capable of being intuitive?

To date, picking up on the subtext of a person’s movements is still not something machines can do. However, researchers at MIT and UC Irvine have developed an algorithm that can observe small actions in videos and string them together, piecing together an idea of what is occurring. Much as grammar helps create and connect ideas into complete thoughts, the algorithm is capable not only of analyzing what actions are taking place, but of guessing what movements will come next.
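A toy way to see the “grammar” analogy: count which small action tends to follow which in observed sequences, then predict the likeliest continuation. Real systems parse video; the action symbols below are invented.

```python
# Toy sketch of the 'grammar' idea: learn which small action tends to
# follow which, then predict the next action. The sequences are invented.
from collections import Counter, defaultdict

sequences = [
    ["reach", "grasp", "lift", "place"],
    ["reach", "grasp", "lift", "pour"],
    ["reach", "grasp", "lift", "place"],
]

# Count observed transitions between consecutive actions.
follows = defaultdict(Counter)
for seq in sequences:
    for current, nxt in zip(seq, seq[1:]):
        follows[current][nxt] += 1

def predict_next(action):
    """Most frequent continuation seen after this action."""
    return follows[action].most_common(1)[0][0]

print(predict_next("lift"))  # -> 'place' (seen twice vs. 'pour' once)
```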

There are a handful of ways this technology could benefit humans. For example, it could help an athlete practice his or her form and technique. Researchers also posit that it could be useful in a future where humans and robots share the same workspace and do similar tasks.

But with any technological advancement comes the question of cost: not money, but privacy. In this case, would the positives outweigh the negatives? In what ways can you envision this tool being helpful for your everyday tasks?

 

 

Residents enter their buildings using Facial Recognition

Image courtesy of FST21

Apartment living has its pros and cons, but one thing many renters can relate to is having to call a locksmith and pay high fees for replacing lost or forgotten keys. However, residents at Manhattan’s Knickerbocker Village don’t have to worry about that.

The 12-building complex, home to 1,600 apartments, recently installed the FST21 SafeRise system.

It works like this: residents are photographed, and a series of body measurements and movements is also recorded. This information is then stored in a system that recognizes residents when they approach an entrance, immediately allowing them to enter.
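FST21 has not published its internals, but the enrollment-and-recognition flow it describes might look roughly like this sketch, where the embedding function and similarity threshold are placeholders for a real face-recognition model.

```python
# Sketch of an enroll-then-recognize entry flow. The embedding function
# and similarity threshold are placeholders for a real face-recognition
# model; this only shows the decision logic.
import math

enrolled = {}  # resident name -> face embedding captured at enrollment

def embed(image):
    # Placeholder: a real system derives a feature vector from the face.
    return image

def enroll(name, image):
    enrolled[name] = embed(image)

def try_unlock(image, threshold=0.35):
    probe = embed(image)
    for name, ref in enrolled.items():
        if math.dist(probe, ref) < threshold:
            return f"Door opens for {name}"
    return "No match: route to digital doorman / security desk"

enroll("resident_17", [0.2, 0.9, 0.4])
print(try_unlock([0.21, 0.88, 0.41]))  # close enough -> door opens
```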

In addition to the facial recognition technology, the system also includes a ‘digital doorman’ that allows visitors to contact residents via an intercom, or to contact the security desk to ask permission to enter.

What are your thoughts on this technology? Would you feel safer knowing your building used facial recognition?

Image Recognition allows fish to navigate

There are countless practical applications of Image Recognition technology, but for every helpful use, there are plenty of “just because” utilizations of ComputerVision. One such example comes from Studio Diip, a Dutch company that has worked on projects ranging from vegetable recognition to automated card recognition, and which has used technology to allow fish in a tank to navigate a vehicle.

How does it work? In short, a camera trained on the fish watches it swim in its tank, analyzes this movement to determine the direction it is heading, and then directs a car (mounted to the tank) to head in that direction. It’s not much of a scientific breakthrough, but it’s a fun idea.
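A plausible version of that loop, sketched with OpenCV: threshold each frame to find the fish, take the centroid of the resulting blob, and steer toward the side of the frame it occupies. The threshold value and drive function are scene-specific placeholders.

```python
# Sketch of the fish-tracking loop: find the fish in each frame, compare
# its centroid to the frame center, and steer accordingly. Assumes
# OpenCV 4; the threshold and the drive function are placeholders.
import cv2

def steer(direction):
    print("drive:", direction)  # placeholder for real motor commands

cap = cv2.VideoCapture(0)  # camera watching the tank
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Dark fish on a light tank bottom; the value 80 is scene-specific.
    _, mask = cv2.threshold(gray, 80, 255, cv2.THRESH_BINARY_INV)
    m = cv2.moments(mask)
    if m["m00"] > 0:
        cx = m["m10"] / m["m00"]        # fish centroid, x coordinate
        center = frame.shape[1] / 2
        steer("left" if cx < center else "right")
```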

How might this technology be applied in other ways? In what way can ComputerVision help improve your product?

ComputerVision steps up soldiers’ game

Photo by Bill Jamieson

ComputerVision has long been of interest to and utilized by the United States government and armed forces, but now it appears as though the army is using this technology to help transform soldiers into expert marksmen.

Tracking Point, a Texas-based startup that specializes in making precision-guided firearms, sold a number of “scope and trigger” kits for use on XM 2010 sniper rifles. The technology allows a shooter to pinpoint and “tag” a target, then uses object-tracking technology, combined with a variety of variables (temperature, distance, etc.), to determine the most effective place to fire. The trigger then stays locked until the person controlling the weapon has lined up the shot correctly, at which point it can be pulled.
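Stripped of the ballistics, the “locked until aligned” behavior is just a tolerance check between the current aim point and the tracked tag, as in this deliberately abstract sketch with invented coordinates.

```python
# Abstract sketch of the 'locked until aligned' idea: enable only when
# the aim point is within tolerance of the tracked tag. All values are
# invented; this is decision logic, not ballistics.
import math

def aligned(aim, tag, tolerance=2.0):
    """True when the aim point is within tolerance of the tracked tag."""
    return math.dist(aim, tag) <= tolerance

print(aligned((101.0, 48.5), (100.0, 49.0)))  # True: within tolerance
print(aligned((90.0, 40.0), (100.0, 49.0)))   # False: stays locked
```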

To learn more about this technology and how it is implemented, watch the following video:

Computer Vision aids flow cytometry

Photo courtesy of the UCSD Jacobs School of Engineering

Engineers at the University of California, San Diego, are using Computer Vision as a means of sorting cells, and thus far have been able to do so 38 times faster than before. This process of counting and sorting cells is known as flow cytometry.

Analyzing the cells helps categorize them by size, shape and structure, and can also distinguish whether they are benign or malignant, information that could be useful for clinical studies and stem cell characterization.
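UCSD's pipeline itself is not public in code form, but extracting size and shape features from cell images is standard contour analysis, sketched here with OpenCV; the file name is a placeholder and the measurements are not clinical criteria.

```python
# Sketch: deriving size and shape features from a cell image with
# OpenCV contours. The file name is a placeholder; nothing here is a
# clinical classification rule.
import math
import cv2

img = cv2.imread("cells.png", cv2.IMREAD_GRAYSCALE)  # hypothetical frame
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
# OpenCV 4 returns (contours, hierarchy).
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)

for c in contours:
    area = cv2.contourArea(c)
    perimeter = cv2.arcLength(c, True)
    if perimeter == 0:
        continue
    circularity = 4 * math.pi * area / perimeter ** 2  # 1.0 = perfect circle
    print(f"area={area:.0f}px  circularity={circularity:.2f}")
```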

While this type of research was happening before, it has traditionally taken a lot of time. Now, a camera mounted on a microscope can analyze the information far faster, cutting the time to observe and analyze a single frame from between 0.4 and 10 seconds down to between 11.94 and 151.7 milliseconds.

In what ways do you see this technology making advancements in the medical and clinical world? How else can you imagine it benefitting science?

Pinterest and Getty Images join forces with Image Recognition

Image courtesy of Pinterest

Since its launch in 2010, Pinterest has been at the center of a variety of copyright issues, mostly pertaining to the unauthorized use of copyrighted material by users. The biggest problem in all of this is that most users are unknowingly violating copyright laws, which makes it harder to prosecute them. But recently, it seems Pinterest has found a fix for this quandary.

Rather than fighting one another, Pinterest has teamed up with (read: paid) Getty Images, a company that owns the rights to millions of images, many of which are repinned on Pinterest without proper credit. Under the agreement, image recognition software will now be used to identify art and photos that belong to Getty Images and tag them with metadata. In this way, the artists will receive credit, Pinterest will avoid legal issues with Getty, and users will be protected as well. It’s a win-win-win.
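Neither company has disclosed the exact matching method; one common technique for this task is perceptual hashing, sketched below using the Pillow and imagehash packages. The file names, catalog and distance cutoff are placeholders.

```python
# Sketch of one common way to match pinned images against a rights
# holder's catalog: perceptual hashing. The actual Getty/Pinterest
# system is not public; file names here are placeholders. Assumes the
# Pillow and imagehash packages are installed.
from PIL import Image
import imagehash

catalog = {
    "getty_12345": imagehash.phash(Image.open("catalog/12345.jpg")),
}

pinned = imagehash.phash(Image.open("pinned_photo.jpg"))

for image_id, ref_hash in catalog.items():
    # A small Hamming distance between hashes suggests the same image,
    # even after resizing or recompression.
    if pinned - ref_hash <= 8:
        print(f"Match: tag pin with metadata for {image_id}")
```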

Do you think this is a good fix? How else might image recognition software be used to give credit where credit is due?

 

VISAPP Computer Vision conference extends submission deadline

Computer Vision is an interesting kind of technology in many ways, but perhaps one of the most notable things about it is how applicable it is, and can be, in our everyday lives. And although it’s not necessarily a “new” field, it is gaining popularity and recognition in the lives of “normal” people, meaning those who are not scientists, researchers or programmers.

At the start of next year, Lisbon, Portugal, will play host to a conference on this very topic, highlighting the work being done in the field and the emerging technologies that can help Computer Vision help people. VISAPP 2014, the 9th International Conference on Computer Vision Theory and Applications, is currently accepting paper submissions, with the deadline extended to September 18.