Computer Vision aids flow cytometry

Photo courtesy of the UCSD Jacobs School of Engineering

Engineers at the University of California, San Diego, are using computer vision to sort cells, and so far have been able to do so 38 times faster than before. This process of counting and sorting cells is known as flow cytometry.

Analyzing the cells helps categorize them by size, shape, and structure, and can also distinguish benign cells from malignant ones, information that could be useful for clinical studies and stem cell characterization.

While this type of research was happening before, it has traditionally been very time-consuming. Now, a camera mounted on a microscope can analyze information far faster, cutting the time to observe and analyze a single frame from between 0.4 and 10 seconds down to between 11.94 and 151.7 milliseconds.
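As a rough sanity check on those figures, the per-frame times quoted above imply speedups on either side of the reported 38x. Pairing the range endpoints this way is an assumption made purely for illustration:

```python
# Per-frame analysis times quoted in the article.
old_s = (0.4, 10.0)          # seconds per frame, previous approach
new_ms = (11.94, 151.7)      # milliseconds per frame, camera-based approach

# Speedup at each end of the quoted ranges (endpoint pairing assumed).
speedups = [old * 1000 / new for old, new in zip(old_s, new_ms)]
print(speedups)  # roughly 33x to 66x, bracketing the reported 38x figure
```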

In what ways do you see this technology making advancements in the medical and clinical world? How else can you imagine it benefitting science?

Meet the robot phlebotomist

Most everyone can recall a time when doctors or nurses have needed to draw blood or give shots and had trouble finding the proper veins. A company in California is all too familiar with this scenario, and in an effort to make the process of drawing blood more efficient, has created Veebot.

Veebot is essentially a robot phlebotomist. Relying on infrared lighting, an infrared camera, image analysis software, and ultrasound technology, the robot is able to locate the best vein for taking blood. All of this is checked and double-checked in the span of a few seconds in order to ensure that the first needle prick is successful.

Currently, Veebot has correctly identified the best vein about 83% of the time, which is better than the human average. Once it reaches a 90% success rate, the company hopes to use the machine in clinical trials.

To see how this machine works, watch the video below:

Computer Vision detects heart rate

Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) are now using computer vision to determine heart rate. Using an algorithm that analyzes small movements of the head, researchers are able to connect those movements to the rush of blood caused by the beating heart, in turn determining the heart rate. This opens doors for testing those who, for one reason or another, may not be the best candidates for EKG testing.
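As an illustration of the idea (not CSAIL's actual algorithm), a tracked head point's vertical trajectory can be reduced to a heart rate by finding the dominant frequency in the band where pulse-driven motion lives:

```python
import numpy as np

def heart_rate_from_head_motion(y, fps):
    """Estimate heart rate in BPM from a tracked head point's vertical
    trajectory, assuming pulse-driven motion dominates the 0.75-2.5 Hz band."""
    y = y - np.mean(y)                             # drop the DC offset
    spectrum = np.abs(np.fft.rfft(y))
    freqs = np.fft.rfftfreq(len(y), d=1.0 / fps)
    band = (freqs >= 0.75) & (freqs <= 2.5)        # roughly 45-150 BPM
    peak = freqs[band][np.argmax(spectrum[band])]  # dominant pulse frequency
    return peak * 60.0

# Synthetic check: a 1.2 Hz head oscillation (72 BPM) buried in noise.
fps = 30
t = np.arange(0, 10, 1.0 / fps)
rng = np.random.default_rng(0)
y = 0.2 * np.sin(2 * np.pi * 1.2 * t) + 0.05 * rng.standard_normal(t.size)
print(round(heart_rate_from_head_motion(y, fps)))  # 72
```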

Computer vision algorithm helps identify tumors

Image courtesy of Berkeley Labs

Add cancer to the list of medical problems computer vision can be used in diagnosing and treating.

At the Lawrence Berkeley National Laboratory in California, researchers have created a program that analyzes images of tumors (of which there are thousands, stored in the database of The Cancer Genome Atlas project). This program relies on an algorithm that sorts through image sets and helps identify tumor subtypes – a process which is not so easy considering no two tumors are alike.

After sorting through the images, the program categorizes them by subtype and organizational structure, then matches those results with clinical data that indicate how a patient with a given tumor is likely to respond to treatment.
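One simple way to sort image feature vectors into subtypes is k-means clustering, sketched here purely as an illustration; the Berkeley program's actual algorithm is more sophisticated:

```python
import numpy as np

def kmeans(features, k, iters=50, seed=0):
    """Minimal k-means: group image feature vectors into k subtypes."""
    rng = np.random.default_rng(seed)
    centers = features[rng.choice(len(features), k, replace=False)]
    for _ in range(iters):
        # Assign each image to its nearest center.
        d = np.linalg.norm(features[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        # Move each center to the mean of its assigned images.
        centers = np.array([features[labels == j].mean(axis=0) for j in range(k)])
    return labels

# Toy data: two well-separated "subtypes" of 2-D image features.
a = np.random.default_rng(1).normal(0, 0.1, (20, 2))
b = np.random.default_rng(2).normal(5, 0.1, (20, 2))
labels = kmeans(np.vstack([a, b]), k=2)
# Each group should land entirely in its own cluster.
```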

For more on what this program can do and how it can be used, see the press release here.


Fighting Obesity with Outdoor Webcams and Crowdsourcing


Public outdoor webcams help gauge use of infrastructural amenities. Credit: Joe Angeles. Courtesy of Medical Xpress.

Leslie Gibson McCarthy at Medical Xpress reports on a study out of Washington University in St. Louis, published in the American Journal of Preventive Medicine, entitled “Emerging Technologies: Webcams and Crowd-Sourcing to Identify Active Transportation.”

The goal of the study was to measure the use of man-made spaces like parks, trails and bike paths to better inform public health policy and promote a more physically active, less obese citizenry.

According to study co-author J. Aaron Hipp, PhD, assistant professor of public health at the Brown School, the findings suggest that webcams and crowdsourcing can help urban planners design public spaces that foster more physical activity. In addition, computer vision experts can help improve public safety by incorporating machine learning.

The study used publicly available outdoor webcams and crowdsourcing to count people, bikes and cars. These two approaches helped overcome common obstacles to getting accurate counts such as rain, fog and crowded conditions.

For the webcam imagery, researchers relied on the web tool AMOS (Archive of Many Outdoor Scenes), developed by study co-author Robert Pless, PhD, professor in the WUSTL School of Engineering & Applied Science. AMOS crawls for publicly available outdoor webcams and timestamps one image per camera every 30 minutes.

Using the Amazon Mechanical Turk website, crowdsourced workers marked the pedestrians, bicycles and vehicles in images captured over a year in which a bike lane was installed at an intersection in Washington, DC.

What was the impact of installing the bike lane in this area? Cycling quadrupled.

Ubiquitous webcams and crowdsourced data enable researchers to pinpoint how changes in environment, design, and policy affect people’s actual usage of public amenities. As such, communities can better allocate infrastructure budgets in service of public health goals.

Perhaps, over time, it’s possible we’ll see computer vision and machine learning pick up more and more of what gets crowdsourced today.

With this in mind, how do you see webcams, crowdsourcing and computer vision serving your needs?

Computer Vision recognizes signs of autism in infants

Photo courtesy of MIT Technology Review

The MIT Technology Review reports on the use of a computer vision system that is helping doctors diagnose autism in infants by age 2 or 3 instead of age 5.

Earlier diagnosis makes it possible to teach social and communication skills before other, maladaptive patterns become ingrained in a child’s behavior.

Diagnosing autism in children at younger ages requires a psychologist with expertise in autism to monitor the child closely for long periods of time.

Even when a child or infant’s behavior can be recorded on video, it takes hours of expert analysis, frame by frame, to arrive at a diagnosis.

Now, Jordan Hashemi and a team at the University of Minnesota are using computer vision to identify those at higher risk for autism earlier.

For example, child psychologists have developed several tests that screen for the delayed visual tracking seen in infants with autism, such as shaking a rattle first on one side of the child’s head and then on the other.

To support this and other tests, the custom-developed computer vision system makes very fine assessments, such as monitoring head movement along with the positions of the left ear, left eye, and nose. Other behaviors analyzed include changes in limb position and gait in response to stimuli.
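A toy sketch of how one such assessment might be scored, assuming a hypothetical landmark tracker that reports the nose's horizontal pixel position per frame (this is not the team's actual system):

```python
def orientation_latency(nose_x, stimulus_frame, fps, threshold=10.0):
    """Latency in seconds from stimulus onset until the nose landmark
    has moved `threshold` pixels from its onset position, a crude proxy
    for how quickly an infant turns toward a shaken rattle."""
    x0 = nose_x[stimulus_frame]
    for frame in range(stimulus_frame, len(nose_x)):
        if abs(nose_x[frame] - x0) >= threshold:
            return (frame - stimulus_frame) / fps
    return None  # never oriented within the clip

# Toy track: the nose stays put for 15 frames, then sweeps to the side.
track = [100.0] * 15 + [100.0 + 4 * i for i in range(1, 20)]
print(orientation_latency(track, stimulus_frame=10, fps=30))  # 7/30 ≈ 0.23 s
```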

This blog is sponsored by ImageGraphicsVideo, a company offering Computer Vision Software Development Services.


Genetic screening in worms made easier

Researchers at Georgia Tech have developed technology that can detect and determine differences between worms used in genetic research.

According to the recently published findings, these worms are among the many tiny multicellular organisms that serve as effective test subjects for genetics research.

Using artificial intelligence combined with advanced image processing, scientists can inspect and process these worms, known as Caenorhabditis elegans, more quickly and efficiently than in the past. A camera records 3D images of the worms and compares them against a model of abnormal worms; the machine can not only tell the difference, but also learns from it, teaching itself as it goes.
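The compare-and-learn loop described above might be sketched as a nearest-neighbour sorter that grows its reference set as examples are confirmed. This is purely an illustration, not Georgia Tech's actual system:

```python
import numpy as np

class WormSorter:
    """Toy nearest-neighbour sorter: compares each worm's feature vector
    against labelled references, and folds newly confirmed worms back
    into the reference set - a stand-in for "teaching itself as it goes"."""

    def __init__(self, features, labels):
        self.features = [np.asarray(f, float) for f in features]
        self.labels = list(labels)

    def classify(self, worm):
        worm = np.asarray(worm, float)
        dists = [np.linalg.norm(worm - f) for f in self.features]
        return self.labels[int(np.argmin(dists))]  # label of nearest reference

    def confirm(self, worm, label):
        # "Learning": add the verified example to the references.
        self.features.append(np.asarray(worm, float))
        self.labels.append(label)

# Two made-up reference worms, described by two made-up shape features.
sorter = WormSorter([[1.0, 0.2], [3.0, 1.5]], ["normal", "mutant"])
print(sorter.classify([2.8, 1.4]))  # mutant
sorter.confirm([2.8, 1.4], "mutant")
```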

By picking out distinguishing features better than humans can on their own, this technology highlights genetic mutations between the worms, which could be a key to unlocking further advances in genetic research and, in years to come, testing in humans.

Video filter detects what the human eye cannot

One of the most accepted notions about the world around us is that just because we can’t see something doesn’t mean it isn’t there. However, researchers at MIT are working on technology that makes the unseeable seeable.

In a paper entitled “Eulerian Video Magnification for Revealing Subtle Changes in the World,” the researchers describe what they’ve created: a special filter that functions much like a magnifying glass for video.

This technology was designed for the health field; it can detect body functions the human eye cannot see on its own, such as a baby’s breathing pattern or a human pulse. However, its potential applications reach far beyond.
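The core idea of the paper, temporally bandpass-filtering each pixel's intensity and amplifying the result, can be sketched in a simplified form. The real method also uses spatial pyramids and streaming temporal filters, which are omitted here:

```python
import numpy as np

def magnify(frames, fps, lo, hi, alpha):
    """Toy Eulerian magnification: bandpass each pixel's intensity over
    time and add the amplified band back. `frames` is (time, height, width)."""
    spec = np.fft.fft(frames, axis=0)
    freqs = np.fft.fftfreq(frames.shape[0], d=1.0 / fps)
    mask = (np.abs(freqs) >= lo) & (np.abs(freqs) <= hi)  # keep the pulse band
    band = np.fft.ifft(spec * mask[:, None, None], axis=0).real
    return frames + alpha * band

# A barely visible 1 Hz flicker (e.g. a pulse) becomes obvious at alpha=50.
fps = 30
t = np.arange(60) / fps
frames = 0.5 + 0.001 * np.sin(2 * np.pi * 1.0 * t)[:, None, None] * np.ones((60, 4, 4))
out = magnify(frames, fps, lo=0.5, hi=2.0, alpha=50.0)
print(round(np.ptp(out) / np.ptp(frames)))  # 51, the flicker is 51x larger
```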

For more information on the technical specifics, refer to the following video:

Medical improvements in space

Computer vision is once again joining forces with 3D technology to aid humans: this time, in space. The European Space Agency has introduced CAMDASS, or Computer Assisted Medical Diagnosis and Surgery System, a headset to aid astronauts in the field while performing routine medical examinations.

The 3D display works in conjunction with the user’s vision, overlaying computer-generated graphics on what the wearer sees. As the name suggests, the headset can assist in both the diagnosis of medical ailments and surgical procedures. It relies heavily on ultrasound technology, which is sometimes needed to treat astronauts at the International Space Station and elsewhere in space.

Image courtesy of ESA/Space Applications Service NV

CAMDASS works by using a camera connected to an ultrasound device. Together, the two match what is seen on the patient to a virtual human body. The result is shown on the headset, which helps the wearer identify parts of the body and instructs him or her on how to proceed. This is just one example among many of how speech recognition and computer vision technology combine forces to aid specific projects.

Cancer detection via computer vision?

While doctors have varying success rates in detecting and diagnosing breast cancer, researchers at Stanford University have created a computer system known as C-Path (Computational Pathologist), which examines tissue samples and can diagnose breast cancer as well as, if not better than, humans. Additionally, it can provide a likely prognosis.

Currently, doctors must individually examine tissue samples of tumors under microscopes to determine whether they are affected. C-Path cuts out the middleman and, beyond serving as an inspector, can also learn as it goes. In fact, its initial programming was based on being fed preexisting samples with known prognoses; comparing the machine’s own assessments against that knowledge allowed it to “learn” and adapt.
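Training on samples with known prognoses is, in essence, supervised learning. A minimal logistic-regression sketch with made-up toy features follows; C-Path itself scores thousands of engineered image features, so this only illustrates the training idea:

```python
import numpy as np

def train_prognosis_model(features, outcomes, epochs=2000, lr=1.0):
    """Tiny logistic-regression sketch of 'learn from samples with known
    prognoses': features are per-sample tissue measurements, outcomes are
    1 (poor prognosis) or 0 (good). Illustrative only."""
    X = np.asarray(features, float)
    y = np.asarray(outcomes, float)
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted risk per sample
        w -= lr * X.T @ (p - y) / len(y)        # gradient descent step
        b -= lr * np.mean(p - y)
    return w, b

def predict_risk(w, b, sample):
    return 1.0 / (1.0 + np.exp(-(np.asarray(sample, float) @ w + b)))

# Made-up toy data: larger nuclei / higher density mean a worse outcome.
X = [[0.2, 0.1], [0.3, 0.2], [0.8, 0.9], [0.9, 0.7]]
y = [0, 0, 1, 1]
w, b = train_prognosis_model(X, y)
print(predict_risk(w, b, [0.85, 0.8]) > 0.5)  # True
```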

It is the hope of scientists that the C-Path’s abilities can be improved over time so that it has the ability to not only predict the chances of a patient’s survival, but also offer information as to which treatment would be most effective for a particular type of cancer.

Of course, in some situations there is no replacement for the care and exactness of a human inspection, but it is interesting to think what kind of strides a machine like this might make in the medical community.