Computer Vision aids flow cytometry

Photo courtesy of the UC San Diego Jacobs School of Engineering

Engineers at the University of California, San Diego, are using computer vision to sort cells, and so far have been able to do so 38 times faster than before. This process of counting and sorting cells is known as flow cytometry.

Analyzing the cells helps categorize them by size, shape, and structure, and can also distinguish whether they are benign or malignant, information that could be useful for clinical studies and stem cell characterization.

This type of research was happening before, but it has traditionally been slow. Now a camera mounted on a microscope can analyze information much faster, cutting the time to observe and analyze a single frame from between 0.4 and 10 seconds down to between 11.94 and 151.7 milliseconds.
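For a rough sense of what analyzing a single frame involves, the sketch below uses OpenCV to segment cells in one grayscale microscope image and measure their size and shape, timing the whole step. The file name, thresholds, and features are illustrative assumptions, not the UCSD team's actual pipeline.

```python
# Hypothetical sketch: measuring cell size/shape features in one microscope frame.
# "frame.png" is a placeholder path; thresholds are illustrative only.
import math
import time
import cv2

frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)  # single grayscale frame

start = time.perf_counter()

# Separate cells from background with Otsu thresholding.
_, mask = cv2.threshold(frame, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Treat each contour as one cell candidate.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

cells = []
for c in contours:
    area = cv2.contourArea(c)
    if area < 50:  # ignore specks of noise
        continue
    perimeter = cv2.arcLength(c, True)
    circularity = 4 * math.pi * area / (perimeter ** 2) if perimeter else 0.0
    cells.append({"area": area, "circularity": circularity})

elapsed_ms = (time.perf_counter() - start) * 1000
print(f"analyzed {len(cells)} cell candidates in {elapsed_ms:.1f} ms")
```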

In what ways do you see this technology making advancements in the medical and clinical world? How else can you imagine it benefitting science?


Meet the robot phlebotomist

Almost everyone can recall a time when a doctor or nurse needed to draw blood or give a shot and had trouble finding the right vein. A company in California is all too familiar with this scenario, and in an effort to make the process of drawing blood more efficient, has created Veebot.

Veebot is essentially a robot phlebotomist. Relying on infrared lighting, an infrared camera, image analysis software, and ultrasound technology, the robot is able to locate the best vein for taking blood. All of this is checked and double-checked in the span of a few seconds in order to ensure that the first needle prick is successful.
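As a rough illustration of the image-analysis piece (not Veebot's actual software), the sketch below picks out dark, elongated structures in an infrared image of a forearm, on the assumption that veins absorb more infrared light than the surrounding tissue. The file name, thresholds, and scoring rule are all placeholders; the real system also confirms its choice with ultrasound.

```python
# Illustrative sketch only: choosing a vein-like dark ridge in an infrared image.
import cv2

ir = cv2.imread("ir_forearm.png", cv2.IMREAD_GRAYSCALE)  # placeholder path

# Veins absorb infrared light, so they appear darker than surrounding tissue.
blurred = cv2.GaussianBlur(ir, (9, 9), 0)
dark = cv2.adaptiveThreshold(blurred, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                             cv2.THRESH_BINARY_INV, 31, 5)

# Keep large, elongated structures and score them; the best-scoring region is
# treated as the candidate vein.
contours, _ = cv2.findContours(dark, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
best, best_score = None, 0.0
for c in contours:
    x, y, w, h = cv2.boundingRect(c)
    elongation = max(w, h) / (min(w, h) + 1)
    area = cv2.contourArea(c)
    if area > 200 and elongation > 3:
        score = area * elongation
        if score > best_score:
            best, best_score = (x, y, w, h), score

print("candidate vein region (x, y, w, h):", best)
```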

Currently, Veebot has been correct in its identification about 83% of the time, which is better than the human average. Once it reaches a 90% success rate, the company hopes to use the machine in clinical trials.

To see how this machine works, watch the video below:

Computer Vision detects heart rate

Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) are now using computer vision to determine heart rate. Using an algorithm that analyzes small movements of the head, researchers are able to connect those movements to the rush of blood caused by the beating heart, in turn determining the heart rate. This opens doors for testing those who, for one reason or another, may not be the best candidates for EKG testing.
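A simplified sketch of that idea: band-pass the trajectory of a point tracked on the head to the range of plausible resting pulse rates, then read off the dominant frequency. The signal below is synthetic, and the frame rate and filter settings are assumptions rather than CSAIL's published parameters.

```python
# Toy version of heart-rate estimation from head motion (not the CSAIL code).
import numpy as np
from scipy.signal import butter, filtfilt

fs = 30.0  # assumed video frame rate, frames per second
rng = np.random.default_rng(0)

# Placeholder signal: in practice this is the vertical position of a feature
# point tracked on the head; here we synthesize a 1.2 Hz (72 bpm) wobble on top
# of slower posture drift and noise.
t = np.arange(0, 20, 1 / fs)
head_y = (0.05 * np.sin(2 * np.pi * 1.2 * t)
          + 0.5 * np.sin(2 * np.pi * 0.2 * t)
          + 0.01 * rng.standard_normal(t.size))

# Band-pass to 0.75-2.0 Hz (45-120 bpm), the range of normal resting pulses.
b, a = butter(2, [0.75 / (fs / 2), 2.0 / (fs / 2)], btype="band")
pulse = filtfilt(b, a, head_y)

# The dominant frequency of the filtered motion is the heart-rate estimate.
freqs = np.fft.rfftfreq(pulse.size, d=1 / fs)
spectrum = np.abs(np.fft.rfft(pulse))
band = (freqs >= 0.75) & (freqs <= 2.0)
bpm = 60 * freqs[band][np.argmax(spectrum[band])]
print(f"estimated heart rate: {bpm:.0f} bpm")
```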

Computer vision algorithm helps identify tumors

Image courtesy of Berkeley Lab

Add cancer to the list of medical problems computer vision can help diagnose and treat.

At the Lawrence Berkeley National Laboratory in California, researchers have created a program that analyzes images of tumors (of which there are thousands, stored in the database of The Cancer Genome Atlas project). The program relies on an algorithm that sorts through image sets and helps identify tumor subtypes, a task that is far from easy considering no two tumors are alike.

After sorting through the images, the program categorizes them by subtype and organizational structure, then matches those categories with clinical data that suggest how a patient with a given tumor will respond to treatment.
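A minimal sketch of that general workflow, using placeholder features and clinical labels rather than the lab's actual data or algorithm: cluster per-image feature vectors into candidate subtypes, then compare treatment response rates across the clusters.

```python
# Hypothetical sketch of subtype clustering plus a clinical cross-check.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Placeholder feature matrix: one row per tumor image; columns might encode
# cell density, nuclear size, texture statistics, and so on.
features = rng.normal(size=(300, 8))

# Group images into a fixed number of candidate subtypes.
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(features)
subtype = kmeans.labels_

# Placeholder clinical outcomes: 1 = responded to treatment, 0 = did not.
responded = rng.integers(0, 2, size=300)

# Response rate per subtype hints at which clusters carry clinical meaning.
for k in range(4):
    members = subtype == k
    print(f"subtype {k}: n={members.sum()}, "
          f"response rate={responded[members].mean():.2f}")
```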

For more on what this program can do and how it can be used, see the press release here.


Video filter detects what the human eye cannot

One of the most accepted notions about the world around us is that just because we can’t see something, it doesn’t mean it isn’t there. However, researchers at MIT are working on a particular type of technology which makes the unseeable seeable.

In a paper entitled “Eulerian Video Magnification for Revealing Subtle Changes in the World,” the researchers reveal what they’ve created: a special filter that functions much like a magnifying glass for videos.

This technology was designed with the health field in mind; it is capable of detecting body functions that the human eye can’t see on its own, such as a baby’s breathing pattern or a human pulse. However, its potential applications reach well beyond that.
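In spirit, the filter treats each pixel's brightness over time as a signal, band-passes it around the frequency of interest, amplifies that component, and adds it back to the video. The toy example below applies that idea to a synthetic clip containing a faint 1 Hz flicker; the frame rate, frequency band, and amplification factor are illustrative, not the paper's settings.

```python
# Minimal sketch of the Eulerian magnification idea (not the MIT implementation).
import numpy as np
from scipy.signal import butter, filtfilt

fs, alpha = 30.0, 20.0  # assumed frame rate and amplification factor

# Placeholder "video": 60 frames of 32x32 pixels with a faint 1 Hz flicker,
# standing in for the subtle intensity change caused by a pulse.
t = np.arange(60) / fs
flicker = 0.002 * np.sin(2 * np.pi * 1.0 * t)[:, None, None]
video = 0.5 + flicker * np.ones((60, 32, 32))

# Temporal band-pass filter applied along the time axis of every pixel.
b, a = butter(2, [0.8 / (fs / 2), 1.2 / (fs / 2)], btype="band")
subtle = filtfilt(b, a, video, axis=0)

# Amplify the filtered component and add it back: the invisible becomes visible.
magnified = np.clip(video + alpha * subtle, 0.0, 1.0)
print("peak-to-peak variation before:", np.ptp(video[:, 16, 16]),
      "after:", np.ptp(magnified[:, 16, 16]))
```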

For more information on the technical specifics, refer to the following video:

Kinect cameras may help detect autism

There are stories of new innovative games or programs being developed daily, thanks to the release of Microsoft’s Kinect. However, at the Institute of Child Development in Minneapolis, Minnesota, this technology is being used to help detect autism.

Researchers have installed Kinect cameras in a nursery which, combined with specific algorithms, are trained to observe children. The cameras are able to identify children based on their clothing and size, and then compare how active each child is with his or her “classmates,” highlighting those who are more or less active than the average, which could be a marker for autism.
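A hedged sketch of that comparison step: given per-child movement totals gathered from the cameras over a session, flag anyone whose activity sits far from the class average. The numbers and the z-score threshold below are made up for illustration; the institute's actual algorithms are more involved.

```python
# Toy outlier check on per-child activity, standing in for the real analysis.
import numpy as np

# Placeholder: metres moved by each tracked child during one session.
movement = {
    "child_01": 420.0, "child_02": 395.0, "child_03": 410.0,
    "child_04": 150.0,  # notably less active than the rest
    "child_05": 430.0, "child_06": 405.0,
}

values = np.array(list(movement.values()))
mean, std = values.mean(), values.std()

for child, metres in movement.items():
    z = (metres - mean) / std
    if abs(z) > 1.5:  # threshold chosen purely for illustration
        print(f"{child}: activity z-score {z:+.2f} -> flag for follow-up")
```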

Children who interact less socially or whose motor skills are not fully developed, both possible indicators of autism, will then be referred to doctors who can better analyze individual cases. While the program is not meant to diagnose autism outright, the hope is that it will pinpoint children who may be cause for concern and catch them early.

Additionally, the creators are working to make the program more advanced so that it can detect whether a child is able to follow an object with his or her eyes, since autistic children often have trouble making eye contact, among other things.

Already, some centers are using Kinect not to detect autism, but to help children with it learn to interact socially with others as well as better their own skills.

How else might Kinect assist in detecting or treating autism? What other medical fields might be able to use Kinect to an advantage?

Medical improvements in space

Computer vision is once again joining forces with 3D technology to aid humans: this time, in space. The European Space Agency has introduced CAMDASS, or Computer Assisted Medical Diagnosis and Surgery System, a headset to aid astronauts in the field while performing routine medical examinations.

The 3D display works in conjunction with the user’s vision, overlaying computer-generated graphics on what the wearer sees. As the name suggests, the headset is capable of assisting in both the diagnosis of medical ailments and surgical procedures. It relies heavily on ultrasound technology, which is sometimes needed to treat astronauts at the International Space Station, and elsewhere in space.

Image courtesy of ESA/Space Applications Service NV

CAMDASS works by using a camera that is connected to an ultrasound device. Together, the two match up what is seen on the patient in question with a virtual human body. The result is shown on the headset, which helps the wearer identify parts of the body and instructs him or her how to proceed. This is just one example among many of how speech recognition and computer vision technology can combine to aid specialized projects.
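As a rough sketch of the matching step (not the CAMDASS software itself), the example below aligns a headset camera frame with a stored reference image using ORB feature matching and a homography, which is one common way such overlays are registered. The image file names are placeholders, and the real system's registration between ultrasound, camera, and virtual anatomy is considerably more sophisticated.

```python
# Illustrative registration between a live camera frame and a reference image.
import cv2
import numpy as np

reference = cv2.imread("reference_anatomy.png", cv2.IMREAD_GRAYSCALE)  # placeholder
live = cv2.imread("headset_camera_frame.png", cv2.IMREAD_GRAYSCALE)    # placeholder

# Detect and describe keypoints in both images.
orb = cv2.ORB_create(1000)
kp_ref, des_ref = orb.detectAndCompute(reference, None)
kp_live, des_live = orb.detectAndCompute(live, None)

# Match descriptors and keep the strongest correspondences.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_ref, des_live), key=lambda m: m.distance)[:50]

# Estimate the mapping from reference to live view; guidance graphics would be
# warped through this homography before being drawn on the headset display.
src = np.float32([kp_ref[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp_live[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
print("estimated reference-to-live homography:\n", H)
```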