Engineers at the University of California, San Diego, are using computer vision to sort cells, and so far have managed to do so 38 times faster than before. This process of counting and sorting cells is known as flow cytometry.
Analyzing the cells helps categorize them by size, shape, and structure, and can also distinguish benign cells from malignant ones, information that could be useful for clinical studies and stem cell characterization.
While this type of research was happening before, it has traditionally been a time-consuming job. Now, a camera mounted on a microscope can analyze information far faster, cutting the time to observe and analyze a single frame from between 0.4 and 10 seconds down to between 11.94 and 151.7 milliseconds.
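A quick check of what those quoted figures imply (this only works through the arithmetic; the 38x headline number presumably reflects the researchers' own measurement, which falls inside the range below):

```python
# Per-frame analysis time before and after, in seconds.
# Figures quoted in the article: 0.4-10 s down to 11.94-151.7 ms.
before_s = (0.4, 10.0)
after_s = (0.01194, 0.1517)

# Speedup comparing the fast ends and the slow ends of each range.
fast_speedup = before_s[0] / after_s[0]   # ~33.5x
slow_speedup = before_s[1] / after_s[1]   # ~65.9x

print(f"{fast_speedup:.1f}x to {slow_speedup:.1f}x faster")
```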
In what ways do you see this technology making advancements in the medical and clinical world? How else can you imagine it benefitting science?
Almost everyone can recall a time when a doctor or nurse needed to draw blood or give a shot and had trouble finding the proper vein. A company in California is all too familiar with this scenario and, in an effort to make drawing blood more efficient, has created Veebot.
Veebot is essentially a robot phlebotomist. Relying on infrared lighting, an infrared camera, image-analysis software, and ultrasound, the robot locates the best vein for drawing blood. Everything is checked and double-checked within a few seconds to ensure that the first needle prick is successful.
Currently, Veebot identifies the right vein about 83% of the time, which is better than the human average. Once it reaches a 90% success rate, the company hopes to move the machine into clinical trials.
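Veebot's actual pipeline is not public, but the check-and-double-check logic described above can be sketched in miniature: score candidate veins found by the infrared camera, discard any whose blood flow the ultrasound could not confirm, and only proceed when the best candidate clears a confidence threshold. All names and numbers here are illustrative.

```python
from dataclasses import dataclass

# Hypothetical sketch of Veebot-style vein selection; the real system's
# infrared + ultrasound pipeline is not public.

@dataclass
class VeinCandidate:
    name: str
    ir_contrast: float    # 0-1: how clearly the vein shows under infrared
    flow_confirmed: bool  # did ultrasound detect blood flow here?

def pick_best_vein(candidates, threshold=0.7):
    """Return the most promising vein, or None if nothing is safe enough."""
    # The "double-check": drop candidates without ultrasound-confirmed flow.
    confirmed = [c for c in candidates if c.flow_confirmed]
    if not confirmed:
        return None
    best = max(confirmed, key=lambda c: c.ir_contrast)
    # Refuse to prick at all rather than risk a failed first attempt.
    return best if best.ir_contrast >= threshold else None

veins = [
    VeinCandidate("median cubital", 0.9, True),
    VeinCandidate("cephalic", 0.8, False),   # visible, but flow unconfirmed
    VeinCandidate("basilic", 0.6, True),
]
print(pick_best_vein(veins).name)  # median cubital
```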
To see how this machine works, watch the video below:
Add cancer to the list of medical problems computer vision can help diagnose and treat.
At the Lawrence Berkeley National Laboratory in California, researchers have created a program that analyzes images of tumors (of which there are thousands stored in the database of The Cancer Genome Atlas project). The program relies on an algorithm that sorts through image sets and helps identify tumor subtypes – no easy process, considering no two tumors are alike.
After sorting through the images, the program categorizes them by subtype and organizational structure, then matches those categories with clinical data that indicate how a patient with a given tumor is likely to respond to treatment.
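The press release does not spell out the algorithm, but the "sort images into subtypes" step is, at heart, a clustering problem. As a toy sketch, here is a minimal k-means grouping of made-up two-number "feature vectors" (imagine cell density and a texture score per image) into two subtypes; the Berkeley program is of course far more sophisticated.

```python
# Hypothetical sketch: cluster tumor-image feature vectors into subtypes
# with a tiny hand-rolled k-means. Features and values are invented.

def kmeans(points, k, iters=50):
    # Naive init: use the first k points as starting centers (fine for a sketch).
    centers = list(points[:k])
    for _ in range(iters):
        # Assign each point to its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda i: sum((a - b) ** 2
                                            for a, b in zip(p, centers[i])))
            clusters[nearest].append(p)
        # Move each center to the mean of its cluster.
        for i, cl in enumerate(clusters):
            if cl:
                centers[i] = tuple(sum(vals) / len(cl) for vals in zip(*cl))
    return centers, clusters

# Toy "feature vectors" (e.g. cell density, texture score) for six images.
images = [(0.1, 0.2), (0.15, 0.25), (0.9, 0.8),
          (0.85, 0.9), (0.12, 0.22), (0.88, 0.85)]
centers, clusters = kmeans(images, k=2)
print([len(c) for c in clusters])  # two subtypes of three images each
```

Once images are grouped this way, each cluster can be cross-referenced against clinical outcomes, which is the matching step the paragraph above describes.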
For more on what this program can do and how it can be used, see the press release here.
One of the most accepted notions about the world around us is that just because we can't see something doesn't mean it isn't there. Researchers at MIT are working on a type of technology that makes the unseeable seeable.
This technology was designed for the health field; it can detect body functions that the human eye can't see on its own, such as a baby's breathing pattern or a human pulse. However, its potential applications reach much further.
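The core trick, in spirit, is that these body functions do show up on camera, just at a scale too small to notice. A minimal sketch of that idea (invented signal and parameters, and a far cruder filter than MIT's actual method): subtract a slow-moving baseline from a pixel's brightness over time and amplify what remains, turning an invisible pulse into an obvious one.

```python
import math

# Hypothetical sketch: amplify tiny periodic changes (e.g. skin
# brightening slightly with each heartbeat) that are invisible at
# their true scale. Signal and constants are invented.

def amplify(signal, window=5, alpha=50.0):
    """Amplify small deviations from a moving-average baseline."""
    out = []
    for i, x in enumerate(signal):
        lo = max(0, i - window)
        baseline = sum(signal[lo:i + 1]) / (i + 1 - lo)
        out.append(baseline + alpha * (x - baseline))
    return out

# A pixel brightness trace: steady 100.0 plus a 0.1% ripple at 72 bpm,
# sampled at 30 frames per second for three seconds.
fps, bpm = 30, 72
trace = [100.0 + 0.1 * math.sin(2 * math.pi * (bpm / 60) * t / fps)
         for t in range(90)]
big = amplify(trace)
print(max(trace) - min(trace), max(big) - min(big))
```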
For more information on the technical specifics, refer to the following video:
Researchers have installed Kinect cameras in a nursery which, combined with specific algorithms, are trained to observe children. The cameras identify children by their clothing and size, then compare how active each child is relative to his or her "classmates," highlighting those who are more or less active than average, which could be a marker for autism.
Children who show signs of interacting less socially or of not possessing fully developed motor skills – both indicators of autism – are then referred to doctors who can better analyze individual cases. While the program is not meant to detect autism definitively, the hope is that it will pinpoint children who may be cause for concern and catch them early.
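The flagging step described above, comparing each child against the class average and surfacing outliers for a specialist's attention, can be sketched with simple statistics. Names, scores, and the deviation threshold below are all invented for illustration.

```python
import statistics

# Hypothetical sketch of the "more or less active than average" flag:
# compare each child's activity score with the class mean and flag
# anyone far from it for follow-up. All values are illustrative.

def flag_outliers(activity, num_sd=1.5):
    """Return children whose activity deviates strongly from the class mean."""
    mean = statistics.mean(activity.values())
    sd = statistics.stdev(activity.values())
    return sorted(name for name, score in activity.items()
                  if abs(score - mean) > num_sd * sd)

scores = {"child_a": 52, "child_b": 48, "child_c": 50,
          "child_d": 17, "child_e": 49}
print(flag_outliers(scores))  # child_d is markedly less active than peers
```

A flag like this is only a prompt for human follow-up, which matches the program's stated goal of referral rather than diagnosis.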
Additionally, the creators are working to make the program more advanced so that it can detect whether a child is able to follow an object with his or her eyes, since autistic children often have trouble making eye contact, among other things.
Already, some centers are using Kinect not to detect autism but to help children who have it learn to interact socially with others and improve their own skills.
How else might Kinect assist in detecting or treating autism? What other medical fields might be able to use Kinect to their advantage?
The CAMDASS headset's 3D display works in conjunction with the user's vision, combining it with computer-generated graphics. As the name – Computer Assisted Medical Diagnosis and Surgery System – suggests, the headset can assist both in diagnosing medical ailments and in surgical procedures. It relies heavily on ultrasound technology, which is sometimes needed to treat astronauts at the International Space Station, and elsewhere in space.
Image courtesy of ESA/Space Applications Service NV
CAMDASS works by using a camera connected to an ultrasound device. Together, the two match what is seen on the patient in question against a virtual human body. The result is shown on the headset, which helps the wearer identify parts of the body and instructs him or her on how to proceed. This is just one example among many of how speech recognition and computer vision technology can combine forces on a specific project.
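ESA has not published CAMDASS's matching algorithm, but the "match the live view against a virtual body" step is a registration problem, and one classic ingredient is normalized cross-correlation. As a toy sketch (invented one-dimensional "scan profiles" standing in for real ultrasound images): compare a live scan line against reference profiles from a body model and report the best-matching region.

```python
import math

# Hypothetical sketch of the matching step: score a live ultrasound
# profile against reference profiles from a virtual body model using
# normalized cross-correlation. Real 2D/3D registration is far more
# involved; profiles and region names here are invented.

def ncc(a, b):
    """Normalized cross-correlation of two equal-length profiles (-1 to 1)."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = math.sqrt(sum((x - ma) ** 2 for x in a))
    db = math.sqrt(sum((y - mb) ** 2 for y in b))
    return num / (da * db)

references = {
    "forearm": [1, 2, 5, 9, 5, 2, 1],
    "upper arm": [1, 1, 2, 2, 8, 8, 3],
}
live = [2, 3, 6, 10, 6, 3, 2]  # brighter echo of the forearm profile
best = max(references, key=lambda region: ncc(references[region], live))
print(best)  # forearm
```

Because the correlation is normalized, a uniformly brighter or dimmer echo of the same shape still scores as a perfect match, which is why the offset `live` profile lands on "forearm."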
Scientists hope that C-Path's abilities can be improved over time so that it can not only predict a patient's chances of survival but also offer information about which treatment would be most effective for a particular type of cancer.
Of course, in some situations there is no substitute for the care and exactness of a human inspection, but it is interesting to imagine what kind of strides a machine like this might make in the medical community.