Did 3D scanning help an Olympian secure the silver?

In light of this summer’s Olympics, currently being held in London, it seems appropriate to touch on the ways in which computer vision has contributed to the international sports competition. Most recently, it has been applied to fitout, which, for kayakers, means building custom parts of the kayak that fit the bodies of the competitors.

Researchers at the Australian Institute of Sport have been doing just that, working to make the athlete, the kayak, and the paddle all act as one cohesive unit.

According to Ami Drory, a biomechanist working on this project: “A good fitout allows the athlete to use their full range of motion while transferring as much force as possible into the water.”

Unfortunately, working on the fitout requires a lot of time and a lot of wasted material, which is why the institute decided to call upon a specialist at Canadian 3D-scanning developer Creaform. Together, they scanned athletes’ bodies in position, as well as the kayaks they planned to compete in, to achieve the best possible fit.

And as it turns out, one of the athletes scanned, 18-year-old Jessica Fox, went on to take the silver medal this week. That’s not to say that she wouldn’t have done well without it, but for all anyone knows, this fitting could have propelled her from merely participating in the Olympics to being a medal holder.

Meat slicer uses 3D scanning to improve its method

There appears to be no limit to the ways in which 3D scanning can be applied. And more recently, it has begun making its mark on the food world.

Japanese company Nantsune, which has worked in the meat slicing industry since the 1920s, has developed a meat-slicing machine that uses 3D scanning technology to more accurately cut the meat. The machine, known as the Libra 165C, was designed to work with pork, but its use can likely be extended to other types of meat.

Traditionally, the meat industry has used machines that cut meat into slices of equal thickness, but because this method doesn’t account for the varying cross-section of a piece of meat, the weight of each slice varies. After the meat is cut, it is typically weighed and then packaged accordingly.

With the Libra 165C, the meat is scanned just before it is sliced, and the cross-section of the meat is taken into account. This results in slices of varying thickness or size that all weigh the same. The best part? Its speed: the machine can cut up to 6,000 slices per hour.
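The arithmetic behind equal-weight slicing is simple: once the scanner reports the cross-sectional area at the cutting plane, the machine can choose a slice thickness that yields a fixed target weight. The following is a minimal Python sketch of that idea, using an assumed pork density and hypothetical scan readings; it is only an illustration, not Nantsune’s actual control software.

```python
# Minimal illustration: pick a slice thickness so every slice hits a target
# weight, given the cross-sectional area reported by a 3D scan.
# The density value and the example readings are assumptions for illustration only.

PORK_DENSITY_G_PER_CM3 = 1.05  # rough, assumed density of pork


def slice_thickness_mm(target_weight_g: float, cross_section_cm2: float,
                       density_g_per_cm3: float = PORK_DENSITY_G_PER_CM3) -> float:
    """Thickness (mm) that makes a slice with this cross-section weigh target_weight_g."""
    thickness_cm = target_weight_g / (density_g_per_cm3 * cross_section_cm2)
    return thickness_cm * 10.0  # convert cm to mm


# Example: as the scanned cross-section shrinks toward the end of the cut,
# the computed thickness grows so the slice weight stays constant.
for area in (120.0, 95.0, 70.0):  # cm^2, hypothetical scan readings
    print(f"area={area:.0f} cm^2 -> thickness={slice_thickness_mm(50.0, area):.1f} mm")
```

This is why the resulting slices can vary in size while all weighing the same: the thickness simply adapts to whatever cross-section the scan reports.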

The machine will retail for approximately $160,000. Do you think it will become the standard in the meat industry? Watch the video below to see for yourself how it works:

3D scanning, straight to your hands

In yet another example of Kinect software being used for 3D-scanning purposes, Silicon Valley startup Matterport has come out with a prototype product, modeled after the Kinect camera, which is able to scan rooms and provide 3D models in a matter of minutes.

Using depth sensors and an RGB camera, the handheld device scans and renders 20 times faster than similar scanners already on the market, according to the company’s founder. And while existing models are either bulkier or more expensive, the Matterport product is expected to cost far less once it is released.

The implications of a product like this spread across a variety of interests. Professionals in fields such as architecture, interior design, and real estate can use it in many ways. Law enforcement officials might be interested in the scanner for help in recreating crime scenes. And video game enthusiasts stand to benefit from games with improved graphics. Even the casual technology user might use a scanner to create a panoramic video of a vacation spot to watch later or show off to friends.

What would you use a 3D scanner for?

Target scanner aims to eliminate size ambiguity

For some time now, companies have been working to use computer vision technology in ways that benefit the consumer. More specifically, they are developing body scanners that take measurements of individuals in order to design custom-fitting clothing.

Now, the retailer Target is testing this out in its Australian market to the tune of $1 million.

A 3D body scanner will be used, not to measure each customer individually, but to measure 20,000 adults. The effort is meant to update assumptions about body types and sizes in response to long-standing complaints about “inconsistent sizing” in clothing. The compiled measurements will be used to determine what common sizes actually look like, and clothing will be made to those specifications, hopefully resulting in garments that fit the majority of the population.
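As a rough illustration of how thousands of body measurements could be distilled into a handful of standard sizes, the sketch below clusters hypothetical chest and waist measurements with k-means. The data, the number of sizes, and the clustering approach are all assumptions made for illustration; Target has not described how it will analyse the scans.

```python
# Illustrative sketch: turning many body measurements into a small set of
# "standard" sizes by clustering. Hypothetical data and method only.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Hypothetical chest and waist measurements (cm) for 20,000 scanned adults.
measurements = np.column_stack([
    rng.normal(100, 8, 20_000),  # chest
    rng.normal(85, 9, 20_000),   # waist
])

# Group the population into five size clusters.
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(measurements)

# Each cluster centre is a candidate "standard" size specification.
for i, (chest, waist) in enumerate(sorted(kmeans.cluster_centers_.tolist())):
    print(f"size {i + 1}: chest {chest:.0f} cm, waist {waist:.0f} cm")
```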

Those who are scanned – a process which is entirely voluntary – will also be able to take their measurements with them, so that they can be used for buying clothes elsewhere or online.


Image courtesy of Perth Now

Is buying clothing in inconsistent sizing something you can relate to? How do you think this might reshape the face of the clothing and fashion industries?

Glasses are the new cool

Back in elementary school, there was something inherently unhip about kids who had to wear glasses. But now, the geeks are bringing a whole new meaning to the term “four eyes.” With recent advancements in technology, it seems as though glasses are the future, as they offer access to an entirely new way of seeing things.

CEO Vision is just one of the latest innovations in the heads-up display (HUD) realm: a management dashboard that works with the SAP HANA database. Users wear a special pair of glasses, which display and interpret on-the-page reports and other business-related information in real time. The information appears in 3D and can be interacted with.

CEO Vision was concocted with the help of two HD cameras, a Microsoft Kinect system, and a head display. It relies upon eye movement and facial tracking, combined with hand gestures, to quickly provide detailed information to its user. This kind of technology is known as a Spatial Operating Environment (SOE), not unlike the technology imagined a decade ago in the 2002 movie “Minority Report.”

For an example of how CEO Vision works in its early stages, check out the following video:

Apple to use 3D technology for its iPhone 5

Three-dimensional (3D) technology has been garnering quite a following, most recently through its reported pairing with Apple.

According to PatentlyApple, the iPhone 5 – the next generation of the iPhone – may feature a proposed 3D camera that could become the new standard.


Image courtesy of PatentlyApple

In addition to the expected upgrades in resolution and color, the camera will utilize the combined technologies of laser, LiDAR, and radar, giving it the ability to create 3D imagery. Additionally, it will be capable of both facial recognition and facial gesture recognition.

While cameras that rely on 3D technology are already on the market, the scope of what they can accomplish is still considered limited. But with the sensors Apple reportedly plans to implement, its camera would be poised to become the most sophisticated on the smartphone market.

Combined technology for facial animation

Facial animation is about to become even more realistic.

By combining two types of technology – 3D scanning capabilities and a motion-capture system – Microsoft is able to capture a face at high temporal and spatial resolution. The result is more detailed faces that can be used in video games, movies, and even as avatars for Kinect.

Watch the following video for more specifics on just how it works and how it can be used in the future:

Medical improvements in space

Computer vision is once again joining forces with 3D technology to aid humans: this time, in space. The European Space Agency has introduced CAMDASS, or Computer Assisted Medical Diagnosis and Surgery System, a headset to aid astronauts in the field while performing routine medical examinations.

The 3D display overlays computer-generated graphics onto the user’s view. As the name suggests, the headset is capable of assisting in both the diagnosis of medical ailments and surgical procedures. It relies heavily on ultrasound technology, which is sometimes needed to treat astronauts at the International Space Station and elsewhere in space.

Image courtesy of ESA/Space Applications Service NV

CAMDASS works by using a camera that is connected to an ultrasound device. Together, the two match what is seen on the patient in question to a virtual human body. The result is shown on the headset, which assists the wearer in identifying parts of the body and instructs him or her on how to proceed. This is just one example among many of how computer vision technology combines with other systems to aid specific projects.

ImageGraphicsVideo takes Kinect one step further

Kinect was first released in late 2010 as a hands-free controller for the Xbox 360. The webcam-like motion sensor lets users communicate with the device through gestures and spoken commands. And although it was initially released as a gaming tool, the implications for its use extend far beyond that.

The Xbox version of Kinect has sold 18 million units worldwide since its release. Microsoft CEO Steve Ballmer announced that a new version, Kinect for PC, will be available for purchase on Feb. 1 in Australia, Canada, France, Germany, Italy, Japan, Mexico, New Zealand, Spain, the United Kingdom, and the United States.

Whereas the old version of Kinect is available for as low as $129 on Amazon, the retail price for the PC version is $249 in the United States. Owners of the Xbox sensor can download the Kinect Software Development Kit to make it work with a PC, but Microsoft has said that the new Kinect for Windows sensor is specifically designed for PC systems. And although some users may be reluctant to shell out that kind of money, $249 is actually a modest price when you consider what Kinect can help accomplish.

There are also rumors that Microsoft is teaming up with ASUS to roll out a line of notebooks featuring Windows 8 and Kinect functionality.

One noticeable difference for Kinect users is that they will be able to use their hands and voices to interact with the computer, which could eliminate the need for a keyboard or a mouse. This will likely prompt application developers to begin re-imagining the design and functionality of apps.

Although Kinect is primarily used with video games, its facial recognition, voice recognition, and 3D scanning capabilities enable it to do much more. Many of these uses were discovered by hackers and software developers alike; this is part of what prompted Microsoft’s decision to make Kinect available for computers, so developers will be able to harness its power and expand upon its possible uses.

In addition, Kinect can be used in conferences, where using keyboards and mice can sometimes interrupt the flow of a presentation or meeting. Doctors could benefit from Kinect, saving time by using it to access information about a patient or procedure in the middle of operations, as opposed to scrubbing out, finding the information on a separate computer, and scrubbing back in. Students can also benefit from the software, particularly in anatomy-based courses.

Practical applications aside, one company, ImageGraphicsVideo, develops software that utilizes Kinect for a variety of other purposes. Kinect gathers information through its motion sensors and depth scanner, from which the software can calculate biometric data about your body, or about another object in front of it.

Using Kinect’s capabilities combined with its own software, the company is able to create 3D models of objects or entire rooms. Another use is to collect measurement data in order to create custom-tailored clothing. And that’s just the beginning.

Using a depth camera and a series of measurements, the company is able to create 3D digital images, which can then be used to analyze information or control processes. With the information obtained, the company creates custom software based on a customer’s specific needs. In other words, the only real limit is one’s imagination.
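At the core of this kind of scanning is a simple back-projection step: each pixel of a depth image, together with the camera’s intrinsics, yields one 3D point. The sketch below shows that step in Python with NumPy, using assumed Kinect-like intrinsic values; it is a generic illustration of the technique, not ImageGraphicsVideo’s software.

```python
# Generic sketch: back-project a depth image into a 3D point cloud using
# pinhole-camera intrinsics. The intrinsic values below are typical
# Kinect-like assumptions, not values from any particular product.
import numpy as np

# Assumed intrinsics for a 640x480 depth sensor (focal lengths and principal point, in pixels).
FX, FY = 580.0, 580.0
CX, CY = 320.0, 240.0


def depth_to_point_cloud(depth_m: np.ndarray) -> np.ndarray:
    """Convert an HxW depth image (metres) into an Nx3 array of XYZ points."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel column and row indices
    z = depth_m
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop pixels with no depth reading


# Example with a synthetic flat "wall" two metres from the camera.
depth = np.full((480, 640), 2.0)
cloud = depth_to_point_cloud(depth)
print(cloud.shape)  # (307200, 3)
```

Stitching many such point clouds together from different viewpoints is what turns individual frames into a model of an object or an entire room.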

Creating custom clothing

Where 3D scanning was once something relegated to the field of science, now it has expanded to a somewhat unlikely realm: the fashion world.

Clothing company Fitted Fashion is the most recent in a series of businesses using full-body scanners to take hundreds of measurements of an individual in a matter of minutes. Those measurements are then stored and used to create clothing, such as custom-made jeans for women.

In the future, there is also the potential to have those measurements sent to other retailers, including those online, which could further change the way people shop for clothing.

Image courtesy of Gizmag

Additionally, this technology has the potential to extend beyond designer jeans. In fact, it can be used to create custom anything – whether it’s suits, shirts, or shoes. The only limitation seems to be one’s imagination.