infoTECH Feature

September 12, 2014

GPU Manufacturer Releases Object Recognition Processing Software

A global provider of graphics processing units announced this week that it has released "drop in" software intended to help developers harness GPU acceleration and advance the field of deep learning.

The NVIDIA cuDNN program, the company says, is intended to accelerate deep learning training to rates up to 10 times faster than CPU-based methods. It uses the CUDA parallel programming model to achieve that goal, and the announcement says developers can use the software directly out of the box to begin seeing development gains. The announcement also briefly explains what the emerging field of deep learning actually is.

"Deep learning is a fast-growing segment of machine learning that involves the creation of sophisticated, multi-level or 'deep' neural networks. These networks enable powerful computer systems to learn to recognize patterns, objects, and other items by analyzing massive amounts of training data," the company says.

Deep learning is, by its nature, data-intensive. It requires computers to sift through large amounts of data at once, so it pays to have software that can handle large data sets and make the training itself more efficient. This is what cuDNN is designed to deliver: quick learning with low overhead. The company points to one notable example of the library in use, the Caffe project at the University of California, Berkeley.

NVIDIA says on its website that the program is intended to fit within higher-level machine learning frameworks such as Caffe: cuDNN supplies the routines developers use to build neural net models, while frameworks such as Caffe do the bulk of the work of analyzing the actual data. The UC Berkeley project page describes Caffe as "a deep learning framework developed with cleanliness, readability, and speed in mind." It lets developers switch between CPU and GPU processing with a single switch in code, it aims for readable code that is easy to modify, and it has proven its speed by processing up to 40 million images in 24 hours on a single NVIDIA K40 or Titan GPU once researchers have cached the initial data.
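
As an illustration of that switch, here is a minimal sketch using Caffe's C++ interface. The set_mode and SetDevice calls are the documented mechanism; the model file name and the constructor and Forward signatures are assumptions, since they have varied across Caffe releases:

    #include <caffe/caffe.hpp>

    int main() {
        // Select GPU execution; changing this one line to Caffe::CPU falls
        // back to CPU processing with no other code changes.
        caffe::Caffe::set_mode(caffe::Caffe::GPU);
        caffe::Caffe::SetDevice(0);  // use the first GPU in the system

        // Load a network definition for inference and run a forward pass.
        // "deploy.prototxt" is a placeholder model file.
        caffe::Net<float> net("deploy.prototxt", caffe::TEST);
        net.Forward();
        return 0;
    }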

The European Conference on Computer Vision is taking place this week, and NVIDIA says it has featured the ImageNet Large Scale Visual Recognition Challenge, which pits academic and industry teams against each other on object recognition tasks. The company blog describes the challenge as "fiendishly difficult," and says teams now, more than ever, are placing their hopes in GPU analysis that can do the work of multiple CPUs with a fraction of the effort. NVIDIA says deep learning with GPUs runs faster and more efficiently, and that cuDNN is helping it advance even further.

Edited by Maurice Nagle