infoTECH Feature

August 15, 2016

GPUs, Training Data Availability Add to Machine Learning Uptake

Machine learning has grown in importance and uptake in recent years as more organizations seek to employ this technology for faster and more efficient decision making.

Indeed, that’s why Apple last week announced plans to buy Turi Inc., and Intel announced its intention to purchase Nervana Systems. And it’s why such tech leaders as Amazon, Google, and IBM all offer machine learning capabilities as part of their cloud services.

Nimbix is another cloud company catering to organizations that want to use machine learning. Its platform, called JARVICE, is based on NVIDIA’s Tesla K80 graphics processing units and allows organizations to run large-scale, high-performance computing workloads in the cloud. JARVICE, which Nimbix charges for on a per-second basis, reduces developer time to deployment from weeks to hours.

“Cloud computing can be a highly effective and cost efficient way to free up space for HPC projects,” according to a June Nimbix blog post. “Utilizing a high performance cloud for your organization’s HPC projects allows you to stay up to date and on top of the fast changing applications and infrastructure.”

The blog said the solution provides a scalable environment that is flexible and secure, and offers an easy and efficient way to manage and monitor workflows. It also features a simplified reporting tool for tracking consumption and costs, said Nimbix, which in May was named among the Cool Vendors for Compute Platforms by Gartner Inc.

According to the NVIDIA website: “Data scientists in both industry and academia have been using GPUs for machine learning to make groundbreaking improvements across a variety of applications including image classification, video analytics, speech recognition, and natural language processing.”

GPUs have been particularly popular in deep learning environments, says NVIDIA. Deep learning involves using sophisticated, multi-level neural networks to build systems that can perform feature detection from massive amounts of unlabeled training data.
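
To make that idea concrete, the short sketch below (an editor's illustration, not code from NVIDIA or Nimbix) trains a tiny autoencoder on synthetic, unlabeled data using the JAX library, which runs the same Python code on a GPU when one is available. The layer sizes, learning rate, and random data are all assumptions chosen for brevity.

    # Minimal unsupervised feature learning: an autoencoder that learns to
    # compress and reconstruct unlabeled data. All sizes are illustrative.
    import jax
    import jax.numpy as jnp

    key = jax.random.PRNGKey(0)
    k_data, k_enc, k_dec = jax.random.split(key, 3)

    # 1,000 unlabeled samples with 64 raw features each (synthetic stand-in).
    data = jax.random.normal(k_data, (1000, 64))

    # Encoder compresses 64 inputs to 16 learned features; decoder reconstructs.
    params = {
        "enc": 0.1 * jax.random.normal(k_enc, (64, 16)),
        "dec": 0.1 * jax.random.normal(k_dec, (16, 64)),
    }

    def reconstruct(params, x):
        features = jnp.tanh(x @ params["enc"])   # detected features
        return features @ params["dec"]          # reconstruction of the input

    def loss(params, x):
        return jnp.mean((reconstruct(params, x) - x) ** 2)

    @jax.jit                                     # compiled for CPU or GPU
    def update(params, x, lr=0.01):
        grads = jax.grad(loss)(params, x)
        return jax.tree_util.tree_map(lambda p, g: p - lr * g, params, grads)

    for step in range(200):                      # gradient descent on the
        params = update(params, data)            # reconstruction error

    print("reconstruction error:", float(loss(params, data)))

No labels are used anywhere: the hidden layer is pushed to find whatever compact representation best explains the raw inputs, which is the sense in which deep networks perform feature detection on unlabeled data.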

NVIDIA adds that while machine learning is not a new idea, the massive amount of training data now available, paired with GPU hardware capable of powerful and efficient parallel computing, has lowered the barriers to entry and greatly increased uptake of the technology.

“Early adopters of GPU accelerators for machine learning include many of the largest web and social media companies, along with top-tier research institutions in data science and machine learning,” says NVIDIA. “With thousands of computational cores and 10-100x application throughput compared to CPUs alone, GPUs have become the processor of choice for processing big data for data scientists.”
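
The throughput claim comes down to data parallelism: a single large matrix product decomposes into millions of independent multiply-adds that can be spread across a GPU’s cores. The sketch below (another editor's illustration; the matrix sizes and timing approach are assumptions, not NVIDIA figures) issues one such product through JAX, which dispatches it to a GPU when one is present.

    # One large, highly parallel matrix product; JAX runs it on whatever
    # accelerator is available. Sizes are arbitrary illustrative choices.
    import time
    import jax
    import jax.numpy as jnp

    print("running on:", jax.devices())    # e.g. CPU or GPU devices

    key_a, key_b = jax.random.split(jax.random.PRNGKey(0))
    a = jax.random.normal(key_a, (4096, 4096))
    b = jax.random.normal(key_b, (4096, 4096))

    matmul = jax.jit(jnp.matmul)
    matmul(a, b).block_until_ready()       # compile once before timing

    start = time.time()
    matmul(a, b).block_until_ready()       # 4096**3, about 69 billion multiply-adds
    print("elapsed seconds:", time.time() - start)

Running the identical script on a CPU-only machine and on a GPU instance is one simple way to see the kind of gap NVIDIA describes, though the exact speedup depends on the hardware and the workload.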

Edited by Alicia Young