infoTECH Feature

August 23, 2016

Creating the Future of Storage Today with Scale-Out NAS for Hybrid Cloud

By Special Guest
Stefan Bernbo, Founder and CEO, Compuverde

Traditional approaches to storage offer vertical scaling, which clearly falls short of accommodating today's exponential data growth. Vertical scaling solutions are proving cost-prohibitive and cannot meet performance demands. Hybrid cloud presents a way for organizations to gain the maximum amount of business flexibility from cloud architectures, which helps meet budget and performance goals at the same time. In a nutshell, hybrid cloud is a cloud computing environment that uses a mix of on-premises, private cloud and public cloud services, with orchestration among the platforms.

Because this is a relatively new configuration, the learning curve can be steep – with respect to both the benefits and challenges of deploying a hybrid cloud approach. This article details design elements you can use to ensure your hybrid cloud delivers the performance, flexibility and scalability you need.

Laying the Foundation

The first and foremost component of a hybrid-cloud storage solution must be a scale-out NAS. Since hybrid cloud architectures are relatively new to the market, and even newer in full-scale deployment, many organizations are unaware of the importance of consistency in a scale-out NAS. Many environments are only eventually consistent, meaning that files written to one node are not immediately accessible from other nodes. This can be caused by an improper implementation of the protocols, or by integration with the virtual file system that is not tight enough. The opposite is a strictly consistent system: files are accessible from all nodes at the same time. Compliant protocol implementations and tight integration with the virtual file system are a good recipe for success.
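To make the distinction concrete, here is a toy illustration of strict consistency: all nodes route reads and writes through one shared state, so a file written via any node is immediately visible from every other node. This is purely illustrative and not any particular product's design.

```python
# Toy model of a strictly consistent cluster: every node sees one
# shared source of truth, so writes are visible cluster-wide at once.

class StrictCluster:
    def __init__(self, nodes):
        self._state = {}        # single, shared source of truth
        self.nodes = nodes

    def write(self, node, path, data):
        assert node in self.nodes
        self._state[path] = data    # immediately visible to all nodes

    def read(self, node, path):
        assert node in self.nodes
        return self._state.get(path)

cluster = StrictCluster(["node-1", "node-2", "node-3"])
cluster.write("node-1", "/reports/q3.xlsx", b"totals")
print(cluster.read("node-2", "/reports/q3.xlsx"))  # b'totals'
```

An eventually consistent system, by contrast, would let the read on node-2 return nothing (or stale data) until the write had propagated.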

For architecture of this type to work optimally, it should be based on three layers. Each server in the cluster will run a software stack based on these layers.

  1. The first layer is for persistent storage. It is based on an object store, which provides advantages like extreme scalability. However, the layer must be strictly consistent in itself.
  2. The second layer comprises the virtual file system and is the heart of any scale-out NAS. It is in this layer that features like caching, locking, tiering, quotas and snapshots are handled.
  3. The third layer houses protocols like SMB and NFS as well as integration points for hypervisors, for example.

It is very important to keep the architecture symmetrical and clean. If you manage to do that, many future architectural challenges will be much easier to solve.

The persistent storage layer deserves closer examination. Because the storage layer is based on an object store, we can now easily scale our storage solution. With a clean and symmetrical architecture, we can reach exabytes of data and trillions of files.

It is important to have a fast and effective self-healing mechanism, since ensuring redundancy is one of the responsibilities of the storage layer. To keep the data footprint low in the data center, the storage layer needs to support different file encodings. Some are good for performance and some for reducing the footprint.
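The footprint trade-off between encodings is simple arithmetic. As a generic illustration (not vendor figures), compare plain 3x replication with an 8+2 erasure code: replication favors fast reads and repairs, while erasure coding sharply reduces the on-disk footprint.

```python
# Illustrative footprint comparison of two file encodings a storage
# layer might support. The parameters are generic examples.

def replication_footprint(data_tb, copies=3):
    # Every byte is stored `copies` times.
    return data_tb * copies

def erasure_footprint(data_tb, data_chunks=8, parity_chunks=2):
    # Data is split into chunks plus parity; overhead is (k+m)/k.
    return data_tb * (data_chunks + parity_chunks) / data_chunks

data = 100  # TB of user data
print(replication_footprint(data))  # 300 TB on disk
print(erasure_footprint(data))      # 125.0 TB on disk
```

Both encodings survive the loss of two copies/chunks, but the erasure-coded layout stores 100 TB of data in 125 TB instead of 300 TB, which is why supporting several encodings side by side keeps the data-center footprint low.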

What to Do with Metadata

For virtual file systems, a key component is metadata: pieces of information that describe the structure of the file system. For example, one metadata file can contain information about what files and folders are contained in a single folder in the file system. That means that we will have one metadata file for each folder in our virtual file system. As the virtual file system grows, we will get more and more metadata files.

Storing metadata in a central location can be a good option for smaller set-ups, but here we are talking about scale-out. So, let's look at where not to store metadata. Storing metadata on a single server leads to poor scalability, poor performance and poor availability. Since our storage layer is based on an object store, a better place to store all our metadata is in the object store itself, particularly when we are dealing with high quantities of metadata. This ensures good scalability, good performance and good availability.
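A minimal sketch of this idea, with made-up names: each folder's metadata is one object in the object store, keyed by the folder path, so the metadata scales with the namespace instead of piling up on one server.

```python
# Sketch: one metadata object per folder, stored in the object store
# rather than on a dedicated metadata server. Purely illustrative.
import json

object_store = {}  # stand-in for the distributed object store

def write_folder_metadata(folder, entries):
    # One metadata object per folder, keyed by the folder's path.
    object_store[f"meta:{folder}"] = json.dumps({"entries": entries})

def list_folder(folder):
    # Listing a folder is a single object-store lookup.
    return json.loads(object_store[f"meta:{folder}"])["entries"]

write_folder_metadata("/projects", ["report.txt", "data/"])
print(list_folder("/projects"))  # ['report.txt', 'data/']
```

Because each folder maps to its own object, metadata load spreads across the object store's nodes along with the data itself.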

Performance-Enhancing Cache

To boost performance, software-defined storage solutions need caching devices. In this case, both speed and size matter – as well as price; finding the sweet spot is important. For an SDS solution, it is also important to protect the data at a higher level by replicating the data to another node before destaging the data to the storage layer.
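The write path described above can be sketched as follows. All names here are hypothetical: a write lands in the local cache, is replicated to a peer node's cache, and only then is acknowledged; destaging to the storage layer happens later.

```python
# Sketch of a replicated write cache: protect data on a second node
# before destaging it to the storage layer. Names are illustrative.

class CacheNode:
    def __init__(self, name):
        self.name = name
        self.cache = {}

    def replicate(self, key, data):
        self.cache[key] = data      # peer holds a second copy

storage_layer = {}                  # stand-in for the object store

def write(local, peer, key, data):
    local.cache[key] = data
    peer.replicate(key, data)       # data now survives a node failure
    return "ack"                    # safe to acknowledge the client

def destage(node, key):
    # Later, flush the cached write down to persistent storage.
    storage_layer[key] = node.cache.pop(key)

a, b = CacheNode("node-a"), CacheNode("node-b")
write(a, b, "blk-1", b"payload")
destage(a, "blk-1")
print("blk-1" in storage_layer)  # True
```

The acknowledgment goes out only after two cached copies exist, so a single node failure before destaging cannot lose the write.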

It becomes increasingly important to support multiple file systems and domains as the storage solution grows in both capacity and features, particularly in virtual or cloud environments. Supporting multiple protocols is just as important: different applications and use cases prefer different protocols, and sometimes the same data must be accessible across different protocols.

Flexible and Useful

Support for hypervisors is, of course, needed in the hybrid cloud. Therefore, the scale-out NAS also needs to be able to run in a hyper-converged configuration. Being software-defined makes sense here.

In a flat architecture with no external storage, the scale-out NAS must be able to run as a virtual machine and make use of the hypervisor host's physical resources. The guest virtual machines' (VMs') own images and data are stored in the virtual file system that the scale-out NAS provides. The guest VMs can use this file system to share files among themselves, making it a good fit for VDI environments as well. Now, why is it important to support many protocols? In a virtual environment, many different applications are running, each with different protocol needs. By supporting many protocols, we keep the architecture flat and can, to some extent, share data between applications that speak different protocols.

So then, a very flexible and useful storage solution comprises a software-defined foundation, support for both fast and energy-efficient hardware, an architecture that allows us to start small and scale up, support for bare-metal as well as virtual environments, and support for all major protocols.

Private and Public Areas

For organizations with more than one location, each site will have its own independent file system. A likely scenario is that different offices have a need for both a private area and an area that they share with other branches. So only parts of the file system will be shared with others.

To gain the flexibility needed to scale the file system beyond the four walls of the office, organizations can select a section of a file system and let other sites mount it at any given point in their own file systems. Synchronization is performed at the file-system level so that all sites share a consistent view of the shared section. Being able to specify different file encodings at different sites is also useful, for example when one site is used as a backup target.
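A small sketch of cross-site sharing, with entirely made-up site names and paths: each site keeps its own file system, and a chosen subtree of one site is mounted at an arbitrary point in another site's namespace.

```python
# Illustrative cross-site mounts: a subtree of one site's file system
# is grafted into another site's namespace. All names are invented.

sites = {
    "stockholm": {"/private/hr": "payroll", "/shared/designs": "spec-v2"},
    "london":    {"/private/legal": "contracts"},
}

mounts = []  # (source_site, source_subtree, target_site, target_path)

def mount(src_site, subtree, dst_site, at):
    mounts.append((src_site, subtree, dst_site, at))

def read(site, path):
    for src_site, subtree, dst_site, at in mounts:
        if site == dst_site and path.startswith(at):
            # Resolve through the mount; file-system-level sync keeps
            # both sites' views of this subtree consistent.
            return sites[src_site][path.replace(at, subtree, 1)]
    return sites[site][path]    # local, unshared part of the namespace

mount("stockholm", "/shared/designs", "london", "/mnt/designs")
print(read("london", "/mnt/designs"))  # 'spec-v2'
```

Note that only the exported subtree crosses sites; each office's private area (`/private/...` here) stays in its local file system.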

Going Hybrid

Vertical scaling solutions are simply not sustainable given the quintillions of bytes of data that are generated daily. A hybrid cloud-based storage system, however, produces clean, efficient and linear scaling up to exabytes of data. One file system spans all servers, offering multiple entry points to alleviate bottlenecks in performance. The possibility of adding nodes creates flexible scale-out. Native support of protocols and flash support for high performance are also important components. There’s no longer a need to buy more and more hardware and build bigger data centers to house it; a scale-out NAS offers cost savings, including energy savings, for a greener as well as a more efficient storage solution.

About the Author

Stefan Bernbo is the founder and CEO of Compuverde. For 20 years, Stefan has designed and built numerous enterprise-scale data storage solutions designed to be cost-effective for storing huge data sets. From 2004 to 2010, Stefan worked in this field for Storegate, the wide-reaching Internet-based storage solution for consumer and business markets, with the highest possible availability and scalability requirements. Previously, Stefan worked on system and software architecture for several projects at Swedish giant Ericsson, the world-leading provider of telecommunications equipment and services to mobile and fixed network operators.




Edited by Alicia Young