What the Words “Converged Infrastructure” Mean Today
There’s really no question that the modern organization is changing. New kinds of user demands are driving infrastructure teams to shift how they deliver resources and support applications. What’s most interesting is the shift away from the traditional data center technologies that have supported business functions for so long. Don’t get me wrong: there will still be a place for traditional servers, storage, and even networking technologies.
However, more customers are asking for alternatives to the compute, storage, and networking they have today. As a direct result of this technological evolution, we’ve seen a new breed of hardware impact the market. Businesses of all sizes are specifically asking for fewer moving parts while still delivering more resources to their users. And so the concept of a converged and unified platform was born. Today’s conversation will revolve around actually defining convergence and taking some of the marketing-driven confusion out of the term. Where does storage play a role? When is it only compute? What about cloud computing and integration? What are some actual use cases behind these kinds of technologies? Let’s take a closer look at convergence as it applies to network, storage, compute, and the cloud.
Converged Storage
Traditional storage is defined by a controller with a rack of disk shelves, which can hold SSD or HDD arrays. Converged storage is a bit different, of course. This kind of architecture removes a lot of moving parts and consolidates components into a highly redundant, node-based storage platform: everything in one box, scaled out with more nodes as needed. These kinds of solutions are built around capacity, performance, and optimization, and they arrive in a variety of sizes as all-flash and hybrid arrays. There are many use cases as well. Let’s assume that you already have a network and compute platform, but you want to deploy a different kind of storage for a set of virtual desktops. Instead of purchasing another shelf, you decide to offload that workload to a converged storage system. For this kind of project, the converged system might be less expensive, easier to manage, better integrated, and able to meet the needs of the business for the foreseeable future.
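As a rough sketch of the scale-out idea above, the model below shows a platform that grows by adding whole nodes rather than disk shelves. Everything here (the class name, node sizes, and the hash-based placement) is a simplified illustration for this article, not any vendor’s actual design:

```python
import hashlib

class ConvergedStorageCluster:
    """Toy model of a scale-out, node-based storage platform.

    Hypothetical illustration only: real converged storage products
    handle replication, rebalancing, and failure domains internally.
    """

    def __init__(self):
        self.nodes = []  # each node is (name, capacity_tb)

    def add_node(self, name, capacity_tb):
        # Scaling out means adding a node, not another disk shelf.
        self.nodes.append((name, capacity_tb))

    def total_capacity_tb(self):
        # Usable capacity grows with every node added.
        return sum(cap for _, cap in self.nodes)

    def place(self, object_id):
        # Deterministically map an object to a node by hashing its ID;
        # a simplified stand-in for a real placement algorithm.
        digest = int(hashlib.sha256(object_id.encode()).hexdigest(), 16)
        return self.nodes[digest % len(self.nodes)][0]

cluster = ConvergedStorageCluster()
cluster.add_node("node-1", 20)
cluster.add_node("node-2", 20)
cluster.add_node("node-3", 20)
print(cluster.total_capacity_tb())  # 60
```

The point of the sketch is the operational model: capacity and performance scale together per node, and placement is the software’s job rather than the administrator’s.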
Unified Architecture
This is a bit more of a traditional architecture approach with a few twists. Although network, storage, and compute are often still separate components in a unified architecture design, they’re all directly connected through a powerful fabric backplane. This is certainly great for a larger enterprise or data center provider, since this kind of architecture can scale to many chassis across a number of locations. A typical use case is the need to deploy a data center platform that standardizes on the same vendor across a number of hardware technologies. This is great for policy control, workload delivery, and of course security. Compared with some of the newer solutions out there, however, this kind of architecture can be a bit more expensive. Still, you do get what you pay for.
Converged Infrastructure
Now we begin to define some of the newer trends. Taking marketing terms out, converged infrastructure is the combination of storage and compute on one physical appliance. Yes, networking is integrated to some extent, but this is usually done through a hypervisor; there isn’t usually a dedicated piece of networking hardware involved. Here is where some of the confusion takes place: the terms unified architecture and converged infrastructure are sometimes used interchangeably. A good way to distinguish them is that unified architecture involves the broader spectrum of data center technologies (network, storage, and compute), while converged infrastructure is usually just storage and compute. These kinds of platforms are small, powerful, and dynamic in terms of applicable use cases. Furthermore, this is where we begin to introduce concepts around software-defined technologies: converged infrastructure allows you to integrate with hypervisor technologies and abstract services as they interact with the underlying physical architecture.

One use case is a very specific virtual application delivery project. Say you launch a new branch location with a few hundred users, and that location needs locally accessible resources for virtual applications and a few hosted desktops. Instead of a traditional server and storage architecture, you deploy a multi-node converged system to meet business demands. You save time on deployment, save money by deploying a less expensive platform, and simplify management by unifying data center controls.
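To make the software-defined angle concrete, here is a minimal sketch of what driving a converged platform through an API might look like. The endpoint URL, payload fields, and helper names are all hypothetical stand-ins, since every vendor exposes its own management API:

```python
import json
import urllib.request

# Hypothetical endpoint; a real converged platform would publish its own.
API_URL = "https://converged.example.local/api/v1/vms"

def build_vm_request(name, vcpus, memory_gb, storage_gb):
    """Describe the VM once; the platform decides node placement.

    Field names are illustrative, not a real vendor schema.
    """
    return {"name": name, "vcpus": vcpus,
            "memory_gb": memory_gb, "storage_gb": storage_gb}

def deploy_vm(spec, token):
    """POST the request to the (hypothetical) management API."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(spec).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {token}"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Describe the hosted desktop once; placement is the platform's problem.
spec = build_vm_request("branch-desktop-01", vcpus=2, memory_gb=8, storage_gb=60)
```

The design point is that the request describes *what* the workload needs, not *where* it runs; the converged software layer maps that description onto its nodes.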
Hyper (Ultra) Converged Infrastructure
Now we get to my favorite part. This is the future of converged infrastructure and, arguably, the possible future of data center resource design. Much like converged infrastructure, hyper-convergence approaches the problem from a software perspective. This is where a few new data center concepts emerge, specifically the conversation around commodity, vanity-free systems, as well as Open Compute Project-based designs. In this scenario, the software controls ALL resources living on top of the hypervisor. Through APIs and its own VM translation technologies, a hyper (ultra) converged system can be deployed on ANY piece of hardware and any kind of hypervisor. Imagine being able to basically build your own converged infrastructure platform simply by deploying the overlay software that helps you manage storage as well as compute. Furthermore, if you have a second location running a different hypervisor, the convergence software is intelligent enough to translate the VMs from one hypervisor to the other. As long as you meet the underlying hardware requirements, you can deploy hyper-convergence on blades, rack-mount servers, or whatever makes your use case most efficient. Through APIs, you can further integrate with public cloud components, work with other hypervisors, and extend your data center far beyond a private architecture. Next, you can integrate automation and orchestration controls, cross-platform, to develop a truly powerful next-generation data center and cloud model.
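The VM translation idea can be sketched as keeping one generic VM description and rendering it for whichever hypervisor runs at each site. The field names below are illustrative simplifications, not actual vSphere or libvirt schemas:

```python
# Hypothetical sketch of "VM translation": a hyper-converged layer keeps
# one generic VM description and renders it per target hypervisor.

GENERIC_VM = {
    "name": "branch-app-01",
    "vcpus": 4,
    "memory_mb": 8192,
    "disk_gb": 100,
}

def to_hypervisor(spec, target):
    """Render a generic spec into a hypervisor-specific shape.

    Output keys are simplified stand-ins, not real vendor formats.
    """
    if target == "vmware":
        # vSphere-style sizing keys (simplified).
        return {"name": spec["name"], "numCPUs": spec["vcpus"],
                "memoryMB": spec["memory_mb"], "diskGB": spec["disk_gb"]}
    if target == "kvm":
        # libvirt/KVM-style sizing keys (simplified).
        return {"name": spec["name"], "vcpu": spec["vcpus"],
                "memory": spec["memory_mb"], "disk": spec["disk_gb"]}
    raise ValueError(f"unsupported hypervisor: {target}")

print(to_hypervisor(GENERIC_VM, "kvm"))
```

In a real product this translation also covers disk image formats, network attachments, and guest tooling, which is where most of the actual engineering lives; the sketch only shows the shape of the idea.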
The future of the data center and cloud model will allow for more creativity on both the hardware and software sides of technology. We’ll be able to better integrate resources with powerful APIs and interconnect private, public, and hybrid cloud platforms. Step one, however, is understanding that there are a lot more options than just traditional data center server, networking, and storage platforms. Now you have the capability to segment entire workloads onto their own isolated piece of physical infrastructure. You can extend workloads cross-hypervisor while still creating an agile cloud environment. All of this is driven by new business demands and technology’s push to keep up.
Author: Bill Kleyman