Hyperconvergence by 2020? Virtualized storage is the first step

30 May 2019

For any professional systems administrator, there is a great deal of complexity with which to contend – certainly more than their forebears faced a generation ago.

Just a few years back, almost all storage – save a few offsite backup repositories – was in-house. As new storage was required, it was physically delivered and cabled in, and the new capacity made available where it was needed.

From week to week, apart from ensuring uptime and checking that archives for failover were being created properly, there was little sudden change in topology.

Today’s changing role of the IT function in the enterprise means that there is an increasing number of demands placed on infrastructure, perhaps epitomized by the new consumer-driven attitude of “services on demand, and services now.”

Because almost anyone now has the wherewithal to create a new service instance according to imperatives defined in a broader business sense, IT functions must facilitate and support those fluid, scaling, and unpredictable demands.

There’s a great deal of new technology supporting that new stance. Virtualization is probably among the most significant change drivers, spanning servers and desktops, and abstracting resources that stretch from bare-metal servers to virtual machine clusters and out into cloud providers like AWS, Azure, and their variants. And while none but the very cutting edge are fully hyperconverged as far as systems administration goes, that certainly seems to be the way the wind is blowing. If project X needs more cores, then (theoretically) it can be assigned them; if project Y could use a cluster of GPUs for some serious number-crunching, that’s available at the click of a mouse; likewise with more bandwidth, more storage, or an entire duplicate of a development platform.

Clearly, this malleability comes at a price. Partly, of course, the technology’s very newness has an effect – any new service or facility coming onto the market tends to have had a zero or two added to its list price. But in the storage convergence market, because serious, reliable players are thin on the ground, prices remain very high – and that’s despite the relatively low cost of even the very fastest storage media.

But much of the justification for high CAPEX and operating/support costs comes from the business potential in converged storage technology. The ability to switch in extra resources at peak times (or better, have resources switch themselves in as demand peaks) means that end-user (or customer) satisfaction levels remain high. To a service user, a slow service is an offline service, and millions of dollars of revenues can be lost while a business-critical system stumbles under peak workloads, all for the want of a few hundred gigabytes.

Conversely, there are similarly scaled savings to be made through smart resource allocation, at which virtualization excels. Scaling back resources and re-purposing them is a major source of savings, and the management systems that typically control and monitor converged storage infrastructures are often good at spotting bottlenecks that need work, or masses of underused storage.

It’s here that significant savings can be made, and the best savings are the ones that can be applied repeatedly, well into the future. That way, hyperconverged infrastructure investments create a long-term ROI. Every new project (or the retirement of a project or working platform) presents an opportunity to recoup, save, redeploy, and, basically, make best use of what’s already in the racks, whether in-house or in the cloud. Every time that happens, there’s less for systems administrators to do, and fewer resources need buying in, deploying, and maintaining.

The knock-on effect for the systems administrator is that, despite a constantly changing network topology, network management becomes a great deal simpler. The technology acts as a platform from which enterprise-level administrative control is exercised – both for the IT function and for users oblivious to the networks on which they operate. And that’s a great thing, at all levels of the business.

Here at TechHQ, we’re looking at three suppliers of hyperconverged, converged or abstracted storage. As companies at the edge of this relatively new technology, each has a unique take on the market and its requirements. We hope one of the following solutions will suit your growing business.

STARWIND

Gartner has already named StarWind as one of the providers of virtualized technology it deems a “cool” niche player, and its offerings are certainly very fresh when compared to those of its more mainstream competitors.

Though its technology is proprietary (developed by in-house teams), the Virtual SAN platform on offer is available under a freemium model that’s straight out of the FOSS playbook. In short, if you and your team are happy with a CLI (PowerShell) interface, the platform is available for download – and that’s not a limited, try-now-pay-later offering; it’s fully featured and ready to be deployed at enterprise level. Support is where StarWind earns its income (it has an excellent reputation in this area), so organizations (like the UK’s Oxford University) can just get on with their day-to-day business.

The Virtual SAN solution comes in Hyper-V or vSphere variants and runs on just about any commodity hardware out there, so companies can get all the benefits of storage virtualization, like massive scalability, without having to sell off racks of perfectly good kit and replace them with proprietary boxes. This is fully converged storage at enterprise grade, yet cheaper than its competitors by an order of magnitude or more. You can read more about StarWind and the company’s Virtual SAN offerings here.

VMWARE

Capitalizing on the position it carefully carved out in the early days of virtualization, VMware offers its vSAN platform, which the company sees as something of a stepping-stone toward fully hyperconverged infrastructure. The all-flash architecture of vSAN gives business-critical applications very quick response times (other circumstances notwithstanding), and whether it’s a simple ROBO deployment or one that underpins mission-critical databases, VMware remains the de facto virtualization supplier in many minds.

As you might expect, vSAN is fully optimized for vSphere, and it is also quite at home running in remote clouds, making those resources available either as part of a larger pool or as a mirrored failover or backup facility.

Like StarWind, VMware is keen to stress the potential for data center cost savings (do we sense a theme developing here?), which in VMware’s case come from two main drivers: reduced remote licensing requirements, and better resource use and faster allocation.

The underlying message appears to be: use what’s already at hand – and, of course, move one step closer to a fully hyperconverged infrastructure.

DATACORE

Unlike some of its competitors, DataCore Software does only one thing: virtualized storage. Its solution utilizes just about any storage hardware (so no vendor lock-in or proprietary issues), and it’s not fussy about which hypervisor is deployed, or where.

This kind of flexibility in the infrastructure on which the solution runs has its attractions for organizations with massive requirements, like NASA, but also for smaller companies wishing to get a foot on the hyperconvergence ladder.

Out of the box, there’s asynchronous replication and software-defined storage pool creation and management. The latter facility means that data silos can be created for discrete purposes, at will, but unlike their hardware counterparts, the silos can be removed once projects have been completed, and the resources returned to general use.

The DataCore solution integrates well with cloud providers, and development teams already engaged in container-based projects will find the platform seamless to use. For developers and production systems alike, there’s on-the-fly optimization, with 24/7 QoS monitoring, making virtual storage fit for just about any use in modern business.

* Some of the companies featured on this editorial are commercial partners of TechHQ