Hyperconvergence’s advantages, coming soon to a rack near you
Selling the concept of hyperconvergence has been, since the technology’s emergence a few years ago, an uphill battle.
That’s partly because hyperconvergence is, as a concept, difficult to explain to a layperson (or, as they are sometimes known, a key decision-maker). And even among those in a company who have taken the time to master the basic ideas behind the abstraction of hardware and software, it is sometimes dismissed as “a data center thing”, a phrase usually coupled with a question like, “I thought we were moving everything to the cloud, anyway?”
Similar technology already has a firm place in enterprise IT: virtual servers continue to drive the vast majority of the world’s online commerce, and virtual runtimes in the form of containers are starting to convince even the most skeptical in the business world of their worth.
With maturity comes acceptance, of course: ask any Linux advocate over a certain age about the struggles they had getting their platform to where it is now, with open source ruling the world. Selling a concept as enterprise-ready takes time and needs a provable business case.

At TechHQ, we think that hyperconvergence has matured enough, both as a technology and as a market. There are now indubitably viable offerings that bring significant advantages to companies of (nearly) any size. And some are simple enough, at least outwardly, to be deployed as plug-and-play appliances that deliver the benefits the technology’s advocates have been talking about all along: service and app scalability, resource unification, and simple management. Choice, maturity, and financial advantages: that’s what makes a compelling business case.
Hyperconvergence offers the type of system that can respond in seconds to massive bursts in demand without any intervention from operators, then drop right back to save bandwidth or resources. The homogenization of infrastructure, compute, and storage means that the actual physical infrastructure is, if not irrelevant, then less important than it might have been a few years ago. Add to that the ability to create a simpler network, a largely amorphous playground spanning different clouds, the edge, and data centers, and hyperconvergence looks less like a new technology that excites the nerds and more like a sound business framework.
It certainly removes from the boardroom a lot of discussion about the negative impact on projects of creating and reconfiguring resources. At their best, hyperconverged infrastructures require almost no physical plugging-in of cables, assembling of rackmounts, or leasing of data center real estate. The advantages go deeper, too, into departments right across the enterprise. Everyone benefits from app availability and high QoS metrics, from marketing to customer after-care. Developers love the ability to spin up resources at will and duplicate whole stacks and data stores (and to do so with relative impunity, thanks to clever on-the-fly de-duplication, compression, and data management). Deploying new apps into production gets easier, and testing cycles get shorter, too. DevOps will love the tech; the rest of the organization will love its results.
As hyperconvergence matures, we’re featuring three suppliers of hyperconvergence technology that are successfully making the transition into the mainstream of the IT procurement landscape. Whether your organization is jumping wholesale into a hyperconverged infrastructure, or you’re just testing the water, we’d recommend talking to the people at one (or all) of the following companies. Each has impressive use case histories that prove that hyperconvergence is far from beta: it’s where the grown-ups are.
StarWind shatters the myth that hyperconvergence is a technology only available to organizations with billion-dollar turnovers. StarWind HCAs (hyperconverged appliances) are a range of physical devices (all-flash, spinning-platter, or hybrid) built on Dell OEM or StarWind-branded server hardware that can be installed in a data center, or even dropped into a remote or branch office (ROBO), to create a fully integrated, software-defined platform.
The engineering team at StarWind helps choose the right hardware for each customer, migrates apps, and integrates the new system at no extra cost. Workloads can then be moved between on-premises systems, AWS, Azure, Google Cloud, and Oracle Cloud, delivering the benefits of hyperconvergence immediately. To add capability, storage, and power, simply snap in more appliances.
There are two further USPs (you can read more about StarWind and its unique offering here on TechHQ) which deserve honorable mention. The first is the support model, which consists of proactively monitoring your systems’ integrity and addressing problems before they develop (there’s no need for a ticket system); the second is the option to pay as you go, so there’s no big red figure on the company’s CAPEX ledger. For those with cash reserves, the cost of a single node (you can start with just one) is less than half the price of a Tesla Model 3.
Google’s massive, city-sized data centers run a proprietary file system called the Google File System (GFS). GFS was developed in part by one of the founders of Nutanix, who left the global search giant to start his own company. Like the file system he helped write, Nutanix’s hyperconvergence solutions allow the simple management of massive, and massively distributed, compute and storage resources.
Nutanix’s hyperconvergence software runs on (nearly) every platform in everyday use in data environments today and is happy working alongside other virtualization platforms and operating systems. While the technology underpinning all this is impressive, the company itself has a laser-like business focus, keen to stress the advantages and “wins” that a hyperconverged network brings: scalable apps and services, lower network-management costs, and the ability to reallocate resources on the fly, whether manually, triggered by events (demand peaks, for instance), or according to preset rules.
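To make the “preset rules” idea concrete, here is a minimal sketch of what a rule-based reallocation policy looks like in principle. The function name, thresholds, and node limits are all hypothetical illustrations, not Nutanix’s actual API:

```python
# Illustrative sketch only: a simplified rule-based scaling policy of the
# kind hyperconverged platforms apply automatically. All names and numbers
# here are hypothetical, not any vendor's real interface.

def plan_reallocation(cpu_load, node_count, high=0.80, low=0.30,
                      min_nodes=1, max_nodes=16):
    """Return the node count a preset scaling rule would ask for."""
    if cpu_load > high and node_count < max_nodes:
        return node_count + 1   # demand peak: scale out
    if cpu_load < low and node_count > min_nodes:
        return node_count - 1   # quiet period: scale back in
    return node_count           # within bounds: leave as-is
```

The point of such rules is that the platform, not an operator, runs this loop: a demand peak adds capacity within seconds, and a quiet period releases it again.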
To date, the company serves more than 11,000 customers worldwide, reflecting its place as the first to market with an enterprise-ready hyperconverged stack. Multinationals, governments, charities, and even startups use the platform. Like they used to say about Big Blue: no one ever got fired for buying Nutanix.
NetApp’s offering is comprehensive and deep: it can provide a few HCI nodes, implement a full SaaS infrastructure for a brand-new data center, or deliver just about anything in between. Each solution uses the NetApp Data Fabric, so there’s a ready-made environment on hand as the business scales or shifts in strategy or overall direction.
NetApp HCI (hyperconverged infrastructure) hardware comes in various form factors and is designed to make hyperconverged infrastructure simple enough for small-business owners – as opposed to seasoned IT professionals – to deploy and use. Companies can scale compute and storage separately if required, and enterprise-scale HCI is achieved by merely adding more units. Out of the box, the platform can pull into one management console all services across an extended network that reaches cloud instances (such as AWS), edge installations, remote offices, and numerous data centers.
Control can be as granular as you like, with compute, storage, and bandwidth assigned to individual apps at will; alternatively, the HCI software can predict demand over time and automatically reconfigure resources as required.
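The combination of per-app assignment and demand prediction can be sketched as follows. This is a deliberately naive illustration, assuming a moving-average forecast and an invented IOPS-per-vCPU ratio; none of these names belong to the NetApp HCI API:

```python
# Hypothetical sketch of per-app resource policies plus a naive demand
# forecast. Not NetApp's API: the class, functions, and the 500-IOPS-per-vCPU
# ratio are illustrative assumptions only.

from dataclasses import dataclass

@dataclass
class AppPolicy:
    name: str
    vcpus: int        # compute assigned to this app
    storage_gb: int   # storage quota
    mbps: int         # bandwidth cap

def forecast_next(samples):
    """Predict next-interval demand as a simple moving average."""
    return sum(samples) / len(samples)

def resize_for_demand(policy, recent_iops, iops_per_vcpu=500):
    """Grow or shrink an app's compute allocation to match forecast IOPS."""
    predicted = forecast_next(recent_iops)
    needed = max(1, round(predicted / iops_per_vcpu))
    return AppPolicy(policy.name, needed, policy.storage_gb, policy.mbps)
```

A real platform would use far more sophisticated forecasting, but the shape is the same: each app carries its own resource policy, and the system revises that policy from observed demand rather than waiting for an administrator.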
NetApp Services offers consultancy from the company itself; alternatively, customers can find a local certified partner, or use a combination of guidance and DIY.
*Some of the companies featured are commercial partners of TechHQ
11 December 2019