Software-defined business priorities? Hyperconvergence today

23 April 2019

Attitudes in IT departments have changed in the last few years to reflect a more strategic, empowering function in the business, but the old dichotomy remains: technology and the people who control it can power a business’s processes, but always within carefully defined limits.

Those limits are sometimes the result of good practice, like cybersecurity measures, and are sometimes caused by infrastructure. A new venture or direction for the organization could well have been approved, planned and audited, but the IT department throws a virtual spanner in the works: it needs a new hardware procurement round, or new resources created for the development of new services and applications.

This type of situation sometimes, quite wrongly, gives the IT function in a business a bad reputation; it’s the apparent feet-dragging that sticks in the mind, not the unseen yet entirely mandatory provisions that simply have to be in place. Consideration must be given to issues like new rack space, cybersecurity measures, network bandwidth allocation, backup and archiving capacity, failover, virtualization provision, cluster creation – the list is as long as it is opaque to the non-technical.

And unless development teams get the right tools to do what’s required of them, progress might be slower than ideal, and testing rounds and QA procedures may have to be extended, so the total time-to-production can come as something of a shock to the initial decision-makers.

Occasionally, however, technology comes along (after an appropriate maturing period) that is a game-changer. In recent memory, the virtualization of servers was one such breakthrough; more recently, containerization has begun to make significant inroads into development times – think of containers as a series of ready-mades that can be duplicated and bolted together. Each increase in processing power, storage speed and capacity, and network bandwidth increases technology’s potential. Virtualization in all its forms relies on the speed and inherent power of recent hardware, so there’s no discernible difference between the abstraction of a service in software and a “real-life” physical device.


In the modern enterprise, the entire IT infrastructure can now be virtualized, or abstracted – a practice known as hyperconvergence, with the resulting setup referred to as HCI, or hyperconverged infrastructure. That typically comprises compute, storage and networking, which can be in multiple places, like in-house data centers, edge installations such as remote offices, and public and private clouds. Creating a hyperconverged IT system has many advantages; in short, the total becomes more than the sum of its parts. HCIs offer the following benefits:

– the organization can deploy any portion of its overall IT for specific applications or services, and change the allocated resources at will, irrespective of how, and from where, those resources are supplied.

– according to set rules, triggered events or by manual intervention, resources can be reallocated on the fly.

– bursts in demand for specific applications or services are addressed automatically by HCI’s controlling mechanisms, which intelligently re-prioritize available infrastructure so end-user experiences are not affected negatively (a simplified sketch of this kind of rule follows the list).

– similarly, sudden reductions in application or service use can free up resources.

– developers can rapidly create working environments based on production services to test, disassemble, or duplicate existing systems.

– developers can allocate resources quickly for new projects, and the moving of a project into production status is a few mouse clicks away.

– intelligent routines in HCI systems de-duplicate and compress data on the fly, continuously maximizing efficiency in storage, computing power and network traffic.
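
To make the rules-and-triggers idea concrete, here’s a minimal sketch of threshold-based reallocation. It is purely illustrative – the class, thresholds and figures are invented for this article and don’t reflect any vendor’s API:

```python
# Hypothetical sketch of the kind of rule-driven reallocation an HCI
# control plane performs; names and thresholds are invented.
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    cpu_util: float      # current CPU utilization, 0.0-1.0
    vcpus: int           # vCPUs currently allocated

SCALE_UP_AT = 0.80       # add capacity above 80% utilization
SCALE_DOWN_AT = 0.30     # reclaim capacity below 30% utilization

def rebalance(workloads: list[Workload], free_vcpus: int) -> int:
    """Apply simple threshold rules, returning the remaining free pool."""
    for w in workloads:
        if w.cpu_util > SCALE_UP_AT and free_vcpus > 0:
            w.vcpus += 1                 # burst: grab a vCPU from the pool
            free_vcpus -= 1
        elif w.cpu_util < SCALE_DOWN_AT and w.vcpus > 1:
            w.vcpus -= 1                 # idle: return a vCPU to the pool
            free_vcpus += 1
    return free_vcpus

pool = rebalance(
    [Workload("web-frontend", 0.92, 4), Workload("nightly-batch", 0.10, 8)],
    free_vcpus=2,
)
print(f"{pool} vCPUs left in the shared pool")
```

A real control plane would weigh far more signals (storage, bandwidth, priority tiers), but the principle is the same: set rules once, and the platform reshuffles resources without a procurement round.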

Here at TechHQ, we’re looking at three suppliers of hyperconverged solutions, with which companies can test the HCI waters, or start to move entire networks over to this revolutionary next step in software abstraction and virtualization. Once thought to be the remit of the high-end data center administrator, hyperconverged IT infrastructures offer their advantages on any scale, for any business. Read on to discover each supplier’s USPs.

LENOVO

The reliability, performance, and security of any IT provision have always been at the core of Lenovo’s products, from high-end enterprise offerings down to consumer goods. In hyperconverged solutions, reliability, performance, and data security are of paramount importance, given the management function of the devices at their heart. Lenovo’s ThinkAgile HX Series provides a rock-solid framework for a hyperconverged infrastructure, and both the dedicated appliances and certified nodes run Nutanix software – the platform that started the HCI revolution.

Lenovo ThinkAgile HX provides a range of options for any size of business, from smaller units (still capable of running a fully virtualized, multi-branch organization) right up to enterprise-level hardware that can deploy storage, compute and infrastructure fit for mission-critical applications. The entire platform offers massive scalability (burst deployments to cloud platforms, for example), and the company’s round-the-clock ThinkAgile Advantage Single Point of Support makes the solution highly reliable and business-oriented.

The ThinkAgile portfolio enables you to unite and control all your IT resources – remote branches, data centers, and hybrid clouds – in one dashboard. You can read more about the complete offering here on TechHQ.

NETAPP

NetApp’s offering is both comprehensive and deep; it can provide a few HCI-ready nodes or implement a full SaaS infrastructure for a brand-new data center, plus anything in between. The solutions use the NetApp Data Fabric, so a ready-made environment is there as required; when the business scales or shifts in strategy or overall direction, the HCI infrastructure is ready.

NetApp HCI hardware comes in various forms and sizes, both virtual and physical, and is designed to make the deployment of hyperconverged infrastructures simple enough for small business owners, not just seasoned IT professionals. Companies can scale compute and storage independently simply by adding more units, making enterprise-scale HCI effectively a plug-and-play proposition. Out of the box, a single unit can pull all services across an extended network into one management console – reaching cloud instances (like AWS), edge installations, remote offices, and as many data centers as your enterprise owns.

Control can be as granular as required, with individual apps assigned compute, storage and bandwidth at will; alternatively, the HCI software can predict demand over time from past usage and automatically reconfigure resources as required.
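
As a purely hypothetical illustration of what “predicting demand from past usage” can mean at its simplest, the sketch below sizes an allocation from a moving average of recent load. Real schedulers use far richer models, and every name and number here is invented:

```python
# Illustrative only: forecast the next period's demand with a simple
# moving average, then provision with headroom above the forecast.
from collections import deque

HEADROOM = 1.25                      # allocate 25% above forecast demand

class DemandForecaster:
    def __init__(self, window: int = 24):
        self.samples = deque(maxlen=window)   # e.g. hourly IOPS readings

    def observe(self, iops: float) -> None:
        self.samples.append(iops)

    def forecast(self) -> float:
        # Average of the most recent readings in the window.
        return sum(self.samples) / len(self.samples) if self.samples else 0.0

forecaster = DemandForecaster(window=6)
for reading in [1200, 1350, 1500, 1480, 1620, 1700]:   # past usage
    forecaster.observe(reading)

allocation = forecaster.forecast() * HEADROOM
print(f"provision for ~{allocation:.0f} IOPS next period")
```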

NetApp Services offers the company’s own consultancy, or customers can find a local certified partner – or use a combination of expert guidance, DIY, and the platform’s inherent usability.

HEWLETT PACKARD ENTERPRISE (HPE)

Hewlett Packard Enterprise’s HCI offering came about through an acquisition – often the way the big players either move into a new market or expand their customer base. SimpliVity, snapped up by HPE a few years ago, offers hyperconvergence technologies that reduce cost and complexity, and can power many areas of business in any size of organization. Hyperconvergence (a term coined by a technology journalist rather than by any of the companies featured here) originally referred to the software abstraction of the infrastructure found in data centers: servers, network hardware, gateway devices, and security systems. Since then, however, HCI has come to mean software abstraction of “anywhere the data is.”

In purely IT disaster recovery terms, hyperconvergence improves recovery point and recovery time objectives, reducing backup and recovery times to seconds and vastly improving the ratio of logical data to physical storage. However, it’s HPE SimpliVity’s capability to provide flexible, robust IT infrastructure that’s simple to manage that makes the solutions so attractive to organizations, especially those invested in growth.
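
To see how that logical-to-physical ratio multiplies out, here’s a back-of-the-envelope calculation with invented figures (actual ratios vary widely with the data set):

```python
# Invented figures showing how dedup and compression combine into a
# logical-to-physical storage ratio; real-world results differ.
logical_tb = 100.0            # data as applications see it
dedup_ratio = 4.0             # 4:1 from removing duplicate blocks
compression_ratio = 2.0       # 2:1 from compressing what remains

physical_tb = logical_tb / (dedup_ratio * compression_ratio)
efficiency = logical_tb / physical_tb

print(f"{logical_tb:.0f} TB logical fits in {physical_tb:.1f} TB physical "
      f"({efficiency:.0f}:1 effective ratio)")
```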

Hewlett Packard Enterprise is the world’s largest provider of enterprise data center solutions, with over 80 percent of Fortune 500 manufacturing companies using HPE data center products. When the enterprise adopts an ITaaS model, IT resources can be quickly provisioned for any workload while the business maintains the management and control needed across the entire infrastructure – irrespective of platform. That capability allows the business to dictate change and lets the IT department respond as a strategic player.

*Some of the companies featured are commercial partners of TechHQ