Edge computing platforms celebrate on-site wins
What do restaurants, wind farms, and virtual reality gaming have in common? The answer, it turns out, is edge computing. While cloud services have transformed the way that enterprises operate, performing much of the heavy lifting in terms of computing power and data storage, it remains important to have hardware on the ground, typically as close to users as possible. The hurdle for operators is getting local and cloud services to work in harmony, consuming minimal resources, and running like clockwork – tasks that have become much easier thanks to the rise of edge computing platforms.
To discover why edge computing has become popular, let’s circle back to our original examples. Large food chains need multiple point-of-sale devices (which can include self-service screens) per restaurant, plus digital displays in the kitchens – sometimes in locations where network coverage is variable or patchy. Intermittent network availability, or the risk of outages, makes any retailer – not just restaurateurs – nervous. And the bigger the firm, the bigger the losses when services are interrupted.
By running their operations on an edge computing platform, restaurants can keep data services continuously available, with a contingency that runs locally when the connection drops. Offering the best of both worlds, transactions are then synchronized whenever a connection is available, preserving the information advantages of a centralized dashboard. To do this, edge computing platform providers such as Sunlight – which has offices in the UK and Greece – enable operators to run on-site equipment as a ‘micro-cloud’ on a rugged hardware stack. And, thanks to the edge platform, all of these micro-clouds can be seamlessly joined together using software-defined infrastructure, giving centralized monitoring and management.
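The store-and-forward pattern behind this can be sketched in a few lines. The names below (`EdgeStore`, the `uplink` callable) are illustrative, not Sunlight’s actual API: writes always succeed locally, and a sync loop forwards the backlog once the link recovers.

```python
import json
from collections import deque

class EdgeStore:
    """Sketch of a store-and-forward transaction log: writes always
    succeed locally; sync() forwards queued records when the link is up."""

    def __init__(self, uplink):
        self.uplink = uplink    # callable: uplink(record) -> True on success
        self.pending = deque()  # transactions not yet acknowledged centrally

    def record(self, txn):
        # Durable locally first, so the till keeps working offline.
        self.pending.append(json.dumps(txn))

    def sync(self):
        # Drain the queue in order; stop at the first failure and retry later.
        while self.pending:
            if not self.uplink(self.pending[0]):
                return len(self.pending)  # still offline; report the backlog
            self.pending.popleft()
        return 0

# Simulate a link that is down, then recovers.
online = {"up": False}
sent = []
store = EdgeStore(lambda rec: (sent.append(rec) or True) if online["up"] else False)
store.record({"sku": "burger", "qty": 2})
store.record({"sku": "fries", "qty": 1})
assert store.sync() == 2  # offline: both transactions still queued locally
online["up"] = True
assert store.sync() == 0  # back online: queue fully drained
print(len(sent))          # 2 records reached the 'central' side
```

A real platform layers durability (disk-backed journals), conflict handling, and fleet-wide management on top, but the ordering guarantee – local commit first, central acknowledgment later – is the core of the disconnected-operation contingency.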
Markets on the edge
Physical retail is a strong prospect for edge computing platform providers. To survive, stores are having to ramp up their sales game to lure customers back onto high streets and into malls. Smart mirrors, and digital signage that adapts to customer traffic to promote different offers and services, are just a couple of examples of endpoints that owners will be looking to add to a fault-tolerant network with the ability to make the most of the analytics on offer. There’s also the “traditional” anti-theft CCTV system that’s now capable of tracking potential offenders from store to store – as long as the edge stack has enough computing power to enable these more advanced facilities.
Keeping such an edge arrangement in mind, it’s easy to see why the edge model would work well for industrial applications too. Examples include wind farms and utilities infrastructure, where information needs to be gathered locally but coordinated so that all of the data streams can be automatically patched together to give an overview of operations. Traffic needs to run in the other direction too, so that sites can be updated remotely without a truck roll – deploying teams locally eats into profits and causes delays.
A further advantage to edge computing is latency, which plays into the world of online gaming (one of our examples at the top of the article), but applies equally to other fast-moving scenarios such as V2X (vehicle-to-vehicle, vehicle-to-infrastructure, vehicle-to-pedestrian, etc.) communications. Running dynamic applications can demand sub-50 ms roundtrip latency, according to STL Partners – a consultancy group that has worked with Vodafone, Deutsche Telekom, Hewlett Packard Enterprise, and other big names in the telecoms industry.
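Whether a deployment actually meets a budget like STL Partners’ sub-50 ms figure is something operators measure rather than assume. As a minimal sketch – using a local echo server to stand in for an edge node, so the numbers here only demonstrate the method – a roundtrip probe looks like this:

```python
import socket
import threading
import time

def echo_server(sock):
    # Accept one client and echo everything back, standing in for an edge node.
    conn, _ = sock.accept()
    with conn:
        while data := conn.recv(64):
            conn.sendall(data)

srv = socket.socket()
srv.bind(("127.0.0.1", 0))  # loopback stand-in; a real probe targets the edge site
srv.listen(1)
threading.Thread(target=echo_server, args=(srv,), daemon=True).start()

cli = socket.create_connection(srv.getsockname())
samples = []
for _ in range(20):
    t0 = time.monotonic()
    cli.sendall(b"ping")
    cli.recv(64)
    samples.append((time.monotonic() - t0) * 1000)  # roundtrip in milliseconds
cli.close()

# Tail latency matters more than the average for interactive workloads.
p95 = sorted(samples)[int(0.95 * len(samples)) - 1]
print(f"p95 roundtrip: {p95:.2f} ms, within 50 ms budget: {p95 < 50}")
```

Over loopback this will be well under budget; the same probe pointed at a remote endpoint shows how each extra network hop eats into the 50 ms allowance.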
One of the challenges in telecoms, in terms of managing latency, relates to provider infrastructure. “Each operator has a different topology, which means that although each mobile network is made up of an access, transport and core network, the number of hops it takes for traffic to get through the network is not equivalent across operators,” writes Dalia Adib, edge computing practice lead at STL Partners.
Moving compute to the edge – for example, to perform rendering tasks in a VR headset, while sending lower-bandwidth orientation data over the network – takes the strain off the communications infrastructure. And while the concept is compelling, the bottleneck until the advent of edge computing platforms has been simplifying the integration process for customers.
Today, thanks to progress made in virtualization and software-defined networking and storage – which allows administrators to initialize, control, change and manage systems programmatically – edge computing platform providers can offer their customers features such as zero-touch edge device onboarding. Platforms can work with a wide range of hardware, different CPUs such as Intel or Arm designs, and apps built for Linux, Windows, and other operating systems.
A Lenovo SE350 edge server running Sunlight’s NexVisor hyperconverged infrastructure (or HCI – a single system combining virtualization, servers, storage, and networking) won an innovation award in May 2022 by providing a compact, high-performance solution for running data-intensive applications at the edge. “We created the ThinkSystem SE350 to be small and rugged enough to run anywhere – it can even be hung on the wall of a smart factory – without compromising on performance,” explained Lenovo’s Florian Pawletta, who worked with the Sunlight team.
A big advantage of the arrangement is the low memory overhead delivered by the HCI stack (less than 5% for the NexVisor solution), which maximizes the amount of space that’s left to run customer applications. And in a factory setting, this could include gathering sensor data from the manufacturing plant to inform predictive maintenance, as well as feeding into machine intelligence and analysis systems.
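At its simplest, that kind of predictive-maintenance step is a statistical check running next to the sensors. The sketch below – an illustrative example, not any vendor’s pipeline – flags a vibration reading that strays more than a few standard deviations from its recent baseline:

```python
from statistics import mean, stdev

def flag_anomalies(readings, window=10, k=3.0):
    """Flag a reading as anomalous when it sits more than k standard
    deviations from the mean of the preceding window of readings."""
    flags = []
    for i in range(window, len(readings)):
        base = readings[i - window:i]
        mu, sigma = mean(base), stdev(base)
        # max() guards against a zero-variance baseline.
        flags.append(abs(readings[i] - mu) > k * max(sigma, 1e-9))
    return flags

# Steady vibration signal with one injected spike (a bearing starting to fail).
signal = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95, 1.0, 1.1, 0.9, 1.0,
          1.02, 0.98, 5.0, 1.01, 0.99]
flags = flag_anomalies(signal)
print(flags.index(True) + 10)  # position of the spike in the signal -> 12
```

Running this at the edge means only the flagged events – not the raw sensor firehose – need to travel upstream to the central analysis systems.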