Throwing some light on the possibilities of object-storage tech in business today
A big challenge for the advocates of a new product in the tech space is how to sell its benefits to a broader audience than the relatively few people who understand the technology in detail.
Those with the right type of mind will no doubt take an interest in a new product solely for the inventive way it works; perhaps for how the software approaches a problem from a new angle using cutting-edge methods.
Such individuals, excited by new technology for its own sake, were once dismissively termed geeks or nerds – and perhaps those terms are still used, but the dismissive nature of the moniker is fading. The geeks are, after all, inheriting the Earth.
In the business world, these days inextricably bound up with technology, the emphasis is very much placed on what new technology can contribute to the business’s aims and objectives. Blockchain may be somewhat in vogue, but until a cast-iron use case proves significant, practical and economically viable, the business community isn’t putting its money down.
Most individuals running businesses, for instance, wouldn’t be able to describe the concept of an electronic file storage system, much less understand the concepts behind ZFS caches, S3 buckets or abstractions of distributed storage nodes. But mention the ability to store large amounts of data that can be retrieved very quickly, and interest may be piqued. Add the fact that such storage is massively scalable (without, necessarily, a large CAPEX) and the operations-oriented professional begins to see the possibilities.
This is just the case with object storage. Rather than talk to C-level personnel about metadata pools and abstraction, it’s the more astute “geek” who can point to Amazon’s world-straddling business empire (as one example) and say, “It’s how the big guys manage to store all that data so easily.”
Clearly, a supplier of storage in today’s market needs the technical “chops” to explain a product to the IT department; otherwise, any solution is only smoke and mirrors. But it’s the decision makers, the ones for whom the business benefits are clear, whom the digital storage supplier must convince.
Object storage is scalable in ways that data center owners could only have dreamed about just seven or eight years ago. Imagine being able to adjust the amount of space available for data seamlessly, with only a few mouse clicks to deploy it. With the providers we consider further down this page, that’s a reality. Each of the two suppliers’ platforms offers a slightly different solution to the old problem: how to add storage easily. In the past, adding storage meant buying, installing and carefully configuring individual hardware units, or renting remote storage and stringing it together with all a company’s other resources by hand.
In addition to the speed at which object storage can scale, the data thus managed is accessible at very low latencies, typically in milliseconds. Clearly, the faster the throughput required, and the higher the availability, the more those repositories will cost; but in almost every case, the costs are lower than those of “traditional” storage deployments.
Because new storage can be brought online so rapidly, an object storage platform suffers no disconnect as resources are added (or indeed removed). Resources can be distributed between the in-house data center and several clouds, such as AWS, Google Cloud or Microsoft Azure, and can be managed, if required, by technologies from the likes of Quantum, Veritas, or Rubrik.
With data aggregated from discrete silos, there are always security concerns. However, the suppliers below each offer solutions to this issue, such as encryption at rest, SSL/TLS for data in transit, and even fragmentation of data across multiple nodes with some chunks reserved as parity.
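That parity approach works along the lines of the erasure coding familiar from RAID: data is split into chunks held on separate nodes, plus extra parity material, so a lost fragment can be rebuilt from the survivors. A minimal sketch of the idea, in plain Python; the single XOR parity chunk here is a deliberate simplification of real multi-parity erasure codes, and is not any one supplier’s actual scheme:

```python
def fragment(data: bytes, k: int) -> tuple[list[bytes], bytes]:
    """Split data into k equal-sized chunks plus one XOR parity chunk."""
    size = -(-len(data) // k)  # ceiling division so every chunk is full-size
    chunks = [data[i * size:(i + 1) * size].ljust(size, b"\0")
              for i in range(k)]
    parity = bytes(size)  # start all-zero, then XOR-fold every chunk in
    for c in chunks:
        parity = bytes(a ^ b for a, b in zip(parity, c))
    return chunks, parity


def recover(chunks: list[bytes], parity: bytes, lost: int) -> bytes:
    """Rebuild the chunk at index `lost` by XORing the survivors with parity."""
    rebuilt = parity
    for i, c in enumerate(chunks):
        if i != lost:
            rebuilt = bytes(a ^ b for a, b in zip(rebuilt, c))
    return rebuilt


payload = b"object storage demo payload"
chunks, parity = fragment(payload, 4)
# Losing any one chunk (say, one node goes offline) is survivable:
assert recover(chunks, parity, 2) == chunks[2]
```

With one parity chunk, any single node can fail without data loss; production systems store several parity chunks so that multiple simultaneous failures are survivable.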
If your organization needs massive, scalable storage that’s much, much cheaper than building new hardware facilities, comes with baked-in security, and is managed from a single console (or similar), read on.
With the industry’s most S3-compatible object storage, it comes as no surprise to learn that Cloudian’s HyperStore platform (on-premises storage modules) provides a cost-effective solution for limitlessly scaling storage for business and other data in one or multiple locations. HyperStore also integrates seamlessly with AWS, GCP, Azure and other S3-compatible cloud providers.
The HyperStore management console provides unified control of storage resources that may be spread across multiple locations, including in-house repositories on the company’s own hardware, and can even bring any NAS facilities into the ecosystem via the HyperFile offering.
HyperStore creates a unified view of all data across multiple locations, with all the potential advantages of metadata that drives faster and more insightful analytics. This means that the considerable investment made by organizations in storage (their own or third-party) can yield better productivity, through faster results and capabilities that weren’t available with legacy platforms.
Companies can scale up massively at will, by adding HyperStore modules or spinning up cloud storage with zero downtime (any S3-compatible endpoint can be brought into the mix), while encryption at rest plus encrypted traffic between nodes provide cast-iron security.
Cloudian’s object storage solutions are much favored by organizations such as video streaming or data capture companies, but in a practical business sense, its platform delivers safe, massively-scalable storage, plus central control and overall cost-efficiencies – did we mention cuts of up to 75 percent in storage costs? You can read more about Cloudian here.
Scality’s software-defined storage services cover a broad range of verticals: the media industry is clearly a big user of the technology (and Scality boasts some large, well-known household names among its roster of clients), as are security firms (video capture on a massive scale), ISPs, the public sector, and healthcare. The latter’s data requirements are growing as more medical information gets digitized; plus, of course, there are rich data capture methods in modern hospitals, such as 3D scans (MRI and so on).
The Scality RING enables IT teams to create massively-scalable storage lakes that accommodate data demands from modern apps with agility, effectively guaranteeing 100 percent uptime and fewer management headaches for provisioning teams. TCO can be “up to 90%” lower than traditional data centers according to Scality, as organizations can leverage mass storage from the cloud and other all-purpose suppliers.
Scality RING turns x86 servers into storage nodes that can join and scale under software abstraction, and the Scality S3 Connector does just what it says, bringing any S3-compatible store into the storage aggregation. S3-compatible APIs and SDKs “play nicely”, so the possibilities for Scality users are limited by business constraints rather than by infrastructure dictating what is, or isn’t, possible.
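That interchangeability is the whole point of S3 compatibility: to a client, the only difference between AWS itself and an S3-compatible store is which endpoint it talks to. A stdlib-only sketch of the idea; the internal hostname below is hypothetical, and real S3 clients such as boto3 accomplish the same switch via an endpoint URL parameter rather than hand-built URLs:

```python
from urllib.parse import quote

def s3_object_url(endpoint: str, bucket: str, key: str) -> str:
    """Build a path-style S3 object URL; swapping the endpoint is the
    only change needed to target a different S3-compatible store."""
    return f"{endpoint.rstrip('/')}/{bucket}/{quote(key)}"

# Same bucket and key, two different S3-compatible back ends
# (the second hostname is made up for illustration):
aws_url  = s3_object_url("https://s3.amazonaws.com", "backups", "2022/report.pdf")
ring_url = s3_object_url("https://s3.ring.example.internal", "backups", "2022/report.pdf")
```

Because bucket and key addressing stays identical, applications written against S3 can be repointed at on-premises object storage without code changes beyond configuration.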
*Some of the companies profiled are commercial partners of TechHQ
8 December 2022