Accurate data to nanosecond levels? That’s InfluxDB — part one of our review
The revolution in technology that began when the first PCs appeared on desktops in the 1970s has reached every aspect of life and has changed every industry in the world beyond recognition. Even verticals that have remained steadfastly “blue-collar,” like mining, oil & gas, utilities, and heavy engineering, have altered significantly.
While it’s a broad generalization, it could be said that most sectors have a slightly different mix of technology types: industrial companies, for example, may use more IIoT than most and might invest more heavily in OT (operational technology). Most companies are interested in new developments in API technologies that can link up networks of partners and suppliers with whom they work.
What’s relevant is that whatever the makeup of any organization’s technology stack, the unifying factor is data — those mind-bogglingly large amounts of zeros and ones flowing through cables and fibers, traversing continents and being constantly created, moved, stored and analyzed. If data is, as some term it, the new gold, then the ability to ingest, analyze and create meaningful results from all that information is the means by which a new generation of pioneers can make their fortunes.
In practical terms, storing and analyzing data accurately relies on monitoring the infrastructure around it: keeping tabs on the network, the applications, the IIoT devices, the databases that store information, the failover & backup systems, and so on. Also of practical importance are the abilities to scale quickly, to unify data silos, and to give the various business functions the access to data (often in real time) that they need. And, as the ultimate curveball, all the above needs to be achieved without throwing infinite engineering resources at any of those requirements.
Properly captured, stored, and managed data provides the key to the business’s requirements. Application performance metrics correlate closely with the quality of customer experience; data on service adoption and business transaction throughput can inform landmark decisions; DevOps teams get a clearer steer and better-defined goals. Suddenly, the way digital information is gathered and curated is critical to every business function.
In industrial settings, plant and machinery constantly feed back information on performance and real-time tolerances, and can be set to adjust automatically in response to real-time inputs, such as faults in other systems, or ML algorithms that predict failures and suggest the best windows for downtime and maintenance.
In all of this, whether it’s passively gathering sensor data or actively monitoring any aspect of the business through data, keeping accurate data in real time is now critically important. For this, many enterprises are turning to dedicated time series database technology, currently the fastest-growing database category in terms of usage. That’s because time-accurate, fast read-write data records, drawn from every relevant source, unlock possibilities that were simply never available with siloed data operating under different schemas.
Time series databases like InfluxDB (accurate to the nanosecond, to give you some idea of the available granularity) give any organization the ability to:
– Hit more stringent SLAs, internal or external,
– Provide access to real-time data to anyone, or any entity, anywhere, to gain insight and take action,
– Help guide application/service development based on what’s actually happening in the business,
– Create the basis of empirical knowledge on which the best user experiences are built,
– Reduce waste, minimize downtime, and predict maintenance cycles in industrial settings,
– Pinpoint problematic devices or applications,
– Scale without huge engineering overheads,
– Better allocate resources where they are most needed,
– Leverage external services, like machine learning engines, on real-time data,
– Make better use of home-grown or third-party tools that ingest data for specialist analysis and insight.
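To make the nanosecond granularity mentioned above concrete: InfluxDB ingests points in a simple text format called line protocol, where each point carries a measurement name, tags, fields, and a timestamp that defaults to nanosecond precision. The sketch below builds such a line in plain Python; the helper function, measurement name, and sensor names are illustrative assumptions, not part of any InfluxDB client library, and it only handles float and string field values.

```python
import time

def to_line_protocol(measurement, tags, fields, ts_ns=None):
    """Format a single point in InfluxDB line protocol.

    Timestamps are integers in nanoseconds, the precision
    InfluxDB stores natively. Handles only float and string
    field values; a real client library does full escaping.
    """
    if ts_ns is None:
        ts_ns = time.time_ns()  # current time, nanosecond precision
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(
        f'{k}="{v}"' if isinstance(v, str) else f"{k}={v}"
        for k, v in fields.items()
    )
    # Tags are optional; omit the comma when there are none.
    key = measurement + ("," + tag_str if tags else "")
    return f"{key} {field_str} {ts_ns}"

line = to_line_protocol(
    "machine_temp",
    tags={"plant": "A", "sensor": "s42"},
    fields={"celsius": 71.3},
    ts_ns=1700000000123456789,
)
print(line)
# machine_temp,plant=A,sensor=s42 celsius=71.3 1700000000123456789
```

In practice you would POST lines like this to InfluxDB’s write endpoint or use an official client library rather than formatting them by hand.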
In the next article in this series of two we’ll dive much deeper into the intricacies of the open-source time series database InfluxDB. We’ll look at its technical specifications, how it’s integrated in real-life case studies, the possibilities it offers, and how it acts as a canonical source of information for all enterprise data.
But for now, it’s sufficient to know that InfluxDB works alongside the tools and services in which most businesses have already invested, like trend analysis software, reporting tools, visualization and dashboards, and real-time adaptive code sets that control a massive variety of hardware and software.
Because InfluxDB undertakes much of the heavy lifting of data storage and organization (such as keeping the finest-grained information temporarily, then effectively “zooming out” once that level of detail loses its value), every existing or new application or service benefits.
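That “zooming out” is, in essence, downsampling: raw nanosecond-resolution points are rolled up into coarser aggregates once the fine detail is no longer needed. The sketch below illustrates the idea in miniature with plain Python; it is not InfluxDB’s actual engine (which handles this via retention policies and scheduled tasks), and the function and variable names are illustrative assumptions.

```python
from collections import defaultdict
from statistics import mean

NS_PER_MINUTE = 60 * 1_000_000_000

def downsample(points, bucket_ns=NS_PER_MINUTE):
    """Aggregate (timestamp_ns, value) points into per-bucket means.

    Mimics, in miniature, what a downsampling task does: raw
    nanosecond detail is replaced by one aggregate per window.
    """
    buckets = defaultdict(list)
    for ts_ns, value in points:
        # Align each point to the start of its time window.
        buckets[(ts_ns // bucket_ns) * bucket_ns].append(value)
    return {start: mean(vals) for start, vals in sorted(buckets.items())}

raw = [(0, 10.0), (30 * 1_000_000_000, 20.0), (65 * 1_000_000_000, 40.0)]
print(downsample(raw))
# {0: 15.0, 60000000000: 40.0}
```

The same principle, applied automatically on a schedule, is what lets a time series database keep full-resolution data cheap and queries over long time ranges fast.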
Developers love its open-source credentials, its scalability, its hybrid and multi-cloud readiness, and its ability to work alongside container-based applications or monolithic legacy software.
Industrial operations staff see it as the source of information that can safely determine maintenance schedules and machinery replacement, predict downtime, and join discrete data sources to produce the basis for accurate analysis.
Finally, because it’s extensible and open, legacy data processing, capture, and communication tools simply “plug in,” giving older technology an extended useful life and improved ROI. Finance teams and auditors love InfluxDB too!
But for now, we urge you to read for yourself the technical papers and other information available on InfluxData’s site. Whether it’s for better network monitoring, real-time financial analysis of a new app’s uptake and use, or as a nanosecond-accurate data hub for IIoT, the time series database InfluxDB is where you should be banking your new gold.