The Data-in-Motion concept that’s changing industries fast: Confluent

The advantages of shifting from roll-your-own to paid-for Kafka are clear. Here's what should give you the impetus to make the leap to the professional deployment you need.
16 July 2021


Two recent drivers have changed most organizations’ strategies. The first is the expectation of immediate, high-quality experiences on mobile and desktop. The second is a significant multiplication of the number of data sources available to the organization, emanating from internal or external systems. (For this article, an internal system may be on-premises or cloud-hosted: the definition stems from ownership, not location.) External systems are practically infinite in number: social media channels, information from IoT devices, services that pre-aggregate data, and so forth.

Talk of “big data” a few years ago centered on how companies could use their newfound resources by subjecting the information to processing that could reveal trends in behavior, or places where business activities might be missing the mark.

But the tired cliché holds — technology moves very quickly. Today, organizations find significant advantages to processing and acting on data as it travels into and across the organization. This is “data in motion,” and the ability to make decisions in real-time or near-real-time is proving particularly advantageous for organizations capable of ingesting data, performing calculations on it, and producing relevant information in different formats.

Examples include streams of event data about insurance claims, currency movements, Tweets, customers’ retail orders, emails, geospatial information from transport, and financial market analysis services. Increasingly, event stream processors also take feeds from sensors mounted on physical assets such as vehicles, mobile devices, or machinery.

Platforms like Confluent’s implementation of services based on Apache Kafka can process input data as it arrives (hence “data in motion”). Options then exist to create real-time events from the processed information, to store it for more traditional data analysis or business intelligence purposes, or both.
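To make that concrete, here is a minimal sketch of publishing an event the moment it occurs, using Confluent’s Python client (confluent-kafka). The broker address, topic name, and event fields are illustrative assumptions, not details of any particular deployment:

```python
import json
from confluent_kafka import Producer

# Assumed broker address; a real deployment would point at a Confluent
# Cloud or self-managed Kafka cluster.
producer = Producer({"bootstrap.servers": "localhost:9092"})

def on_delivery(err, msg):
    # Invoked asynchronously once the broker acknowledges (or rejects) the event.
    if err is not None:
        print(f"Delivery failed: {err}")

# A hypothetical order event, published as it happens rather than
# batched for after-the-fact processing.
event = {"order_id": "A-1001", "status": "shipped", "warehouse": "DUB-2"}
producer.produce("orders", key="A-1001",
                 value=json.dumps(event), callback=on_delivery)
producer.flush()  # block until outstanding events are delivered
```

Once events land on a topic like this, any number of independent consumers can react to them in near-real-time.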

Processing and acting on disparate data is taking hold in several sectors, some of which, like insurance technology and fintech, are more mature in their use than others. In the last 18 months, supply chains in retail, pharma, and food, for example, have been quick to adopt event stream processing platforms, a change that circumstances have forced them to accelerate.

Where once companies relied on after-the-fact batch processing of data, vital updates to supply chain partners and customers are now available thanks to the ability to ingest and act on heterogeneous data in near-real-time. Customer experience improves when information is available quickly and can be presented down preferred channels: a text update, say, or an instant message about a delayed delivery.
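As an illustration, here is a minimal sketch of a consumer that watches shipment events and notifies a customer the moment a delay appears, again using the confluent-kafka Python client. The topic name, consumer group, and notify_customer stub are assumptions for the example:

```python
import json
from confluent_kafka import Consumer

def notify_customer(order_id, message):
    # Stand-in for a real SMS or instant-message integration.
    print(f"Order {order_id}: {message}")

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "delivery-notifications",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["shipment-events"])

try:
    while True:
        msg = consumer.poll(1.0)  # wait up to 1s for the next event
        if msg is None or msg.error():
            continue
        event = json.loads(msg.value())
        # React within moments of the event arriving, not after a nightly batch run.
        if event.get("status") == "delayed":
            notify_customer(event["order_id"], "your delivery is running late")
finally:
    consumer.close()
```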

Operational quality also improves for each element of even a complex supply chain, which might comprise an ocean freight operator, several last-mile delivery services, and warehouse and distribution facilities. Data in motion means better stock control, replenishment with less spoilage (vital in some F&B operations), smarter freight container allocations, and a better choice of shipper. The effects of timely data, processed quickly and acted upon in milliseconds, multiply as they travel up and down the chain.

Quality assurance for pharma and farming operations, fraud prevention in seconds for fintech: the capture, calculation on, and processing of data in motion are the foundation on which organizations are building industry-changing platforms. In some sectors, like banking, disruption to the “old guard” is an accepted fact. In others, the use of IoT and IIoT via event stream processing of data in motion is beginning to change manufacturing and engineering. On the not-too-distant horizon are new generations of autonomous transport, where real-time decision-making based on data in motion will undoubtedly play a critical role.

With performance and throughput of information becoming a business imperative, many organizations leverage technologies like Kafka or Amazon MSK (Managed Streaming for Apache Kafka) to move towards a streaming data model in all their operations.

Use cases range from simple filtering of raw data to serve it to targeted consumers, through to scaling production systems thanks to simplified I/O (a subject we will be covering in our next article looking at Confluent’s cloud offerings).
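The first of those is sketched below: a consume-filter-produce loop that reads raw events and republishes only those a particular downstream audience cares about. (Kafka Streams or ksqlDB would express the same thing more declaratively.) The topic names and the value threshold are assumptions for illustration:

```python
import json
from confluent_kafka import Consumer, Producer

conf = {"bootstrap.servers": "localhost:9092"}
consumer = Consumer({**conf, "group.id": "order-filter",
                     "auto.offset.reset": "earliest"})
producer = Producer(conf)
consumer.subscribe(["raw-orders"])

while True:
    msg = consumer.poll(1.0)
    if msg is None or msg.error():
        continue
    event = json.loads(msg.value())
    # Forward only high-value orders to the topic the downstream team reads.
    if event.get("amount", 0) >= 1000:
        producer.produce("high-value-orders", key=msg.key(), value=msg.value())
        producer.poll(0)  # serve delivery callbacks without blocking
```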

Once data has been captured, aggregated, and processed in real-time, it can be deployed right across an organization’s functions: from DevOps teams creating new microservice applications, to Marketing developing new customer-facing touchpoints, to IIoT actuators trimming the controls on production lines. The possibilities of real-time stream computing are changing industries, and with support, reliability, and security baked in, Confluent’s data-in-motion technologies are the tools that many use to transition to a faster, data-driven business model.

We’ll be looking in more detail at the Confluent solutions based on Kafka in a future article, focusing primarily on the cloud/on-premises arbitration and consolidation services the company offers.

If you’d like to learn more about Confluent and how data in motion is creating change, get started for free on Confluent Cloud.