How the Keen event streaming platform levels playing fields in SaaS

12 August 2020


An accepted fact in enterprise IT today is that capturing and examining the wealth of data across the enterprise can yield significant results.

But, according to IBM, we currently generate 2.5 quintillion bytes of data every day, across the planet, with estimates from Forrester that data quantities will double every two hours by 2025.

Despite the large figures, the fact remains that every API, application, or service, down to the individual database instance, is a source of significant intelligence. Capturing data accurately from a myriad of sources, then presenting it to applications (in-house and third-party), gives organizations access to resources that may previously have been untapped.

Certainly, given enough time and energy, the tools out there in the open-source community can be bolted together to produce a normalized data "lake" that holds all critical information. But there is often neither the time nor the resources to create such a repository, whatever value it might bring.

Introducing Keen

However, there is a solution that collects event data of any shape or type across the modern enterprise, normalizes and enriches it, helps query it, and exposes it through rich interfaces that make analytics more accessible and impactful at a business level. The Keen platform does much of the heavy lifting required to achieve just that. The alternative is to retrain teams in Apache Kafka and Cassandra and build JavaScript presentation layers from scratch: a costly and resource-heavy approach.

Because it’s built from open-source elements, teams can easily interact with the various components, using parts as extensible additions to existing applications and services in the legacy stack. Equally, all the resources of the Keen platform are available to new projects. And data can be stored (encrypted) wherever needed: pushed to S3, held locally, re-parsed, archived, and so on. The entirety of the data available across the enterprise, or from third parties, becomes a better, more valuable resource.

Data streams can be captured from anything connected to a network, including applications, servers, IoT devices, proprietary systems or services, using one of the SDKs or the RESTful API. Data that’s enriched by the Keen cloud-based platform (for instance, by adding location data via a simple code snippet) can be presented in a dashboard that’s easily constructed using built-in libraries. Data at any stage of collection or transformation can, of course, be fed into any legacy application or app under construction.
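As a rough illustration of what sending an enriched event through the RESTful API looks like, here is a minimal sketch in Python. The project ID, write key, collection name, and event fields are all placeholders, and the `keen:ip_to_geo` add-on block is one example of the platform-side enrichment mentioned above; consult Keen's own API documentation for the authoritative details.

```python
import json
import urllib.request

# Placeholder credentials -- substitute your own Keen project ID and write key.
PROJECT_ID = "PROJECT_ID"
WRITE_KEY = "WRITE_KEY"

def build_event_request(collection, event):
    """Build an HTTP request that streams one event to Keen's REST API."""
    url = f"https://api.keen.io/3.0/projects/{PROJECT_ID}/events/{collection}"
    # The "keen.addons" block asks the platform to enrich the event,
    # here deriving geo data from the event's "ip_address" field.
    event = dict(event, keen={"addons": [{
        "name": "keen:ip_to_geo",
        "input": {"ip": "ip_address"},
        "output": "ip_geo_info",
    }]})
    return urllib.request.Request(
        url,
        data=json.dumps(event).encode("utf-8"),
        headers={"Authorization": WRITE_KEY,
                 "Content-Type": "application/json"},
        method="POST",
    )

req = build_event_request("purchases",
                          {"item": "t-shirt", "ip_address": "203.0.113.7"})
# urllib.request.urlopen(req)  # uncomment to actually send the event
```

The same request shape works from a server, an IoT gateway, or anything else that can speak HTTPS, which is what makes the "anything connected to a network" claim practical.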

Queries, calculations, and transformations are handled by the Compute element of Keen, so there’s no construction of lengthy SQL statements, nor any need for manual database tuning. The Keen stack comes with a host of capabilities under the hood, but more on these in due course.
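To make the "no SQL" point concrete, a Compute analysis is expressed as parameters on a query endpoint rather than as a statement. The sketch below builds the URL for a simple `count` analysis; the project ID, read key, collection, and field names are illustrative placeholders, not values from the article.

```python
import urllib.parse

# Placeholder credentials -- substitute your own Keen project ID and read key.
PROJECT_ID = "PROJECT_ID"
READ_KEY = "READ_KEY"

def build_count_query_url(collection, timeframe="this_7_days", group_by=None):
    """Build the URL for a Keen 'count' analysis: parameters, not SQL."""
    params = {"event_collection": collection, "timeframe": timeframe}
    if group_by:
        params["group_by"] = group_by
    return (f"https://api.keen.io/3.0/projects/{PROJECT_ID}/queries/count?"
            + urllib.parse.urlencode(params))

url = build_count_query_url("purchases", group_by="item")
# Send it with the read key in the Authorization header, e.g.:
# urllib.request.urlopen(urllib.request.Request(
#     url, headers={"Authorization": READ_KEY}))
```

Swapping `count` for another analysis, or changing the `timeframe` and `group_by` parameters, is the whole query language, which is why no SQL construction or schema tuning is needed on the client side.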

In our next article, we’ll dive into the technical components of the Keen platform. Until then, you can get 30 days of premium-level access to the Keen platform or, to proceed at your own pace, download the source code and get your hands dirty in your own DevOps environment. Either option will let you see how Keen can make positive inroads into your workflow.

With most organizations sitting on under-exploited information, we recommend Keen as the best way to start wide data capture, processing, and presentation in whatever context you need. So, the next time the phone rings with a data-related query, Keen could be there to help you solve the problem much more quickly.