Developing analytical applications at the speed of data with Tinybird
Marketing executives in the tech world are fond of broad statements like “moving at the speed of data.” However you interpret that, one thing is certain: gaining utility from data is far from a speedy process. At a time when massively parallel processing and multi-threading are par for the course, and even a modest laptop ships with eight or more processor cores, outsiders to the data world could be forgiven for thinking that something doesn’t add up. With all these resources on tap, why do even “cutting edge” data-intensive applications seem to come with built-in delays when presenting information that’s theoretically available in just a few milliseconds?
In reality, getting data from creation to useful presentation is quite hard. Batch processing still dominates the data landscape, so most data consumers are forced to wait for the insights that data might offer. Data warehouses have admirably addressed the business intelligence needs of organizations with widespread and disparate data sources, but the sheer vastness of the data and the complexity of analyzing it mean that the average dashboard shows analytics on data that’s hours or days old. Executives and decision-makers have put up with this for a while, but the average consumer expects better. Thanks to the public internet, gigabit bandwidth, and the smartphone, consumers demand applications that respond right now. Slow data might work for internal business intelligence, but it doesn’t work for user-facing applications.
Whether you’re an IT manager, a data engineer, a developer, or a business analyst, you understand that the processes involved in Extract, Transform, Load (ETL) operations – or any variations thereon – are complex, and complexity eats resources, especially time. It is not simple to ingest, normalize, process, and present information in a way that is useful, yet companies want to create user-facing applications and services built on data. If they succeed, it can be highly lucrative. If they fail, a competitor may edge them out. Startups and the careers they support live and die by the cleverness of their code and the responsiveness of their applications in the hands of users.
An effective data-based application has to have its ducks in a row. It must ingest the freshest data, transform it into valuable analytics, and present the results in a timeframe the user finds acceptable (or better). In practice, that means milliseconds.
But ducks are cumbersome beasts. In contrast, Tinybird has become an accelerating force for application teams that need speed, scale, and simplicity when building with data. Where application backends might once have been constructed from an amalgam of databases, orchestrators, stream processors, and the other trappings of the “modern data stack”, Tinybird offers the ingest-to-publish data pipeline on tap. And it’s a solution that developers love because it’s delightful and empowering to work with, allowing them to remain creative and effective while eliminating the mundane complexities of working with massive amounts of data.
The Tinybird workflow is surprisingly spare and deliberately simple. Data can be ingested easily and in real time using native connectors for multiple sources. The platform does the heavy lifting of provisioning, maintaining, and scaling the data infrastructure needed to support low latency and high concurrency. Transforming and enriching that data is done with plain SQL, in an interface the platform calls Pipes.
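As a rough sketch of that workflow (the Data Source, node, and column names below are invented for illustration), a Pipe is essentially a file of named SQL nodes; Tinybird’s SQL dialect is ClickHouse-flavored:

```
NODE daily_revenue
SQL >
    SELECT toDate(timestamp) AS day, sum(amount) AS revenue
    FROM sales_events
    GROUP BY day
    ORDER BY day

TYPE endpoint
```

Each node’s result can feed the next, so a multi-step transformation reads as a sequence of small queries rather than one monolithic statement; marking a node as an endpoint (as the `TYPE endpoint` line suggests here) can also be done from the UI or CLI.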
But the unique magic of Tinybird is how these SQL queries can then be published as fully documented, low-latency APIs in a single click. With this framework, developers can build proofs of concept with just a few dozen lines of code. Even in highly complex production environments with many different data sources and transformations, the platform shines in its simplicity and its ability to remove friction from the path developers follow to push their products to market.
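Once published, an endpoint is simply an HTTP API that returns JSON. The sketch below shows how a client might build a request against one using only the Python standard library; the pipe name, host path, and token are placeholders, not real values:

```python
from urllib.parse import urlencode
from urllib.request import urlopen

# Placeholder pipe name and read token -- substitute your own.
# Published Pipes are typically served at /v0/pipes/<pipe_name>.json.
BASE_URL = "https://api.tinybird.co/v0/pipes/daily_revenue.json"
params = {
    "token": "p.XXXX",           # placeholder read token
    "start_date": "2024-01-01",  # query parameters map to Pipe parameters
}

url = f"{BASE_URL}?{urlencode(params)}"

# Uncomment to actually call the endpoint; the response contains the
# result rows plus statistics about the query (elapsed time, rows read).
# with urlopen(url) as resp:
#     print(resp.read().decode())
```

Because the endpoint is plain HTTP, the same request works from a browser, a mobile app, or a server-side job with no client library required.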
Use cases for Tinybird range from real-time financial processing to usage-based billing, eCommerce personalization, anomaly detection, log analysis, and user-facing analytics dashboards that update as soon as new data is created. Any application that needs to analyze large amounts of data and present the results to a user is a good fit for the Tinybird platform.
In many ways, batch processing is an anomaly born of habit and of expectations lowered by the realities of working with data warehouses. But it need not be this way. Companies that know the value of the data they collect, and want to leverage it in their products and user experiences, see Tinybird as a catalyst for that work.
You can read the documentation and see the code examples for yourself, and sign up for free to try the service. Tinybird is winning over companies that really do want to build applications that, to borrow a phrase, move at the speed of data.