Breaking the data-on-glass ceiling
If it were possible to get two metaphors into a single technology analysis headline then this might be it.
In reverse order then, a glass ceiling is (obviously) an unseen barrier (the term is most often applied to gender inequality) that keeps a given demographic group (or thing) from getting from one place to another and progressing.
Also here is the notion of ‘data-on-glass’ i.e. data that is so transparent that it lacks the substance to tie it down to any particular application, cloud service, analytics engine, database or other entity within an organization’s total IT stack.
Breaking the data-on-glass ceiling (it’s not an industry term yet, but it could be) refers to the act of a) creating a more opaque and substantive surface for information to sit upon and b) using it to raise the level of intelligence within the company as a whole.
So why are we so concerned with this double metaphorical proposition?
I blame cloud computing
The answer, as in so many technology discussions these days, is cloud.
Modern cloud computing environments are typified by their use of dynamic components spread across hybrid multi-cloud instances, all of which create millions of lines of code and billions of dependencies (the relationships between different pieces of code) that themselves are constantly changing.
This reality (well, this virtualized computing reality) has given rise to a growing number of firms populating the Application Performance Management (APM) space.
Often preferring to label themselves ‘information intelligence’ specialists, the usual suspects here include Dynatrace, New Relic, AppDynamics, Sumo Logic, SolarWinds, Microsoft (with its System Center) and Datadog.
Well-deployed, intelligently architected APM is said to be able to push customers towards a more autonomous level of IT operations (Ops) management. This puts a solid backing behind so-called data-on-glass, so that we can see and know exactly where it belongs and what job it does.
In programming circles, we like to talk about DevOps as a coming together of developers and operations staff to ensure our IT systems are built in a way that is sympathetic to the people who have to look after them. In APM, we extend this autonomous notion of self-healing systems into so-called NoOps, i.e. backend systems management that happens all by itself.
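To make the NoOps idea concrete, a self-healing loop can be sketched in a few lines. This is a minimal illustration, not any vendor’s actual implementation: the service names, the 5% error-rate threshold and the “restart” remediation below are all hypothetical stand-ins for what an APM agent would actually monitor and do.

```python
# Minimal sketch of a self-healing (NoOps-style) control loop.
# The services, threshold and remediation here are hypothetical.

def check_health(service: str, metrics: dict) -> bool:
    """A service counts as healthy if its error rate stays under 5%."""
    return metrics[service]["error_rate"] < 0.05

def heal(service: str, metrics: dict) -> str:
    """Simulate a remediation step: a restart resets the error rate."""
    metrics[service]["error_rate"] = 0.0
    return f"restarted {service}"

def self_heal_pass(metrics: dict) -> list:
    """One pass of the loop: find unhealthy services and remediate them."""
    actions = []
    for service in metrics:
        if not check_health(service, metrics):
            actions.append(heal(service, metrics))
    return actions

metrics = {
    "checkout": {"error_rate": 0.12},  # unhealthy
    "catalog": {"error_rate": 0.01},   # healthy
}
actions = self_heal_pass(metrics)  # the 'checkout' service gets healed
```

A real platform would run this loop continuously against live telemetry rather than a static dictionary, but the shape of the logic, observe, decide, remediate, is the same.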
Take Dynatrace as an example. The company has said that it faced these development and operational challenges itself several years ago, when it was reinventing its business and its core software intelligence platform. Ongoing customer interest in the lessons learned and best practices developed during the company’s path to NoOps led Dynatrace to codify its know-how into Keptn.
Keptn (pronounced kept-in) is an open-source ‘control plane’ that provides the automation and orchestration of the processes and tools needed for continuous delivery and automated operations for cloud-native environments.
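What a continuous-delivery control plane automates can be illustrated with a short sketch. To be clear, this is not Keptn’s actual API; it is a hypothetical rendering of the general idea, running an ordered sequence of tasks per stage and only promoting the artifact when every task passes:

```python
# Illustrative sketch (not Keptn's real interface) of a delivery
# sequence: each stage runs its tasks in order and halts on failure.

def run_sequence(stage: str, tasks: dict) -> bool:
    """Run each task in order; stop at the first failure."""
    for name, task in tasks.items():
        if not task():
            print(f"[{stage}] task '{name}' failed; halting promotion")
            return False
        print(f"[{stage}] task '{name}' passed")
    return True

# Hypothetical tasks standing in for deploy / test / evaluate steps.
tasks = {
    "deployment": lambda: True,
    "test": lambda: True,
    "evaluation": lambda: True,
}

# Promote through stages only if every sequence succeeds.
promoted = all(run_sequence(stage, tasks) for stage in ["dev", "staging"])
```

In a real control plane the tasks would be declared in configuration and executed by pluggable tools, the point of the orchestration layer being that the sequence itself, not each team’s scripts, defines how software moves towards production.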
“In talking with CIOs and CTOs of our many enterprise customers, it’s become clear that advanced levels of automation and intelligence are required to bridge the growing gap between limited IT resources and the exponential increase in scale and complexity of dynamic enterprise clouds and the growing cloud-native workloads now being deployed,” said John Van Siclen, CEO of Dynatrace.
APM goes mainstream
The next stage for APM, as it breaks through the glass data ceiling, is that it goes mainstream.
Mainstream APM is APM that gets discussed at board meetings. It is APM with abstracted user interfaces that allow non-technical business users to run traces over their application usage to see if NoOps is keeping everything nice and tidy. And it is APM that provides that vital observability factor that we have already suggested could be the next big IT term that everybody gets to know.
But there are challenges ahead. You can’t just turn APM on, and that’s partly why the data-on-glass ceiling exists in the first place.
Organizations will need to go through a data discovery process to classify all the sources of information running through the workflows that populate the business at any one time.
These same organizations will need to track all the microservices, application components and processes that execute across their full cloud estate and be able to continuously map the dependencies between these entities in real time. That’s the kind of message that the APM crowd will be hitting us with on the road ahead.
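The continuous-mapping idea above can be sketched simply: take periodic snapshots of which component calls which, then diff successive snapshots to see how the dependency graph is shifting. The service names below are hypothetical, and real APM tooling would derive these edges from traces rather than hand-built dictionaries:

```python
# Hypothetical sketch of continuous dependency mapping: diff two
# snapshots of a service dependency graph to find changed edges.

def diff_dependencies(before: dict, after: dict) -> dict:
    """Return the edges added and removed between two snapshots."""
    added, removed = {}, {}
    for svc in before.keys() | after.keys():
        old = before.get(svc, set())
        new = after.get(svc, set())
        if new - old:
            added[svc] = new - old
        if old - new:
            removed[svc] = old - new
    return {"added": added, "removed": removed}

# Two invented snapshots: the frontend drops its catalog dependency
# while checkout picks up a new fraud-check dependency.
snapshot_1 = {"frontend": {"checkout", "catalog"}, "checkout": {"payments"}}
snapshot_2 = {"frontend": {"checkout"}, "checkout": {"payments", "fraud"}}

changes = diff_dependencies(snapshot_1, snapshot_2)
```

At enterprise scale the same diffing logic runs over millions of edges and on a continuous cadence, which is exactly the automation gap the APM vendors are pitching to fill.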
Glass data is bad data, but it has its limits, and we can break through the ceiling this negative facet of IT creates on the road to NoOps.
APM data is on the up, so raise a glass to that.
3 April 2020