Charting vulnerabilities in software containers

The numbers on software container vulnerability are alarming.
27 February 2023

Software container vulnerabilities – a digital pandemic?

We live in a world that’s increasingly, by necessity, obsessed with cybersecurity, and in which more and more businesses rely on the cloud – or have even gone cloud-native – to survive and thrive. If, in that world, we told you that 87% of software containers have flaws, and that a full 90% are not yet using zero trust techniques… you’d probably have questions. We did when we read those statistics in the Kubernetes report for 2023 from Sysdig, a leading cloud security firm.

So much so that we sat down with Michael Isbitski, Director of Cybersecurity Strategy at Sysdig, to ask him what on earth was happening in the world of cloud security and software containers.

THQ:

The new report has some stunning statistics in it about software containers. What’s the background to those figures?

MI:

We’ve been producing the Kubernetes report for seven years now, and this is the sixth edition, so what we’re seeing is the development of a trend. We’ve been tracking container security for years, but what these stats represent is the emergence of a new standard in software containers – containers that exist for less than five minutes. They tend to spin up, do what they need to do, and tear down. Orchestration platforms, and Kubernetes in particular, are responsible for these new kinds of containers, and they exist this way by design. Last year, we saw the number of these kinds of software containers go up significantly. And when you start digging into this, it raises some questions, like: what does that mean for IT? What does it mean for your usage and cost control?
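For readers who haven’t met these short-lived workloads: below is a minimal, purely illustrative sketch – not something from the report – of how such a container is typically launched, here as a Kubernetes Job created with the official Python client. The job name, image, and TTL are placeholder values.

```python
# Illustrative sketch: launching a deliberately short-lived container as a Kubernetes Job.
# Assumes the official `kubernetes` Python client and a working kubeconfig;
# the job name, image, and TTL below are placeholders.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a cluster

job = client.V1Job(
    api_version="batch/v1",
    kind="Job",
    metadata=client.V1ObjectMeta(name="short-lived-task"),
    spec=client.V1JobSpec(
        ttl_seconds_after_finished=60,  # Kubernetes garbage-collects the Job soon after it finishes
        template=client.V1PodTemplateSpec(
            spec=client.V1PodSpec(
                restart_policy="Never",
                containers=[
                    client.V1Container(
                        name="task",
                        image="busybox:1.36",
                        command=["sh", "-c", "echo doing the work; sleep 5"],
                    )
                ],
            )
        ),
    ),
)

client.BatchV1Api().create_namespaced_job(namespace="default", body=job)
# The container spins up, does its work, terminates, and is cleaned up --
# typically well inside the five-minute window described above.
```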

But also, what does it mean for your security?

Are you gathering the right things for audits? What do you do when there’s a security event? Do you even know when there’s a security event? And how do you fix the container? Because there’s nothing to patch – it’s starting up and terminating too fast for that to be a realistic option. So that underpins everything. It’s foundational to a lot of the observations in the report. But the three main themes are that:

  • Software supply chains are creating a lot of risk, because there are vulnerabilities inherent in images – people source them from public registries, they probably don’t know what’s in them, or they can’t fix everything, but they’re going to accept some of that risk and run them anyway.
  • Permissions aren’t being properly allocated, and grants are going unused. That’s another big component of zero trust that’s overlooked. There’s too much focus sometimes on network segmentation, and not enough on the identity piece of someone’s cloud. It can get complex when you’ve got a lot of different identities (human and service), and people aren’t even asking whether those identities are permissioned correctly (see the sketch after this list).
  • And then there’s the cloud expense. The workloads are spinning up and terminating too quickly. How do you even get a handle on what your baselines are, so you can know how to cut back? We’re hearing from customers right now that the macroeconomic environment is crushing them, but they have no idea how to cut back.
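As a rough illustration of that identity point – and emphatically not Sysdig’s own tooling – here is a minimal sketch that lists Kubernetes ClusterRoleBindings and flags any identity, human or service account, holding the broad cluster-admin role. It assumes the official kubernetes Python client and a working kubeconfig.

```python
# Illustrative sketch of the "identity piece": list ClusterRoleBindings and flag
# subjects bound to the broad cluster-admin role. Assumes the official `kubernetes`
# Python client and a working kubeconfig; this is not Sysdig's tooling.
from kubernetes import client, config

config.load_kube_config()
rbac = client.RbacAuthorizationV1Api()

for binding in rbac.list_cluster_role_binding().items:
    if binding.role_ref.name != "cluster-admin":
        continue
    for subject in (binding.subjects or []):
        # Human users and service accounts both show up here -- each is an identity
        # whose granted permissions should be compared against what it actually uses.
        print(f"{subject.kind} '{subject.name}' holds cluster-admin "
              f"via binding '{binding.metadata.name}'")
```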

That equates to a lot of vulnerabilities.

What is actually happening?

THQ:

Why do we think this is happening? Nobody, after all, is going to say “Ooh, I’ll have increased vulnerability, please” – so what gives?

MI:

In my advisory work, I get asked that a lot. It’s tended to be a question of open source software security. We’re told open source components are well vetted – that’s the promise of open source.

But are they? And to what level? How well vetted do we mean by “well vetted”? And is that a phrase with a commonly understood definition between developers and companies? You can go down this rabbit hole of security strength – you have to analyze the images, you have to analyze the dependencies, because there’s bad stuff in there. And this is a lot like the mobile app space too, because there’s just so much code being created; those things get pushed into container images, which get pushed to a public registry, and then somebody’s going to pull from them.

And attackers know that.

They do things like name squatting, or typo squatting. So they name their nastiness similarly to something that’s genuinely useful, and a junior developer might not realize they’re not looking at a unique identifier. And the next thing you know, you have malicious code in your running infrastructure.
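One rough defence against that kind of squatting – purely illustrative, with a made-up registry allowlist – is to reject image references that don’t come from an approved namespace or aren’t pinned to an immutable digest:

```python
# Illustrative guard against name/typo squatting: only accept images from an
# allowlisted registry namespace, and prefer immutable digests over mutable tags.
# The allowlist and image references below are made up for the example.
ALLOWED_PREFIXES = ("registry.internal.example.com/approved/",)

def check_image_reference(image: str) -> list[str]:
    """Return human-readable warnings for a container image reference."""
    warnings = []
    if not image.startswith(ALLOWED_PREFIXES):
        warnings.append(f"'{image}' is not from an approved registry namespace")
    if "@sha256:" not in image:
        warnings.append(f"'{image}' is pulled by tag, not pinned to a digest")
    return warnings

# A near-miss name like 'pythom' instead of 'python' is caught by the allowlist
# check rather than relying on a human to spot the typo.
for ref in ("pythom:3.12",
            "registry.internal.example.com/approved/python@sha256:0123abcd..."):
    for warning in check_image_reference(ref):
        print(warning)
```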

So it’s likely trending up just because that’s the way of software vulnerabilities – container images are becoming the new artifact you’re working with. Typically, organizations will pull from public registries, pull into their own private ones, and then scan those things to make sure they’re clean. And they’ll refer to them as “golden images” or “golden source.” But even that’s tricky, because it’s still a lot of code, and you need to be continuously assessing. So it can be really hard to stay ahead of that.

A pandemic of flaws?

THQ:

That partly answers our main question, which is if developers know this is a threat, why do they keep doing it? To the extent that 87% of software containers have flaws in them. What are they not getting? The point is taken though, about it being difficult to scan everything as thoroughly and as continuously as it needs to be – so flaws would seem to be an inescapable and growing pandemic.

MI:

Yeah, that’s exactly where your mind would go. It can sometimes get dismissed as poor coding practices, so companies think “What are our security teams not doing, that this issue is this bad?”

There are a couple of things to stress. One is that there’s positivity in the fact that we know this is going on, and that companies are actively trying to do the right things. They’re scanning, and seeing that things are this bad. So context is really important – knowing what you’re faced with is half the battle.

THQ:

Without being flippant or grim, it’s akin to a cancer battle, right? You can not know about it, do nothing about it, and then be surprised when you die of “Nothing.” Or you can scan, know how bad it is, and then at least have treatment options that can fight the thing?

MI:

Gruesome, but not inaccurate, yeah. There are a lot of container images with bad stuff in them. And it’s not necessarily the fault of the organization that’s consuming those images – it’s more or less the nature of the beast with how you source in supply chains.

Acceptable risks?

And then to meet your business goals, you’re going to accept some risk and deploy the things. So a lot of our customers have become our customers because we offer them the scanning capabilities to see what’s what, but also the runtime security aspect, which lets them know when something is being attacked and then helps them block that attack. That’s where runtime protection comes in.

But there’s often that divide between the mindset of “There is a threat, therefore we must test everything” – and you can, and you probably should – and the mindset that this is a business and you’re also bound by time, right? Because release windows tend to be short now, so can the scanner finish in time? And you’re going to find things, and then you still have to make the same release decision. So if you have 1,000 vulnerabilities, are you going to block that release? Or block the build?
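To make that release decision concrete, here is an illustrative sketch of a CI gate that blocks a build only when scan findings exceed an agreed severity budget. The findings format and thresholds are placeholders – real scanners (Trivy, Grype, Sysdig’s own scanner, and so on) each have their own output schema you’d parse first.

```python
# Illustrative CI gate: fail the build when a scan turns up too many serious
# findings, instead of blocking on every one of the "1,000 vulnerabilities".
# The findings format and thresholds below are placeholders, not any vendor's schema.
import sys

MAX_CRITICAL = 0    # any critical finding blocks the release
MAX_HIGH = 10       # a negotiated budget for high-severity findings

def gate(findings: list[dict]) -> int:
    """Return a process exit code: 0 to let the release through, 1 to block it."""
    critical = sum(1 for f in findings if f["severity"] == "CRITICAL")
    high = sum(1 for f in findings if f["severity"] == "HIGH")
    print(f"scan summary: {critical} critical, {high} high, {len(findings)} total")
    if critical > MAX_CRITICAL or high > MAX_HIGH:
        print("blocking the build: severity budget exceeded")
        return 1
    print("release allowed: within the agreed risk budget")
    return 0

if __name__ == "__main__":
    # Placeholder findings; in a pipeline these would come from the scanner's JSON output.
    sample = [{"id": "CVE-2023-0001", "severity": "HIGH"},
              {"id": "CVE-2023-0002", "severity": "LOW"}]
    sys.exit(gate(sample))
```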

Security teams are stuck between a rock and a hard place. We have to be enablers for the business, and we have to do it securely. So it’s a balancing act, right? You’re doing the scanning of your pipelines, but you also have to lean on that runtime protection.

In-use exposure.

That’s usually where in-use exposure comes in. It acknowledges the need to scan as much as possible, but also considers what’s happening at runtime to inform what you scan as a priority, and what the most likely areas of attack are.

That’s where we’re working now: intelligently maximizing the effectiveness of the scans that are run, to stand the best chance of spotting and mitigating an attack, by using in-use exposure as a kind of radar to identify the areas most likely to be vulnerable.
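As a toy illustration of that “radar” idea – with placeholder package names and CVE IDs, nothing from Sysdig’s product – in-use exposure amounts to intersecting what the scanner flagged with what runtime telemetry actually saw loaded:

```python
# Illustrative sketch of in-use exposure as a prioritization filter: of everything
# the scanner flagged, surface first the vulnerable packages that runtime telemetry
# actually observed in running containers. Package names and CVE IDs are placeholders.
scanner_findings = {
    "openssl": ["CVE-2023-XXXX"],
    "libxml2": ["CVE-2023-YYYY"],
    "rarely-used-dep": ["CVE-2023-ZZZZ"],
}

packages_seen_in_use = {"openssl", "glibc"}  # from runtime observation, not the scan

in_use_exposure = {
    pkg: cves for pkg, cves in scanner_findings.items() if pkg in packages_seen_in_use
}
backlog = {
    pkg: cves for pkg, cves in scanner_findings.items() if pkg not in packages_seen_in_use
}

print("fix first (vulnerable AND in use):", in_use_exposure)
print("schedule later (vulnerable but not observed in use):", backlog)
```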


In Part 2 of this article, we’ll dive deeper into the practical application of in-use exposure to reduce software container vulnerabilities.