Avoiding “obvious” data breach errors, Part 2
In Part 1 of this article, we talked to David Burden, CIO at digital identity specialist ForgeRock, about the effectiveness of multi-factor authentication, the devious social engineering tactics of would-be hackers and scammers, and the burden on businesses to educate their staff about the coercive tactics hackers use to trick employees into making data breach errors and handing over access to company systems.
While we had David in the chair, we wanted to explore more about how such education should be implemented.
THQ: So, to coin a phrase, education, education, education is the way to avoid some of the most frequent data breach errors, like email scams?
Yes it is, but I think it’s also about having a team that’s trying to keep up with the different attack vectors as a way in. It’s an ever-changing world, and it never stops. The only reason hackers use email scams is because they work. If you educate people to the point where it’s really hard to make an email scam work, they’ll find a new, weaker way in. So you need a core team that really understands the different ways into a company.
Every door is a front door
It’s not just about securing that front door (like email), it’s about securing every door and window. The front door’s only the front door as long as it’s the easiest way in.
THQ: So how do you bolt all your windows and doors against the electronic Bogeyman, while still letting your friends and neighbors into your party?
It’s about validating who’s in, and who’s doing what. It’s about looking at their identity, looking at their permissions, looking at their profiles, their personas, making sure that they should be doing what they’re doing.
The good thing now is that we’re at a stage where AI can help with that – naturally, it’s a high data workload for either humans or traditional systems to deal with, validating all this identity and permissions data. But AI makes it achievable in a useful timeframe.
THQ: Exactly how does the AI help with that process?
Well, there are two aspects: there’s the identity, and then there’s the access portion. On the access side, just that front door, the job is making sure that when people sign in, we know who they are, that they’re allowed in, and that it’s really them. We bring many data points together, like their location, behavior, or IP, to check that they are who they say they are.

Then we look at their behavior once they’re in. The identity aspect is really about what they do once they’re inside your enterprise: validating their behavior, often looking at their personas, looking at what people should and shouldn’t have access to, always looking for behaviors that might be outside the norm. Machine learning and AI can really help with both doing that, and doing it at sufficient speed that the information is actionable if it needs to be.
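To make that concrete, here is a minimal sketch of the multi-signal check David describes: several sign-in data points (IP, location, device, behavior) combined into a single confidence score. The signal names and weights are illustrative assumptions for this article, not ForgeRock’s actual model.

```python
from dataclasses import dataclass

@dataclass
class SignInContext:
    ip_matches_history: bool       # IP seen for this user before?
    location_plausible: bool       # no "impossible travel" since last login
    device_recognized: bool        # known device fingerprint?
    typing_cadence_normal: bool    # behavioral signal within usual range?

def sign_in_confidence(ctx: SignInContext) -> float:
    """Return a 0..1 score that the user is who they claim to be."""
    # Example weights only; a real system would learn these from data.
    weights = {
        "ip_matches_history": 0.25,
        "location_plausible": 0.35,
        "device_recognized": 0.25,
        "typing_cadence_normal": 0.15,
    }
    return sum(w for name, w in weights.items() if getattr(ctx, name))

ctx = SignInContext(True, True, False, True)
score = sign_in_confidence(ctx)  # roughly 0.75 under these example weights
```

A low score need not mean an outright block; as discussed below, it can route the user into a step-up challenge instead.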
Total data visibility?
THQ: We spoke to Terry Ray at Imperva recently about the possibility of total data visibility and those behavior quirks that need actioning. So that’s where the AI kicks in? Freeing analysts to be analysts and letting the AI flag up when things don’t follow normal patterns?
Yes, and of course the next progression from what we have is zero trust. That’s been in the industry for a long time, but again, I think it’s a behavioral thing. Looking at people’s behavior once they’re into your enterprise, making sure it’s valid and fits with the persona of their role. So that’s where AI and ML can come in.
THQ: And what happens when the AI flags up some unusual behavior?
Whatever you like, really. You could have a workflow to flag, a workflow to block immediately, or a workflow to take people down another sort of challenge route; it depends on the scenario and how you configure your product. But that’s for companies to customize for themselves: of all the ways to block, challenge, or monitor, which work best for them, depending on the observed severity or the area of challenge.
THQ: So it’s a kind of scaled danger profiler, with attributable pathways at each level?
Yes. It comes down to scoring and ranking and looking at that behavior and responding appropriately within the system.
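As a sketch of that scoring-and-ranking idea, the following maps a risk score to a configurable response. The thresholds and action names are assumptions for illustration, not a particular product’s configuration.

```python
def respond(risk_score: float) -> str:
    """Pick a response workflow from an observed risk score (0 = benign, 1 = hostile)."""
    if risk_score < 0.3:
        return "allow"      # behavior fits the persona; let the session continue
    if risk_score < 0.6:
        return "flag"       # log the event for an analyst to review
    if risk_score < 0.85:
        return "challenge"  # step-up authentication, e.g. a fresh MFA prompt
    return "block"          # cut the session immediately
```

In practice each branch would trigger a configured workflow rather than return a string, but the shape — graded responses keyed to severity — is the point.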
The Uber affair
THQ: You’ve said that businesses can learn particular lessons from the Uber attack. What lessons are those?
Well, again, if we talk about Uber specifically, there’s a lot to unpack. We’ve talked about MFA, and MFA fatigue being a way around the system. I think the original attack vector was through an open VPN connection. And then once people managed to get through MFA and get some credentials, they were in. So while I can’t talk about Uber’s practices at all, any company has levels of trust. And even if they had a principle of zero trust within the network, it seems that some credentials were hard-coded on a shared drive that a lot of people probably had access to.
Once you’re into that shared area and you’ve got those credentials, it’s Christmas morning. So if you want to learn lessons from that, you need really great password hygiene around those sort of king-or-queen passwords that are used for credentials. Rotating those passwords frequently, and maybe having a way to do one-off or time-based authentication for some of those things, have to be among the lessons we learn from that.
But again, once the hackers were in, once they had those credentials, they were free to roam unquestioned. Some element of time-out, or, as we discussed, using AI to check at various points in a user’s data journey whether a request is valid, from a valid person, in a valid part of the world, would have slowed down that free roaming.
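The “time-based authentication” David mentions for privileged credentials can be illustrated with the standard TOTP scheme (RFC 6238), where a code derived from a shared secret and the clock expires every 30 seconds. This sketch uses only the Python standard library; the secret shown is the RFC’s published test value, not anything from the interview.

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, at=None, step=30, digits=6):
    """Derive a short-lived numeric code from a shared secret and the clock (RFC 6238)."""
    key = base64.b32decode(secret_b32)
    counter = int((time.time() if at is None else at) // step)
    mac = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = mac[-1] & 0x0F                                   # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test secret: the ASCII string "12345678901234567890" in base32.
SECRET = "GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ"
```

Even if such a code leaks from a shared drive, it is useless within a minute — which is exactly the property a hard-coded password lacks.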
It shouldn’t be difficult
THQ: So again, using AI to validate users throughout the data journey, plus solid password hygiene, is the way forward?
Yes – password hygiene is not that difficult. Again, this is not Uber-specific, but I’ve seen it happen. Good credential hygiene around your passwords: password rotation, finding ways to store passwords that aren’t in a text file, using things like LastPass or 1Password for some of those more important credentials that cannot be, and shouldn’t be, shared in a text file.
Do all that, and then keep up your education of staff, so it never becomes “that thing we learned about way back when,” but is something that sticks in their minds, especially as the situation updates to tackle the evolving threat landscape.
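As one concrete alternative to a password sitting in a text file on a shared drive, a service can require its credential to be injected at runtime, by a deployment environment or secrets manager, and refuse to start otherwise. This is a generic sketch; the variable name is an assumption, not from the interview.

```python
import os

def get_service_password() -> str:
    """Fetch a privileged credential from the environment; never from a file on disk."""
    password = os.environ.get("SERVICE_ADMIN_PASSWORD")
    if password is None:
        # Failing loudly beats silently falling back to a hard-coded value.
        raise RuntimeError("Credential not provisioned; refusing to fall back to a file")
    return password
```

The same pattern extends to dedicated secrets stores, which add the rotation and audit trail discussed above.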
Avoiding what should be obvious data breach errors should be… well, obvious. But there are companies large and small getting suckered every day through email scams, social engineering, poor credential hygiene, user data inattention, and inadequate staff training on cybersecurity issues and pressures.
Stop making obvious data breach errors. It’s the least you owe to your staff, your supply chain, your customers – and your own bottom line.
5 December 2022