AI, cybersecurity, and the role of the big red button

When do you press the button?
22 August 2023

You don’t need to blow up your system to get use out of a big red button.

• Companies need a “big red button” for their generative AI for day-to-day process maintenance and identity management.
• But a kill switch will also have value as part of a strong cybersecurity portfolio within the next few years.
• We need to retain control of generative AI, and a big red button helps us do that.

The more generative AI is rolled out into business applications around the world, the more essential it becomes that companies have access to a kill switch – a big red button, as it’s semi-comically called – they can use to stop the AI’s actions on their data or their other systems.

Partly because of the language itself (kill switch, big red button, a phrase frequently also used, equally incorrectly, for the “thing” that starts a nuclear war), and partly because of decades of pop culture and science fiction that have fed us the idea of a big red button as the absolute end of something, many people throughout the tech industry and the wider civilian world have come to imagine the big red button as a centralized final sanction.

You know… to be used when the machines rise up and kill us all. That kind of final reel terminator-squisher that fries the “brains” of the malevolent robots or computer systems that have risen up and determined that we humanoids are just a waste of skin.

Sam Altman of OpenAI, the company behind ChatGPT, the Optimus Prime of large language model generative AI, has acknowledged that, in the event of a globally catastrophic GenAI revolution, the company could absolutely shut down its server farms and data centers and effectively take its generative AI child down.

Those of you old enough to remember WarGames (1983) will naturally find that deeply reassuring. Let alone those of you old enough to remember 2001: A Space Odyssey (1968).

But the crucial point is that companies are not in any sense confronting – or expecting to confront – such now-overworn science fiction clichés as a global, Skynet-style rise of the machines (Terminator, 1984). They’re confronting the possibility that the large language model generative AI they choose to use goes wrong in terms of the functions it performs – from training surgical robots, to approving or refusing mortgage loans, to guiding a customer through a problem in a courteous way.

If and when that happens, you don’t need to shut down the whole existence of your chosen generative AI, Altman-off-switch style. And you may not, in any real sense, need to permanently turn off the generative AI at your end either: when your printer has a paper jam, you don’t blow up the printer.

You might want to, but you don’t. So the nature of a big red button in most business applications of generative AI is hugely removed from the science fictional and linguistically loaded idea we’re used to.

A sledgehammer might WORK like a big red button, but it's rather more permanent.

The more permanent alternative to a kill switch should remain strictly a fantasy.

In Part 1 of this article, we spoke to Kevin Bocek, Vice President, Security Strategy & Threat Intelligence at Venafi (a company specializing in machine identity), to get an idea of why businesses might very well need a big red button for their generative AI – akin to similar real-world buttons on any piece of hardcore manufacturing machinery.

While we had him in the chair, we asked Kevin about the way the future could look as regards regulation and taxation of generative AI.

Regulation and the big red button.

THQ:

We’ve said that there are solid reasons why companies would want a big red button for their AI. And yes, we understand it’s not a real big red button, however much that depresses us and destroys our imagination. But are we moving towards an idea that companies might not be allowed to use generative AI without such a big red… erm… piece of disconnecting code?

KB:

It’s not impossible, because generative AI will be deployed both in life-and-death situations and in life-changing decisions. We’ve said that certification of the models will be necessary, because if you go to a doctor or a lawyer, you want to know they have the skills to do the job, and AI will be the same – you’ll want to know it’s fit for purpose and properly trained.

When it comes to the big red button and the regulatory environment, it feels believable that companies will need not only to have a way of managing the identity of their LLM generative AI, but also a way of pausing, correcting, enabling and disabling the system, a big red –

THQ:

– Pressable thing, yes.

KB:

As we’ve said, we have big red buttons on practically everything in a business operation. Computers have built-in kill switches, just as enterprise systems do.

There are kill switches which dictate which code your machine’s allowed to run, either in your local computer or at the server. And a large language model is, in and of itself, just code.

So the kill switch is something we know well – we just have to apply it to new technology. And the fact that generative AI will be performing tasks at a high level, with a high level of impact, means it’s not impossible that regulation when it comes will demand particular standards of operation, which could include a kill switch.
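To make that concrete, here is a minimal sketch of what a kill switch that “dictates which code is allowed to run” might look like when the code in question is a generative AI model. Everything here is illustrative rather than any vendor’s actual implementation – the flag file location, the model names, and the function names are all assumptions – and the point is simply that every call is gated and a human can stop it by flipping one flag.

```python
# A minimal sketch of a "big red button" as code. All names here are
# assumptions for illustration (the flag file path, the model names),
# not any vendor's real API.
import json
import time
from pathlib import Path

# Hypothetical shared flag store that authorized operators can flip.
KILL_SWITCH_FILE = Path("/etc/llm/kill_switch.json")


def llm_is_enabled(model_name: str) -> bool:
    """Return True only if the named model is currently allowed to run."""
    if not KILL_SWITCH_FILE.exists():
        return False  # fail closed: no flag file means no permission
    flags = json.loads(KILL_SWITCH_FILE.read_text())
    return bool(flags.get(model_name, {}).get("enabled", False))


def call_llm(model_name: str, prompt: str) -> str:
    """Gate every call through the kill switch before touching the model."""
    if not llm_is_enabled(model_name):
        raise RuntimeError(f"{model_name} is disabled by the kill switch")
    # ... the actual model invocation would go here ...
    return f"[{model_name}] response to: {prompt}"


def press_big_red_button(model_name: str, reason: str) -> None:
    """Disable a model immediately, recording the reason for accountability."""
    flags = {}
    if KILL_SWITCH_FILE.exists():
        flags = json.loads(KILL_SWITCH_FILE.read_text())
    flags[model_name] = {"enabled": False, "reason": reason, "at": time.time()}
    KILL_SWITCH_FILE.write_text(json.dumps(flags, indent=2))
```

The design choice that matters in a sketch like this is failing closed: if the flag store is missing, or a model has no entry, the call is refused rather than allowed by default.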

With great power comes great responsibility.

THQ:

That’s the thing, isn’t it? The more responsibility we put on these systems, the more power we give them, and so the more controllable they need to be.

KB:

Yeah. And we’ve barely started. Today, you might use ChatGPT to write an email.

You might, as a developer, use it to write some code.

You might use it to enable an experience with a customer that’s the basis of your business. We’re really not at the point where it’s making decisions, taking actions on its own.

But that time is coming.

Businesses and boards are already planning how their operation will look in 2024, 2025, 2026, and the impact of generative AI is very much going into budgetary planning. It certainly is going into skills planning. Already, businesses are thinking “What are the skills that we’re going to need as we look forward? Do we need the same skills? The same people?” People are still important, but the questions fit in with the idea of a kill switch.

The kill switch or "big red button" is probably coming soon, to a business near you.

When do you press the button? And whose hand needs to be on it?

Who presses the big red button?

A business takes on accountability and risk when it runs generative AI for high-level functions that used to be performed by people.

That is something that a machine in and of itself does not do. It doesn’t take on accountability and risk. That’s why we have people. That’s why we employ people, all the way from the managing director down to the frontline workers – accountability and risk. So generative AI can do amazing things, but ultimately, there still is going to have to be someone making a decision to ask whether the technology is doing what it’s supposed to be doing – and if it isn’t, they’ll have to press the kill switch.

THQ:

You’ve said that generative AI might do jobs that people used to do – so are the same people who were important before the development of generative AI still the important ones?

KB:

I can give you an example there. Recently, I’ve been talking to finance teams, who have been performing really complex financial analysis.

They’re using generative AI to write themselves sub-programs that make their lives easier, dealing with complex financial matters. In the past, if they wanted to build code that helped them overcome a problem with their daily work, they’d have to either talk to the IT team, or get really sophisticated quantitative analysts to build those sub-systems. Now they can do it at their desks and go about their day.

It’s empowering them to do their jobs faster and make better decisions. So that tells you the type of skills we need.

There are going to be a whole lot more coders, what we might have called in the past “developers,” but they’ll actually be employed in other primary roles.

It goes back to the idea that we really haven’t seen the impact of generative AI yet. And from a cybersecurity perspective, that means we haven’t seen what the risks are going to be yet either, or the controls that we’re going to have to put in place – like the kill switch.

A year from now, two years from now, we’re going to start to see the risks play out. From a cybersecurity perspective, the adversaries are already working – we’ve seen the LLMs they’re working on, WormGPT and others, and those will only get better.

We’ll start to see malicious LLMs masquerading as clean ones, and we’ll see standard LLMs that the bad actors will attempt to poison and tamper with. We know already that they’ll also try to steal whole models – which are likely to be the most valuable pieces of business intelligence in future businesses. And they’ll try to hold them to ransom.

That’s a whole new level of accountability for chief security officers and security teams. Which gets us back to the idea of the kill switch. We need that ultimate sanction, and not just the day-to-day readjustment as new models need identity processing and permission to do or not do things. We need to be able to choose to use a model, or not to use it, based on human decision-making.
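As a companion to the gate sketched earlier, here is a minimal sketch of the “identity processing” side: treating model weights like any other code whose identity has to be verified before it is allowed to run. The approved-digest register, model name, and file paths are all hypothetical; the idea is simply that tampered or poisoned weights fail the check and never get loaded.

```python
# A minimal sketch of verifying a model's "identity" before it runs.
# The register of approved digests and the model/file names are hypothetical;
# a real deployment would use signed artifacts and a machine identity system.
import hashlib
from pathlib import Path

# Hypothetical register of known-good SHA-256 digests, maintained by the
# security team for every model the business has approved.
APPROVED_MODELS = {
    "loan-approval-v3": "<known-good sha256 digest goes here>",
}


def sha256_of(path: Path) -> str:
    """Stream the file so large weight files don't have to fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            digest.update(chunk)
    return digest.hexdigest()


def load_model_if_trusted(name: str, weights_path: Path) -> dict:
    """Refuse to load weights whose digest doesn't match the approved register."""
    expected = APPROVED_MODELS.get(name)
    if expected is None:
        raise PermissionError(f"{name} is not on the approved model register")
    if sha256_of(weights_path) != expected:
        raise PermissionError(f"{name} failed its integrity check: possible tampering")
    # ... hand the verified weights to the real model loader here ...
    return {"name": name, "weights": str(weights_path)}
```

In practice a check like this would sit behind proper signatures and a machine identity system rather than a hard-coded dictionary, but the fail-closed shape is the same.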

A big red button, or kill switch, won't actually BE a big red button.

We know, we KNOW it won’t look like this. But it should – for purely therapeutic reasons, if nothing else.

The kill switch, the big red button that isn’t, will also function as part of an accountability mechanism to deal with the cybersecurity threats of the next few years.

Sometimes, you really need to not press the big red button. It’s a human-based judgment call.