The big red button: why we need an AI kill switch

Every major industrial machine has a big red button – why not generative AI?
21 August 2023

Sometimes, you need a big red button.

• Companies are waking up to the need for a “big red button” for their generative AI.
• A so-called “kill switch” would help companies get back on track in the event of AI drift or significant error.
• The idea needn’t be as apocalyptic for businesses as the imagery suggests.

In any large-scale industrial machine plant, and for that matter on any single large industrial machine, there’s a big red button, or its equivalent.

A button or a lever that, in the event of things going significantly wrong, can be pressed or pulled to put an end to the process, to allow for calibration, resetting, analysis of error, the removal of trapped human limbs – whatever is necessary to get the process running smoothly and accurately again.

The more that generative AI is rolled out across the business world, the louder the call has become for a big red button for that technology, too – a kill switch that can essentially interrupt the connection between a company’s processes and its generative AI, so humans (or, potentially, other systems) can sort out what’s gone wrong before we end up in a HAL 9000 situation and the whole thing goes to the electronic dogs.

We wondered how such a kill switch would work with such an integrated technology as generative AI, so we grabbed a chair across from Kevin Bocek, Vice President, Security Strategy & Threat Intelligence at Venafi (a company specializing in machine identity), to get our heads around why it’ll soon be business-critical to have a kill switch for our generative AI.

The danger that requires a big red button.

THQ:

Why do we need a kill switch for generative AI? What’s the danger that makes a big red button necessary for this technology?

KB:

Any machine that runs your processes faster than you could perform them yourself is going to need a kill switch. As you say, big manufacturing machines have a big red button because if something goes wrong at speed, the error is rapidly magnified, so you need to be able to shut it down in a hurry.

If you think about where we are with generative AI, it’s very much an unknown, but now it’s not only helping businesses automate their processes faster, it’s generating content, generating code, and all the signs point to a near future in which it will be generating actual business actions. The internet brought us the ability to connect with anything. Generative AI brings us the ability to have intelligence, anywhere, anytime.

That’s very different and very new and very fast. The typical concerns about that are: a) is this going to operate incorrectly? And b) is this going to be something that could be copied many times over, whether because of an adversary or because of generative AI copying itself out many times and turning us all into staple manufacturers or something.

There’s also a need to certify this technology, because it’s very powerful, and it’s going to be performing work to a necessarily high standard. When you certify a doctor, or when you certify a lawyer, or any professional, you certify that they have the ability to do the work they’ll be doing, that they bring the appropriate expertise to the job that lets us have a high level of confidence they’ll do it well and in the interests of those who seek their expertise.

As you look to the future of generative AI, the same thing applies. The EU and the UK will be certifying these models. But if you take a look at ChatGPT today, there’s a new model every week. Actually, they refer to them by dates.

To someone who is building a system, for example to empower surgical robots or to make decisions about mortgages, that runs counter to what the regulators are thinking. So we’re gonna have to certify those models rapidly.
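To make that versioning point concrete: because snapshots are addressed by date, the first line of defense can be a plain allowlist of vetted versions. Here’s a minimal sketch, assuming the 2023-era openai Python client – the CERTIFIED_SNAPSHOTS set and the certified_completion helper are hypothetical names of ours, not part of any real API.

import openai  # assumes the pre-v1 (2023-era) openai Python client

# Dated snapshots a (hypothetical) internal review has signed off on
CERTIFIED_SNAPSHOTS = {"gpt-4-0613", "gpt-3.5-turbo-0613"}

def certified_completion(messages, model="gpt-4-0613"):
    """Refuse to call any snapshot that hasn't been certified."""
    if model not in CERTIFIED_SNAPSHOTS:
        raise PermissionError(f"{model!r} has not been certified for use")
    response = openai.ChatCompletion.create(model=model, messages=messages)
    # The API reports the snapshot it actually served; check that too,
    # because aliases like "gpt-4" can move between snapshots over time.
    if response["model"] not in CERTIFIED_SNAPSHOTS:
        raise RuntimeError(f"served uncertified snapshot {response['model']!r}")
    return response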

HAL 9000 in 2001 is what a lot of people equate with needing a big red button.

HAL 9000 is not the real reason why we need a big red button. All we’re saying is 2001 would have been a much shorter and more merciful movie if they’d had one.

The history of the internet shows the need for the big red button.

For all those reasons, having the ability to turn off generative AI or our machine learning models with a kill switch or a big red button starts to become important. We’ve got a switch to turn off the gas. We’ve got a switch to turn off the electricity. Everything that can potentially cause harm or disrupt the workflow comes with a big red button. This type of powerful technology needs to have that big red button for all the reasons we just mentioned, and many more that we won’t even get to.

A global big red button would be possible, but it’s not likely to be of much help.

If we think of the internet and its history, we’re at the equivalent of the Gopher stage of generative AI right now.

THQ:

For the callow youths among our readers, Gopher, bless it, was the internet before the age of HTTP.

Yes, really. That’s where we are with the capabilities of generative AI right now.

KB:

Exactly. Where this leads to, we don’t know. But for all those reasons, if we’ve learned anything, we’ve learned that the idea of a kill switch is going to be really, really important.

THQ:

As you say, the idea of a kill switch is hardly new – we have one on most technologies, particularly in a business environment.

But given that generative AI is being woven into so many more things across such a broad societal level, how do you get a kill switch that can work on those levels?

Probably not an actual big red button.

KB:

You’re absolutely right: when you think about a modern business, it’s all a set of connected machines making transactions, making decisions, connecting out to use other services.

And when we think about the world of AI and machine learning, absolutely, we want to use the best large language model for communication. I want to use the best large language model that understands our business. I want to use the best machine learning model that understands our customer data. I’m not giving that away. The moment I give that away, then actually, I’m not a business anymore. I’m just making someone else’s business happen.

So you’re absolutely right, that is all a connected world. And we should probably clarify that there won’t be an actual big red button that says “Kill.”

THQ:

Why would you mess with a great sci-fi fantasy like that?

KB:

Sorry! But absolutely, the ability to identify, understand, disable and re-enable the large language model we’re using is important.

The thing is, a kill switch may not mean we actually kill the system, or our connection with the system.

THQ:

It’s like television science fiction has lied to us our whole lives!

KB:

What it may mean, though, is that we’ve decided some models are not allowed for some services. We may decide that some models carry risks we deem unsafe for particular parts of our operation. That’s also part of the idea of a kill switch – it doesn’t have to be used only in an emergency, like “ChatGPT’s gone rogue and it’s killing everybody!” It can be a normal operating function, just like in a manufacturing scenario. Sometimes you have to stop the production line to make things safe. And it’s much the same idea with generative AI.
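That “normal operating function” framing suggests the kill switch is less a hardware cut-out than a policy table consulted on every call. Here’s a minimal sketch of the idea – the service names, model names, and helper functions are all hypothetical:

# The "kill switch" as a per-service policy table. All names hypothetical.
POLICY = {
    "customer-chat":      {"gpt-4-0613", "gpt-3.5-turbo-0613"},
    "code-generation":    {"gpt-4-0613"},
    "mortgage-decisions": set(),  # paused: no model currently approved
}

def allowed(service, model):
    """May this service call this model right now?"""
    return model in POLICY.get(service, set())

def press_button(service):
    """The amber button: pause one service without touching the rest."""
    POLICY[service] = set()

Re-enabling is just restoring the entry – the “pause rather than kill” behavior described above.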

THQ:

Not so much a kill switch as a pause button? Less a big red button than a small amber button?

Technical failure requires a big red button even in mundane systems.

Imagine a printer jam… but in systems responsible for granting or refusing mortgages. That’s why you need the button.

KB:

The thing is, whether you think of it as a kill switch or a pause button, a big red button or an amber lever, being able to manage the identity of these models becomes increasingly important.

THQ:

Fundamental, even. You have to know what’s what if you’re going to implement a stop order on one large language model for particular work, right?

KB:

Exactly. We said that with ChatGPT, almost every day there’s a different model.

As we use this technology to make life-changing or life-and-death decisions, knowing which models we’re using for what level of decision-making becomes important – and so certifying them will become critical. That means having strong identity information, both on what we’re consuming from the cloud and on what we’re using with our own data.

We give a web server a TLS certificate when we’re convinced it’s fit for purpose. For generative AI, we need the same type of identity certification, so we can allow or disallow systems like this, and so we have the power to interrupt a system’s actions if at any point we decide something’s not right, or we just need to check what’s happening. We need that idea of a kill switch, a big red button, to correct and modify the system as we go along.
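To ground the TLS analogy in something runnable: the simplest machine identity for a model you host yourself is a pinned fingerprint of its weights, checked before the model is allowed to serve. Here’s a minimal sketch – the function names, path, and digest are hypothetical, and a production system would verify a signed certificate rather than compare against a hardcoded value:

import hashlib

def fingerprint(path):
    """SHA-256 of a model's weights file, read in 1 MiB chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def load_if_certified(path, pinned_digest):
    """Allow the model to run only if its identity matches the pin.
    Clearing or changing the pin is the 'disallow' half of the switch."""
    return fingerprint(path) == pinned_digest

# Usage (hypothetical path and digest):
# if not load_if_certified("models/underwriting-v3.bin", KNOWN_GOOD_DIGEST):
#     raise RuntimeError("model identity check failed; refusing to serve")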

THQ:

And we don’t have it yet.

 

In Part 2 of this article, we’ll delve deeper into the practicalities of a big red button to grind your generative AI to a halt – how it might be delivered and what its impact might be.

Sometimes, you just have to press the big red button.