How to mobilize to deliver ethics to artificial intelligence

We need to make some big calls – and pretty soon.
11 August 2023

AI. No ethics. The most important election of recent times. No biggie…

• Artificial intelligence without ethics should be tackled like climate change.
• International co-operation may well be necessary.
• Individual nations may not have what it takes to get the job done.

There has never been a technology that needs good ethics as much as artificial intelligence does. The consequences of getting this moment in techno-history wrong are disastrous. Not, as has been widely publicized, world-ending disastrous, but disastrous in terms of our ongoing understanding of – and striving towards – a more equitable world than we inherited.

In Part 1 of this article, we talked with Richard Foster-Fletcher, Chair of MKAI, and Simon Bain, CEO of web platform OmniIndex, both of whom aim to establish ethics for artificial intelligence, about what “good ethics” might look like for the technology.

In Part 2, Richard and Simon explained the potential consequences of failing to give artificial intelligence some ethics, however complex the process of establishing absolute harms as a basis for those ethics might be.

That left us with one fairly massive question.

THQ:

So how do we give our artificial intelligence ethics? Genuine question – it’s out there already, doing a thousand different jobs. How do we teach it to be progressive technology (without unnecessarily overstepping boundaries of ethical difference)?

RF-F:
By next year.

THQ:

Excuse us?

RF-F:

We’ve got to do that by next year, otherwise it’s going to screw with the elections in the US.

THQ:
Oh. Yeah. Everybody who knows about this technology seems to be deeply worried about exactly that. “The AI election,” as they call it.

SB:
And they’re right to worry. If you look at human nature, it has always been tribal. We’re a tribal species. We like our own tribe. If anybody comes near our tribe, we’ll throw something at them to try and get rid of them.

The problem the internet has caused over the last 20 years is that it’s made us more tribal, with people only reading and viewing within their own grouping. When you have AI pushing more and more of that information across, that gets exacerbated. And this is where I think generative AI can be very damaging.

The only way you can stop that is to make sure that people within politics don’t use the tools to push themselves. But that’s an impossible ask.

Does your artificial intelligence policy work without ethics?

AI. Ethics. National politicians. No, that can’t possibly work.

THQ:
Yeah, what’s that line? A lie goes around the world while the truth is still getting its shoes on? In terms of the election and generative AI, if you have entirely believable video footage of something that is still in reality a lie, broadcast to an echo-chambered public operating on concentrated confirmation bias, then you have no chance of fighting against that. You’re relying on, as you say, the people who are highly invested in a specific result to be moral enough human beings to not misuse the technology.

And at that point you’re putting a hell of a lot of faith in human beings.

SB:
You are. But I think that’ll be quite a short-term effect – one or two elections – before the majority of people who are sane and who do listen to both sides suddenly react and step up. That middle ground has always been there; it’s just been hidden and pushed away.

And I think for the next couple of elections that may be the case. Then all of a sudden when people realize that the people on either extreme are using these tools, that centrist ground will say “Hold on a minute, this is wrong, let’s actually go out and do something about it.”

The how of ethics and artificial intelligence.

THQ:

So, while we wait for the great centrist revolution, we come back to the question of how we give our artificial intelligence ethics.

RF-F:
Well, we have great depth of understanding. We don’t have the breadth of ability to combat this yet. The breadth is where you need the different perspectives from around the world and different people to be able to say “Have you asked this question? What would happen if…?”

That’s very much doable from the implementation perspective. The deployment perspective of AI is a little more complex in terms of the model-building and the data, but we can start there. And that means you put together external AI audits, and you have multi-stakeholder feedback in there. I would come at it from two sides: one, from appointed groups that are very well educated but have diversity across the group, say 10-20 people looking at this.

And then I think you’ve got to go out to hundreds, if not thousands of people and have them look at this and share thoughts that companies will never otherwise access in a million years because of the culture and the way that organizations have to be created.

THQ:
How realistic do we think that is, given companies’ perceived need not to be “unnecessarily” audited or unnecessarily criticized?

The who of ethics and artificial intelligence.

SB:
My own view is that this has to come out of the UN. We have a whole range of agreements in the UN which are multilateral across the world, whether they’re to do with embryonic research or with nuclear disarmament. There’s a whole range of things that governments agree to. It’s very difficult to get the agreement, but governments do agree to them.

Could the UN be the power we need to apply ethics to artificial intelligence?

An international power to tackle an international problem? Call the UN!

If we have a similar understanding about AI, and if we put it in the same bracket as other tools such as embryonic research, which then takes it away from a single organization or handful of organizations (because organizations have to make money, first and foremost), we can say “Right, you can’t go beyond this line,” and then make sure that all other organizations audit that as well.

I think it can be self-managing within guidelines from the UN. It has been done before and I think it can be done again. What I think is really bad are policymakers being driven by individuals and organizations who have their own fixed agenda. We need to make sure that they do not become the sole voice here.

THQ:
So what are the practicalities of how we get it done? If we get it done via the UN, that’s great, but that depends on political buy-in more than anything else to raise that agenda, yes? Meanwhile, the polling shows a majority of Americans preparing to vote for Donald Trump, who was famous for pulling out of international accords and who threatened to pull the US out of NATO. And the UK government has pulled the country out of the EU, and is exploring ways to pull it out of the European Convention on Human Rights so it can behave in ways that are currently against that convention.

So why do we think they’d be enthusiastic about adding any power to the UN to make rules over anybody operating generative AI?

RF-F:
It’s like smoking really, isn’t it? It’s not clear what the harms are to anybody in the short term by doing this. And technology companies are brilliant at providing this ease and convenience that we love and now sort of depend on.

I think there’s definitely a role, as Simon says, for the UN, because I think there is so much bias from each country, and from each corporation, too. We aren’t going to see wholesale change until we see a wholesale change in business, because this linear economy can’t carry on. So when we talk about this in isolation, we do get a bit stumped. But when we talk about it alongside climate change and equality and diversity, you start to see a picture emerging that needs to pull this in for sure, and hopefully it will be understood as part of that picture.

No artificial intelligence without ethics allowed.

THQ:

Just a gentle reminder – there are plenty of people in both the US and Europe who believe climate change is a hoax, and that Net Zero is an unnecessary con on the working class.

RF-F:
One of the key things to recognize is that mental harms and physical harms need to be put on the same page, in the same place. We would never have allowed something with the power of ChatGPT to be deployed in pharmaceuticals or food or anything like that, but because with generative AI all the potential harms are mental, we don’t take it anything like as seriously. But the harms are immense. We need to take mental health more seriously, and then you start to see the ramifications and you can legislate against them.

THQ:
That’s the point, isn’t it? It’s all intertwined and interfolded with other things and other elements of real life. The idea that we can solve x-part of this, while the rest of it is still a swirling mass of weirdness, suggests we need to go right back to basics and build up from there.

You’re going to tell us that’s too dystopian a view, aren’t you?

SB:
Yeah, it is dystopian. But I think we have to go back through history and look at the many things that were going to kill the world, whether it be the printing press, the television, the VCR, whatever it happens to be. They’ve all had their time of notoriety, but society has a tendency, in the end, to manage those things.

Where I would be concerned is to have a national government try to manage it, because a national government is… just not very good at managing things. So I come back to the UN.

It has some power, and then it’s down to individual governments to try and make the system work. This is an international issue, so it needs an international solution. Right now, the best hope we have is the UN.

How the UN works #101.