What are the risks of artificial intelligence without ethics?

Just say for a second we ignore the ethics question. Who dies?
10 August 2023

The future of equality efforts hangs in the balance.

• Unless it’s deliberately trained otherwise, an artificial intelligence’s ethics are a synthesis of the internet’s.
• The internet was never designed to teach ethics to artificial intelligence.
• Without ethics, the worldwide take-up of artificial intelligence will entrench inequality and make it more divisive.

The more we build generative AI into our business systems, the clearer it becomes that artificial intelligence, an entirely machine-based system, needs some code of ethics.

But adding ethics to artificial intelligence is nowhere near as easy as it might sound.

In Part 1 of this article, we talked with Richard Foster-Fletcher, Chair of MKAI, and Simon Bain, CEO of web platform OmniIndex, both of whom are fighting to get this done, about the complexity of thinking we understand what “good ethics” – and especially “bias-free ethics” – might look like.

That’s especially difficult, given that we have to solve for the naturally-grown ethical biases of the people training the AI (most of whom will probably be white, westernized men), as well as the ethics of a company and of a country, and arrive at an ethical model that can be applied worldwide. And the technology itself has nothing to guide it beyond the internet as a whole, and whatever we teach it of our concept of ethics.

THQ:
We were just talking about the tricky business of moralistic data. As you say, whose morals are we talking about? How certain are we of the ground on which we’re playing here?

RF-F:
Yeah. There are some absolute morals that we can all share, and others we need to be very careful of.

SB:
I think we need to break it down, too. We have, say, the global ChatGPT, which has all the internet data in it, and as Richard said, that’s probably 60, 70% US-based. But then you’ve got ChatGPT as a private sandbox system for an organization. Now their data is purely their own. There will obviously be biases in that data, but it’s a much smaller dataset, and therefore much less likely to have some of those ethical problems, given that it’s based on an organization’s business model.

If they start bringing in external data, then we’ve got problems. I think we need to differentiate what we’re talking about here. Are we talking about AI leading the world and every answer coming specifically from the internet? God forbid, we all know how good Wikipedia is. Or are we talking about artificial intelligence, generative AI in this instance, working on a subset of data, so that an industry can get answers from its own organizational data?

I think we need to be careful to differentiate the cases.
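
(To make Bain’s distinction concrete, here’s a minimal sketch of the “private sandbox” pattern he describes. Everything in it, the document store, the retrieval step, the prompt wrapper, is a hypothetical placeholder rather than anyone’s actual product; the point is simply that the model is told to answer only from an organization’s own data, not from the internet at large.)

```python
# Hypothetical sketch: constrain a generative model to an
# organization's own documents instead of the open internet.
# ORG_DOCS, retrieve() and build_prompt() are all invented names.

ORG_DOCS = {
    "hr-policy.txt": "Employees accrue 25 days of annual leave per year.",
    "sales-q2.txt": "Q2 revenue grew 12% year on year, led by EMEA.",
}

def retrieve(question: str, k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the question."""
    q_words = set(question.lower().split())
    return sorted(
        ORG_DOCS.values(),
        key=lambda text: len(q_words & set(text.lower().split())),
        reverse=True,
    )[:k]

def build_prompt(question: str) -> str:
    """Wrap the question so the model may only use retrieved context."""
    context = "\n".join(retrieve(question))
    return (
        "Answer using ONLY the context below. If the answer is not "
        f"there, say you don't know.\n\nContext:\n{context}\n\nQ: {question}"
    )

print(build_prompt("How did revenue grow in Q2?"))
```

The biases Bain mentions don’t vanish in a sandbox like this; they shrink to the biases of one organization’s data rather than the whole internet’s.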

Can artificial intelligence systems cope with ethics?

THQ:

Oh, definitely. But is there also a degree to which the number and extent of possible harms are lowered simply by the scale crunching down to individual industries and individual companies? Or do we just know there’s something wrong within the system?

SB:
I think the headlines are just saying there’s something wrong within the system.

My own view is that a lot of the recent hype that “AI is going to blow up the world,” and all the rest of it, is the best marketing and PR pitch I’ve ever seen.

The IT industry is pretty good at PR and marketing, but OpenAI, Google (through Bard), and Microsoft played a blinder when they said AI could blow up the world. Brilliant. That’s got everybody talking about it.

It’s the biggest load of BS I’ve ever heard, but it does have an awful lot of people talking about it, which is exactly what they wanted. Because what you’ve got to remember is that the reason Microsoft put so much money into OpenAI, and the reason Google has got Bard, is not the greater good.

It’s to sell you more advertising. And the reason Google came out with Bard so quickly (and I’m pretty certain they didn’t want to) was because Microsoft had a lead on them when it comes to advertising inside the browser. We have to remember what those particular tools were for.

I mean, Google’s just announced that the Gemini project, which is actually powered by DeepMind, a true AI application, is being used in the National Health Service in the UK, as well as elsewhere. But we have to be careful, again, about what it is we’re actually looking at and why these systems came about in the first place. Because it wasn’t to help us; it was to make revenue.

THQ:
That’s all absolutely true. But the point is that whatever their initial purpose was, they’ve been taken up across the board, and very soon they’re going to be in more or less everything. So they quickly outgrow that initial purpose while still fulfilling it, and it becomes a bigger thing to deal with.

Also, of course, they’ve got at least 100 years of science fiction to help them in the idea that “the machines are going to kill us.”

So what is the scale of this issue, Richard?

RF-F:
What ChatGPT and the others have done is answer the question, “What would it be like if I could chat with the internet?”

That means the scale of the problem is significant because no matter what the data is doing, there has to be this layer of interpretation attached to that. So you’ve got Google, for example, and somebody types in “CEO” and then presses “Image Search.” What do you want them to show?

Do you want them to represent the data, which maybe looks like 90% white men over here in this part of the world? Or do you want them not to be representative of the data as it exists, and show a variety of people?

That’s just an example of the choice they have to make. What do you want to show? An unpleasant but accurate picture, or an aspirational but inaccurate one?

That’s a tough one, right?
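
(The choice Foster-Fletcher describes can be stated in a few lines of code. This is a hypothetical sketch with invented percentages, not how Google actually ranks images; it just shows that “representative” and “varied” are two different sampling policies someone has to pick between.)

```python
import random

# Hypothetical sketch of the "CEO image search" dilemma.
# The 90/10 split below is illustrative, not a real statistic.
SOURCE_DATA = ["white_man"] * 90 + ["other"] * 10

def representative_results(k: int) -> list[str]:
    """Mirror the data as it exists: accurate, but unpleasant."""
    return random.sample(SOURCE_DATA, k)

def varied_results(k: int) -> list[str]:
    """Show a variety of people: aspirational, but not data-faithful."""
    groups = sorted(set(SOURCE_DATA))
    return [groups[i % len(groups)] for i in range(k)]

print(representative_results(10))  # roughly 9 in 10 look the same
print(varied_results(10))          # alternates evenly between groups
```

Neither function is “neutral”; each encodes an ethical decision before a single user types a query.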

The ethical issues of using the internet to train artificial intelligence.

Say the three of us developed a dating app and were selling it to users for $20 a month. And then we have a meeting and say “Hey, look at all this data we’ve got. We should sell the data and make even more money!”

But now we’re selling data into aggregators that were never intended for that purpose. So we’ve got GPS data, we’ve got timestamps of data of when people sent messages and how many times and so on. And yes, that’s extremely powerful and useful. That’s why Meta bought WhatsApp: it doesn’t read the messages, but the metadata is worth a fortune. But it wasn’t intended for that purpose.

Our dating app is now producing data it was never intended to produce, and that’s the situation we’ve got with the entire internet.

It was never intended to be training data for this kind of artificial intelligence, let alone to try to teach it ethics.

So the problem exists on a global scale.
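
(Why is metadata “worth a fortune” without anyone reading a message? A toy example, with invented coordinates: late-night GPS fixes cluster around wherever the sender sleeps.)

```python
from collections import Counter

# Hypothetical message metadata: no content, just position and hour.
METADATA = [
    {"lat": 51.50, "lon": -0.12, "hour": 23},
    {"lat": 51.50, "lon": -0.12, "hour": 1},
    {"lat": 51.52, "lon": -0.10, "hour": 13},  # sent at lunchtime
    {"lat": 51.50, "lon": -0.12, "hour": 0},
]

def likely_home(records: list[dict]) -> tuple[float, float]:
    """Most common location for messages sent between 10pm and 6am."""
    night_fixes = [
        (r["lat"], r["lon"]) for r in records
        if r["hour"] >= 22 or r["hour"] < 6
    ]
    return Counter(night_fixes).most_common(1)[0][0]

print(likely_home(METADATA))  # (51.5, -0.12): inferred without reading anything
```

Nothing in that dataset was ever “read”; the inference falls out of timestamps and coordinates that were collected for an entirely different purpose.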

THQ:
That’s almost as scary as the “Artificial intelligence will burn the world” headlines. Only slightly more intellectual and real.

So what happens if we don’t address this? As you say, it’s not going to blow up the world, but in what ways will it negatively affect the nature of society as we understand it now? 

RF-F:
It’s cat and mouse, isn’t it? That’s the problem: you start to lose track of what’s a reaction to the world and what’s been created in it, and you can no longer really understand sources, or truth, or where things have come from, who’s written them, and why they’ve been written.

For instance, we’ve always had very clever marketing people and campaigns, but we’ve known the purpose. You look up at the billboard and there’s a bunch of young, beautiful people drinking Diet Coke. It’s obvious – you’ll be more popular if you drink Diet Coke. It’s a dumb message, but we get it. They want us to go and buy Diet Coke.

THQ:

*Pops can.* Sorry, do continue.

RF-F:

Then we get into the world of social media algorithms, AI, and large language models, and we have no idea what’s what: no idea of motivation, or response, or outcome. So there’s a complete inequality of understanding between the user and the system: what they’re putting in, what’s done with it, and what its implications are.

The calls for ethics in artificial intelligence are growing.

SB:
History is written by the victors. It’s never written by those who supposedly lost. For instance, take Charles Babbage. An absolutely brilliant man. But did he invent the Analytical Engine? Or did his supposed sidekick, Ada Lovelace? Well, probably she did, but he was the one who wrote the paper.

What we’re doing, or what we might end up doing, with generative AI and artificial intelligence of this nature is rebuilding all of those prejudices and enhancing them. Which means that in 20 or 30 years’ time, instead of having equality in the marketplace and the workplace, we’ll have an even greater amount of inequality. Of patriarchy. Of white privilege. Of heteronormativity.

We’ve spent 20 to 30 years trying to make the world a slightly less prejudicial place.

We haven’t done a very good job of it, from what I can see. But we’re in grave danger of knocking it backwards, because of the prejudices built into everything that’s been written on the internet. If we’re using these tools to make decisions, they’re going to make those decisions based on what they know.

And what they know may not be the truth.

THQ:

In fact, it’s vastly unlikely to be the truth.

SB:

Exactly. Look at England in 1066. Did King Harold get an arrow through his eye? It’s more likely that he disappeared and went into hiding for a little while before going across to France or wherever. But he didn’t create the tapestry, the accepted record of events.

THQ:
Dear gods, we’ve just realized. As far as artificial intelligence is concerned, the internet is the tapestry of record, and the tapestry of accepted ethics. Come to that, the “Bayeux Tapestry”… isn’t even, actually, a tapestry. It’s an embroidery.

But of course, the arrow in the eye makes for a much better human narrative. The fact that it’s almost certainly not true has always been seen as somehow less relevant than the quality of the story.

Artificial intelligence without ethics is likely to favor quantity of data over quality – just like the Bayeux Tapestry.

History – it’s all fun and games till someone “loses an eye.” #WinkyFace

SB:
Yeah, exactly. And that’s what we’re now taught.

THQ:
So it won’t destroy the world, but it might make the world we think we know unrecognizable to future generations?

SB:
That might be a bit of an exaggeration, but it’s going to make it harder for equality to take hold, because we’re building on a state of inequality to begin with, and we’re teaching those models with an unequal dataset. And as soon as you have that, it gets built in and propagated out. As Richard said earlier, there’s so much more data now, so many more decisions.
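
(Bain’s “built in and propagated out” is easy to demonstrate. A deliberately crude sketch, with invented hiring records: a model that simply learns from skewed history reproduces the skew in every decision it makes.)

```python
from collections import Counter

# Invented historical data carrying an inherited inequality.
PAST_HIRES = ["group_a"] * 80 + ["group_b"] * 20

def model_decision(candidates: list[str]) -> str:
    """'Learn' from history: prefer whichever group was hired most."""
    favoured = Counter(PAST_HIRES).most_common(1)[0][0]
    return favoured if favoured in candidates else candidates[0]

# Two equally qualified candidates; the historical skew decides.
print(model_decision(["group_b", "group_a"]))  # group_a, every single time
```

The model never sees a prejudiced instruction; the prejudice arrives with the data and leaves with every decision.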

Artificial intelligence will be a mirror of our own society – with or without ethics.

What will our newest magic mirror show us about our society? That depends on whether we teach it ethics.

THQ:
The big idea, in science fact and science fiction alike, is that technology is a mirror of the society that creates it. So the question is: how do we solve this problem of teaching biased, flawed ethics to artificial intelligence without tearing down and fixing the society we know?

 

In Part 3 of this article… we’ll find out.

Frankenstein taught us that created systems will emulate their creators, so leaving them without ethics is probably an enormously bad idea.