Can artificial intelligence have ethics?

Giving artificial intelligence ethics is necessary. But whose ethics?
9 August 2023

Whose ethics are applicable to worldwide systems like AI?

• Artificial intelligence has no inherent ethics.
• The task of defining ethics for artificial intelligence is complex.
• Even within one culture, there are many ethical standards.

Generative AI has been both brilliant and controversial since it exploded across the world late in 2022. But one of the main concerns around its use is that artificial intelligence is a system inherently devoid of ethics.

There are plenty of people who argue that artificial intelligence is just a tool, and that no-one has ever suffered from using a hammer on a nail or a spoon to eat their dessert – and no-one’s ever argued the need for an ethical hammer.

But that of course reduces, beyond the point of useful comparison, the kinds of uses to which artificial intelligence is already being put, less than a year after its release.

In particular, artificial intelligence is being deployed in ways that make ethics not just a necessary part of its make-up, but a crucial one.

Data security, recruitment, resource allocation and more are areas in which the new iterations of generative AI are being deployed – and in which, were the jobs being done by human beings, we would want to be sure that those humans had ethical compasses in line with both company aspirations and norms of societal positivity and progressiveness.

Does artificial intelligence have electric ethics?

Artificial intelligence doesn’t, in any native way, have those compasses. Large language models are trained on the screaming ethical void that is the internet. More bespoke, open-source versions can be trained more easily on company-specific data pools, but even that leads to uncomfortable questions.

The case of Amazon is a key example. When the company used artificial intelligence in its initial recruitment process, the system famously started weeding out women and people of color when looking for managerial candidates – because the historical data it was fed on the qualities of successful Amazon managers strongly suggested that such managers were both white and male.

Artificial intelligence has the inherent ethics of a mirror. If your company has had historically poor representation, you can be sure you’ll be teaching that poverty of diversity to your AI.

And you really need to do better than that.
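
To make that mirror effect concrete, here’s a minimal sketch of how it happens. Everything below is synthetic and hypothetical – the data, the feature names and the weights are ours, not Amazon’s – but it shows how a screening model trained on historically skewed hiring records learns to penalize a protected attribute without anyone programming it to:

```python
# A minimal sketch, NOT Amazon's actual system: all data is synthetic
# and the feature names are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

skill = rng.normal(size=n)              # genuinely job-relevant signal
is_female = rng.integers(0, 2, size=n)  # protected attribute

# Historical labels: past hiring favoured men, regardless of skill.
hired = (skill + 1.5 * (1 - is_female) + rng.normal(size=n)) > 1.0

X = np.column_stack([skill, is_female])
model = LogisticRegression().fit(X, hired)

print("weight on skill:    ", round(model.coef_[0][0], 2))
print("weight on is_female:", round(model.coef_[0][1], 2))
# The second weight comes out strongly negative: the model has
# faithfully learned the prejudice baked into its training data.
```

No one told the model to discriminate; it simply reflected its history back at us.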

That’s why a UK-based artificial intelligence ethics body, MKAI (Morality and Knowledge in Artificial Intelligence), and secure data platform OmniIndex have come together in an attempt to eliminate the bias inherent in an AI created in our society, and to provide a pathway that includes – and indeed insists on – good, 21st century ethics in AI projects.

We met with Richard Foster-Fletcher, Chair of MKAI, and Simon Bain, CEO of OmniIndex, to see how it’s possible to teach artificial intelligence to have good ethics – and, to some extent, how we can be sure we know what good ethics look like.

The scale of the ethics question in artificial intelligence.

THQ:
What’s the scale of the problem that we’re tackling when it comes to artificial intelligence bias and ethics? What happens if we just don’t tackle it, or if we tackle it in the wrong way?

RF-F:
People in the industry are more concerned about this than people outside the industry, because we’re biased and we spend our time thinking about it – which outsiders probably don’t. With that proviso in place, I think it’s an absolutely global problem.

Artificial intelligence ethics will need to be applicable around the world.

I don’t think it’s a problem in terms of an extinction threat, as has been suggested, but I do think it’s a problem in terms of the underlying structure of our society – particularly in societies like those we know here in Europe, which we have to believe are built on fairness and just principles.

I think we’re in danger of damaging those significantly. Would you like the Artificial Intelligence Ethics Issues 101 version?

THQ:
We love a 101 version – then at least we can build on a solid foundation of understanding – which seems necessary in questions of ethics.

RF-F:
OK, well, the three things that can go wrong with artificial intelligence are: 1) there’s bias inherent in the data; 2) as we move forward, the selections that we make are biased; and 3) the way that we interpret the selections we make and the results we get is biased.

The strange thing perhaps is that in some industries, you can get away with it sometimes. In other industries, you absolutely can’t at all, ever.

The very nature of AI is that it scales, it amplifies, it accelerates what you’re doing. And that’s why things go wrong so quickly. If you take something like ChatGPT, it was trained on the internet.

Well, the internet’s largely written in English – certainly the bit of it used to train ChatGPT. We know that North American websites make up a substantial percentage of the internet as a whole. And we know that tribes in Papua New Guinea are not represented at all. We know this.

It’s obvious. So then we scale out a model, and I think they’ve done a reasonable job in trying to produce unbiased results. But ultimately, it’s difficult when the dataset you’ve got is so inherently biased. Now, within AI, it scales right down so that individuals get penalized when they shouldn’t. And if you think about financial decisions, legal decisions, even recommendations in entertainment systems, the results won’t be what the person needs or wants – they won’t represent them as an individual.
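
Those three failure points are easier to see marked on an actual workflow. The toy below is ours, not anything MKAI or OmniIndex has built – synthetic data, invented weights – but it annotates where each of the three biases enters a typical machine learning pipeline:

```python
# A runnable toy, synthetic throughout, marking the three bias entry
# points: the data, the selections we make, and the interpretation.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 5_000
group = rng.integers(0, 2, size=n)   # 0 = majority, 1 = minority
signal = rng.normal(size=n)

# 1) Bias in the data: historical outcomes were worse for group 1,
#    for reasons unrelated to the individuals themselves.
outcome = (signal - 0.8 * group + rng.normal(size=n)) > 0

# 2) Bias in the selections we make: we choose to feed the model a
#    feature (group membership) that encodes that history.
X = np.column_stack([signal, group])
X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, outcome, group, random_state=0)
scores = LogisticRegression().fit(X_tr, y_tr).predict_proba(X_te)[:, 1]

# 3) Bias in interpretation: one global threshold, chosen by people,
#    produces very different pass rates for each group.
for g in (0, 1):
    rate = (scores[g_te == g] > 0.5).mean()
    print(f"group {g}: pass rate {rate:.0%}")
```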

The ethics of a cell phone plan.

There was an example in the US where they were using whether somebody had a cell phone plan as an indicator of whether they would reoffend or not. When you scale that out from a data-point to a decision-making paradigm, it makes perfect sense. When you scale it down, you find individuals who absolutely should have been bailed, who weren’t because they didn’t have a cell phone. Which is just nuts, right?

THQ:
Huh. Who knew AT&T could save you from jail time?

RF-F:
Obviously, that’s not correct on an individual basis, because it harms individuals coming from certain minorities, certain parts of society, certain ages, and certain genders.

Which is a bad indicator, because even if, right now, “we” don’t get caught up in decisions like that, the fact that we can be means that one day we will be, because we’re all going in the same direction. It means there’s the potential for bias in the system.

And then if you scale that problem up, you get policy decisions and news generation that’s based on this data. And now we have laws and governance and public affairs and so on that also don’t represent the society within them. And you can see how we’re just going off on a trajectory that’s going further and further away from large portions of society.
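
That scaling-down is easy to show in code. Here’s a hedged sketch of the cell phone case – the weights and threshold below are invented, standing in for whatever the real risk tool used:

```python
# Invented weights and threshold, for illustration only; this is not
# the real US risk-assessment tool.
def risk_score(prior_offences: int, has_phone_plan: bool) -> float:
    # At population scale, lacking a phone plan correlates with
    # reoffending, so the (invented) model weights it heavily.
    return 0.4 * prior_offences + (0.0 if has_phone_plan else 0.5)

def grant_bail(prior_offences: int, has_phone_plan: bool) -> bool:
    return risk_score(prior_offences, has_phone_plan) < 0.5

# Scale it down to two otherwise-identical individuals:
print(grant_bail(prior_offences=0, has_phone_plan=True))   # True: bailed
print(grant_bail(prior_offences=0, has_phone_plan=False))  # False: denied
```

The aggregate statistics can look sound while individual cases, like the second one above, are plainly unjust.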

Artificial intelligence needs ethics to be effective.

And I know in the UK, we don’t want that. That’s not the society we want here.

We want an inclusive society.

SB:
That leads to one other point on bias. I was chairing a meeting a number of years ago now with Chief Data Officers, and one of the guest speakers got up and started talking about how bad some of the data was out there, and how it should all be classed as good moralistic data.

And that’s brilliant. Until I asked him whose morals he wanted to use, because my morals are going to be different to yours, and the morals of the West are going to be different to the morals of the East, and the morals of the Far East. And when you come to data, especially within AI, as Richard says, you’ve got this massive amount of data and it’s being churned over very quickly. You’ve got to be very careful of where those choices come from and how those choices are made.

White, middle-class artificial intelligence ethics?

And it’s people like ourselves who are writing the rules engines. But if we are all nice, middle-class, Western-thinking white male people, then those rules engines are going to be wrong for the other 60%-70% of the world. And we have to be very careful about that.

THQ:
That seems fairly important. After all, even among demographic groups, like middle-class Western-thinking white male people, you get different interpretations of ethics – that’s why political parties still exist. Expand that out to other demographics even within one society, and you’re looking at a multiplicity of ethical standards to incorporate within “good ethics” for artificial intelligence.

It would be wrong to apply a Western standard of ethics to worldwide artificial intelligence.

Bad things have a tendency to happen when Western powers enforce their ethics on other countries…

RF-F:
Exactly. So what’s the point in Simon and me coming into, say, Uganda, where 70% of people are below the poverty line, and bringing our ethics to their artificial intelligence? It’s not relevant. They have ethics in the country that we have to respect and understand, while still having an understanding of absolute harms. Everything that’s not an absolute harm, we need to be very respectful of.

In Part 2 of this article, we’ll explore more of the complexities of how we establish appropriate ethics for worldwide systems like artificial intelligence.

Our ethics, your ethics? To some extent, they’re down to the blind luck of geography, wealth, religion, and other such random factors. So how do we teach them to an artificial intelligence?