Amnesty International faces backlash for use of AI-generated images

AI image generation becomes the latest morality debate around the use of generative artificial intelligence.
5 May 2023

AI-generated hands: something’s off. Source: Shutterstock.

AI image generation is getting less light-hearted. Less philosophical. Less of a question of what is art and what isn’t.

The brutality systematically used by Colombian police to suppress national protests in 2021 was well documented: Amnesty International recorded numerous cases of human rights abuses committed by the Colombian police force that year.

This week, to commemorate the two-year anniversary of those events, Amnesty International tweeted an image of police officers manhandling a female protestor. During the protests, in cases documented by Bogotá-based Temblores, women were abducted, taken to dark buildings, and raped by groups of policemen.

The woman pictured in the tweet, however, was not one of them.

The police officers shown are also innocent – or as innocent as an AI-generated image can be.

The image does have the quality of an AI-generated picture, featuring warped facial features and small mistakes like out-of-date police uniforms. The tricolor carried by the “protestor” has the correct colors, but they appear in the wrong order.

Photograph: Amnesty International. Source: the Guardian.

The now-deleted tweet came under fire, with photojournalists and media scholars warning that the use of AI-generated images might undermine Amnesty’s work and feed conspiracy theories.

Claims of fake news, and news that actually is faked, are rife in the internet age. When rumors of Donald Trump’s arrest circulated, images that appeared to be photos of that arrest went viral – when in fact he had not been detained at all.

“As we know, artificial intelligence lies. What sort of credibility do you have when you start publishing images created by artificial intelligence?” said Juancho Torres, a photojournalist based in Bogotá.

Source: @danmoyn on Twitter.

Be it in relation to a news story that could well be expected to break, as with Trump, or the light-hearted image of Pope Francis in a long white puffer jacket – “dripped-out” – AI-generated “photos” are cropping up increasingly frequently. Their believability might rest on context, but as the images AI can create become more realistic – or less uncanny? – it is worth wondering what route photojournalism will have to take.

The images used by Amnesty International were flagged as AI-generated but were removed from social media all the same. “We have removed the images from social media posts, as we don’t want the criticism for the use of AI-generated images to distract from the core message in support of the victims and their calls for justice in Colombia,” Erika Guevara Rosas, director for Americas at Amnesty, said.

When an image elicits an emotional response, its credibility has to hold up. To be moved by something and then discover it was generated for precisely that purpose is likely to provoke indignation; although developers may aspire to give AI something like feeling, any hint of it acting as an emotional entity causes alarm and outrage. We don’t want to believe that an image faked by a machine could “trick” us emotionally.

The reasoning behind the choice, according to Amnesty, was to protect the identities of real protestors, who might still face criminal prosecution. In effect, the use of AI-generated images weighed facelessness against falsehood – and falsehood won out.

Jake Moore, global cyber security advisor with Eset, said: “Using generative artificial images simply weakens true reporting, even if these were made to amplify a point. When people realise a creation is used to plant an image in people’s minds, it completely diminishes confidence going forward and ruins credibility. Although a picture paints a thousand words, a fake picture will never be authentic and immediately removes trust.”

As AI is used more in place of humans – journalism is struggling against the tide of generative chatbot-written articles – the question of trust will arise again and again. Where is the line drawn around fake news? Can an article about a real event, generated by artificial intelligence, be considered factual? Is a fake image of real-life events a betrayal of photojournalism, or a safer way of illustrating real distress?

The battle for the soul of photojournalism has yet to be fought to a conclusion. Amnesty’s good intentions have ultimately not helped AI’s case as a useful addition to the arsenal of photojournalists around the world – especially those documenting the truth of human horrors.