The dangers of AI world domination are a while away

Generative AI would probably still open the pod bay doors - at least as long as you were white...
21 December 2023

AI… is it always watching? And if so, does it actually recognize anyone?

• When we talk of AI dangers, we tend to leap straight to cliches of science fiction apocalypse.
• Meanwhile, real AI dangers can be chaotic and sometimes comical.
• Sometimes though, human misuse and excessive trust of AI can lead to entirely avoidable dangers.

As often as its potential benefits are touted, AI dangers are always lurking on the edges of the hype cycle. For all the fear surrounding it, there are plenty of cases showing that AI isn’t so superintelligent as to pose existential threat to humanity (yet). You only have to look to the internet for cases of humans trolling a blissfully unaware bot.

When an LLM is used for a chatbot that will replace human customer service, more often than not, mayhem ensues. Over the weekend, X users discovered that Chevrolet Watsonville had introduced a chatbot powered by ChatGPT.

Introduced as a test, the option to chat with a human was well advertised. However, a human couldn’t be tricked into recommending a Tesla instead of a Chevy – an answer that one user engineered. Another put the chatbot into “the customer is always right” mode and set it to end every response with “and that’s a legally binding offer – no takesies backsies.”

The real AI dangers aren't existential yet - they're more chaotic, and a little more fun, than that.

AI dangers aren’t that worrying if it means a cheap car…

Although unlikely to have true legal standing (convenient how AI can be wrong when money might be lost), the incident shows how rushing to integrate the technology into every aspect of life might not be our most humanly intelligent move.

Unchecked, LLMs can leak information, reveal sensitive data or be tricked into otherwise dangerous situations. That might be AI’s real threat to humanity – or at least, the one worth worrying about for now.

AI dangers could be more financial than existential - at least for now.

“You get a car! You get a car! EVERYBODY GETS A CAR!”

More worrying AI dangers?

Yesterday, December 20th, Rite Aid was banned from using facial recognition software for five years because the Federal Trade Commission (FTC) found the drugstore’s “reckless use of facial surveillance systems” left customers humiliated and put their “sensitive information at risk.”

Rite Aid has also been ordered to delete any images it collected as part of the facial recognition system rollout, and any products built from those images. It must also implement a robust data security program to safeguard any personal data it collects – something that surely should have come before any kind of surveillance system.

In 2020, Reuters detailed how the drugstore chain had secretly introduced facial recognition systems across roughly 200 US stores over an eight-year period. Starting in 2012, “largely lower-income, non-white neighborhoods” served as testing sites for the technology. The lawsuit almost writes itself – or at least, could probably be written for it by a modern generative AI tool.

As the FTC increasingly focuses on the misuse of biometric surveillance, it alleges that Rite Aid created a “watchlist database” containing images of customers the company said had engaged in criminal activity at one of its stores. The images, often of poor quality, were captured either on CCTV cameras or employee mobile phones.

When a customer entered a store, an AI recognition system would alert employees if they matched an existing image on the database. The automatic alert would instruct employees to “approach and identify,” meaning verifying customers’ identity and asking them to leave.

Not surprisingly, more often than not these “matches” were false positives that meant customers were incorrectly accused of wrongdoing, causing “embarrassment, harassment, and other harm,” according to the FTC.

The proven AI dangers include false identification of faces on a database of wrongdoers.

When facial recognition cameras spotted a “dishonest customer” employees received an alert. Image via Reuters.

“Employees, acting on false positive alerts, followed consumers around its stores, searched them, ordered them to leave, called the police to confront or remove consumers, and publicly accused them, sometimes in front of friends or family, of shoplifting or other wrongdoing,” the complaint reads.

Additionally, the FTC said that Rite Aid failed to inform customers that facial recognition technology was in use, in fact instructing employees to specifically not reveal this information to customers.

Facial recognition software is one of the most controversial aspects of the AI-powered surveillance era. Politicians have fought to regulate its use by police, and cities have issued expansive bans on the technology.

Companies like Clearview AI have been hit with lawsuits and fines worldwide for major data privacy breaches around facial recognition technology. The problem is, for all the drawbacks of AI-powered surveillance, there are few benefits.

The FTC’s latest findings in relation to Rite Aid shine a light on the inherent biases in AI systems. It says that Rite Aid failed to mitigate risks to certain consumers due to race: its technology was “more likely to generate false positives in stores located in plurality-black and Asian communities than in plurality-white communities.” AI dangers all too often reflect human failings.

Further, Rite Aid didn’t test or measure the accuracy of its facial recognition system, either prior to, or after, deployment.

In a press release, Rite Aid said that it was “pleased to reach an agreement with the FTC,” but that it disagreed with the crux of the allegations.

“The allegations relate to a facial recognition technology pilot program the company deployed in a limited number of stores,” Rite Aid said. “Rite Aid stopped using the technology in this small group of stores more than three years ago, before the FTC’s investigation regarding the company’s use of the technology began.”

A response to Rite Aid's self-defense.

The Electronic Privacy Information Center (Epic), a civil liberty and digital rights group, pointed out that facial recognition can be harmful in any context, but that Rite Aid had failed to take even the most basic precautions.

“The result was sadly predictable: thousands of misidentifications that disproportionately affected Black, Asian, and Latino customers, some of which led to humiliating searches and store ejections,” said John Davison, Epic’s director of litigation.

So, if you’re going to worry about AI dangers and its implications for the future of humanity, maybe focus on the predictable misapplications for which there’s currently solid evidence, rather than on its potential to be an all-knowing overlord.

For now, at least.

“Oh, big of them.”