No, robots didn’t secretly kill 29 scientists in a lab

Putting conspiracies to one side, could workplace robots ever advance to pose a physical threat?
18 November 2019

Will robots ever pose a threat? Source: Shutterstock

Sometimes, popular tech conversations veer off in unexpected directions. At the end of October, Google Trends showed a tremendous increase in the number of searches related to an alleged act of robot homicide.

It was, in fact, a massacre, if the internet chatter is to be believed. Four robots, apparently disgruntled by their state of robotic servitude, annihilated an entire laboratory of 29 scientists.

The researchers’ academic curiosity, unchecked by the nagging warnings of a Jeff Goldblum-type character, had clearly gotten the better of them, and perhaps of mankind itself, should the robots ever emerge from the sterile, underground Japanese lab and into the light of day…

Needless to say, dear reader, these events never happened.

And actually, scratch my “needless to say” preface. Routine and rigorous factual corrections are evidently needed. In this digital age of social media and data-driven strategy, it is very easy to exploit fears, however plausible or implausible, and to propagate disinformation.

Snopes.com traced the wild claim back to Linda Moulton Howe, a ufologist who was apparently trying to broker a merger between fringe theories and fears.

The sheer scale and ceaseless repetition of a lie can make it more widely believed, but with this particular AI myth, some of the plausibility is likely drawn from the lie’s particular construction.

Different iterations of this sci-fi movie action sequence have alternated the setting between Japan and South Korea. Both are technologically advanced countries that have led the way in industrial automation and exports.

In addition, the geographical distance and language barriers deter fact-checking by users who are on the borderline of credulity.

Workplace conditions and reckless consumption

The location also recalls a fatal incident that may hazily exist in the back of some readers’ minds. In 1981, a maintenance worker at Kawasaki Heavy Industries jumped over a safety barrier to check on a malfunctioning robot. A hydraulic arm pushed him against another piece of machinery as his coworkers looked on, horrified and unable to stop it. This event did happen, but it had nothing to do with a sci-fi robotic rebellion scenario. It was a tragic workplace accident that occurred during a time of equipment and procedural modernization.

According to the International Labour Organization, a UN agency, there are approximately 340 million occupational accidents every year.

In the US, accidents were much more frequent and brutal during earlier periods of industrialization and the physical suffering was compounded by procedural failure. Severely injured workers had no hope of substantial legal remedies. Families often couldn’t recover anything after the death of a breadwinner. And employers could cite various, legally valid excuses, such as “contributory negligence” and “assumption of risk,” to dismiss any claims. Consequently, industrial methods were geared around ever-increased output, without much consideration for safety.

Workers’ compensation laws brought about significant improvements and paved the way for other social protections. However, to this day, occupational accidents are grossly underreported. The global picture of work-related mortality becomes even bleaker when you factor in work-related illnesses, caused by exposure to hazardous conditions, substances, and practices.

Workplace safety is a real, pervasive, complex, and ongoing challenge. Some of the accountability can be reasonably traced back to all of us, the consumers, for whom many dangerous workplaces are “out of sight, out of mind.” We reap the benefits of low prices and shame newly industrialized countries for their emissions as they make the things we buy. This is a big and existential issue. There is, however, no evidence of robotic malice or rising AI/human coworker hostilities in need of HR intervention.

To the extent that AI is harmful, it’s at our explicit or accidental direction, not the product of newly realized free will. It is, therefore, quite telling that we fear these outright fabrications but readily accept our own daily, unethical habits of production and consumption.

Will robots ever kill us though?

Nevertheless, when Linda Moulton Howe laid out her story at the Conscious Life Expo, she told her audience: “The scariest part is that lab workers deactivated two of the robots, took apart the third, but the fourth robot began restoring itself, and somehow connected to an orbiting satellite to download information about how to rebuild itself even more strongly than before.”

Note to disruptive startup founders: Make sure that your invention has unreliable cellular service. Otherwise, it might hop on a 5G network to vengefully Wikipedia surf its way through different alloy types.

No, dear reader, these events didn’t happen. But one day, they hypothetically could, and the probability of this or a comparably disastrous occurrence goes up dramatically without ethical guardrails and collaborative frameworks.

Boston Dynamics and other engineering companies are exploring various military applications for robotics. The future of weaponry should certainly give us pause. Society needs business and political leaders who can anticipate and plan for the future and proactively regulate, but we also need clearer assessments about what is actually happening today so that we can link up the right solutions.

Policy determinations

For example, some politicians have promoted universal basic income as the appropriate response to AI replacing workers. In 2019, that idea is likely a bit ahead of its time. Some AI vendors have exaggerated their product capabilities and, if you look under the hood, some corporations have essentially outsourced jobs as a part of their technological transformations. Depending on your political inclinations, a different type of action might be more appropriate in the near-term.

In a recent report from the Brookings Institution’s Artificial Intelligence and Emerging Technology (AIET) Initiative, Tom Wheeler recounted many historical episodes in which humans unreasonably feared and condemned technological advancement. Wheeler noted: “As we consider artificial intelligence, we would be wise to remember the lessons of earlier technology revolutions—to focus on the technology’s effects rather than chase broad-based fears about the technology itself.”

Others argue that we need a different approach because the pace of innovation is no longer the same.

In a somewhat infamous interview on Joe Rogan’s program, Elon Musk noted that there has historically been a decades-long delay between the damage and death caused by new technologies (such as cars) and the oversight and regulations that ameliorate the problems (such as seatbelts). He added, “This time frame is not relevant to AI. You can’t take ten years from the point at which it’s dangerous. It’s too late.”