Rogue Drones and Tall Tales

Sam Altman, CEO of OpenAI, wants you to know that everything is super. How has his world tour gone? “It’s been super great!” Does he have a mentor? “I’ve been super fortunate to have had great mentors”. What’s the big threat he’s worried about? “Superintelligence”.

Altman’s whistle-stop visit to London in late May was a chance for adoring fans and sceptics alike to hear him answer some carefully selected and pre-approved questions on stage at University College London. The queue for ticketholders stretched right down the street. For OpenAI, the UK trip was also a chance for Altman to meet Rishi Sunak, the latest in a line of world leaders to listen to the 38-year-old tech bro.

Prior to December last year, OpenAI wasn’t on the public radar at all. It was the release of ChatGPT that changed all that. Its large language model became the hottest software around. Students delighted in it. Copywriters panicked. Journalists inevitably turned to it for an easy 200-word opening paragraph to show how convincing it was. Then came the existential dread.       

Superintelligence has long been the stuff of sci-fi. It still is, but over the past few months it has somehow come to be treated as imminent, even though we aren’t anywhere near that point – and might never be. A cynic might wonder whether a Silicon Valley tech company has a vested interest in protecting its lead by calling for a moratorium on AI progress. Each week seems to bring yet another letter calling for a halt to development, signed by the very people who make the technologies. Where was this concern earlier, while they were building them?

Science Fiction and Fictional Science

Not everyone is convinced of the threat. There is vocal pushback from numerous other researchers who question the fearmongering, the motivation, and the silence on the AI issues already on the ground today: bias, uneven distribution, sustainability, and labour exploitation. But that doesn’t make for good clickbait. Instead, we see headlines so doom-laden that they could’ve been generated with the prompt: “write a title about the end of the world via an evil computer”.

Columnists, some of whose knowledge of technology comes from having watched The Terminator in the 80s, were quick to pontificate about the urgent need for global action right now, quick as you can, before the robot uprising.

In early June, most of the dailies were carrying the story that an AI-enabled drone had killed its operator in a simulated test. This was based on an anecdote from a colonel in the US Air Force, who had stated that, in a simulation: “the system started realising that, while they did identify the threat, at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”

Nice tale. Shame it wasn’t true. A retraction followed. But it is a good example of AI’s alignment problem: if we don’t phrase our commands properly, we risk Mickey Mouse’s panic over the unstoppable brooms in The Sorcerer’s Apprentice. The (fictional) problem is not a sentient drone with bad intentions; the problem is that we, the human operators, have given a badly worded order. It’s a tale we’ve been telling for years, right back to the Ancient Greek myth of King Midas: offered a reward, Midas asked that everything he touched be turned to gold, but he wasn’t specific enough, so his food and drink turned to gold too and he died of hunger. That tale has as much truth in it as the rogue drone one, but it shows we’ve been worrying about this for over 2,000 years.

Discrimination, Ghost Workers & Environmental Harm

The rogue drone story is also a good example of the deceptive and hyperbolic headlines rolled out on a regular basis, pushing the narrative that AI is a threat. News framing shapes our perceptions; done well, it makes an important contribution to public understanding of the technology – and we need that. Done badly, it perpetuates the dystopia.

We do need regulation around AI, but the existential risk from superintelligence shouldn’t be the reason. The UK government’s national AI strategy specifically acknowledges that “we have a responsibility to not only look at the extreme risks that could be made real with AGI, but also to consider the dual-use threats we are already faced with today” – yet those present-day threats are the stories that aren’t being told.

Missing, too, are the headlines about the harms already here. Bias and discrimination resulting from technologies such as facial recognition are already well documented. In addition, companies are outsourcing the labelling, flagging and moderation of data required for machine learning, which has resulted in the largely unregulated employment of poorly paid ‘ghost workers’, often exposed to disturbing and harmful content such as hate speech, violence and graphic images. This work is vital to AI development, but it is unseen and undervalued.

Likewise, we choose to ignore the fact that many of the components used in AI hardware, such as magnets and transistors, require rare earth minerals, often sourced from countries in the Global South under hazardous working conditions. There are significant environmental impacts too, with academics highlighting the 360,000 gallons of water needed daily to cool a mid-sized data centre.

If the UK government wants to show it is serious about the responsible development of AI, it can keep one eye on the distant future, but there is work to be done now on real and tangible harms. If we want to show we’re serious about an AI future, we need to focus on the present.