
What are the risks posed by AI?

Artificial intelligence is not new, nor is anxiety about its power. It is more than 25 years since IBM's supercomputer Deep Blue beat chess grandmaster Garry Kasparov, and the sophistication and capabilities of AI have increased ever since.

But recent leaps forward in AI have heightened alarm among technologists and jolted regulators around the world into action. They fear that AI could wipe out masses of jobs and reshape society if it continues to develop on its current trajectory. 

The surge in interest in, and fears over, the technology can be traced back to the launch of ChatGPT by OpenAI in November 2022. Within two months, 100mn people were using the chatbot, at the time the fastest adoption of a new consumer application ever.

The remarkable power of ChatGPT sparked joy and wonder among early users, but also apprehension. The chatbot represented a major advance in machine intelligence and appeared to exhibit distinctly human qualities: creativity, reasoning, even a sense of humour.

Its launch was therefore seen by many as a significant step towards artificial general intelligence: a computer system capable of performing any human task and of generating new scientific knowledge.

The release of ChatGPT also sparked an arms race between a handful of companies aiming to build powerful chatbots trained on large language models, fuelled by tens of billions of dollars of investment from big tech firms and venture capitalists.

The main competitors, including OpenAI boss Sam Altman and Twitter owner Elon Musk, are walking a tightrope. They argue that a superintelligent computer would dramatically expand the sum of human knowledge and help society solve its most wicked problems.

But, even as they pour huge resources into winning that race, they warn that there is a small but real risk that AI could destroy human life altogether.

What is the worst-case scenario? 

Some experts are concerned that unregulated, uncontrolled AI could ultimately pose a threat to human existence.

There are a number of ways the doomsday scenario could play out. One common fear is that an artificial general intelligence able to teach itself would rapidly outstrip human intelligence, developing in ever-quicker cycles into “superintelligence”. That could herald the redundancy, or even the extinction, of the human race.

The so-called “Terminator” scenario is familiar from science fiction, but some of those in the vanguard of AI development warn that it is now a plausible risk.

In May 2023, the Center for AI Safety published a one-sentence statement: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

Among the few thousand signatories were OpenAI’s Altman, executives at Google and Microsoft, senior figures at a number of other AI start-ups, and Geoffrey Hinton, often described as the “godfather of AI” for his work on deep learning. They were not sceptics or Luddites but some of the pioneers of the technology, pushing its frontier forward.

[Chart: AI research publications by topic, showing that research interest in artificial intelligence has soared]

Musk was absent from the list of signatories but has separately urged a six-month pause on developing anything more sophisticated than OpenAI’s latest generative AI model, GPT-4. The Tesla and SpaceX boss also has a long history in AI: he co-founded OpenAI but left the company in 2018. In July 2023, he formally launched his own competitor, xAI.

“Ultimately, these nation-state battles will seem parochial compared to a digital super intelligence,” Musk argued recently, discussing the launch of xAI.

What are the more immediate concerns?

AI poses many more immediate risks. These include: displacing humans from certain jobs; enabling the spread of convincing misinformation; breaching copyright; and manipulating people through rogue chatbots.

Mustafa Suleyman, co-founder of chatbot maker Inflection, recently warned that AI would create “a serious number of losers” as white-collar employees were put out of work by intelligent machines.

Lawyers, copywriters and coders are among those who fear their roles could be disrupted or supplanted by chatbots that can already ingest reams of data and spit out reasoned arguments.

More creative roles are also at risk. Hollywood’s top writers and actors have engaged in strike action this summer partly because they fear that AI could ape their work.

Can the AI industry mitigate the risks?

OpenAI and others are now investing heavily in “aligning” their AI models to a set of goals and ensuring they do not stray from them. These efforts are intended to prevent chatbots from spreading harm, hate or misinformation.

But a highly powerful tool could be co-opted by bad actors for exactly those ends, and Musk and others have expressed concern that a rogue AI could invert the principles used to “align” it and act against them.

Competition between the US and China has driven simultaneous surges in AI and defence spending, as the two countries vie for dominance of the technology and seek to shore up their national security. At the intersection of the two, AI is likely to transform modern warfare by expanding the ability of autonomous machines to find and kill human targets.

