Prime Minister Narendra Modi’s face is analysed on a screen to create an avatar. The Indian election this year did not appear to be disfigured by digital manipulation © Himanshu Sharma/dpa/AP

One of our shoutiest moral panics these days is the fear that artificial intelligence-enabled deepfakes will degrade democracy. Half of the world’s population are voting in 70 countries this year. Some 1,500 experts polled by the World Economic Forum in late 2023 reckoned that misinformation and disinformation were the most severe global risk over the next two years. Even extreme weather risks and interstate armed conflict were seen as less threatening. 

But, type it gently, their concerns appear overblown. Not for the first time, the Davos consensus might be wrong.

Deception has been a feature of human nature since the Greeks dumped a wooden horse outside Troy’s walls. More recently, the Daily Mail’s publication of the Zinoviev letter — a forged document purportedly written by Grigory Zinoviev, head of the Soviet-led Comintern — had a big impact on the British general election of 1924.

Of course, that was before the internet age. The concern now is that the power of AI might industrialise such disinformation. The internet has cut the cost of content distribution to zero. Generative AI is slashing the cost of content generation to zero. The result may be an overwhelming volume of information that can, as the US political strategist Steve Bannon memorably put it, “flood the zone with shit”.

Deepfakes — realistic, AI-generated audio, image or video impersonations — pose a particular threat. The latest avatars generated by leading AI companies are so good that they are all but indistinguishable from the real thing. In such a world of “counterfeit people”, as the late philosopher Daniel Dennett called them, who can you trust online? The danger is not so much that voters will trust the untrustworthy but that they will distrust the trustworthy.

Yet, so far at least, deepfakes are not wreaking as much political damage as feared. Some generative AI start-ups argue that the problem is more about distribution than generation, passing the buck to the giant platform companies. At the Munich Security Conference in February, 20 of those big tech companies, including Google, Meta and TikTok, pledged to stifle deepfakes designed to mislead. How far the companies are living up to their promises is, as yet, hard to tell, but the relative lack of scandals is encouraging.

The open-source intelligence movement, which includes legions of cyber sleuths, has also been effective at debunking disinformation. US academics have created a Political Deepfakes Incidents Database to track and expose the phenomenon, recording 114 cases up to this January. And it could well be that the increasing use of AI tools by millions of users is itself deepening public understanding of the technology, inoculating people against deepfakes.

Tech-savvy India, which has just held the world’s biggest democratic election with 642mn people casting a vote, was an interesting test case. There was extensive use of AI tools to impersonate candidates and celebrities, generate endorsements from dead politicians and throw mud at opponents in the political maelstrom of Indian democracy. Yet the election did not appear to be disfigured by digital manipulation.

Two Harvard Kennedy School experts, Vandinika Shukla and Bruce Schneier, who studied the use of AI in the campaign, concluded that the technology was mostly used constructively.

For example, some politicians used the official Bhashini platform and AI apps to dub their speeches into India’s 22 official languages, deepening connections with voters. “The technology’s ability to produce non-consensual deepfakes of anyone can make it harder to tell truth from fiction, but its consensual uses are likely to make democracy more accessible,” they wrote.

This does not mean the use of deepfakes is always benign. They have already been used to commit fraud and inflict personal distress. Earlier this year, the British engineering company Arup was scammed out of $25mn in Hong Kong after fraudsters used a digitally cloned video of a senior manager to order a financial transfer. This month, explicit deepfake images of 50 girls from Bacchus Marsh Grammar school in Australia were circulated online; the girls’ photos appeared to have been lifted from social media posts and manipulated to create the images.

Criminals are often among the earliest adopters of any new technology. It is their sinister use of deepfakes to target private individuals that should concern us most. Public uses of the technology for nefarious means are more likely to be rapidly exposed and countered. We should worry more about politicians spouting authentic nonsense than fake AI avatars generating inauthentic gibberish.

john.thornhill@ft.com
