The artist Refik Anadol used artificial intelligence to visualise nearly 2 million historical Ottoman documents and photographs in his installation called ‘Archive Dreaming’. AI is the world’s most useful, and tireless, research assistant © Chris McGrath/Getty Images

Here’s one question that even the smartest minds aided by the most powerful machines will struggle to answer: at what point do the societal costs of not exploiting a transformative technology outweigh the conspicuous risks of using it?

Much public attention has rightly focused on the worrisome uses of artificial intelligence, such as killer robots or omnipresent facial recognition technology. This has led to demands for stricter regulation, as is now developing in the EU. But what remains unknowable are the benefits that may be lost to society by not fully using AI in responsible ways. We understandably recoil at the possibility of technology companies gaining privileged access to confidential medical records. Yet we rarely recognise their valuable input in helping prioritise clinically vulnerable groups in vaccination campaigns during the current pandemic.

To talk with Demis Hassabis, the co-founder of Google DeepMind, is to be reminded of the intellectual excitement surrounding the technology. As Hassabis sees it, AI is the ultimate general purpose learning machine, one that can empower humanity to tackle the greatest challenges of our times: healthcare, energy transition and economic productivity. Think of it as the world’s most useful, and tireless, research assistant. 

While DeepMind shot to global prominence by building game-playing systems such as AlphaGo, it has since focused on crunchier real-life challenges. Its AlphaFold system enables scientists to model protein structures, promising faster drug discovery. The company has helped cut electricity consumption at Google’s vast data centres by more than 30 per cent — even if large-scale AI models remain voracious consumers of energy. The London-based research company has also helped develop text-to-speech systems that give voice to our increasingly ubiquitous digital assistants.


Although some experts suggest that returns on the deep learning techniques used by DeepMind are now diminishing, Hassabis argues we have only scratched the surface of AI’s potential. “It’s going to yield some incredible breakthroughs in the next 10 plus years that will really advance our understanding of the natural world,” he says. To his mind, it would be negligent not to arm all doctors in future with the latest medical knowledge that only AI can deliver at scale.

But the AI field has been increasingly clouded by controversies over algorithmic bias, the erosion of privacy, the excessive concentration of corporate power and the longer-term threat of a runaway superintelligence. For his part, Hassabis supports moves to establish clearer rules on the use of sensitive data. He accepts that AI companies must work even harder with civil society to enforce ethical guidelines, and he criticises “totally ridiculous applications” of the technology, including parole and sentencing decisions, because of the impossibility of codifying nuanced human judgment. He adds that much collaborative research needs to be done with outside partners over what goals and values to embed in AI systems over the next decade.

Yet the broader concern is whether the big technology companies have themselves become so dominant that they now jeopardise the development of AI for the public good, having asset-stripped many universities of their leading researchers. One senior tech executive says he fears a “privatised version of China” in which omniscient technology companies learn more about us than we know about ourselves.

In spite of reports that DeepMind has sought greater autonomy from its parent company, Hassabis insists the relationship brings benefits. Having bought DeepMind in 2014, Google has helped scale its research in a way that would have been otherwise unimaginable. Yet other AI experts have reached a different conclusion: profits and ethics sleep uncomfortably together. A group of senior researchers at OpenAI has just quit the San Francisco-based research company that developed the GPT-3 language generation model to launch Anthropic, which focuses on AI safety. The researchers were said to be unsettled by the increasing corporate influence over OpenAI following a $1bn investment by Microsoft.

The uses of AI are too varied and consequential for any one government, company or research organisation to determine. But the profit motive that currently directs so much research in the field risks distorting its outcomes. Public debate about where the balance lies between innovation and regulation may be raucous and messy, but it is both inevitable and good that it is growing louder.

john.thornhill@ft.com


Copyright The Financial Times Limited 2024. All rights reserved.