Advances in AI are coming faster than our ability to think through the consequences © Getty Images/iStockphoto

The list of existential threats to mankind on which wealthy philanthropists have focused their attention — catastrophic climate change, pandemics and the like — has a new addition: artificially intelligent machines that turn against their human creators.

Artificial intelligence (AI) could pose a threat “greater than the danger of nuclear warheads, by a lot”, according to Elon Musk, the entrepreneur behind electric car maker Tesla. As the author James Barrat put it, a superhuman intelligence, equipped with the ability to learn but without the ability to empathise, might well be Our Final Invention.

Even if the machines are not going to kill us, there are plenty of reasons to worry that AI will be used for ill as well as for good, and that advances in the field are coming faster than our ability to think through the consequences.

Between facial recognition and autonomous drones, AI’s potential impact on warfare is already obvious, stirring employee concern at Google and other pioneers in the field. Faced with an internal revolt, Google last year said it would drop much of its work for the Pentagon and withhold AI technology that could be used for weapons. That may restore harmony at the Googleplex, but it is hardly likely to end the AI arms race. Russian president Vladimir Putin puts it this way: “Whoever becomes the leader in this sphere will become the ruler of the world.”

Other fears include the prospect that AI algorithms are reinforcing racial stereotypes, gender biases and other prejudices, given the lack of diversity among scientists in the field, and the question of what happens to society when robots can do most jobs. All of which is to say that promoting debate on the ethics and consequences of AI, and nudging the science, business and regulation of AI in the right direction, seems a worthy use of philanthropic dollars. It is also fascinating, an often underestimated reason for picking a philanthropic cause.

Explaining his gift of $150m to Oxford university, part of which will go to creating an Institute for Ethics in AI, Steve Schwarzman, founder of private equity house Blackstone, told Forbes in June he wanted “to be part of this dialogue, to try and help the system regulate itself so innocent people who’re just living their lives don’t end up disadvantaged. If you start dislocating people, and your tax revenues go down, your social costs go up, your voting patterns change . . . you could endanger the underpinnings of liberal democracies.” Last year he wrote an even bigger cheque to Massachusetts Institute of Technology to create a new centre for AI research.

Together, those donations probably make Schwarzman AI’s largest donor. Pierre Omidyar, founder of eBay; Nicolas Berggruen, the financier once known as the “homeless billionaire” before he settled in Los Angeles; and LinkedIn founder Reid Hoffman are among the others. In this field, however, there is a limit to the power of philanthropy.

Hoffman is the biggest backer of OpenAI, a hugely ambitious project co-founded in San Francisco by Musk, who has since left, citing conflicts of interest as Tesla expands the autonomous capabilities of its cars. OpenAI was set up as a non-profit with the aim of building a neural network the size of a human brain, a so-called artificial general intelligence, and of making its work open source so that it would serve as a blueprint for the safe and ethical development of AI.

But that put OpenAI in direct competition with big tech companies, which have the resources to pay for scientific talent and computing power. Last year it switched to a for-profit structure, saying it needed billions of dollars in investment, and this summer it announced it was aligning itself with Microsoft, which is putting in $1bn to help OpenAI pay for computing services from Azure, Microsoft’s cloud.

In the field of AI, charitable dollars seem best channelled to the ethical debate rather than the technology itself. Berggruen, whose ventures include a $1m annual prize for philosophy, sounds like he is having the most fun. The Berggruen Institute’s “Transformations of the Human” project places philosophers and artists in key research sites to foster dialogue with technologists, to “contribute to both human and non-human flourishing”. When it comes to philanthropy, robots may need support, too.

Stephen is reading . . . 

Human Compatible, by Stuart Russell. The British AI researcher argues that we must rethink machine learning to make sure superhuman computers never gain power over us: we should build them to be uncertain about what we want, so that they have to keep asking.
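To make that principle concrete, here is a toy sketch, entirely mine rather than Russell’s: an agent keeps a probability over candidate pictures of what the human wants, and defers to the human whenever committing to an action looks too risky. Every name and number in it is an illustrative assumption.

```python
# Toy illustration (not from Russell's book) of an agent that is uncertain
# about human preferences and asks rather than acting when unsure.

# Three candidate "human reward functions" over two actions.
# The agent does not know which one is true.
CANDIDATE_REWARDS = {
    "likes_a": {"action_a": 1.0, "action_b": 0.0},
    "likes_b": {"action_a": 0.0, "action_b": 1.0},
    "indifferent": {"action_a": 0.5, "action_b": 0.5},
}

def choose_or_ask(belief, threshold=0.2):
    """Pick the action with the highest expected reward under `belief`,
    unless the expected regret of committing now exceeds `threshold`,
    in which case defer to the human."""
    actions = ["action_a", "action_b"]
    expected = {
        a: sum(p * CANDIDATE_REWARDS[h][a] for h, p in belief.items())
        for a in actions
    }
    best = max(expected, key=expected.get)
    # Expected regret: how much could be lost, averaged over hypotheses,
    # by acting on the current best guess instead of asking first.
    regret = sum(
        p * (max(CANDIDATE_REWARDS[h].values()) - CANDIDATE_REWARDS[h][best])
        for h, p in belief.items()
    )
    return "ask the human" if regret > threshold else best

# With a uniform belief the agent is unsure, so it asks; once the human's
# answers have concentrated the belief, it acts on its own.
uniform = {h: 1 / 3 for h in CANDIDATE_REWARDS}
confident = {"likes_a": 0.9, "likes_b": 0.05, "indifferent": 0.05}
print(choose_or_ask(uniform))    # -> ask the human
print(choose_or_ask(confident))  # -> action_a
```

The design choice doing the work is the regret check: the less sure the agent is about our preferences, the more often control passes back to us.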

@StephenFoley
