A patent predicament: who owns an AI-generated invention?
If a computer has an idea, who owns it? This may sound like the start of a riddle but the question is taxing intellectual property experts worldwide — and their response will have a profound effect on how we innovate.
More than ever, people make things with their heads rather than their hands. Demand for intellectual property rights is growing at a faster rate than the global economy, according to the World Intellectual Property Organization (Wipo). Its data show that a record 3.17m patent applications were filed in 2017, up 6 per cent on the previous year.
Trends in patent applications also hint at shifts in the balance of the world economy. Asia received about two-thirds of all applications in 2017, with China seeing the highest volume.
Much of this growth is happening in artificial intelligence — a field that threatens to render its inventors obsolete. There have been about 340,000 applications related to AI since the 1950s, and 53 per cent of all patents related to AI have been published since 2013.
Now, scientists are beginning to develop machines capable of coming up with ideas outside their creator’s expertise. This raises the question of who owns the intellectual property for an AI-generated invention.
A team of lawyers is trying to have Dabus, a system developed by Missouri-based AI expert Stephen Thaler, recognised as the inventor of two ideas by US, UK and European authorities. The first is a food container designed to fit tightly to others, and the second is a light that flickers in a pattern that mimics brain activity, making it harder to ignore and therefore useful in emergencies.
Dabus (which stands for device for the autonomous bootstrapping of unified sentience) uses an artificial neural system to mimic the creative process of a human brain. It turns information it has learnt into ideas and then uses its cumulative experience to judge their merit.
Under the legal systems of the UK, US, China, Germany and many others, however, only a human can be recognised as an inventor. This could render Dabus’s innovations unpatentable, even though they would have qualified for protection if a human had conceived them.
Ryan Abbott, professor of law and health sciences at the University of Surrey and one of the lawyers campaigning on behalf of Dabus, argues that the AI system should be recognised as the inventor. The system’s creator would then hold the patent, in the same way that someone named as a successor in title can take ownership rights over an estate or business.
The problem is that if AI cannot be recognised as an inventor, the owners of the AI will have no protection for the ideas their systems generate. This may discourage them from pursuing further development. Not recognising the AI as an inventor threatens innovation by “failing to encourage the production of socially valuable inventions”, argues Mr Abbott’s team.
While the idea of granting intellectual property protections to a machine may seem a niche concern now, it will become more pressing as AI systems invent more routinely. Indeed, it is possible that AI is already generating new ideas but that its role is being concealed because of legal uncertainty, according to Mr Abbott.
“AI will need to play a more important role in research and development,” he says. “These machines may be the best way we have to come up with inventions . . . We need a framework for protecting this stuff.”
Machine-generated inventions may arrive sooner than many assume. Half of experts working in AI believe machines will be able to carry out most professions at least as well as a typical human by 2040-50, according to a survey by the Future of Humanity Institute at Oxford university.
Some 90 per cent of respondents believe AI will reach this level by 2075. One in 10 experts said that, within two years of achieving this, AI would reach a level that “greatly surpasses the performance of every human”. Three-quarters said it would achieve this “superintelligence” within 30 years of reaching parity with humans.
The ownership debate may be a distraction from more urgent concerns, however. Given the large amount of information required for an AI system to begin developing insights, questions of data access and ownership are more pressing than questions of inventorship, argues Francis Gurry, director-general of Wipo.
“What would be the point in granting a property right to a machine?” he asks. “Personally I don’t think they’re the most important questions . . . Sooner or later you’ve got a human somewhere.”
A research report from Wipo found that some experts are concerned that the development of AI could lead to a “race to the bottom” for data protection, as governments may liberalise privacy laws to facilitate the improvement of the technology.
Both Mr Abbott and Mr Gurry agree that the world’s patent systems need to move faster to keep up with the pace of technological change. “Patent offices are not famous for moving quickly,” says Mr Abbott.