A protester holds a placard against use of AI at an Equity actors union rally in London
AI has caused alarm among artists, authors, musicians and media groups who are concerned that their work will be mined and reproduced without payment © Orlando Britain/Alamy

The UK has shelved a long-awaited code setting out rules on the training of artificial intelligence models using copyrighted material, dealing a blow to the creative industry.

The Intellectual Property Office, the UK government’s agency overseeing copyright laws, has been consulting with AI companies and rights holders to produce guidance on text and data mining, where AI models are trained on existing materials such as books and music. 

However, the group of industry executives convened by the IPO to oversee the work has been unable to agree on a voluntary code of practice, meaning responsibility has been handed back to officials at the Department for Science, Innovation and Technology, according to multiple people familiar with the discussions. Officials at the Department for Culture, Media and Sport are also involved, they said.

Representatives came from various arts and news organisations, including the BBC, the British Library and the Financial Times, as well as tech companies Microsoft, DeepMind and Stability AI.

The government is expected to publish a white paper in the coming days setting out further proposals for AI regulation in the UK. It is likely to refer to the need for industry agreement on AI and copyright, the people said, but will fall short of setting out any definitive policies.

The failure of the UK talks comes as AI has caused alarm among artists, authors, musicians and media groups who are concerned that their work will be copied and reproduced without payment. In the US, meanwhile, the New York Times recently sued OpenAI and Microsoft for copyright infringement.

The IPO had been due to publish a code of conduct by the end of summer last year to clarify protections for rights holders, offer guidance on working with tech groups and address compensation.

AI companies want easy access to vast troves of content to train their models, while creative industries companies in print and music are concerned that they will not be fairly compensated for its use. 

“This is a fast-moving and complicated area,” said a person with knowledge of the talks, who added that it had been difficult to find consensus between the two sides. “It’s a tough task and frustrating that it hasn’t been taken further by now. There are competing interests on the table. This won’t be solved overnight.”

The impasse highlights the delicate balance the government is trying to strike between protecting the creative industry and allowing growth and innovation in AI.

“The industry is asking for transparency on what models have and haven’t been trained on, and what works are being used,” said Reema Selhi, head of policy at the Design and Artists Copyright Society, who was part of the group tasked with devising the code. “The IPO hasn’t found answers to those questions.”

Two people with knowledge of the situation added that the government was again sounding out “stakeholders” among the different companies to try to get agreement. “The question is where to put the balance. The government will have to come to a position,” one of them said.

The government wants to avoid legislation in such a fast-moving and contentious area, according to those people, and so still favours a voluntary approach such as a new code.

“The IPO has engaged with stakeholders as part of a working group with the aim of agreeing a voluntary code on AI and copyright,” a government spokesperson said. “We will update on that work soon and continue to work closely with stakeholders to ensure the AI and creative industries continue to thrive together.”

Leading tech companies, including OpenAI, Microsoft and Google, have been brokering deals with news organisations after publishers complained that their content had been used to train large language models, the technology underpinning products such as the AI chatbot ChatGPT.

The AI and research sectors hoped the code would make licences for data mining readily available on reasonable terms, while protecting copyright and ensuring AI-generated content would be labelled clearly.

On Friday, the House of Lords Communications and Digital Committee said the government should support copyright holders, and “cannot sit on its hands” while developers of large language models exploit the works of rights holders. 

Copyright The Financial Times Limited 2024. All rights reserved.