
A bill that allows artificial intelligence models to be trained on copyrighted material without the rights holders’ knowledge has been passed in the UK. This followed a months-long debate about whether it should be amended to force tech companies to disclose information about their training data.
The bill ping-ponged between the House of Lords and House of Commons for weeks
The Data (Use and Access) Bill contains a host of new rules around data sharing, but the most contentious relate to AI. In January, Baroness Beeban Kidron, a House of Lords member, filmmaker, and AI ethics expert, proposed an amendment that would require operators of AI models “to disclose information regarding text and data used in the pre-training, training, and fine-tuning of general-purpose AI models.”
She argued that artists and other rights holders deserve transparency and accountability from AI developers, particularly when their work is used without consent to train systems that may later compete with them creatively or commercially.
However, many members of the House of Commons disagreed. They claimed that the amendment would discourage companies from developing and releasing AI products in the UK, as disclosure requirements would add an undue burden and force them to reveal their proprietary data sources.
This disagreement sparked a weeks-long legislative back-and-forth between the two Houses, with the amendment repeatedly rejected, rewritten, and replaced by alternative proposals.
Hundreds of creatives, including Paul McCartney, Elton John, and Dua Lipa, signed an open letter urging the UK government to support the Lords’ amendment and uphold stronger copyright protections in the AI era.
Government refused the amendment because discussions around AI and copyright are ongoing
The UK Government, represented in the House of Commons, also did not want to delve into the details of AI copyright law in this particular bill, arguing that doing so could introduce regulatory burdens without allocated funding.
Furthermore, it is separately digesting the results of a consultation exploring possible ways to protect rights holders while enabling AI innovation. One of the proposals discussed in this consultation was to allow AI developers to train their models on creators’ online content by default unless rights holders explicitly opt out.
Bodies representing the creative industries largely rejected this proposal, as it put the onus on creators to exclude their content rather than requiring AI developers to seek consent. Tech companies didn’t like it either, arguing it would complicate identifying legally usable content for commercial AI training and that they’d rather have unrestricted access to all of it. Policy experts say allowing some creators to opt out would result in biased models.
Meta’s former global affairs chief and former UK Deputy Prime Minister, Nick Clegg, said it was “implausible” to seek permission from every artist given the vast scale of data used to train AI models, and that doing so when no other country does would “kill” the UK’s AI industry.
Furthermore, the Government may want to consider the results of the long-awaited Getty Images v Stability AI trial, which started on Monday. Getty alleges Stability AI copied millions of copyrighted images without permission to train its AI image generator, Stable Diffusion, while Stability AI denies the infringement claims and argues that the lawsuit aims to stifle innovation in the AI space.
A final compromise
On June 10, the House of Commons voted to reject Baroness Kidron’s amendment for the final time but agreed to a compromise: the Government would publish a report outlining its proposals on copyright and AI within nine months of the Data (Use and Access) Bill receiving Royal Assent. The House of Lords ultimately accepted this concession, opting not to further delay the passage of the bill, which also includes important provisions for digital infrastructure and public service improvements.
Ayesha Bhatti, head of digital policy for the UK and EU at the Center for Data Innovation, is pleased that the “prescriptive” rules that the Lords’ amendment would have brought were rejected. She told TechRepublic in an email: “Mandating sweeping new transparency obligations at this stage would have risked unintended consequences for the UK’s AI ecosystem, just as global competition intensifies.”
Edward Machin, data, privacy and cybersecurity counsel with law firm Ropes & Gray, says the bill offers “evolution, not revolution” and that the Government has merely “kicked the can down the road” when it comes to the AI copyright debate.
He told TechRepublic in an email: “Against the backdrop of the UK’s first major trial on AI and copyright works, debates globally on how to regulate artificial intelligence, and the Government wanting to maintain a light-touch regime for AI, the Parliamentary ping-pong over whether to include these provisions in the Bill makes clear that there will now be a real fight over when and how to legislate AI in the UK.”
Read TechRepublic’s recent news coverage about London Tech Week.