The Society of Authors recently announced its policy on Artificial Intelligence (AI) for the book trade:
There is no lack of media coverage of artificial intelligence (‘AI’) systems, nor of the social media trends that the release or update of AI services has generated.
Whatever your area of work, whether you are an academic, an illustrator, a poet, a scriptwriter or a translator (to name a few), AI systems are being trained on existing copyright-protected works (input) and these same systems are being used to generate works ‘in the style of’ those existing works (output).
The AI development race is opaque, unfettered and unregulated, and driven primarily by the profit motives of large corporations, despite some likely adverse impacts. The ethical and moral ramifications of these AI systems are complex, and the legal ramifications are not limited to the infringement of copyright’s economic rights, but may include infringement of an author’s moral rights of attribution and integrity and right to object to false attribution; infringement of data protection laws; invasions of privacy; and acts of passing off.
And these aren’t issues for a hypothetical future.
For example, we have already seen one major genre publication (Clarkesworld) close to submissions as a result of being flooded by AI-generated stories. Dominant retail groups are dedicating listings to ebooks written solely by AI systems. Audiobook voice actors and performers are having their voices replicated by AI systems without acknowledgement, authorisation, or compensation. The Italian data-protection authority has banned ChatGPT altogether over concerns that it is in breach of data privacy laws. Even Elon Musk and Apple co-founder Steve Wozniak are among the 3,000 artificial intelligence experts and technologists to have signed an open letter urging developers to pause their work on creating even more powerful systems until ‘we are confident that their effects will be positive and the risks will be manageable’.
This isn’t ‘us vs them’
This is not to detract from the potential benefits of machine learning – see for example the recent report by the Publishers Association ‘People plus machines: the role of artificial intelligence in publishing’.
Some creators are already making use of AI systems to assist them in creating their works. But with progress comes risks – these risks need to be assessed, and safeguards need to be put in place to ensure that the creative industries will continue to thrive.
The debate over regulating new technologies is often polarised, pitting the interests of authors against those of ‘big tech’. This framing rests on the misconception that a strong intellectual property regime hinders innovation, and that innovation can only happen to the detriment of authors and creators. Such false binaries risk the livelihoods – career and income – of an entire swathe of the UK economy.
These misconceptions ignore the fact that the intellectual property regime, and the authors and creators who rely on it, are key to technical innovations like AI being possible in the first place.
These rights regimes are equally important to creators, who contribute significant amounts of revenue to the UK economy – to the tune of £115.9–132.1 billion in GVA. Individual creators are far from insignificant economic actors. They create goods and provide services that all contribute to the UK’s national economy. We are deeply concerned about the reported misuse of authors’ works in the development of AI systems and the longer-term impact that such systems might have on their livelihoods. The consent and remuneration of authors whose works are being used to develop and train AI systems is vital.
The creative industries are founded on human creativity. There is no reason – cultural, financial, professional or otherwise – why that should change. Creators’ business interests and their rights must be preserved and respected.
What we are asking for
Author consent should be required before their work is used by an AI system, in line with existing copyright law. In the UK, the Copyright, Designs and Patents Act 1988 provides economic and moral rights to authors whereby an author’s work cannot be used without their consent – these provisions apply across the board, including to AI developers and any use by AI systems.
Creators must be asked before their work is used – through opt-in rather than opt-out systems.
Authors should also have clear rights to object to deepfakes and ‘in the style of’ imitations. If AI is used to judge or evaluate work, authors should have the right of review by a human assessor.
We are fortunate in the UK to have a sophisticated, world-leading licensing system which already works well, and which ensures the collective management of rights and subsequent remuneration of authors for the exploitation of those rights. These systems are open and transparent, and subject to government regulation. AI developers may wish to work with creators to consider models for extending such systems to machine learning.
Such consent cannot be assumed or taken by rights grabs. Publishing contracts should confirm that publishers are not able to license the whole or any part of an author’s work for machine learning purposes without explicit, specific consent from the author.
Transparency over data sources (input)
Developers should be required to publish the data sources they have used to train their AI systems – information currently treated as proprietary by system developers. This obligation should extend to full transparency, so that copyright holders know when their works have been used.
These uses should be in compliance with existing copyright laws. There is growing evidence that several well-known AI systems have been trained on pirated material in clear infringement of copyright. These AI systems cannot ‘un-learn’ the information they have been given. If copyright has been infringed, the infringing works cannot be removed simply by deleting a row from a database. Those works will remain part of the system for as long as it is online.
Government must go further to regulate for greater protections for writers, artists and other creative professionals whose work makes these systems possible.
Creators should always be paid fairly for their work. If the developers of these systems are being paid for their work, it is unreasonable to expect creators not to be.
Transparency over output
Readers and viewers deserve transparency when they purchase a copy of a creative work – anything produced by an AI system should be labelled as such.
But machines cannot be authors. Our copyright regime relies on concepts of human originality and skill and labour. Only humans can create and receive copyright protection.
Publishers and producers should be mindful of the value of supporting human creativity, and should avoid using AI tools which remove human contribution and risk damaging the fragile, interconnected workflows that creators rely on to survive.
What we’re doing
We continue to work closely with industry partners, nationally and internationally, and government officials.
We were encouraged by the renewed commitment by government in its White Paper to underpin any development in this area by principles that we can all applaud:
- Safety, security and robustness
- Appropriate transparency and explainability
- Accountability and governance
- Contestability and redress
Read the initial SoA response to the White Paper here.
Read our practical steps for authors regarding the use of AI here.
However, these principles do not go far enough. We will continue to work with government officials for the better protection of the rights of writers, artists and other creative professionals whose work makes these AI systems possible.
In line with the recent report by the House of Lords, which urged government, in no uncertain terms, to protect IP and better fund and support the creative industries, we are hopeful that, together, we can achieve positive change.
Write to us
We encourage members to write to us with questions and examples of AI use, so we can continue to monitor this rapidly developing issue.
See here for further information.