[A big thanks to Praharsh for his inputs on the post.]
This is Part II of the two-part post on the recent Bombay High Court (BomHC) order in the case of Arijit Singh v. Codible Ventures LLP that has initiated a judicial discussion on the protection of artists’ personality rights against the unauthorised use of their voices by AI tools. In Part I of the post (here), the BomHC’s decision and analysis on the issue of AI cloning of singers’ voices was discussed. Part II will explore the global context of AI voice cloning and also pose certain questions to consider for similar cases that may arise in the future.
Clone the Tone: What is the Position of Protection against AI Voice Cloning Internationally?
Free access to AI voice cloning technologies has definitely caused a ruckus across jurisdictions, with misuse ranging from politics to entertainment to crime. These technologies, capable of replicating voices with up to 95% accuracy in multiple languages and accents, have been misused more often than put to good use. India is seeing a notable rise in AI-driven scams owing to freely accessible voice-generation tools. Over a dozen websites now offer these services, spurring concerns globally about the misuse of such technology.
December 2023 saw artists from South Africa to Europe, and Japan to the United States, unite to safeguard their professional and personal identities against the misuse of AI-generated voices. Major incidents highlighting the misuse of AI voice cloning include: Jay-Z taking legal action against deepfakes that featured him rapping Hamlet and Billy Joel; voice actors suing Lovo (an AI startup) for incorporating their voices into a chatbot named Poe; a viral track featuring AI-generated versions of Drake and The Weeknd, which was subsequently removed from streaming services; voice actor Armando Plata finding his voice copied for advertisements without his consent; Scarlett Johansson considering a right of publicity claim against OpenAI over its voice assistant, Sky, which was alleged to sound similar to her; and, most recently, an incident from Maryland where a high school principal was framed as racist through an AI-faked voice recording.
In the US, a voice isn’t explicitly protected under copyright law, but there are potential protections under the right of publicity, which is enforced through state laws on the appropriation of likeness, name, and voice. Legal precedents like Midler v. Ford (where Bette Midler rejected an offer to voice a Ford commercial, prompting Ford to use a voice double and an altered version of her song, leading the court to rule in Midler’s favour for voice appropriation) demonstrate that unauthorised use of a person’s voice can be actionable. Most recently, the state of Tennessee enacted the Ensuring Likeness, Voice, and Image Security (ELVIS) Act to replace the Personal Rights Protection Act, aimed specifically at protecting music industry professionals from unauthorised AI voice cloning. The No Fakes Act, recently introduced as a Bill in the U.S. Congress, likewise aims to protect actors and singers from unauthorised AI replicas.
In the UK, there isn’t a standalone right of publicity, which means that voice actors have limited control over how their voices are used commercially. They might rely on the “law of passing off” to protect their interests, which requires demonstrating substantial reputation and goodwill associated with their voice. Moreover, both in the EU and the US, privacy laws also come into play alongside intellectual property protections.
Turning to jurisdictions closer to home, consider China. The Beijing Internet Court delivered the country’s first ruling on AI-generated voice rights in June 2024, finding that a software company infringed an individual’s ‘personality rights’ (which include the ‘use’ and ‘publicisation’ of their likeness or image under the Chinese Civil Code) by using an AI tool to replicate their voice without consent and distributing it on various platforms. These developments underscore the urgent need for a proactive stand against voice cloning in India as well.
Of Machines and Men: What Does the Future Hold for Such Digital Duplicacy?
While for the time being the BomHC has ordered various entities to remove content that violates Singh’s personality rights, the larger matter of the infringement of the singer’s personality and moral rights remains unresolved, with the case scheduled for September 2. However, this order is bound to attract attention for rightly emphasising the importance of safeguarding personality rights in the digital age, where AI tools can easily replicate a celebrity’s personal attributes. The BomHC’s ruling aptly recognised that unauthorised exploitation of a celebrity’s persona not only infringes on their legal rights but also jeopardises their career and personal brand. It clearly establishes that AI cannot be used to exploit celebrity personas for profit, emphasising the need for ethical use of technology. This decision is likely to influence future legal standards on personality rights and the application of emerging technologies.
However, this order is only the starting point for a significant body of jurisprudence that is likely to evolve as more technologies emerge, demanding greater judicial prudence from Indian courts. The order also raises several questions about how similar cases will be addressed both in this specific context and in general for future cases:
Firstly, in the present case, the Court observed clear infringement by the defendants, who explicitly used Arijit Singh’s likeness, such as through blogs detailing how to mimic his voice. However, where such explicit references are absent, how will the Court determine whether the personality rights infringed are those of a particular celebrity and not of some other celebrity with similar attributes? For instance, if someone argues that their personality rights are infringed by an AI cloning their voice, but their voice or persona closely resembles another’s (as in a hypothetical dispute between Amitabh Bachchan and singer Sudesh Bhosale, who mimics Bachchan’s voice), how would the Court assess the distinctiveness and recognisability of the persona in question to establish whether personality rights have been violated?
Secondly, the BomHC noted that freedom of speech and expression allows for critique and commentary, but did not expand on the extent to which fair use of an artist’s personality rights for parodies and other creative works (for example, remixes of popular songs in the voices of other famous singers) would be permitted. How can we assess the culpability of AI as the ‘intentional’ creator of content that copies an artist if such copying is an unintentional byproduct of its functioning? For instance, ChatGPT was reportedly trained on copyrighted books, including those of J.K. Rowling, prompting authors to sue the company over copyright infringement. While the outcome of this litigation remains pending, OpenAI contends that the AI model does not intentionally plagiarise or replicate specific copyrighted texts but generates text based on patterns and information drawn from a wide range of sources available on the internet. Is this argument tenable when AI software reproduces and ends up infringing the IPRs of artists? A similar question arises when a human mimics the content of another versus when an AI does it. While mimicry artists are generally permitted to imitate voices as part of their craft, the use of AI to replicate or clone a celebrity’s voice raises the question of whether it should be treated any differently. Another concern is that it might affect content creators, like YouTubers such as Anshuman Sharma, who use such techniques to put out remixes of popular songs in the voices of different singers.
Thirdly, do non-famous people get any protection if their voice is copied and used? While cases involving well-known personalities have garnered attention, what would be the standards to determine whether a ‘normal’ person, who has no such popularity, has had their voice misappropriated and misused?
Lastly, what quantum of compensation will the court grant after trial as a remedy to deter those who may have already earned millions from generating such videos? The monetary penalties imposed post-trial will truly determine how seriously this offence is treated.
Moreover, Ameet Datta, in a LinkedIn post, has highlighted some intriguing gaps in the BomHC’s analysis of the case. He points out that the lawsuit seems to hinge solely on publicity rights rather than a claim rooted in passing off, despite references to elements of passing off and claims of dilution and tarnishment; this focus could complicate the plaintiff’s position and jeopardise the entire suit or specific claims. He also questions how the plaintiff can sustain a claim regarding the “456 songs unauthorizedly uploaded to the AI platform,” given that the performer does not retain copyright in those recordings. Furthermore, he notes that the legal implications of AI training, potential defences, and the role of individual users selecting voices and personas for songs seem underexplored by the court, and he finds questionable the court’s lack of scrutiny of the plaintiff’s justification for joining diverse and seemingly unrelated defendants in a single suit. These incongruities do make one ponder the degree of nuance needed to take jurisprudence forward on these issues.
These questions are not limited to personality rights but extend to all areas of IPR. As these technologies advance and their applications expand, the distinction between creations made by humans and those generated by machines will become increasingly blurred. We are therefore likely to continue facing fundamental questions about the scope of such IPR protections against digitised creations.