
The Idea Of Mandatory Automated Filters: Bringing About A Rapid Discourse Shift From Intermediary Immunity To Intermediary Responsibility



We’re pleased to bring to you a guest post by Simrat Kaur, wherein she analyses the impact of mandating automatic filtering of copyright-infringing content by internet intermediaries. Simrat is a New Delhi based IP lawyer. She completed her undergraduate law degree at Rajiv Gandhi National University of Law, Punjab and her master’s in law at the National University of Singapore. After having worked with leading Indian law firms (Anand & Anand and Luthra & Luthra Law Offices), she is currently practicing under the banner “The Endretta”.

The Idea Of Mandatory Automated Filters: Bringing About A Rapid Discourse Shift From Intermediary Immunity To Intermediary Responsibility

Simrat Kaur

A couple of months back, Europe took a policy leap on intermediary liability when it gave life to the controversial Article 13 (now Article 17) of the EU Copyright Directive which, as a natural consequence of the language adopted, makes automatic filtering mandatory for intermediaries. It found a place in the final text despite sharp criticism from various sectors, the tech sector in particular (the battle is still not over, though: as per a recent update on the official Twitter account of the Chancellery of the Prime Minister of Poland, Poland has filed a complaint against the Directive before the CJEU). While the world was watching the developments in Europe, India too proposed a similar provision in the form of Rule 3(9) of the Draft Intermediary Amendment Rules, 2018. Just as in Europe, it received a fragmented response from stakeholders: the tech sector opposed it, but content creators, including the Indian Music Industry, welcomed the proposal. While the debate was unfolding, Telecom and IT Minister Ravi Shankar Prasad was quick to mention, immediately after assuming office, that notifying the intermediary guidelines is one of his top priorities, leaving us with the question: will Rule 3(9) see the light of day?

It is being argued that the rule will lead to general surveillance and have a chilling effect on free speech. Multiple other concerns are also being raised against the idea of mandatory automated tools. An attempt has been made here to discuss the most relevant ones:

Lack of Contextual Understanding

The inadequacy of AI and other automated filters in understanding context is the primary argument on which opposition to the proposal rests. Opponents ask: how will these filters determine whether a statement is defamatory or not? How will they differentiate between war footage uploaded by terrorist organisations and the same footage used by human rights activists for study and awareness programmes?

The above concerns are indeed valid. When even we humans tend to be context-insensitive at times (semantic noise in human communication is a common problem: we often focus on the meaning or interpretation of words rather than their context, and end up creating, or falling prey to, misunderstandings), it is hard to rely on automated tools for contextual understanding. But does that mean we should drop the provision? Can’t we adopt a sector-specific approach? Since the available technology is not quite fit for many context-based issues which require human assessment, we can look at a selective and restrictive application of the provision, confined to unlawful content that can easily be spotted by filters. Music copyright infringement is one such area. Available technology can efficiently spot infringing music (at least verbatim unauthorised copies, if not transformative ones) and block it. This is because the platforms already hold the corresponding copyrighted content as a reference, and the filtering technology simply needs to match everything being uploaded against that reference catalogue; a simplified sketch of this matching logic appears below.
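
To make the idea concrete, here is a minimal, purely illustrative sketch of reference matching, assuming a toy fingerprint built from exact byte-chunk hashes. Real systems such as Audible Magic or YouTube’s Content ID rely on perceptual audio fingerprints that survive re-encoding and editing; the function names (`fingerprint`, `build_reference_index`, `screen_upload`) and the 80% overlap threshold here are hypothetical choices for this example only.

```python
import hashlib

CHUNK_SIZE = 4096  # bytes per chunk; real filters use perceptual audio features, not raw bytes


def fingerprint(data: bytes) -> set[str]:
    """Return a set of chunk hashes acting as a crude 'fingerprint' of the content."""
    return {
        hashlib.sha256(data[i:i + CHUNK_SIZE]).hexdigest()
        for i in range(0, len(data), CHUNK_SIZE)
    }


def build_reference_index(catalogue: dict[str, bytes]) -> dict[str, set[str]]:
    """Pre-compute fingerprints for every licensed work supplied by rights-holders."""
    return {work_id: fingerprint(content) for work_id, content in catalogue.items()}


def screen_upload(upload: bytes, index: dict[str, set[str]], threshold: float = 0.8) -> list[str]:
    """Flag catalogued works whose fingerprints overlap the upload beyond a tunable threshold."""
    upload_fp = fingerprint(upload)
    matches = []
    for work_id, reference_fp in index.items():
        overlap = len(upload_fp & reference_fp) / max(len(reference_fp), 1)
        if overlap >= threshold:
            matches.append(work_id)
    return matches


# Toy usage: a verbatim re-upload of a catalogued track is flagged; unrelated content is not.
catalogue = {"track-001": b"licensed audio bytes " * 1000}
index = build_reference_index(catalogue)
print(screen_upload(b"licensed audio bytes " * 1000, index))   # ['track-001']
print(screen_upload(b"original user content " * 1000, index))  # []
```

The point of the sketch is only that verbatim copies of catalogued works are mechanically detectable once rights-holders supply reference files; judging transformative or fair uses is precisely what such matching cannot do, which is where the next objection comes in.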

Fair Use

A counter-argument is being advanced against the use of filters even for music-infringing works: fair use. It is said that meta-tagging, hashing and content fingerprinting are all limited in their capacity to determine fair use and hence suffer from ‘false positives’ (erroneously filtering lawful content). A video used in news reporting or for educational purposes could be removed by the filters, as such uses cannot be assessed without human intervention. In the MySpace judgment, the Delhi High Court too advised against running filters for copyright-infringing works, on the ground that doing so may “snuff out creativity” and violate the right of “fair use”. But there is a pressing need to reassess this approach, particularly in light of the fact that intermediaries are now actively involved in the distribution of uploaded content, optimising its presentation and promoting it, which shows that their role is no longer technical and passive. If they are earning from content, they should shoulder the responsibility to root out infringements. Possible over-removal of legitimate content should not be available as an excuse. Also, since copyright is the rule and fair use the exception, the many advantages of filters in reducing piracy should not be overlooked on account of the collateral damage they may entail through over-removal in some fair use cases. Intermediaries should put in place efficient counter-notice mechanisms and human moderators to take care of erroneous removals. Moreover, the leading content recognition technology company Audible Magic claims that “positive identification rates exceed 99% with false positive rates of less than 10⁻⁶”. From this, it appears that the adverse impact would, in any case, be marginal.

Competition Concerns

The second argument relates to competition: it is argued that mandatory filtering will consolidate power and stifle competition, as start-ups and micro-enterprises will not be able to afford the technology. This will strengthen the monopoly of incumbents and act as a barrier to new entrants. But can’t we address this by exempting start-ups from the obligation? Companies which are less than 3-5 years old and have fewer than 10 lakh users per month could be subjected to lighter obligations. Moreover, as per the EU Commission’s impact assessment report on the modernisation of the EU copyright rules, “a small scale online service provider with a relatively low number of monthly transactions can obtain such services for €900 a month”. Prima facie, this does not seem very unreasonable, even if it covers audio files only (pricing details of Audible Magic are available at https://www.audiblemagic.com/compliance-service/#pricing). Also, once the law makes filtering mandatory, commercial fingerprinting and filtering technology will be in great demand. More companies will enter this space, giving tough competition to the current players, i.e. Audible Magic and Vobile, which will in turn help lower prices.

Lack of Control Over the Content

As far as the complaints of intermediaries like Amazon Web Services, which do not even have access to their customers’ data, are concerned, these could be taken care of by doing away with the subject neutrality of the law and exempting such intermediaries from the obligation.

Conclusion

Given the above, rather than deleting Rule 3(9), it is advisable to customise it to strike the right balance between competing interests. Platforms, particularly those which are, in a way, actively communicating content to the public and earning billions as a result, must ensure that those who create that content are fairly remunerated. The regime of notice and takedown appears dysfunctional now, particularly in the manner it is practised in high-volume areas like music copyright. Given the large number of uploads, it becomes very costly and practically impossible for right-holders to report each infringement to platforms and request a takedown. One of the Indian Music Industry (IMI)’s reports says that “95.7% of takedown notices issued by IFPI are for repeat content i.e., when we issue a notice, it is taken down and pops up again”. For instance, a couple of years ago, One Direction’s “Drag Me Down” was reported to have reappeared more than 2,700 times on YouTube following the first notice.

Further, the value gap figures released every year by IFPI (the International Federation of the Phonographic Industry) make it evident that we need something over and above ‘notice and takedown’ to protect the interests of creators. Though the value gap argument is rejected by many for lack of independent empirical evidence, the strong voices from within the music industry and the relatively low growth of music creation and distribution businesses over the last decade create a strong presumption that the problem is very much real. In his open letter to the European Parliament, Paul McCartney said: “Unfortunately, the value gap jeopardizes the music ecosystem. We need an Internet that is fair and sustainable for all. But today some User Upload Content Platforms refuse to compensate artists and all music creators fairly for their work, while they exploit it for their own profit. The value gap is that gulf between the value these platforms derive from music and the value they pay creators.”

These facts and figures support the argument that ‘notice and takedown’ should be replaced with ‘notice and stay down’, and for that, practically speaking, filtering is essential. If intermediaries are still not obligated to take on a bit of enforcement responsibility by implementing filters, digital piracy will remain an intractable reality and the value gap will persist. Though a lot of work needs to be done in other areas too (for instance, the working and role of collective rights management organisations) in order to have more money flow to the music industry, requiring user upload platforms to deploy automatic content recognition technologies is nevertheless a step in the right direction to strike the necessary balance and close the value gap. It will certainly help copyright societies and artists negotiate competitive licence terms with user upload platforms. Currently, these platforms see no incentive to offer fair terms, because the content to be licensed is already available on their platforms (users upload it anyway) and they tangentially benefit from infringing content through advertising and increased traffic. As regards users, in the long run it will be beneficial for them too: they will have access to more content, because the more the rights of content creators are protected, the more creators will be incentivised to innovate and create.

