The Delfi v. Estonia decision, issued in June 2015 by the European Court of Human Rights, marked a low point in that court’s caselaw on freedom of expression on the Internet. This week, a new European Court ruling slightly qualifies and narrows the scope of Delfi. The outcome of the new decision is not as detrimental as one might have anticipated; however, it appears to be another step toward requiring European Internet intermediaries to monitor user-generated content on their sites.
Background: The Delfi decision of 2015
In Delfi, the Grand Chamber ruled that holding an online news portal liable as the publisher of unlawful comments posted by its users was consistent with the right to freedom of expression enshrined in Article 10 of the European Convention on Human Rights. In doing so, it departed from the reasoning of the First Section’s 2013 ruling in the same case, which had upheld the imposition of liability for comments treated as defamatory. For the Grand Chamber, the comments rose to the level of hate speech and incitement to violence, which are clearly unlawful and particularly harmful forms of speech.
The Grand Chamber declined to inform its Delfi analysis with caselaw interpreting the EU E-Commerce Directive and other relevant international instruments on the right to freedom of expression. According to these materials, a broad range of intermediaries should be shielded from liability for content that they transmit or host. The E-Commerce Directive and other materials indicate that intermediaries should not have a duty to monitor user-generated content on their sites, and decisions regarding liability should examine whether the intermediary had actual knowledge of unlawful material.
Although the European Court’s jurisdiction is limited to interpreting the European Convention and its protocols, I have argued in a forthcoming piece that the Delfi decision undermines legal certainty regarding intermediary liability in Europe and is problematic for the right to freedom of expression (see similar views here, here and here). The Court specified that its reasoning in Delfi only applied to “a large professionally managed Internet news portal run on a commercial basis which published news articles of its own and invited its readers to comment on them,” excluding most social networking services. Nevertheless, it is increasingly difficult to distinguish intermediaries on the basis of whether they provide or “edit” content.
The Court’s latest decision on the liability of an online intermediary
In Magyar Tartalomszolgáltatók Egyesülete and Index.hu Zrt v. Hungary (MTE), the Fourth Section of the European Court qualified Delfi and explained that Internet content providers will be held to the highest standards of conduct when hate speech or incitement to violence is posted on their sites. Because the comments in question in this case were merely defamatory, Hungary’s decision to hold the applicants – a self-regulatory body of Internet content providers and a major Internet news portal – strictly liable constituted a violation of Article 10 of the European Convention (a welcome contrast to the First Section’s holding in the 2013 Delfi decision).
In both Delfi and MTE, the European Court of Human Rights addressed a state’s interpretation of the E-Commerce Directive in a way that the applicants claimed was unforeseeable (and perhaps inconsistent with the Directive itself, although Member States have some discretion when transposing directives). This places the Court in an awkward position, and it raises the question of whether both cases would have been more appropriately resolved by the Court of Justice of the European Union. As in Delfi, the local courts determined that the domestic version of the E-Commerce Directive was inapplicable to the case. The European Court upheld the legality of this decision, noting that Hungary’s law narrowly covered “electronic services of commercial nature, in particular to purchases on the Internet” (para. 20).
With regard to the necessity and proportionality of liability, the comments were found to involve a matter of public interest and to have caused little harm to the domestic plaintiff. Several aspects of Delfi were reaffirmed: the commercial nature of an online content provider remains relevant to determining its responsibilities, and a Web site’s hosting of comments is still treated as broadly equivalent to a journalist’s publication of speech by third parties in the form of interviews (paras. 70 and 79). For the Court, strict liability for user-generated comments is inconsistent with “the right to impart information on the Internet” (para. 82). In the circumstances of MTE, the applicants were deemed to have fulfilled their responsibilities by using a notice-and-takedown mechanism; however, where “hate speech and direct threats to the physical integrity of individuals” are concerned, states may hold online content providers liable “if they failed to take measures to remove clearly unlawful comments without delay, even without notice from the alleged victim or from third parties” (para. 91).
In summary, the Court suggests that an online content provider – particularly one that is large and commercial in nature – should monitor user-generated content posted on its site for “clearly unlawful speech,” such as hate speech and incitement to violence, and remove it immediately. In contrast, defamatory speech may be addressed through ordinary due diligence measures, such as notice-and-takedown mechanisms. But imposing a duty to monitor is not only inconsistent with the E-Commerce Directive and other international instruments; it is also impractical as the Court envisions it. A company cannot monitor only for certain types of speech – in practice, a duty to monitor extends to all content, and when faced with the threat of liability, a host will typically be aggressive in removing anything controversial. Furthermore, determining whether speech amounts to “incitement to violence” often requires an appreciation of context that is above the pay grade of a content moderator.
When read together with recent positions of the European Parliament on online terrorist radicalization, Delfi and MTE may be taken to justify calls to impose a duty on certain online services in Europe to monitor user-generated content. This result would be a setback for freedom of expression online, and it would further undermine legal certainty with regard to international standards on intermediary liability.