Jane Doe v Meta Platforms, Inc.: Facebook sued by Rohingya over hate speech

Last week, in a final effort to obtain an effective remedy, the Rohingya sued Facebook (now Meta), arguing that Facebook’s failure to monitor content, together with the design of its platform and algorithm, contributed to the violence committed against them. United Nations human rights experts have publicly reported that this hate speech, and Facebook’s role in spreading it, contributed to a potential genocide in Myanmar.

Whether the case will succeed is highly debatable. In the United States, social platforms such as Facebook are protected by a law commonly known as ‘Section 230’. This refers to Section 230 of the Communications Decency Act of 1996 (47 U.S.C. §230), which provides that an “interactive computer service” cannot be treated as the publisher of certain content posted by a third party. As a result, platforms have effectively been able to avoid liability where users post harmful content. Yet the timing of the lawsuit, so shortly after recent whistleblower complaints from inside Facebook, may work in the plaintiffs’ favour, and the leaked documents are frequently cited in the lawsuit itself.

Background

Myanmar’s history has been shaped by the Tatmadaw, its military, which seized power in a coup in 1962. To justify its grip on the country, the Tatmadaw has consistently portrayed the Rohingya, a Muslim minority in Myanmar, as an imagined threat to the predominantly Buddhist population. By publicly oppressing and marginalizing this supposed threat, the regime bolstered its own popularity.

Facebook was introduced in Myanmar in 2011, by which time the country’s political repression and ethnic violence were well known internationally. Facebook had by then concluded that continued user growth was critical to its success. To achieve this, it began targeting developing countries with a free product: its app could be used without incurring any data charges and came pre-loaded on phones bought in mobile shops. Before then, only about 1% of the population had a mobile phone, so Facebook naturally gained immensely in popularity. Yet the Myanmar population was newly exposed to the internet and to the massive spread of (mis)information, which fed misperception and dangerous situations. In the developing countries Facebook targeted, the internet is effectively synonymous with the app. This in turn gives the social platform enormous power, power it was not equipped to handle, even when warned.

The Lawsuit

The lawsuit alleges that Facebook is well aware of the power of hate, fear, and anger, and of how these emotions drive greater engagement with content. Its algorithm is therefore specifically tailored to maximize engagement by amplifying hateful or divisive content. Facebook, by its very design, proved to be the perfect tool for the Tatmadaw to promote its hateful message against the Rohingya. This campaign led to the radicalization of users, and many posts and accounts have since been linked to the Tatmadaw’s ethnic cleansing campaign of August 2017. During this campaign, the ‘Clearance Operations’, an estimated 10,000 Rohingya Muslims were brutally killed. Many women and girls were subjected to rape, gang rape, sexual mutilation, and sexual humiliation, leaving them permanently scarred physically and mentally.

The recently leaked whistleblower documents indicate that “Facebook executives were fully aware that posts ordering hits by the Myanmar government on the minority Muslim Rohingya were spreading wildly on Facebook”, and that “… the issue of the Rohingya being targeted on Facebook was well known inside the company for years.” While Facebook could initially claim that a lack of Burmese speakers prevented it from flagging certain posts, the company was evidently aware of the life-threatening messages being spread well before the Clearance Operations.

Ultimately, the lawsuit claims that through the design of its algorithm, Facebook contributed to the creation and spread of hate speech and misinformation, which in turn was allowed to radicalize users without constraint. Facebook was well aware of what was happening on its platform and had reason to expect that the online activity could result in violence. Users were allowed to use Facebook in ways that Facebook should have known created an unreasonable risk to the vulnerable Rohingya. Finally, despite all this knowledge, Facebook failed to make timely investments in local moderators and fact-checkers or to shut down specific accounts or groups.

Notably, the lawsuit also anticipates that Facebook will raise a Section 230 defence. It pre-empts this defence by arguing that, where that Section conflicts with Burmese law, the law of Burma should apply. Burmese law contains no comparable protection for online companies with respect to content spread on their platforms.

Facebook’s response

Facebook has responded to the allegations in the lawsuit by stating that it has “built a dedicated team of Burmese speakers, banned the Tatmadaw (Myanmar military), disrupted networks manipulating public debate and taken action on harmful technology to reduce the prevalence of violating content.” However, in a very similar situation, armed groups in Ethiopia have used Facebook to incite violence. Once again, internal communications show that Facebook does not have enough employees who speak the languages needed to monitor the situation properly. Although Facebook’s Oversight Board has stated that an independent investigation is needed, the company has once again allowed hate speech to circulate for several months.
