Over the last two years, several internal Facebook reports red-flagged an outpouring of “anti-minority” and “anti-Muslim” rhetoric as “a substantial component” of the 2019 Lok Sabha election campaign. The increase in hate speech and inflammatory content was mostly centred on themes of threats of violence, Covid-related misinformation involving minority groups, and false reports of Muslims engaging in communal violence.
A July 2020 report specifically noted a marked rise in such posts over the preceding 18 months, and said the sentiment was likely to continue in the coming Assembly elections, including in West Bengal. These reports are among several documents disclosed to the United States Securities and Exchange Commission (SEC) and provided to the US Congress in redacted form by the legal counsel of ex-Facebook employee and whistleblower Frances Haugen. These redacted versions have been reviewed by a consortium of global news organisations including The Indian Express.
Himanta Biswa Sarma, now Assam Chief Minister, was flagged in one of the internal reports in 2021, ahead of the Assembly elections in Assam, for being party to trafficking inflammatory rumours about “Muslims pursuing biological attacks against Assamese people by using chemical fertilizers to produce liver, kidney and heart disease in Assamese.”
Sarma replied that he was “not aware of that development”, when asked by The Indian Express about this and whether he knew of his “fans and supporters” indulging in hate speech. Asked if Facebook had contacted him to flag the content posted on his page, Sarma said: “I had not received any communication.”
Another report, titled “Communal Conflict in India”, notes that inflammatory content in English, Bengali, and Hindi spiked numerous times, especially in December 2019 and March 2020, coinciding with the protests against the Citizenship Amendment Act and the start of lockdowns enforced to prevent the spread of Covid-19.
Documents reveal a palpable clash between two internal Facebook teams: those flagging problematic content, and those designing algorithms that pushed content on the newsfeed despite the presence of such content on the platform. To tackle such problematic content, an internal staff group had, in the July 2020 report, suggested various measures such as developing “inflammatory classifiers” to detect and enforce against such content in India, improving the platform’s image text modelling tools so that such content could be identified more effectively, and building “country specific banks for inflammatory content and harmful misinformation relevant to At Risk Countries (ARCs)”. Almost all these reports place India in the ARC category, where the risk of societal violence triggered by social media posts is higher than in other countries.
Groups claiming to be affiliated with the Trinamool Congress engaged in coordinated posting, circulating instructions via large messenger groups and then posting these messages across multiple similar groups in an attempt to boost the audience for content that was “often inflammatory” but “usually non-violating”, according to another report, titled “India Harmful Networks”. Another internal Facebook report noted that posts from RSS- and BJP-affiliated groups carried a high volume of “love jihad” content, with hashtags linked to publicly visible Islamophobic content. Queries sent to the BJP, RSS, and TMC went unanswered.
A spokesperson for Meta Platforms Inc (Facebook was rebranded as Meta on October 28) told The Indian Express: “Our teams were closely tracking the many possible risks associated with the elections in Assam this year, and we proactively put in place a number of emergency measures to reduce the virality of inflammatory comments, particularly videos. Videos featuring inflammatory content were identified as high risk during the election, and we implemented a measure to help prevent these videos from automatically playing in someone’s video feed”. Despite all these red flags, another group of staffers at the social media firm suggested only a stronger time-bound demotion of such content.
Asked if the social media platform took any measures to implement these recommendations, the spokesperson said: “On top of our standard practice of removing accounts that repeatedly violate our Community Standards, we also temporarily reduced the distribution of content from accounts that had repeatedly violated our policies. Hate speech against marginalized groups, including Muslims, was on the rise globally”.
“We have invested significantly in technology to find hate speech in various languages, including Hindi and Bengali. As a result, we have reduced the amount of hate speech that people see by half this year. Today, it’s down to 0.03 percent,” the spokesperson added.
Facebook was not only made aware of the nature of content being posted on its platform, but it also discovered, through another study, the impact of posts shared by politicians. In one internal document titled “Effects of Politician Shared Misinformation”, examples from India figured as “high-risk misinformation” shared by politicians which led to a “societal impact” of “out-of-context video stirring up anti-Pakistan and anti-Muslim sentiment”. The study pointed out that users thought it was “Facebook’s responsibility to inform them when their leaders share false information”. There was also debate within the company, according to the documents, on what should be done when politicians shared previously debunked content.
Source: The Indian Express