At 1.35 pm last Friday in Christchurch, New Zealand, when the white supremacist gunman went on a rampage at Al Noor Mosque, one of the two mosques attacked, and live-streamed the video on Facebook, the world’s largest social network had no idea what was happening on its platform.
In a statement this week, Facebook defended itself, saying that the actual live stream of the incident, which lasted 17 minutes, was viewed by only 200 people. But because of a series of operational failures and the nature of the ‘modern internet’, millions of people ended up seeing it. The Sydney Morning Herald describes it well: the US$473 billion company’s vaunted artificial intelligence technologies, ‘designed to stop extremist and illicit content’ such as pornography or videos depicting violent acts from appearing on the platform, ‘failed to detect anything objectionable’ in the Christchurch shooting stream.
Related Article: Has Facebook Taken Societies Backward?
All in all, it was 29 minutes after the live stream began, and 12 minutes after it finished, before a user reported it and Facebook began to realize what was happening. It wasn’t until 2.29 pm, when the New Zealand Police used a special escalation channel for law enforcement and intelligence agencies to alert Facebook to problematic content, that the social network acted. The police alert triggered notifications to Facebook’s most senior executives in Australia and New Zealand, including local policy chief Mia Garlick, as well as content moderators in the Philippines. A few minutes later, the video was finally removed. The hour that elapsed between the start of the live stream and the video being taken down proved to be a ‘crucial delay’ that allowed the video to go viral uncontrollably.
Related Article: ‘Code of Ethics’ for social media platforms in action
In that period, the original video had been viewed by 4,000 Facebook users. One user copied the video and posted it to 8chan, the online message board frequented by alt-right trolls, involuntary celibates and the alleged attacker himself. This is the site where the terrorist had posted his ‘hate manifesto’. From there, the video spread like wildfire to other platforms such as YouTube, Twitter and Reddit, and went on to be picked up and aired by news websites and TV stations throughout the world.
In the 24 hours after the incident, Facebook was hit with 1.5 million videos containing footage of the terror attack. The social network’s systems blocked 1.2 million of them from being uploaded. However, 300,000 videos made it through its ‘defenses’, many of them screen recordings or videos with slight alterations specifically designed to evade Facebook’s controls. YouTube faced similar challenges, describing the volume of videos of the incident as ‘unprecedented both in scale and speed’.
Related Article: Facebook: A Constant Violation of your Privacy
Foad Fadaghi, an analyst at tech researcher Telsyte, said it’s almost impossible for governments to put the ‘tech genie’ back in the bottle without sending us back to the ‘Dark Ages’ or making us resemble the totalitarian countries that restrict the internet. Tristan Harris, a former Google employee and now a critic of the tech industry, has described Facebook as an ‘uncontrollable digital Frankenstein’.