
Media, March 31, 2019

Facebook ban: Stop me if you’ve heard this one before

Mark Zuckerberg (Photo: Chip Somodevilla/Getty Images)

If Facebook’s announcement that it’ll ban praise and support for white nationalism and separatism sounds familiar, that’s because it is, writes Paul Brislen for RNZ.

Nearly two weeks after the terrorist attack in Christchurch was aired live on Facebook, the company has finally responded in public. From next week it will implement a ban on “praise, support and representation of white nationalism and separatism on Facebook and Instagram”.

The company will also direct any searches for key terms “associated with white supremacy” to Life After Hate, a non-profit organisation that helps pull people out of hate groups.

If all this sounds familiar, that’s because, sadly, it is. This is not the first time Facebook has declared it will take action in this way.

In April 2017, Robert Godwin, a 74-year-old grandfather, was shot and killed in Cleveland, Ohio, having been chosen at random by a killer who posted video of the shooting on Facebook.

At the time CEO Mark Zuckerberg told Facebook’s annual developer conference, “We have a lot of work [to do], and we will keep doing all we can to prevent tragedies like this from happening”. In May 2017 the company announced plans to add 3,000 more staff to review user content and help battle violent videos.

Not much has changed. Thursday’s announcement sounds awfully hollow in light of the lack of action since 2017 and does little to assuage anyone’s concerns about the company and its ability to self-regulate.

Facebook Live will continue to be unmonitored and unmanaged. Having a room full of content moderators is a good step forward, but given the vast amount of video footage uploaded every second of the day, Facebook must invest more in machine learning and automated systems that can identify and root out such content before any human sees it.

As Dr Hany Farid, a digital forensics professor at the University of California, Berkeley, told Nine to Noon, such systems already exist and are widely used. That Facebook and its ilk do not automatically tag Facebook Live content and remove it as soon as it becomes apparent it is objectionable remains unacceptable. If the killer had used copyrighted music in his video I am sure Facebook would have blocked it immediately, but the slaughter of innocents apparently goes unnoticed.

Let’s see whether those of us who report extremist views will ever get anything more than an automated response that says “this does not breach our terms and conditions”.

Let’s see whether Instagram removes Alex Jones and InfoWars from its platform. Let’s see whether searching for Ku Klux Klan does indeed redirect to Life After Hate.

Facebook hasn’t addressed its unwillingness to do what is required under New Zealand law (the Harmful Digital Communications Act in this case, but also the Privacy Act, over which it has butted heads with the Privacy Commissioner repeatedly), and simply tightening its own community standards isn’t going to make a huge difference at this point.

If the company were serious it would switch off Facebook Live immediately while it worked out how to tag videos. It would implement automated tagging of all videos uploaded to the site so that any which slip through the net can be found and removed swiftly. It would pay attention to local laws and act accordingly; it already does this in Germany, for instance, and there’s no reason it can’t abide by New Zealand law when serving New Zealand users. And it would make it easier and more straightforward for users to report objectionable content.

But of course, all this costs money and doesn’t add to shareholder value, which is why it is vital that New Zealand lead the way and introduce regulation that requires Facebook and other tech giants to do the right thing, to abide by New Zealand law and to let users report objectionable material as soon as it is posted. To do that, we’ll need regulations, a substantial increase in the fines we can impose and, I would suggest, personal liability for senior executives.

Anything less is just a public relations exercise, just like this announcement from Facebook.

*Paul Brislen is a technology commentator
