Why Tech Didn't Stop the New Zealand Attack From Going Viral | WIRED

The online spread of a video from a shooting at two mosques in Christchurch Friday shows the limits of social media moderation.


At least 49 people were murdered Friday at two mosques in Christchurch, New Zealand, in an attack that followed a grim playbook for terrorism in the social media era. The shooter apparently seeded warnings on Twitter and 8chan before livestreaming the rampage on Facebook for 17 gut-wrenching minutes. Almost immediately, people copied and reposted versions of the video across the internet, including on Reddit, Twitter, and YouTube. News organizations, too, aired some of the footage as they reported on the attack.

By the time Silicon Valley executives woke up Friday morning, tech giants’ algorithms and international armies of content moderators were already scrambling to contain the damage—and not very successfully. Many hours after the shooting began, various versions of the video were readily searchable on YouTube using basic keywords, like the shooter’s name.

This isn't the first time we’ve seen this pattern play out: It’s been nearly four years since two news reporters were shot and killed on camera in Virginia, with the killer’s first-person video spreading on Facebook and Twitter. It’s also been almost three years since footage of a mass shooting in Dallas went viral.

The Christchurch massacre has people wondering why, after all this time, tech companies still haven’t figured out a way to stop these videos from spreading. The answer may be a disappointingly simple one: It’s a lot harder than it sounds.

For years now, both Facebook and Google have been developing and implementing automated tools that can detect and remove photos, videos, and text that violate their policies. Facebook uses PhotoDNA, a tool developed by Microsoft, to spot known child pornography images and video. Google has developed its own open source version of that tool. These companies have also invested in technology to spot extremist posts, banding together under a group called the Global Internet Forum to Counter Terrorism to share their repositories of known terrorist content. These programs generate digital signatures, known as hashes, for images and videos that have already been identified as problematic, so the same material can be blocked if anyone tries to upload it again. What's more, Facebook and others have machine learning technology trained to spot new troubling content, such as a beheading or a video with an ISIS flag. All of that is in addition to AI tools that detect more prosaic issues, like copyright infringement.
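
For a rough sense of how that hash-matching layer works, here is a minimal Python sketch of the general technique, not PhotoDNA itself, which is proprietary. It assumes a blocklist of perceptual hashes for previously identified images and uses the open-source imagehash package; the example hash value, file name, and distance threshold are made up for illustration.

```python
# A minimal sketch of hash-based matching against a blocklist of known
# violating images. This illustrates the general idea, not PhotoDNA or any
# platform's actual system; the hash value and threshold are invented.
from PIL import Image
import imagehash

# Hypothetical blocklist of perceptual hashes for previously identified images.
BLOCKLIST = {imagehash.hex_to_hash("f0e4c2d1a5b69788")}

MAX_DISTANCE = 6  # Hamming-distance tolerance; real systems tune this carefully

def is_known_violation(path: str) -> bool:
    """Return True if the uploaded image is near any blocklisted hash."""
    upload_hash = imagehash.phash(Image.open(path))
    return any(upload_hash - known <= MAX_DISTANCE for known in BLOCKLIST)

if __name__ == "__main__":
    print(is_known_violation("upload.jpg"))  # hypothetical file name
```

Video matching typically works on the same principle, hashing sampled frames or audio segments rather than a single image.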

Automated moderation systems are imperfect, but they can be effective. At YouTube, for example, the vast majority of the videos the platform removes are first flagged by automated systems, and 73 percent of those automatically flagged videos are taken down before a single person sees them.

But things get substantially trickier when it comes to live videos and videos that are broadcast in the news. The footage of the Christchurch shooting checks both of those boxes.

“They haven’t gotten to the point of having effective AI to suppress this kind of content on a proactive basis, even though it’s the most cash-rich [...] industry in the world,” says Dipayan Ghosh, a fellow at Harvard’s Kennedy School and a former member of Facebook’s privacy and policy team. That’s one reason why Facebook as well as YouTube have teams of human moderators reviewing content around the world.

Motherboard has an illuminating piece on how Facebook’s content moderators review Live videos that have been flagged by users. According to internal documents obtained by Motherboard, once a video has been flagged, moderators can ignore it, delete it, check back on it in five minutes, or escalate it to specialized review teams. These documents say that moderators are also told to look for warning signs in Live videos, like “crying, pleading, begging” and the “display or sound of guns or other weapons (knives, swords) in any context.”
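
For illustration only, here is a rough Python sketch of that set of choices, using hypothetical names for the actions and warning signs the documents describe; Facebook's internal tooling is not public.

```python
# A hypothetical sketch of the review options described in the leaked
# documents. Names, structure, and logic are invented for illustration only.
from enum import Enum, auto

class ReviewAction(Enum):
    IGNORE = auto()           # no violation found
    DELETE = auto()           # clear policy violation
    CHECK_BACK_SOON = auto()  # ambiguous; look at the stream again in five minutes
    ESCALATE = auto()         # route to a specialized review team

WARNING_SIGNS = {"crying", "pleading", "begging", "weapon_seen", "weapon_heard"}

def triage(observations: set, clear_violation: bool) -> ReviewAction:
    """Map a moderator's observations of a flagged Live video to an action."""
    if clear_violation:
        return ReviewAction.DELETE
    if observations & WARNING_SIGNS:
        return ReviewAction.ESCALATE
    if observations:
        return ReviewAction.CHECK_BACK_SOON
    return ReviewAction.IGNORE
```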

It’s unclear why the Christchurch video was able to play for 17 minutes, or even whether that constitutes a short time frame for Facebook. The company didn’t initially respond to WIRED’s queries about this or to questions about how Facebook distinguishes between newsworthy content and gratuitous graphic violence.

After this story published, Facebook sent further explanation about how it's handling videos of this shooting. “Since the attack happened, teams from across Facebook have been working around the clock to respond to reports and block content, proactively identify content which violates our standards and to support first responders and law enforcement," a spokesperson said. "We are adding each video we find to an internal database which enables us to detect and automatically remove copies of the videos when uploaded again. We urge people to report all instances to us so our systems can block the video from being shared again.”

This means that the original video has been hashed, so that other, similar videos can't be shared again. In order to catch videos that have been altered to evade detection—for instance, videos of the footage playing on a second screen—Facebook is deploying the same AI it uses to spot blood and gore, as well as audio detection technology. Facebook says when it finds this content coming from links to other platforms, it's sharing the information with those companies.
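
To see how hash matching can stretch to altered copies, consider this Python sketch, which samples frames from an upload and compares their perceptual hashes against hashes of the original footage, allowing some distance to absorb re-encoding, cropping, or a recording of a second screen. The thresholds and the set of known frame hashes are assumptions for illustration; this is not a description of Facebook's actual pipeline.

```python
# A minimal sketch of near-duplicate video matching, assuming frame-level
# perceptual hashes of the original clip are already stored. Thresholds and
# sampling rate are invented for illustration.
import cv2
from PIL import Image
import imagehash

KNOWN_FRAME_HASHES = set()  # assumed to be filled from the hashed original video
MAX_DISTANCE = 8            # tolerance for re-encoding, cropping, screen capture
SAMPLE_EVERY_N_FRAMES = 30

def looks_like_known_video(path: str, min_matches: int = 5) -> bool:
    """Count sampled frames that land near any known frame hash."""
    capture = cv2.VideoCapture(path)
    matches, index = 0, 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if index % SAMPLE_EVERY_N_FRAMES == 0:
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            frame_hash = imagehash.phash(Image.fromarray(rgb))
            if any(frame_hash - known <= MAX_DISTANCE for known in KNOWN_FRAME_HASHES):
                matches += 1
        index += 1
    capture.release()
    return matches >= min_matches
```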

“Our hearts go out to the victims, their families, and the community affected by this horrendous act," Facebook's spokesperson said in an earlier statement. "New Zealand Police alerted us to a video on Facebook shortly after the livestream commenced and we quickly removed both the shooter’s Facebook and Instagram accounts and the video. We're also removing any praise or support for the crime and the shooter or shooters as soon as we’re aware. We will continue working directly with New Zealand Police as their response and investigation continues.”

Google’s New Zealand spokesperson sent a similar statement in response to WIRED’s questions. “Our hearts go out to the victims of this terrible tragedy. Shocking, violent, and graphic content has no place on our platforms, and is removed as soon as we become aware of it. As with any major tragedy, we will work cooperatively with the authorities.”

The Google representative added, however, that videos of the shooting that have news value will remain up. This puts the company in the tricky position of having to decide which videos are, in fact, newsworthy.

It would be a lot easier for tech companies to take a blunt force approach and ban every clip of the shooting from being posted, perhaps using the fingerprinting technology used to remove child pornography. Some might argue that’s an approach worth considering. But in their content moderation policies, both Facebook and YouTube have carved out explicit exceptions for news organizations. The same clip that aims to glorify the shooting on one YouTube account, in other words, might also appear in a news report by a local news affiliate.

YouTube in particular has been criticized in the past for deleting videos of atrocities in Syria that researchers relied on. This leaves tech companies in the difficult position of not only trying to assess news value, but also trying to figure out ways to automate those assessments at scale.

As Google’s general counsel Kent Walker wrote in a blog post back in 2017, “Machines can help identify problematic videos, but human experts still play a role in nuanced decisions about the line between violent propaganda and religious or newsworthy speech.”

Of course, there are signals that these companies can use to determine the provenance and purpose of a video, according to Harvard's Ghosh. “The timing of the content, the historical measures of what the purveyor of the content has put out in the past, those are the types of signals you have to use when you run into these inevitable situations where you have news organizations and an individual pushing out the same content, but you only want the news organization to do so,” he says.
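
Those signals could, in principle, feed a simple trust score. The sketch below is purely hypothetical: the metadata fields and weights are invented to illustrate the kind of heuristic Ghosh describes, not any platform's real logic.

```python
# A purely hypothetical scoring heuristic for deciding whether an upload of
# known graphic footage comes from a news organization or an individual.
# Field names and weights are invented for illustration.
from dataclasses import dataclass

@dataclass
class Uploader:
    account_age_days: int
    is_verified_publisher: bool
    prior_policy_strikes: int
    posted_within_minutes_of_event: bool

def newsworthiness_score(u: Uploader) -> float:
    score = 0.0
    if u.is_verified_publisher:
        score += 3.0                                  # established outlets get a strong prior
    score += min(u.account_age_days / 365, 5) * 0.5   # a long history adds modest trust
    score -= u.prior_policy_strikes * 1.0             # repeat offenders lose trust
    if u.posted_within_minutes_of_event:
        score -= 0.5                                  # timing close to the attack is a caution signal
    return score

def allow_as_news(u: Uploader, threshold: float = 3.0) -> bool:
    return newsworthiness_score(u) >= threshold
```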

Ghosh argues that one reason tech companies haven’t gotten better at this is that they lack tangible incentives: “There isn’t a stick in the air to force them to have better content moderation schemes.” Last year, regulators at the European Commission did float a proposal to fine platforms that allow extremist content to remain online for more than one hour.

Finally, there’s the perpetual problem of scale. It’s possible that both YouTube and Facebook have grown too big to moderate. Some have suggested that, if these Christchurch videos are popping up faster than YouTube can take them down, then YouTube should stop all video uploads until it has a handle on the problem. But there’s no telling what voices might be silenced in that time—for all their flaws, social platforms can also be valuable sources of information during news events. Besides, the sad truth is if Facebook and YouTube ceased operations every time a heinous post went viral, they might never start up again.

All of this, of course, is precisely the shooter’s strategy: to exploit human behavior and technology’s inability to keep up with it to cement his awful legacy.

Tom Simonite contributed reporting.

Update 4:12 pm ET 3/15/2019: This story has been updated to include additional detail from Facebook.



https://www.wired.com/story/new-zealand-shooting-video-social-media/