The Hunter Biden story was a test for tech platforms. They barely passed | Silicon Valley | The Guardian

‘Only in 2020 have tech companies begun to take seriously the dangers of Holocaust denial, massive conspiracies that spark violence, anti-vaccine and other medical misinformation, and disinformation about voting.’ Photograph: Yui Mok/PA


Facebook and Twitter are trying to avoid repeating the 2016 misinformation disaster, but haven’t totally figured out how

Mon 19 Oct 2020 09.36 EDT

In 2016, major American news outlets, amplified by YouTube, Facebook and Twitter, did the work of foreign and domestic purveyors of disinformation and propaganda. They took too seriously false, irrelevant and illegally acquired emails from top Democratic campaign officials. They took seriously bogus questions about Hillary Clinton’s own email accounts. And they generated a cloud of confusion and mistrust that led directly to Donald Trump’s unlikely victory in the electoral college.


This much is indisputable. What’s interesting is how far Facebook, Twitter and to a lesser extent YouTube have at least attempted to avoid contributing to such a mess in 2020 – while major American news outlets continue to show a willingness to get played.

The latest example is how a tiny, unverifiable, almost-certainly-false story published by Rupert Murdoch’s New York Post (the fourth-largest readership of any newspaper in the United States) generated a ridiculous blowback among rightwing pundits and politicians.

The story is almost comical. It alleges that someone delivered three laptops to a computer repair store in Delaware. The owner of that store thinks the man who delivered the computers was Hunter Biden, the son of the former vice-president Joe Biden. But he can’t be sure it was Hunter Biden. Or maybe he can. He’s very confused about how this all went down. Anyway, the owner says he made copies of the hard drives and somehow sent the content, which he deemed suspicious, to some undetermined law enforcement agency and to the former New York City mayor Rudy Giuliani, one of Trump’s personal lawyers. It’s all very unclear how and why such a transaction happened – if at all.

Among the pilfered emails (sound familiar?) was at least one that seemed to suggest that the then vice-president could arrange a meeting with a business associate of Hunter’s in Ukraine. You might remember that Hunter’s Ukraine business involvement was the subject of the phone call that Trump made to the president of Ukraine to get him to announce an investigation of the Bidens. This call is what triggered Trump’s impeachment.

Joe Biden says he has never met with anyone affiliated with Hunter Biden’s business, and there is no evidence, even in the new New York Post story, that he did.

So, basically, we are dealing with a third-rate, bungled pile of nonsense here. What’s a social media company to do?

Platforms like YouTube, Twitter and Facebook have three choices when they encounter potentially troublesome content. They can keep their hands off and let their users and algorithms do with the content what they wish, risking amplification. This was the standard method of dealing with hate speech, misinformation and propaganda for most of the history of these companies. Second, they can choose to keep the problematic posts up on the service but “dial down” the amplifying power of the algorithms, slowing distribution and giving their staff time to research the posts and consider whether further action is needed. This is almost always the wisest move.

Third, platforms could choose to block or purge an item completely. Given the scale of Facebook (2.7 billion users), YouTube (2 billion users), and Twitter (330 million users), deleting an item might seem like a major problem for the free flow of information. But it’s not. The original source remains untouched and accessible to most of the world. Nonetheless, by making this harshest of choices the platforms expose themselves to vitriol and risk generating the sort of backlash that can energize paranoid, conspiratorial movements like QAnon or Trump supporters.

Only in 2020 have tech companies begun to take seriously the dangers of Holocaust denial, massive conspiracies that spark violence, anti-vaccine and other medical misinformation, and disinformation about voting. And each company has handled these issues in different ways at different times. They are all just feeling their way through it. (They also seem solely concerned about these problems within the US, leaving most of the world largely unprotected.)

Well, at first, Twitter took the hard road with the New York Post story. Sensitive to the fact that nonsense travels farther and faster than sense on its service, Twitter blocked users from sharing or retweeting the original story. Later, it temporarily blocked some rightwing accounts that were trafficking in the claims. These included the account of the White House press secretary, Kayleigh McEnany.

Then, on Friday, Twitter flipped and restored users’ ability to share the Post story.

The justification Twitter offered was that the story – if based on anything at all – was based on stolen information. That’s not a bad reason to block something, given the spectacular errors so many made in 2016. There is so much wrong with the Post story it makes sense to be careful. But, of course, this move brought more attention to the Post story than it might have otherwise generated. The fact that I’m writing about it shows that. Twitter offered no clear reason for changing its decision on Friday. It seems that the social media platform caved to Republican pressure.


Facebook took a smarter approach, but did not escape being lumped in with Twitter and accused of pandering to the left – an absurd accusation given all the ways that Mark Zuckerberg and his cronies have actively helped Trump get elected and then protected his interests consistently over the past three years. In the case of the Post story, Facebook simply limited the extent to which its algorithms would display the item in people’s News Feed – thus slowing its distribution and lowering interest and awareness in it while Facebook staff could do more research.

As usual, YouTube did nothing to stem the flow of this unfounded story. A Google search on Friday afternoon for “Hunter Biden” and “emails” generated a video version of the story on YouTube.

Content moderation, the term of art for such policies and decisions, is a fool’s game. A company can’t win the public relations battle no matter what it does. Companies like Facebook, Twitter and Google don’t owe anyone a commitment to publish and promote their expressions. They need not defend free speech.

There are frequent calls for these companies to be more transparent and consistent in their moderation policies. But that’s expecting too much. Given the varieties of human expressions and cruelties, it seems impossible to predict all the different problems that might spring up that threaten people’s health, safety or democracy.

Either way, content moderation is necessary. Nobody should want massive systems of content distribution to foster Holocaust denial or calls for violence against ethnic groups. Some of these questions feel easy (although for some reason, blocking Holocaust denial seemed like a hard choice for Mark Zuckerberg, raising some serious questions about his capacity for basic moral judgment). Most of them are hard.

As we turn past the US election of 2020, we should recognize that these companies will continue to fail most of the time. They are built to fail our most basic needs for safety and dignity. But sometimes they will make the world a little less cruel and stupid than they otherwise would. That’s the best we can hope for.

  • Siva Vaidhyanathan is a professor of media studies at the University of Virginia and the author of Antisocial Media: How Facebook Disconnects Us and Undermines Democracy