Just as the internet has created immense positive value by connecting people and building new communities, it has also handed new tools to those who want to hatefully threaten, harass, intimidate, defame, or even violently attack people different from themselves. White supremacists and other organizations engaged in these sorts of hateful activities use social media and other major tech platforms to mobilize, organize, fundraise, and normalize racism, sexism, bigotry, and xenophobia. In the past few years, hate activity online has grown significantly as the “alt-right” emerged from the shadows. The Unite the Right rally in Charlottesville in 2017 was a prime example of how the internet’s invigoration of hate groups can result in real-world violence.
Meanwhile, the internet has opened up unprecedented opportunities for diverse communities to speak, create, educate, and entertain by building a direct connection with their audiences, and it has provided platforms to voices that would otherwise be silenced. Yet these same creators, including people of color, women, religious minorities, and members of the LGBTQIA community, are routinely harassed and threatened online; these attacks stifle their voices and chill their participation on these platforms.
These harms interfere with the ability of entire communities to use the most important technological advance of the modern era. Some of the larger tech companies have made attempts (some more successful than others) to address hateful activities on their services. Indeed, some attempts have been over-inclusive, silencing the very diverse voices that combat racism and discrimination. Most tech companies are committed to providing a safe and welcoming space for all users, even if they have so far failed to follow through on that commitment. But when tech companies regulate content arbitrarily, without civil rights expertise, or without sufficient resources, they can exacerbate the problem.
The goal of these policies is to bring greater structure, transparency, and accountability to the content moderation that many large platforms are already undertaking. The platforms need fair policies that are effectively enforced, and we want to help them manage their services responsibly and respectfully.