Detecting Conspiracy Theories on Social Media: Improving Machine Learning to Detect and Understand Online Conspiracy Theories

Research Questions

  1. How can we better detect the spread of online conspiracy theories at scale?
  2. How do online conspiracies function linguistically and rhetorically?

Conspiracy theories circulated online via social media contribute to a shift in public discourse away from facts and analysis and can cause direct public harm. Social media platforms face a difficult technical and policy challenge in trying to mitigate the harm caused by online conspiracy theory language. As part of an effort by Google's Jigsaw unit to confront emerging threats and incubate new technology that helps create a safer world, RAND researchers conducted a modeling effort to improve machine-learning (ML) technology for detecting conspiracy theory language. They developed a hybrid model that uses linguistic and rhetorical theory to boost performance, and they drew on insights from this improved modeling effort to synthesize existing research on conspiracy theories. This report describes the results of that effort and offers recommendations for countering the effects of conspiracy theories that are spread online.
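
The report itself does not include source code, but as a rough, hypothetical sketch of what a hybrid classifier of this kind can look like, the Python example below joins standard learned lexical features (TF-IDF n-grams) with a few hand-coded linguistic/rhetorical markers (pronoun rate, certainty-word rate, question density) and feeds both into a linear classifier. The word lists, features, and model choices are illustrative assumptions, not the authors' actual features or implementation.

# Illustrative sketch only -- not the RAND/Jigsaw model. It shows one generic
# way to build a "hybrid" text classifier that combines learned lexical
# features with hand-crafted linguistic/rhetorical markers. All word lists,
# features, and model choices below are assumptions for demonstration.

import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import FeatureUnion, Pipeline


class RhetoricalFeatures(BaseEstimator, TransformerMixin):
    """Rates of simple, theory-motivated markers (hypothetical examples):
    in-group/out-group pronouns, certainty language, and question density."""

    PRONOUNS = {"we", "us", "our", "they", "them", "their"}
    CERTAINTY = {"always", "never", "proof", "truth", "obviously", "fact"}

    def fit(self, X, y=None):
        return self

    def transform(self, X):
        rows = []
        for text in X:
            tokens = text.lower().split()
            n = max(len(tokens), 1)
            rows.append([
                sum(t in self.PRONOUNS for t in tokens) / n,   # pronoun rate
                sum(t in self.CERTAINTY for t in tokens) / n,  # certainty rate
                text.count("?") / n,                           # question density
            ])
        return np.array(rows)


# Join sparse TF-IDF features with the dense rhetorical features, then fit a
# linear classifier on labeled short texts (1 = conspiracy-like, 0 = not).
model = Pipeline([
    ("features", FeatureUnion([
        ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),
        ("rhetoric", RhetoricalFeatures()),
    ])),
    ("clf", LogisticRegression(max_iter=1000)),
])

# Toy usage with made-up labeled examples:
texts = ["they are hiding the truth from us", "the city opened a new park today"]
labels = [1, 0]
model.fit(texts, labels)
print(model.predict(["obviously they never tell us the facts"]))

In practice, the learned component could be any text model (including a transformer), and the theory-driven features would come from the linguistic and rhetorical framework described in the report.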

This research was sponsored by Google's Jigsaw unit and conducted within the International Security and Defense Policy Center of the RAND National Security Research Division (NSRD).

This report is part of the RAND Corporation research report series. RAND reports present research findings and objective analysis that address the challenges facing the public and private sectors. All RAND reports undergo rigorous peer review to ensure high standards for research quality and objectivity.

Permission is given to duplicate this electronic document for personal use only, as long as it is unaltered and complete. Copies may not be duplicated for commercial purposes. Unauthorized posting of RAND PDFs to a non-RAND Web site is prohibited. RAND PDFs are protected under copyright law. For information on reprint and linking permissions, please visit the RAND Permissions page.

The RAND Corporation is a nonprofit institution that helps improve policy and decisionmaking through research and analysis. RAND's publications do not necessarily reflect the opinions of its research clients and sponsors.

Marcellino, William, Todd C. Helmus, Joshua Kerrigan, Hilary Reininger, Rouslan I. Karimov, and Rebecca Ann Lawrence, Detecting Conspiracy Theories on Social Media: Improving Machine Learning to Detect and Understand Online Conspiracy Theories, Santa Monica, Calif.: RAND Corporation, RR-A676-1, 2021. As of May 5, 2021: https://www.rand.org/pubs/research_reports/RRA676-1.html. Also available in print form.