The dead Internet theory is an online conspiracy theory that asserts that the Internet now consists mainly of bot activity and automatically generated content that is manipulated by algorithmic curation, marginalizing organic human activity.[1][2][3][4] Proponents of the theory believe these bots are created intentionally to help manipulate algorithms and boost search results in order to ultimately manipulate consumers.[5] Further, some proponents of the theory accuse government agencies of using bots to manipulate public perception, stating "The U.S. government is engaging in an artificial intelligence powered gaslighting of the entire world population".[1] The date given for this "death" was generally around 2016 or 2017.[1][4][6]
The theory has gained traction because much of the observed activity it describes is grounded in quantifiable data, such as increased bot traffic. However, the idea that it is a coordinated psyop has been described by Kaitlin Tiffany, staff writer at The Atlantic, as a "paranoid fantasy," even if there are legitimate criticisms involving bot traffic and the integrity of the internet.[1]
While the exact origins of the theory are difficult to pinpoint, the dead Internet theory most likely emerged from 4chan or Wizardchan as a theoretical concept in the late 2010s or early 2020s.[1][7] In 2021, a thread titled "Dead Internet Theory: Most Of The Internet Is Fake" was published on the forum Agora Road's Macintosh Cafe, marking the spread of the term beyond these initial imageboards.[1][8] However, discussions and debates surrounding the theory have been prevalent in online forums, technology conferences, and academic circles, possibly since earlier.[1][7]
It was inspired by concerns about the Internet's increasing complexity, dependence on fragile infrastructure, potential cyberattack vulnerabilities, and most importantly, the exponential increase in artificial intelligence capabilities and use.[9] The theory gained traction in discussions among technology enthusiasts, researchers, and futurists who sought to explore the potential risks associated with society's reliance on the Internet. The conspiracy theory has entered public culture through widespread coverage, and has been discussed on various high-profile YouTube channels.[1] It gained more mainstream attention with an article in The Atlantic titled "Maybe You Missed It, but the Internet 'Died' Five Years Ago".[1] This article has been widely cited by other articles on the topic.[3][7][8]
Generative pre-trained transformers (GPTs) are a type of large language model (LLM) that employ artificial neural networks to produce human-like content.[10][11] The first of these models was developed by the company OpenAI.[12] These models have generated significant controversy. In one example, Timothy Shoup of the Copenhagen Institute for Futures Studies stated that, "in the scenario where GPT-3 'gets loose', the internet would be completely unrecognizable."[13] He predicted that in such a scenario, 99% to 99.9% of content online might be AI-generated by 2025 to 2030.[13] These predictions have been used as evidence for the dead internet theory.[3]
ChatGPT is an AI chatbot whose 2022 release to the general public led journalists to describe the dead internet theory as potentially more realistic than before.[6][14] Before this, the dead internet theory mostly emphasized government organizations, corporations, and tech-savvy individuals, but ChatGPT put the power of AI in the hands of average internet users.[6][14] This technology raised concerns that the Internet would become filled with content created by people using AI, drowning out organic human content.[6][14][15]
In 2016, the security firm Imperva released a report on bot traffic and found that bots were responsible for 52% of web traffic, the first time bot traffic had surpassed human traffic.[16] This report has been used as evidence in reports on the dead internet theory.[1]
In the past, the social media site Reddit allowed free access to its API and data, which allowed users to employ third-party moderation apps and train AI in human interaction.[15] In a controversial move, Reddit began charging for access to its user dataset. Companies training AI will likely continue to use this data for training future AI. As LLMs such as ChatGPT become available to the general public, they are increasingly being employed on Reddit by users and bot accounts.[15] Professor Toby Walsh of the University of New South Wales stated in an interview with Business Insider that if the next generation of AI is trained on content created by previous generations of AI, the quality of that content could suffer.[15] University of South Florida professor John Licato compared this situation of AI-generated web content flooding Reddit to the dead internet theory.[15]
Several accounts on Twitter began posting tweets starting with the phrase "I hate texting" followed by an alternative activity, such as "i hate texting i just want to hold ur hand" or "i hate texting just come live with me".[1] These posts received tens of thousands of likes, and many suspected them to be bot accounts. These accounts have been used as an example by proponents of the dead internet theory.[1][8]
The percentage of user accounts run by bots became a major issue during Elon Musk's acquisition of Twitter.[17][18][19][20] During this process, Musk disputed Twitter's claim that fewer than 5% of their monetizable daily active users (mDAU) were bots.[17][21] During this dispute, Musk commissioned the company Cybra to estimate what percentage of Twitter accounts were bots; one study estimated 13.7% and a second estimated 11%.[17] These bot accounts are thought to be responsible for a disproportionate amount of the content generated.[7] Believers in the dead internet theory have pointed to this incident as evidence.[7][22]
There is a market online for fake YouTube views to boost a video's credibility and reach broader audiences.[23] At one point, fake views were so prevalent that some engineers were concerned YouTube's algorithm for detecting them would begin to treat the fake views as default and start misclassifying real ones.[23][1] YouTube engineers coined the term "the inversion" to describe this phenomenon.[23][24] YouTube bots and the fear of "the inversion" were cited as support for the dead internet theory in a thread on the internet forum Agora Road's Macintosh Cafe.[1]
Numerous YouTube channels and online communities, including the Linus Tech Tips forums, have covered the dead Internet theory, which has helped to advance the idea into mainstream discourse.[1]