AI Slop Is Making the Internet Fake-Happy

To anyone with a pulse and a smartphone, it’s obvious that the internet has an AI slop problem. The issue has grown more severe since ChatGPT launched in 2022, with some social platforms flooded with AI-generated writing. Now, there’s data to back up the anecdotal evidence.

A new preprint study, published today by researchers at Imperial College London, Stanford University, and the Internet Archive, found that approximately 35 percent of all new websites are either AI-generated or AI-assisted. The same study found that online writing is “increasingly sanitized and artificially cheerful.” In other words, AI is making the internet fake-happy.

The research team tried four different approaches to AI detection before settling on tools from Pangram Labs, which delivered the most consistent results. (Though the tools performed well in the team’s tests, it is worth noting that all AI-detection tools are imperfect.) To compile a representative sample of websites, the team tapped the Internet Archive’s Wayback Machine, which collects snapshots of webpages. In addition to quantifying how many sites created between 2022 and 2025 lean on AI-generated writing, the study tested six theories about the characteristics of slop.

The test that looked into artificial cheerfulness examined how AI affected the tone of online writing. Using sentiment analysis, which classifies words as positive, neutral, or negative, it found that “the average positive sentiment score of AI-generated or AI-assisted [websites] was 107 percent higher than that of non-AI websites.” The researchers see this spike in artificial happiness as a “symptom” of the “sycophantic and overoptimistic nature of existing LLMs.” In this way, AI writing tools’ tendency to suck up to their human users has a spillover effect, making the overall tenor of online writing more saccharine.
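To make the method concrete: word-level sentiment analysis can be as simple as counting positive and negative words against a lexicon. The toy lexicon and scoring function below are illustrative assumptions, not the tools the researchers actually used, but they show why relentlessly upbeat AI prose would score much higher.

```python
# Minimal lexicon-based sentiment scorer: a sketch of word-level sentiment
# classification. The tiny word lists below are hypothetical stand-ins for
# a real sentiment lexicon.
POSITIVE = {"great", "amazing", "delightful", "happy", "love"}
NEGATIVE = {"bad", "terrible", "awful", "sad", "hate"}

def sentiment_score(text: str) -> float:
    """Return (positive - negative) word counts, normalized by total words."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    if not words:
        return 0.0
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return (pos - neg) / len(words)

human = sentiment_score("The update is bad and the docs are terrible.")
slop = sentiment_score("This amazing update is great, and we love the delightful docs!")
print(human, slop)  # the second, sycophantic passage scores markedly more positive
```

Production systems typically use richer lexicons (or trained classifiers) rather than raw word lists, but the aggregate comparison works the same way: average the scores across many pages and compare the AI and non-AI populations.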

Another test investigated whether the increase in AI-generated writing shrinks “the range of unique ideas and diverse viewpoints” on offer. The researchers found that AI did make the internet less ideologically diverse, with AI websites scoring roughly 33 percent higher on testing for “semantic similarity” than human-made websites.
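“Semantic similarity” boils down to measuring how close two texts are in some vector representation. The study almost certainly used embedding models for this; the sketch below substitutes a simpler bag-of-words cosine similarity to illustrate the core idea that more homogeneous text scores higher on pairwise similarity.

```python
# Bag-of-words cosine similarity: a simplified stand-in for the embedding-based
# semantic-similarity scoring described in the study.
from collections import Counter
from math import sqrt

def cosine_similarity(a: str, b: str) -> float:
    """Cosine of the angle between two word-count vectors, in [0, 1]."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = sqrt(sum(c * c for c in va.values())) * sqrt(sum(c * c for c in vb.values()))
    return dot / norm if norm else 0.0

# Two near-identical listicle-style passages score far higher than two
# passages on unrelated topics.
print(cosine_similarity("unlock your potential with these powerful tips",
                        "unlock your true potential with powerful tips"))
print(cosine_similarity("the senate passed the budget bill",
                        "my sourdough starter finally doubled overnight"))
```

Averaged over many pairs of pages, a population of websites that keeps recycling the same phrasings and ideas will show a higher mean similarity score, which is the pattern the researchers report for AI websites.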

While those two tests validated the researchers’ assumptions about AI, others did not: four of the theories they tested were not confirmed. Notably, they had suspected that AI would lead to a rise in misinformation, but their analysis of the evidence did not support that hypothesis. They had also guessed that AI writing wouldn’t link out to external sources, and that it would be stylistically more generic than human writing. Confounding expectations, neither of those theories was supported by the evidence, either.

While the analysis found that the ideas espoused by AI writing were more homogenous—and specifically, more consistently cheery—the writing style itself was not confirmed to be flattened. This came as a big surprise to the researchers, who had assumed they would see a clear move towards more generic output. “Everyone on the team expected that to be true,” says Stanford researcher Maty Bohacek. “But we just don’t have significant evidence for that.”

Prior to conducting its analysis, the research team commissioned a poll on how people feel about AI. Comparing the poll results with its findings, the team discovered that the researchers weren’t the only ones who had their expectations upended: many commonly held beliefs about AI writing are wrong, the study finds.

Like the researchers, most people polled had assumed that they would encounter a rise in fake news as the number of AI-generated websites they saw increased. The vast majority of respondents had also assumed that AI writing would stop linking to external sources, and that it would have an increasingly generic, uniform voice. “It’s interesting to see that people tended to expect the worst outcomes,” Bohacek says.

This study is far from the last word on what AI is doing to the internet. “We just wanted to break ground,” says Bohacek, who sees this as a jumping-off point for deeper exploration. As a snapshot of AI slop’s impact, it offers a particularly human flavor of insight: Sometimes, it’s simply hard to predict how things will unfold.


