AI floods us with subintelligence, then quietly erodes our ability to recognize, question, or resist it

AI vomits stupid things. And we catch them with our mouths wide open.

2026/03/19 18:00
5 min read

Growing up, I always thought that the end of the human race would come from some form of superintelligence, an idea inspired by the tough-to-beat end bosses in the video games I've played and the movies I've watched. From those classic prey-turned-predator story arcs, I came to fear that terminators and murderous robot dolls might one day hunt us all.

But lately I've come to realize that the bigger, more insidious threat is already in front of us, in forms so alluring that we ignore the warning labels: social media content, research papers churned out in minutes, AI hacks and "shortcuts."

I’m talking, of course, about AI slop and the ocean of subintelligence we’re drowning in.

AI TERMINATORS. Will our demise come from superintelligent robots, as portrayed in movies? Or will subintelligent content drown us first? Screenshots from the movies M3GAN and The Terminator

AI slop is low-quality digital content mass-produced by AI. In other words, it’s junk. Which I understand is quite ironic given how much we praise AI for its supposed “intelligence.” But AI vomits a lot of stupid things, and we catch them all with our mouths wide open.

Last year, for example, The Conversation reported that scientists discovered a peculiar term appearing in published scientific papers: “vegetative electron microscopy.” Those of us with non-scientific minds would brush this off as jargon, except that the term was rubbish and did not exist at all — at least until AI came up with it, preserved it, and reinforced it within its systems so that it found its way into science. Science, the Atlantic declared, is drowning in AI slop.

Even more unbelievable is how AI made this error. It turns out it stemmed from a mistake in the digitization process: the words "vegetative" and "electron," by coincidence, appeared next to each other but in different columns in some journals published in the '50s. Decades later, the phantom term resurfaced in journals and scientific papers, eluding reviewers and editors.

PROCESSING ERROR. Reports speculate that the term “vegetative electron microscopy” may have come from an error in the digitizing process of a 1959 paper. Screenshot from Retraction Watch

Absurd as the story may sound, it is no longer an uncommon occurrence to find AI slop in research journals. Researchers have been complaining about "phantom citations" in scientific papers.

One infamous instance is the Trump administration's "Make America Healthy Again" report, which reportedly included at least seven citations to sources that did not exist; they were updated only hours after publication.

If this junk made its way into these tightly guarded scientific communities, what more in spaces with little to no regard for truthfulness, such as social media?

Rise of pseudo-journalistic pages

In just the past few months, Filipinos have seen the rise of many pseudo-journalistic Facebook pages claiming to produce "explanatory journalism" and easy-to-understand "insights." In reality, they churn out what appear to be AI-generated summaries-turned-infographics, piggybacking on the work of real journalists who are actually doing the groundwork.

“Deep” insights, such as “malatang is not just food but a cultural reflection of Gen Zs,” and many more of those “this is not [insert trendy topic] but [some random intellectual metaphor]” — all AI slop. And when these pages don’t even have the balls to declare their authors, or even their editorial board — to hold themselves accountable for what they put out there — you know that THEY know they’re putting out crap that they don’t want to be associated with.

But here's where it truly gets terrifying for me: it's not just that AI produces junk; it's that AI is slowly making us incapable of recognizing it.

A study from MIT’s Media Lab tracked the brain activity of essay writers over four months and found that those who used ChatGPT showed weaker memory, weaker creativity, weaker critical thinking. Their essays, while polished, were described by evaluators as “soulless.”

Worse, over time, the ChatGPT users got lazier, eventually just copy-pasting AI output wholesale. The researchers call it “cognitive debt”: the more you let AI think for you, the less your brain bothers to think at all.

The art of learning

We've been talking to a lot of educators as well, as we conduct our workshops on AI, and what's clear is that this is also playing out in schools, where the absurdity has reached an almost poetic level: students use AI, and teachers, overworked and underpaid, use AI to evaluate them.

In this scenario, are we really learning and teaching anything? After all, the learning is in the friction, as my colleague and co-trainer Gemma Mendoza always says.

These days, schools are forced to spend so much money on AI detection tools that don't really work: they flag real student work as AI-generated while missing actual AI-generated submissions.

This is the real trap. AI floods us with subintelligence, then quietly erodes our ability to recognize, question, or resist it.

Researchers have been warning us about this “deskilling” that’s happening as we offload our thought process to machines for short-term gains. We have only been able to survive all these years as a species, after all, thanks to our critical thinking skills and adaptability.

So I'm now convinced that the end will not come from some intelligent overlord descending upon us with a master plan. It will come from a slow, warm flood of stupidity we've mistaken for convenience: content we never asked for, citations that lead nowhere, insights that don't really matter.

We won’t be hunted. We’ll just forget what it’s like to think for ourselves, one summarized infographic at a time. – Rappler.com
