*An illustration by the author. All rights reserved.*
The human brain has always adapted remarkably well to technology. But what happens when the technology starts doing the thinking for us?
Let’s begin with something simple: Google Maps. Most of us use it daily. Yet a 2020 study revealed a hidden cost. Despite the app’s utility, frequent use of GPS systems was found to weaken users’ spatial memory (Adamopoulou & Moussiades, 2020). Even more interestingly, users didn’t believe their sense of direction had declined, though the data said otherwise. Convenience, it seems, comes at a cognitive price.
That’s not even AI — that’s just an app. So what happens when the tool is AI?
Around the same time, Professor David Raffo noticed something curious in his students’ writing. Before the pandemic, their written assignments were weak and loosely structured. Then, during lockdown, their writing improved drastically. Too drastically. Suspecting AI, Raffo asked his students directly, and his hunch was confirmed. “I realized it was the tools that improved their writing, not their writing skills,” he said (Portland State University, n.d.).
Raffo wasn’t anti-AI. He acknowledged the benefits, noting how AI helps with everything from drafting content to problem-solving. But he warned, “Our cognitive abilities are like muscles. They need regular use to stay strong and vibrant.” The real concern, he explained, is that resisting the ease AI offers takes extraordinary discipline.
This is where the concept of cognitive atrophy comes in — the gradual weakening of our mental faculties due to over-reliance on external tools (Dergaa et al., 2023).
Alzheimer’s researcher Dr. Anne McKee emphasized the importance of mental activity to prevent cognitive decline. Staying mentally active builds resilience against diseases like Alzheimer’s. High cognitive reserve can offset even visible brain damage (Diary of a CEO, 2023).
Yet, we find ourselves offloading that mental effort more and more. One systematic review examined the over-reliance on AI dialogue systems in academic settings. The findings were clear: excessive dependence erodes critical thinking, decision-making, and analytical reasoning (Zhai, Wibowo, & Li, 2023). While AI can streamline workflows, it also introduces ethical concerns — misinformation, algorithmic bias, plagiarism, privacy breaches, and opacity (Dergaa et al., 2023).
This is a well-worn path in tech. Studies have shown how calculators impacted basic math skills, and autocorrect diminished spelling and punctuation accuracy. But now we’re in a new domain. Tools like ChatGPT, Grok, and LLaMA don’t just support us — they do the thinking (Bai, Liu, & Su, 2023).
Tasks like data entry, customer service, and bookkeeping are already being absorbed by AI. And a worrying trend is emerging: cognitive offloading, the practice of using external tools to reduce the mental effort of thinking or problem-solving. In a study reported by Forbes, frequent AI users were more likely to rely on the technology when making decisions, and showed a reduced ability to evaluate information critically (Daniel, 2025).
This cognitive offloading extends to serious domains like law enforcement. In 2023, Detroit police relied on facial recognition software from DataWorks Plus to pursue a robbery case. The system matched low-quality surveillance footage to a 2015 mugshot. The woman it identified, Porcha Woodruff, was eight months pregnant and nowhere near the crime scene. She was arrested anyway (New York Times, 2023). The charges were later dropped, but the damage was done. This wasn’t an isolated case: the Detroit Police Department faces multiple lawsuits over similar incidents, all stemming from overreliance on AI.
Why did this happen? Because people trusted the AI, just as they trust GPS. Errors are hard to detect once a tool becomes part of everyday life. And the signs are visible online: on platforms like X/Twitter, users now routinely ask AI bots such as Grok to explain even simple tweets (Mitchell, 2024). They’ve outsourced their curiosity.
Some users no doubt save time by letting AI summarize social media for them, but the habitual delegation of even the simplest interpretive tasks warrants deeper concern. It reveals a broader, often subconscious behavioral shift: a collective surrender to algorithmic decision-making. On platforms such as Instagram, Facebook, Twitter, TikTok, and YouTube, the gateways through which most people now reach content, algorithms largely dictate what users see. The result is a subtle erosion of personal agency.
Alec Watson, from the channel Technology Connections, terms this phenomenon “algorithmic complacency.” He notes that increasingly, users prefer letting computer programs decide their digital experience, even when alternative pathways are available. In reflecting on this shift, Watson recalls a time when internet engagement was deliberate — users manually bookmarked pages and curated their online experiences. In contrast, today’s digital ecosystem fosters passive consumption, driven by machine-selected content.
For individuals entering adulthood in the 2020s, there is a notable trend: algorithms are trusted more than human judgment. This trust has consequences. Students who used AI during the pandemic to cut corners in school are now employees relying on AI to write emails, create reports, and even make decisions. Surveys show that 90% of Gen Z employees use two or more AI tools weekly (ColdFusion, 2024). Some say it’s simply working smarter. Others see it as a slow erosion of mental muscle.
We’ve now transitioned from the Information Age into the Knowledge Age — one shaped not by facts alone, but by AI-generated interpretations of those facts. But there’s a problem: much of that knowledge is flawed. Google’s AI Overviews once called Obama the first Muslim president and claimed snakes were mammals (BBC, 2024).
And it gets worse. Oxford researchers discovered a phenomenon called model collapse. When AI models are fed AI-generated content repeatedly, the quality degrades rapidly (Forbes Australia, 2024). After just nine iterations, the output becomes gibberish. A separate study by Amazon found that 60% of today’s internet content may already be AI-generated or AI-translated.
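The mechanism behind model collapse can be illustrated with a toy simulation (my own sketch, not the Oxford team's experiment): fit a simple Gaussian "model" to data, then repeatedly fit each new generation only to samples drawn from the previous generation's model. Because every generation estimates its parameters from a finite sample, estimation error compounds and the fitted distribution drifts away from the original data.

```python
import random
import statistics

random.seed(42)

# Generation 0: the "real" data distribution.
mean, std = 0.0, 1.0

for generation in range(1, 10):
    # Each new model is "trained" only on output sampled
    # from the previous generation's model.
    samples = [random.gauss(mean, std) for _ in range(30)]
    mean = statistics.fmean(samples)
    std = statistics.stdev(samples)
    print(f"generation {generation}: mean={mean:+.3f}, std={std:.3f}")
```

Each generation sees only a finite slice of the last one, so the rare tails of the original distribution are progressively forgotten; scaled up to language models, that loss of the tails is the degradation the researchers describe.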
We’re in the midst of an age of AI slop, in which the internet feeds on itself. This fuels the theory of the “dead internet”: the idea that bots and AI now generate the majority of online content.
And yet, there’s hope. AI is still a tool. It’s not inherently bad. We’ve faced this before. When Dan Bricklin and Bob Frankston created VisiCalc — the first spreadsheet app — in 1979, many feared it would make accountants obsolete. But it didn’t. It enhanced their productivity (Wordyard, 2003).
The same is true for AI today. Use it as a companion — not a replacement for thought. Take its answers with a grain of salt. As Professor Thomas Dietterich put it: “Large language models are statistical models of knowledge bases. They’re not knowledge bases themselves.”
And therein lies the danger. When the model never refuses, it invites trust — even when it shouldn’t. That’s why awareness is key.
We must approach AI in the same way. Let it help, but don’t let it lead. Because no matter how advanced these systems become, nothing can replace the uniquely human ability to think, reason, and create.
As René Descartes once said: Cogito, ergo sum.
I think, therefore I am.
That’s what makes us human.
References
Adamopoulou, E., & Moussiades, L. (2020). Chatbots: History, technology, and applications. Machine Learning with Applications, 2, 100006. https://doi.org/10.1016/j.mlwa.2020.100006
Bai, L., Liu, X., & Su, J. (2023). ChatGPT: The cognitive effects on learning and memory. Brain, 1, e30. https://doi.org/10.1002/brx2.30
BBC. (2024). AI search results are full of errors. https://www.bbc.com/news/articles/ckg9k5dv1zdo
ColdFusion. (2024). Is AI making us dumber? [YouTube video]. https://www.youtube.com/watch?v=aOW_HslF5ko
Daniel, L. (2025, January 19). New study says AI is making us stupid — but does it have to? Forbes. https://www.forbes.com/sites/larsdaniel/2025/01/19/new-study-says-ai-is-making-us-stupid/
Dergaa, I., Ben Saad, H., Glenn, J. M., Amamou, B., Ben Aissa, M., Guelmami, N., Fekih-Romdhane, F., & Chamari, K. (2023). From tools to threats: A reflection on the impact of artificial-intelligence chatbots on cognitive health.
Forbes Australia. (2024). Is AI quietly killing itself — and the internet? https://www.forbes.com.au/news/innovation/is-ai-quietly-killing-itself-and-the-internet/
Mitchell, M. [@MelMitchell1]. (2024). AI replies on X show worrying overreliance [Tweet]. https://x.com/MelMitchell1/status/1793749621690474696
New York Times. (2023, August 6). A wrongful arrest shows the dangers of facial recognition.
Portland State University. (n.d.). Professor David Raffo. https://www.pdx.edu/profile/david-raffo
Wordyard. (2003, April 9). VisiCalc memories. https://www.wordyard.com/2003/04/09/visicalc-memories/
Zhai, C., Wibowo, S., & Li, L. D. (2023). The effects of over-reliance on AI dialogue systems on students’ cognitive abilities: A systematic review.
Source: Arshitha S Ashok
https://arshithasashok.medium.com/is-ai-making-us-dumb-8f81c2c5a95c