
How the Tragedy of the Commons Is Leading to Cognitive Decline in Humans

Exploring the tragedy of the commons in AI and how it is leading to cognitive decline in humans

I recently caught someone recording me secretly at my driving classes.

While it filled me with rage and disgust in that moment, it also sent me spiraling into the same question that has haunted me ever since I gained some sense of the world. Why? Why are humans the way they are?

While I've slowly started to accept my defeat at rationalizing my way out of everything, I've also been questioning how far we should go in accepting the world as it is.

MIT recently published a study showing that people who use LLMs like ChatGPT develop weaker neural connectivity and can't even quote from essays they wrote minutes earlier. Yet we're all gaga over AI.

MIT study EEG results showing how brain connectivity systematically weakens from Brain-only (strongest networks) to Search Engine (moderate) to LLM users (weakest connectivity), with statistical significance marked by asterisks. (Courtesy: arxiv)

I'm sure people at OpenAI have thought about the consequences of what they're building, the negative ones too. Then why do they keep accelerating?

Are they bad people? Are all tech giants evil? Thinking so would be naive and ignorant, imo. So let's try to rationalize our way into the why of it.

The Capitalist Foundation

Feudalism began declining in the 16th and 17th centuries, and land-owning aristocrats started investing in trade and manufacturing. The Dutch East India Company, founded in 1602, became the first multinational corporation, spreading the idea that private profit could drive innovation and progress. The next thing you know, capitalism was born.

"The Times Correspondent looking on at the Sacking of the Kaiser Bagh, after the capture of Lucknow, March 15th 1858" depicts colonial forces systematically looting an Indian palace while a British journalist observes and records. This scene of imperial extraction, where wealth, artifacts, and dignity are stripped away under the guise of civilization, is uncomfortably similar to technofeudalism. The present-day colonizers don't need armies to sack our private spaces, they simply offer us "free" platforms while harvesting our data, thoughts, and behavioral patterns. The correspondent's detached observation mirrors how we've normalized surveillance capitalism, watching passively as our lives are systematically plundered. (Courtesy: factsanddetails)

Later, Adam Smith wrote "The Wealth of Nations" in 1776, formalizing this into an economic theory of individual self-interest channeled through market competition. The idea was that it would somehow benefit society as a whole. The "invisible hand" would guide selfish actions toward the collective good.

For centuries, this sort of worked.

Markets did drive innovation, created wealth, and improved living standards for many. But capitalism also created cycles of boom and bust, inequality, and environmental destruction.

Today, we live in what economist Yanis Varoufakis calls "technofeudalism." Tech platforms have become the new landowners of our increasingly digital lives. We're no longer the customers, or even the workers, we used to be. We're now the serfs generating data for our digital lords. Every FAANG company is basically built on a corpus of our data.

And now add AI to it. In today’s systems, AI development is purely driven by quarterly earnings and market dominance. And where is it all taking us?

The Tragedy of the Commons

In 1968, ecologist Garrett Hardin described the tragedy of the commons: when individuals act rationally in their own self-interest, they deplete a shared resource that everyone depends on.

An example could be how medieval farmers grazed cattle on communal land. Each farmer benefited from adding more cattle, but when everyone did this, the land became overgrazed and useless to all.
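The incentive structure behind the grazing story can be sketched numerically. In this toy model (all numbers and functional forms are my illustrative assumptions, not from Hardin's essay), per-cow yield falls as the pasture fills up, yet each farmer who adds a cow captures that cow's full yield while the overgrazing cost is spread across everyone. The more farmers act independently, the closer the herd creeps to the pasture's capacity and the closer total output falls to zero.

```python
# Toy model of the grazing commons (illustrative numbers only).
# A pasture supports at most `capacity` cows; per-cow yield falls
# linearly to zero as the herd approaches capacity.

def total_yield(total_cows, capacity=100.0):
    """Total pasture output under the assumed linear-decline model."""
    return total_cows * max(0.0, 1.0 - total_cows / capacity)

def nash_total(n_farmers, capacity=100.0):
    """Total herd when each farmer independently maximizes only their
    own payoff h * (1 - total/capacity). The symmetric equilibrium
    herd per farmer works out to capacity/(n+1)."""
    return n_farmers * capacity / (n_farmers + 1)

capacity = 100.0
print("social optimum:", total_yield(capacity / 2))  # herd of 50 maximizes output

for n in (1, 2, 10, 100):
    herd = nash_total(n, capacity)
    # As farmers multiply, the herd creeps toward capacity
    # while the total yield they share collapses toward zero.
    print(n, "farmers ->", round(herd, 1), "cows,",
          round(total_yield(herd), 1), "total yield")
```

A single owner (n = 1) stops at the social optimum because they internalize the whole cost; with 100 independent farmers, nearly all the pasture's value is grazed away.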

The AI industry is living this tragedy in real time.

Every tech company knows that rushing AI development without proper safety measures could be catastrophic. But they also know that if they slow down while competitors race ahead, they'll lose market share, talent, and funding. So they're all racing forward, hoping someone else will solve the safety problems.

While each company's decision is rational from its own perspective, collectively they're depleting our cognitive commons.

"Bach or Stravinsky: A game of coordination" illustrates how individual choices can harm everyone when people act in their own self-interest. This perfectly captures the current state of the AI industry, where tech companies race toward AGI without regard for collective safety. Each player, whether Google, OpenAI, or Meta, faces a prisoner's dilemma: develop AI responsibly and risk being overtaken by competitors, or push forward recklessly and maintain market advantage. The tragedy of the commons emerges as shared resources like societal trust and human attention become depleted by this relentless competition. (Courtesy: mdpi)
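The racing dynamic can be made concrete as a two-player payoff matrix (the payoff numbers below are my illustrative assumptions, not data about any real lab). Whatever the rival does, "race" pays a lab more than "careful", so racing is a dominant strategy; yet when both labs race, both end up worse off than if both had been careful. That is the dilemma in miniature.

```python
# Illustrative payoffs for two AI labs; higher is better for that lab.
# Each lab picks "careful" or "race"; the tuple is (row payoff, column payoff).
PAYOFFS = {
    ("careful", "careful"): (3, 3),  # shared safety, healthy commons
    ("careful", "race"):    (0, 4),  # the careful lab loses the market
    ("race",    "careful"): (4, 0),
    ("race",    "race"):    (1, 1),  # everyone ships fast, commons depleted
}

def best_response(opponent_move):
    """Return the move with the higher payoff against a fixed opponent move."""
    return max(("careful", "race"),
               key=lambda my_move: PAYOFFS[(my_move, opponent_move)][0])

print(best_response("careful"))  # -> race (4 beats 3)
print(best_response("race"))     # -> race (1 beats 0)
# Both labs race and get (1, 1), even though mutual caution pays (3, 3).
```

Strictly speaking, this is a prisoner's dilemma rather than the "Bach or Stravinsky" coordination game in the figure, but it is the structure the caption's argument relies on: individually rational moves producing a collectively worse outcome.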

We've been so concerned with robot takeover scenarios that we've ignored the more immediate threat. We’re being hypnotized by AI overlords who are gradually eroding our ability to think for ourselves.

When you can ask ChatGPT to write your emails or summarize everything, why would you want to develop those skills yourself? When algorithms can curate your timelines, why would you want to learn to research independently? When AI can generate art and music, why would you want to cultivate creativity?

Reliance on AI tools risks making future generations less capable of critical thinking and original thought.

The Way Out

We know that in our technofeudalist capitalist world, cooperation among AI giants is unlikely. They're locked in a race where slowing down means losing everything. But we can't sleepwalk into a future where our children have lost the capacity to read, write, think, or imagine.

It has to stop, and it stops with us.

The generational curse of compliance with technology has to end.

Every major technological shift of the last few decades, from TV to social media, promised to make life better. We bought all those lies. In reality, they've made us more passive, more distracted, more dependent.

"Untitled - Metamorphosis" by Zdzisław Beksiński depicts a horse transforming into something unrecognizable and disturbing. I think this perfectly mirrors the tragedy of our present state. In the last two decades alone, we've undergone a darker change where we've traded genuine human connection for dopamine hits. In the process, we’ve lost ourselves completely. It’s not surprising that 1 in every 8 people globally experiences mental health disorders. (Courtesy: indie-artdream)

It's time we pushed back against the narrative that efficiency always beats effort, that speed beats depth, that optimization beats exploration.

We can choose to think for ourselves, even when AI can think for us. We can choose to write our own words, even when AI can write them faster. We can choose to struggle with problems, even when AI can solve them instantly.

The tragedy of the commons happened because individuals couldn't see how their rational choices led to collective ruin. But unlike medieval farmers, we can see exactly where this path leads. We have studies, data, and warnings from experts.

Nobel laureate economist Elinor Ostrom showed that commons can be managed successfully when communities create rules, monitor compliance, and maintain the resource together. She studied commons ranging from Swiss Alpine meadows to Japanese mountain forests that have thrived for centuries without government control or privatization. Our cognitive commons needs the same intentional stewardship.

Every little thing you do matters. When users demand transparency, schools focus on preserving cognitive skills, or developers create ethical guidelines, it forces our AI overlords to take a step back.

While we might not fully understand why humans are driven to these destructive patterns, we still have the choice to break them.

The question is, will we trade our ability to think for the ability to appear smart?

I hope not. But if so, why?
