Who Will Pop The AI Bubble?

Exploring the oversaturation of LLMs, the need for accountability, and our role in building a reliable future

Recently, I’ve realized that I am way too scared of regrets. I don’t want to regret anything in life. While regret is largely a matter of perception, there are some regrets you can simply avoid: the regret of not following your heart, of not traveling enough, of wasting your time, or of doing the wrong thing.

Upon overthinking it for almost too long, I’ve come to conclude that the last one is the trickiest to avoid.

Soap Bubbles by Jean-Baptiste-Siméon Chardin (1733-34) is an allegory for the transient nature of life. It shows that the transience of life is not against its meaning. The tension between the heaviness and lightness of the bubble can also be seen as the current state of our world. We’re always one step closer or away from something tragic or beautiful. (Courtesy: daily.jstor.org)

How do you avoid the regret of doing the wrong thing when the wrong feels right?

For example, drinking a cup of coffee first thing in the morning, knowing you’ll be crippled by jitters till lunch. Or binge-watching Young Sheldon till 3 am, knowing you have to feed your dog at 6 am. Or using LLMs to write for you, knowing you’ll regret missing out on the joy of pouring your heart out.

Now, a part of me wants to say that there’s essentially nothing “wrong” with any of the above. It’s funny how we defend the little wrongs we commit every day.

But what’s worse than betraying oneself every day? Sadly, we’re doing it collectively as we indifferently accept the rise of LLMs around us. To be clear, I’m not against emerging tech, but I do find the exploitation of anything utterly disgusting.

The AI bubble is one such disgusting thing. Let me explain why.

How Much Is Too Much?

It was in 1955 that John McCarthy, a mathematics professor at Dartmouth College, coined the term “Artificial Intelligence”. He coined it to secure funding for his research proposal to the Rockefeller Foundation and to distinguish his field from the already established field of cybernetics.

While we know little about how “intelligent” the machines were back then, the term definitely brought in the dollars until the 1970s, when the first AI winter arrived. Funding and research slowed down, and we were sober again.

The field has seen many such ups and downs since then. However, it is the first time we’re witnessing the birth of a “useless class” at such a scale. While the rise of AI leads to unemployment, layoffs, a reduction in wages, the commodification of data, data extraction, and countless moral and environmental nightmares, millions sit unaware of what’s coming.

I’m not trying to paint a doomsday picture, but take a moment to think about the number of AI-generated photos, videos, articles, and voices you’ve heard this week.

There are hardly any places left that remain unflooded by AI slop. Who asked for it? Who wants to consume it?

But who profits from selling it?

Almost every second company is jumping on the AI hype train. As of today, there are almost 100,000 AI companies across the world. And around 78% of companies worldwide (literally the whole world) are either using or exploring the use of AI.

So if everyone is using it, what can possibly be wrong?

The Pandora’s Box

In 1843, a Danish philosopher named Søren Kierkegaard wrote, “People demand freedom of speech as a compensation for the freedom of thought which they seldom use.”

Unfortunately, the quote still holds true today.

While most are busy fighting over trivial matters, only a few notice the sword hanging over our heads. We’re not acknowledging the problems with today’s AI models.

Let’s go through them step by step!

The AI Stack by The Goldman Sachs Global Institute

The AI stack involves four layers: energy, compute, data, and models. Briefly: energy is required to power any computing system, but AI takes the consumption to a hellish level. Compute is the hardware and software that performs the work, like training and running the models. Data is what the models are trained on. And finally, there is the model itself.

Each layer comes at a cost. Bloomberg writes: “In the US, an average 100-megawatt data center, which uses more power than 75,000 homes combined, also consumes about 2 million liters of water per day, according to an April report.”

To win the AI race, nations are setting up data centers even in water-scarce regions, making the locals suffer. Geopolitics is also changing, and we’re already seeing countries fight over GPUs and raw materials. Depending on your tolerance for uncomfortable thoughts, just remember how ruthlessly the oil wars were fought. How far will we go for compute?

AI companies are running out of data, and they’re doing everything to steal it. OpenAI accused DeepSeek of stealing its data. Reddit accused OpenAI of stealing its data, and The New York Times even sued them. Meta pirated books and art styles from millions, and now openly claims that using personal data is necessary to train models. So basically, it’s already Hunger Games!

AI models have the black box problem: a lack of transparency in how they, particularly deep learning systems, arrive at their conclusions. They hallucinate all the time, with no one held accountable for it. Whenever you use an AI model, your data may be used to train it further. Good luck if you’re using them for taxes, therapy, or understanding medical reports! Overusing AI models can lead to cognitive decline. If you’re dumb, it’ll make you dumber. If you’re naive, it’ll give you “psychosis”.

Unfortunately, most people don’t even have a clue about any of the above. But is there still hope?

The Final Call

People who profit from shoving these AI models down our throats won’t stop. They have families to feed, pockets to fill, investors to please, missions to fulfill, egos to boost, a world to fit into, and a million other reasons.

Honestly, if a normal person like you or me can’t boycott, then what do we expect from anyone else? Blaming the builders won’t help, but holding empathy might.

So many are under pressure not to get replaced or become redundant. Others are “improving” or “adopting”. But who is patient? Who is willing to forgive? Who is willing to see beyond numbers? After all, we’re humans, not just datapoints.

Sometimes, I wonder: if some of us take the risk of slowing down, will it make others pause and reflect too? Maybe, maybe not. But is it worth the bet? Sure.

Vanitas Still Life by Pieter Claesz, 1630, is a still life painting. The symbols, like skulls and extinguished candles, are a reminder of death. The interesting part of making still life paintings is how they force you to pause and observe deeply. (Courtesy: subjektiv)

For the first time ever in history, we can make this planet more livable by doing less, not more. By being less angry, less impatient, less intolerant, less overworked, less absent, less self-absorbed, less heartless, and less overexploitative.

Maybe we don’t need to wait for the bubble to pop; maybe we can let it dissolve slowly.

The change that comes slowly faces less resistance. However, the change must continue.

Be it skipping caffeine for a day, skipping watching Young Sheldon for a few nights, or not using ChatGPT for a week.

You choose.

And if you can’t, at least make sure you read every day to compensate.

I’ve recently finished reading The Midnight Library by Matt Haig, and can’t recommend it enough!
