Why Information Obesity Won't Solve Problems in AI

Exploring the Paradox of Data Gorging, Wisdom Starvation, and Unintended Consequences of Exponential Growth of AI

“I wish I knew better!”

A phrase almost every single person has whispered to themselves at least once in their lifetime.

But the problem isn’t that we don’t “know” enough; it’s that we’re bad at making sense of what we know.

In the age of AI, we find ourselves gorging on data, yet starving for wisdom. This paradox, which I like to call "information obesity," is at the heart of our struggle, especially in navigating the rapidly evolving landscape of AI.

But how does it matter and how is it impacting you? Let’s find out!

The Cambrian Explosion of AI

We are currently standing on the cusp of what some call the "Cambrian Explosion" of AI. But what does that mean?

The Cambrian Explosion was a period roughly 530 million years ago when most major animal groups suddenly appeared in the fossil record. It marked a rapid diversification of complex life forms on Earth, dramatically increasing the planet's biodiversity in a relatively short geological timespan.

An illustration of Earth's history divided into major geological time units, from the ancient Archean Eon to the present Cenozoic Era, highlighting key evolutionary milestones along the way. Fun fact: If Earth's entire 4.6-billion-year history were compressed into a single 24-hour day, humans would only appear in the last 1.17 seconds before midnight! (Courtesy: fitz6.wordpress)

AI is at the same stage today.

No, not in the sense that AI might turn into a new life form tomorrow (tbh, I’m not so sure about this one, considering that being recognized as “alive” depends more on human perception than on AI itself), but in the sense that experimentation with AI is faster and more diverse than ever before.

However, this also comes with our inability to anticipate its unintended consequences. History is rife with examples of our failure to predict the future or even understand the past or present accurately.

For example, for a long time, people believed in the theory of spontaneous generation, which posited that life could arise from non-living matter. This idea persisted for centuries until Louis Pasteur definitively disproved it in 1859.

Another example is the theory of the geocentric model of the universe, which placed Earth at the center of all celestial bodies until it was overturned by the Copernican revolution.

The flat earth theory [1], the phlogiston theory [2], the Mars canals [3], etc. are all examples of how our beliefs about the world, and our predictions about its future, are often spectacularly wrong, especially when we're on the cusp of paradigm-shifting discoveries.

So how can we start being “correct”?

The Myth of More

In our quest to better understand and predict the future of AI, there's a temptation to gather more data. But this approach is fundamentally misguided.

AI is a quest to replicate human intelligence; however, it overlooks the most crucial constraint on that intelligence:

The human brain’s finite capacity to process information.

If a human reads continuously for their entire life, they could still only process about 8 billion words. Our brains have evolved to efficiently evaluate a finite amount of information for survival. We're not built for information gluttony.
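That 8-billion-word figure is easy to sanity-check with a back-of-the-envelope calculation. All the inputs below are my own assumed numbers (a nonstop reader at roughly 250 words per minute, 16 waking hours a day, for 90 years), not measurements:

```python
# Back-of-the-envelope check of the "~8 billion words in a lifetime" claim.
# Every input here is an assumption, chosen to be generous.
WORDS_PER_MINUTE = 250   # a brisk, sustained reading speed
HOURS_PER_DAY = 16       # every waking hour spent reading
YEARS = 90               # a long reading lifetime

words_per_day = WORDS_PER_MINUTE * 60 * HOURS_PER_DAY
lifetime_words = words_per_day * 365 * YEARS

print(f"{lifetime_words / 1e9:.1f} billion words")  # prints "7.9 billion words"
```

Even with these generous assumptions, the total lands just under 8 billion words, which is a rounding error next to the trillions of tokens used to train modern models.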

So why do we think more data is better when it comes to AI?

It is not even sustainable. The “data wall” problem, (ironically) itself a prediction, says that by 2028 the stock of high-quality textual data on the internet will all have been used to train AI. Hence, the data wars begin; this is a major reason behind the recent Apple-OpenAI partnership.

What we need isn't more data, but better ways to process and understand the information we already have. We need superior pattern recognition and prediction models. We need to focus on developing frameworks that can better anticipate potential outcomes, rather than simply amassing more data.

This also blinds us to another, even bigger problem in AI.

Blindness to Blind Spots

Any exponential technology [4] has the potential to go unnoticed until the log phase, where growth becomes violent and almost uncontrollable.

AI is as exponential as it can be!

Humans are biologically wired for cognitive biases. We naturally think in linear terms, which makes it difficult to intuitively grasp exponential growth. Combine that with a culture where we get hyper-fixated on ideas that feed our “confirmation bias [5]”, and we reach a place where talking about AI ethics over profits makes everyone roll their eyes.
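To see concretely why linear intuition breaks down, here is a toy comparison with purely illustrative numbers: two quantities start at 1; one grows by a fixed step each round, the other doubles.

```python
# Toy illustration of linear vs. exponential growth.
# Both series start at 1; the linear one adds 1 per step,
# the exponential one doubles per step.
linear, exponential = 1, 1
for step in range(1, 31):
    linear += 1
    exponential *= 2
    if step in (5, 15, 30):
        print(f"step {step:2d}: linear = {linear:,}  exponential = {exponential:,}")
```

At step 5 the two are still in the same ballpark (6 vs. 32), so the doubling curve is easy to dismiss; by step 30 it has reached over a billion while the linear one sits at 31. That early, unremarkable stretch is exactly where an exponential technology "goes unnoticed."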

Everyone is hyperfocused on asking "Can we build this?" but not enough are asking "Should we build this?" and "What might happen if we do?".

For example, the invention of the automobile led to unforeseen issues like urban sprawl, pollution of all kinds, and climate change that harm us every day. The widespread adoption of AI could have far-reaching effects we can't yet imagine.

Even John McCarthy could never have imagined that the world would be using AI to generate boob pictures, create nonconsensual deepfakes, or simply bombard the entire internet with walls of text that only a human has the capacity to tell apart from the truth.

Battle of the Nudes (1470–1480) by Antonio del Pollaiuolo is a Renaissance engraving depicting ten nude male figures in dynamic combat, showcasing human anatomy and movement. The work reflects the Renaissance fascination with humanism, realism, and the human body. (Courtesy: wikipedia)

So as we navigate the future of AI, we need to move from information obesity to information nutrition.

We need to be more tactical about building AI where less can mean more, rather than manufacturing an infinite abundance of junk.

Honestly, having more information is quite useless if you don’t have the right information or don’t know how to use it wisely!

More importantly, it can save us from another:

“I wish I knew better!”

Notes:

  1. The Flat Earth Theory: posits that the Earth is a flat plane rather than a globe.

  2. The Phlogiston Theory: A popular theory in the 17th and 18th centuries, it proposed that all combustible materials contained a fire-like element called phlogiston. This theory attempted to explain combustion and rusting, suggesting that burning released phlogiston into the air, but it was eventually disproved by Antoine Lavoisier's work on oxygen.

  3. Mars Canals: A theory that proposed that Mars was crisscrossed by a network of artificial waterways. This misconception arose from telescopic observations by astronomers like Percival Lowell, who mistakenly interpreted natural Martian surface features as engineered structures, fueling speculation about intelligent Martian life.

  4. Exponential Technology: refers to technologies that improve at an exponential rate over time, often doubling in capability or performance at regular intervals. This concept, closely related to Moore's Law in computing, applies to various fields such as artificial intelligence, biotechnology, and nanotechnology, potentially leading to rapid and transformative societal changes.

  5. Confirmation Bias: is the tendency to favor information that confirms one's preexisting beliefs or hypotheses. This cognitive bias leads people to search for, interpret, and recall information in a way that reinforces their current perspectives, often while ignoring or downplaying contradictory evidence.
