Why We Might Be Closer to Solving the Hard Problem of Consciousness

Exploring the growing obsession with biomarker tracking, the rejection of AI art, and the rise of collective consciousness

On March 31, 2005, at 9:05 in the morning, Terri Schiavo died peacefully after 15 years in a persistent vegetative state.

Protesters from all over the USA gathered with water bottles in front of her hospice after the courts ordered her feeding tube removed. The case exposed a fundamental bias, what philosopher Peter Singer calls "meat chauvinism": the assumption that biological tissue alone defines personhood.

Interestingly, while everyone argued about whether she was still conscious, no one could say definitively what consciousness actually is.

Theresa Marie Schiavo (1963-2005) represents one of the most profound legal battles over consciousness in modern history. The inscription "I Kept My Promise" refers to her husband Michael's commitment to honor what he believed were her wishes to not be kept alive artificially. For fifteen years, Terri existed in what doctors called a persistent vegetative state, sparking fierce debate between her husband and parents over what consciousness means and who decides when it ends. (Courtesy: findagrave.com)

Is the body merely a temporary home for a conscious soul? Or is consciousness an emergent property of a living body? These questions aren’t new at all.

For the first time in human history, we might be closer than ever to answering them.

But why should ordinary people like you and me care? Because our future hinges on these questions. We're filling our world with AI and robots without knowing what they really are. Is your AI girlfriend going to become real at some point? Are our kids going to grow up playing with robots? Will AI agents really become conscious and replace us?

Before we answer any of it, let’s explore what we already know:

Consciousness and its hard problem

Consciousness, at its simplest, is awareness of a state or object, either internal to oneself or in one's external environment.1

David Chalmers coined the term "hard problem of consciousness" in 1995, distinguishing it from the "easy problems" of explaining cognitive functions. The easy problems are merely engineering challenges: how we process information, integrate sensory data, or control behavior. The hard problem asks why there is something it is like to be you at all.

Why does the red of a rose feel like anything at all? Why does chocolate cake taste different from vanilla cake in that indescribable way that no amount of neuroscience can explain?

Until we understand it, we will keep building AI and robots with no idea whether we're creating philosophical zombies or actual sentient beings.

"Self-Portrait with Leica" (Ilse Bing, 1931) shows the photographer merged with her camera in a mirror's reflection. Just as reflections appear to be us but lack any inner experience, AI is a sophisticated mirror of human consciousness without actually possessing awareness. (Courtesy: artgallery.nsw.gov.au)

So to comfort ourselves, we believe we're making progress because we can now track everything. My Oura ring tells me I had 87 minutes of REM sleep. My glucose monitor shows a banana spike at 2:47 PM. My heart rate variability suggests I'm 23% more stressed than baseline. We measure cortisol, VO2 max, telomere length, and a thousand other biomarkers, creating what we imagine is a complete picture of ourselves.

But this is like trying to understand Beethoven's Ninth by counting the frequency of each note. We're not even scratching the surface.

What sets us apart

Even though we might not fully understand what it means to be a human or a conscious being, some things set us apart from intelligent machines or AI.

When I see red roses, I suddenly remember my grandfather's garden; you might feel a pang of lost love, or find yourself transported to that Tuesday when everything changed. An AI processes those same roses as wavelengths of light. We experience redness in a subjective way that no amount of description can convey to someone who has never seen color.

Similarly, syntax doesn't generate semantics. No amount of data processing, however complex, necessarily produces subjective experience in AI.

It’s our ability to experience and feel that makes us human.

For example, we're collectively starting to reject AI art with great intensity. Not because the outputs are technically inferior, but because they're dreams without a dreamer, songs without a singer who's known heartbreak, paintings without hands that trembled with emotion.

Anyone who's truly created art knows this. The point is never the artifact but the consciousness behind it that suffered, celebrated, and turned experience into expression.

The Renaissance of the Renaissance

We're witnessing a mass awakening to our own spiritual starvation.

This is reflected in the recent explosion of interest in psychedelics, meditation retreats, breathwork, and more. We're also seeing the revival of ancient practices like forest bathing and grounding, and a growing rejection of "success" in favor of meaning. We're desperately searching for something deeper.

The last time humanity experienced such a collective consciousness crisis, it birthed the Renaissance.

After the Black Death killed a third of Europe, survivors reimagined what it meant to be human. It created what sociologist Émile Durkheim would call "collective effervescence" with shared trauma generating new social meanings. The confrontation with mass mortality sparked an explosion of art, science, and philosophy that still defines our civilization.

"The Creation of Adam" (Michelangelo, 1512) depicts the biblical moment when God gives life to the first human being. The iconic fresco shows God reaching toward Adam with their fingers nearly touching. God's finger is fully stretched while Adam's hangs limp, depicting human passivity. All he needs to do is stretch his finger to make the divine connection with his creator. (Courtesy: Wikipedia)

Today, the conditions are remarkably similar. Instead of biological plague, we face an epidemic of individualism.

Each person has become their own project, optimizing personal metrics in isolation: healing in therapy, exercising abundant choices without a shared purpose, expressing themselves infinitely without any mutual recognition.

We look at ourselves as the one, without ever experiencing oneness.

But what if the hard problem of consciousness can't be solved by studying individual minds?

What’s Next?

No individual neuron in our bodies "speaks English," yet the collective network generates a language we all speak. Similarly, individual minds might be necessary but insufficient for consciousness.

Quantum mechanics violated our most basic intuitions from classical physics. We now know that electrons exist in superposition until measured. Measurements on entangled particles are correlated instantly across vast distances. Observation itself seems to shape the reality being measured.

What if collective consciousness similarly violates our assumptions about individual minds? What if qualia are just a tiny segment of what we know as consciousness?

Perhaps the hard problem persists because consciousness can't observe itself directly, just as an eye cannot see itself without a mirror. But collectively, we're becoming that mirror.

Soon, we might see consciousness not as something inside an individual body, but as something we're all inside of. Perhaps, consciousness isn't a problem to be solved but a reality to be recognized.

We’re still clinging to the electrons inside a torch battery in the hopes of finding the sunrays.

Perhaps the question shouldn't be "What is it like to be you?" but "What is it like to be us?"

Endnotes:
