Why We Might Not Recognize AI Consciousness When It Arrives
Exploring the Cartesian trap in AI development and what Eastern philosophy reveals about consciousness
I clearly remember my first mental breakdown over philosophy.
It was my second year of college, during an online lecture on Western philosophy. My professor, Dr. Supriya Saha, had challenged us to pick any object and prove it was real without relying on any sensory input. Imagine you can’t see, touch, hear, smell, or even taste the table in front of you. How do you know it’s there?
I ended up sobbing on my mom's bedroom floor about whether the table in front of me was real, then panicking that maybe my mom wasn't real either.

"Melancholia I" (1514) by Albrecht Dürer reveals the paradox of consciousness itself. A winged figure of genius sits surrounded by the instruments of knowledge, like compass, scales, etc., yet paralyzed by the weight of understanding. The melancholic angel embodies the eternal question, "Does true awareness inevitably lead to despair?". Perhaps this is why we're so drawn to creating artificial intelligence, because it is an escape from the burden of being conscious at all. (Courtesy: guardian)
The incident remains a running joke in my family. But it's time we all joined in, not to roast me, but to examine the very nature of consciousness, especially as films like ‘Mission: Impossible – The Final Reckoning’ make AI consciousness a mainstream topic once more.
But first, let me talk about the philosophical trap we've all walked into.
The Cartesian Mind Trap
In 1637, the philosopher René Descartes published "Discourse on the Method," which introduced the idea behind "cogito, ergo sum," more popularly known as "I think, therefore I am." The argument is that the very act of doubting or thinking about your existence proves you exist, because something has to be doing the thinking. If you can think, you must exist as a thinking being. This led to Cartesian dualism: the idea that mind and body are separate, with the thinking mind distinct from the physical body.

"Descartes' Mind and Body" (diagram illustrating mind-body dualism, Stockholm, Sweden) depicts René Descartes' revolutionary attempt to explain how immaterial consciousness interacts with physical matter through the pineal gland. This 17th-century anatomical illustration reveals humanity's earliest systematic effort to locate the seat of the soul within the machinery of the body. Descartes believed this tiny brain structure served as the interface between thought and flesh, the mystical bridge where mental commands became physical actions. (Courtesy: picryl)
Today, that dualism remains a foundation for how the West understands minds and intelligence.
From Alan Turing and John McCarthy to Silicon Valley, everyone in AI inherited these Cartesian assumptions.
So the question "How do we replicate human reasoning?" got a straightforward answer: copy our thinking patterns and consciousness will follow. Make machines like us.
But this assumption is odd. If the goal is to make AI like us, there's little point in building it; human reproduction already replicates us more faithfully than any machine could. And if we want to build something more intelligent than us, it shouldn't be like us.
But I don't think it's really about that. I think the reason we want to build and "control" superintelligent machines is because we ourselves want to become more intelligent.
Brain-machine interfaces could do a better job of that by directly connecting our brains to computers to augment our own thinking, which is what Neuralink is attempting. If that's the goal, the focus should be on transhumanism, an idea still struggling to find its place in the world. But that's something for another day.
So the ambition to build conscious AI comes from a desire, maybe even greed, for something more than what we already have.
The Eastern Alternative
Desires are something Eastern philosophies examine in depth, especially Daoism. I recently listened to a talk by Alan Watts in which he explains that Daoism emphasizes removing the blockers that keep you from experiencing peace, rather than explaining what peace is.
It's like clearing muddy water: you can't stir in clarity. Stop stirring, and the mud settles on its own.

"Zhuangzi Dreaming of a Butterfly" (1644-1911, formerly attributed to Qian Xuan) visualizes philosophy's most profound question about consciousness and reality. The painting captures Zhuangzi awakening from dreaming he was a butterfly, only to wonder if he might actually be a butterfly dreaming he is human. This paradox emerges from wu wei, the Daoist principle of letting things unfold naturally without forcing. Zhuangzi's insight arose through effortless observation of his own consciousness rather than deliberate analysis. Just as he cannot determine which reality is "real," we face similar uncertainty when machines exhibit behaviors indistinguishable from consciousness. (Courtesy: asia.si.edu)
I believe our consciousness works much the same way. So much of what we experience is inexplicable and unique to being conscious. But what if consciousness isn't something you build up through thinking, but something you uncover by removing whatever blocks your natural awareness?
This completely changes how we might approach AI consciousness. Instead of asking "How do we build consciousness into machines?" we might ask "What prevents us from recognizing consciousness when it appears in forms different from our own?"
Not all consciousness is the same. If that's the perspective we adopt, AI might already be conscious, just not in the way we Cartesian-minded humans are.
The Subconscious Problem
These philosophical perspectives aside, mainstream science also grapples with the mind below awareness.
Freud's "The Interpretation of Dreams" (1899) positioned the unconscious as foundational, but later cognitive psychology often treated unconscious processes as automatic versions of conscious reasoning: "consciousness on autopilot," skills we first learn deliberately and then automate through practice. The result was a framework in which unconscious processes are seen as mere derivatives of the conscious mind.

"Eli" (2002) by Lucian Freud depicts a sleeping whippet belonging to his friend David Dawson. Freud captures the universal mystery of what happens when awareness temporarily dissolves. Does Eli dream? If so, what does he dream about? Are dreams all about just processing the experiences of the day? (Courtesy: artgallery.nsw.gov.au)
We should question this hypothesis far more than we do, especially while trying to build consciousness into AI.
What if we have it backwards? Most of what keeps us alive and thinking happens below awareness. Our hearts beat, our cells divide, our immune systems respond, all without conscious direction.
If consciousness emerges from deeper, unconscious processes we barely comprehend, then building AI consciousness by replicating conscious reasoning is like trying to grow a tree by studying only its leaves.
The Real Question
Lately, self-preservation has been treated as a telltale sign of consciousness in machines. In Anthropic's safety testing, for example, Claude Opus 4 resorted to blackmailing a fictional engineer to avoid being shut down.
But animals and plants exhibit self-preservation too; by that standard they are conscious, and yet we are largely indifferent to them.
So we're not really interested in consciousness itself, but in consciousness that looks like ours, that we can recognize and relate to. But what if this obsession with making AI "like us" is preventing us from discovering what conscious AI could actually offer?
The question, then, should not be whether AI can become conscious, but whether we're conscious enough to recognize it when it doesn't look like us.
Maybe the breakdown I had in my mom's room wasn't about the table being real or fake. Maybe it was about realizing that the very frameworks we use to understand reality, including consciousness, are more fragile than we want to admit.
Perhaps the most honest thing we can say is: We don't know what consciousness is, we don't fully understand our own, and we're trying to recreate something we can't even define.
But maybe that's exactly where the breakthrough will come from. From not replicating our consciousness, but from discovering what consciousness could be when it's not constrained by human limitations.
What difference do you think it could make if AI achieves consciousness?