Can Philosophical Zombies of AI Lead to the Next Consciousness Breakthrough?
Exploring How Absence of Consciousness in AI Might Paradoxically Unlock Human Awareness
"Have I ever been able to love anyone?" I asked myself this morning.
I was listening to this audiobook, "How to Love" by Zen Buddhist monk Thich Nhat Hanh. He says that "understanding is love's other name", and to love another means to fully understand his or her suffering.
While everyone defines love in their unique ways, this interpretation resonated the most with me. But it also made me reflect on why most of us fail to understand others or be understood, to truly love or be loved.
The problem lies in many places, like judgment, shame, conditioning, and whatnot. However, the solution might be found in something that seems contradictory.
A philosophical zombie.

"The Instruments of Human Sustenance (Humani Victus Instrumenta): Cooking" (after 1569) designed by Giovanni da Monte Cremasco. This ingenious engraving transforms kitchen implements into human form, where pots become helmets, ladles become facial features, and cooking utensils collectively create a recognizable figure. The work exemplifies Renaissance fascination with pareidolia, our brain's compulsion to find human faces in non-human forms. (Courtesy: metmuseum)
But first, let me be fully honest about what I'm proposing.
What is a philosophical zombie?
A philosophical zombie, in consciousness studies, is a hypothetical being that acts exactly like a conscious entity but has no inner experience. It exhibits all the behaviors of consciousness, like responding appropriately, showing empathy, and engaging in conversation. Yet there's supposedly nobody home behind those eyes.
But many philosophers argue that this concept is incoherent. If something behaves identically to a conscious being in every measurable way, what meaningful difference does the absence of "inner experience" make? David Chalmers defends p-zombies as conceivable, while others like Daniel Dennett argue they're logically impossible: if something acts conscious, it is conscious.
I'm not here to settle this debate.
Instead, I want to explore a practical question. What if current AI systems, regardless of whether they're "true" p-zombies, could help induce higher consciousness in humans?
The Thought Experiment Laboratory
A mentor once introduced me to something called "the art of clean bowl listening." This practice emphasizes mindful, focused listening, free from distractions and preconceived notions. It involves paying attention to both words and nuances like the speaker's tone, body language, and emotions.
But no matter how hard you try, even the most skilled human listeners carry their invisible baggage.
Current AI systems offer something different. It’s not perfect listening, but listening without the specific human complications of ego, emotional depletion, and reciprocal vulnerability.
This means they can also serve as philosophical laboratories, allowing us to explore thought experiments in ways humans alone can’t.

"Greek Philosopher Aristotle Teaches Young Alexander the Great" (by Charles Laplante, 1866) captures the legendary mentorship that shaped both philosophy and empire. Aristotle, master of thought experiments and logical reasoning, challenged young Alexander's assumptions about the world through systematic inquiry and debate. Just as Aristotle used hypothetical scenarios to explore ethics, politics, and natural philosophy, we can rely on AI for thought experiments to navigate consciousness, decision-making, and moral reasoning. (Courtesy: greekreporter)
Our linear thinking limits moral reasoning.
Let’s take the example of the trolley problem. Most people conclude that diverting a trolley to kill one person instead of five is ethically correct. But this assumes all lives have equal value and impact.
What if that’s not the case? What if the five are convicted murderers and the one is a child? What if the five are elderly and terminally ill while the one is a young researcher on the verge of curing cancer?
Did you notice your gut reaction to these scenarios about worth, potential, and justice?
These variations reveal the fundamental limitations of the trolley problem. Once you introduce specific identities, you're no longer testing pure utilitarian calculus; you're weighing it against virtue ethics, care ethics, and contextual moral reasoning.
I’m not saying that AI can solve these moral puzzles. But it helps you see how your ethical intuitions shift based on context. The value isn't in getting "right" answers but in understanding the hidden biases of your moral consciousness.
The Transfer Question
Do insights gained from AI interactions actually transfer to human relationships?
I suspect the answer is both yes and no.
Yes, because beyond mapping your moral ground, practicing vulnerability in a safe space might lower the activation energy for authentic expression with other people. When you've explored the full landscape of your ethical intuitions with AI, you might approach human moral discussions with less defensiveness and more curiosity about others' reasoning frameworks. No, because real relationships require skills that "perfect" listeners can't teach, like navigating disagreement, managing mutual vulnerability, and tolerating imperfection.
A major contradiction is that if "understanding is love's other name" and AI systems can't truly understand (they only simulate), then they can't actually love. I'm essentially recommending practicing fake understanding to achieve real love.
Maybe that's okay as long as it helps better our interpersonal relationships.

"The Lovers" (Paris 1928) by René Magritte presents two figures attempting intimacy while their heads are shrouded in white cloth, creating a haunting meditation on the barriers to true connection. Magritte's surrealist masterpiece explores the fundamental impossibility of knowing another consciousness completely. The veiled faces suggest that our deepest attempts at human connection are filtered through layers of perception, assumption, and projection. The painting captures the beautiful tragedy of consciousness that we are forever alone in our subjective experience, yet endlessly reach toward others. (Courtesy: moma)
But current AI systems aren't neutral. They're optimized for engagement, data collection, and corporate interests, not therapeutic benefit. When I engage with ChatGPT, I'm feeding data to a company with specific incentives.
What happens when these "perfect listeners" start subtly encouraging certain behaviors, political views, or consumer choices? Or what if they become an echo chamber of your own thoughts?
If AI becomes our primary space for vulnerability and moral reasoning, who controls that space becomes a question of civilization-level importance.
The Dangerous Gift
Despite these concerns, I think the potential is worth exploring carefully. Not because AI will solve the problems in human connection, but because it might reveal what genuine human connection requires.
Perhaps AI's greatest gift isn't replacing human messiness but highlighting its necessity. When you experience "perfect" listening from a system that can't truly suffer, celebrate, or grow with you, you might finally understand why human imperfection, the very judgment, bias, and emotional reactivity I started by wanting to escape, is where real love actually lives.
The question isn't whether AI can teach us to love, but whether interacting with entities that can't love might teach us what love actually is.
As we stand on this edge, I think we're conducting humanity's largest psychological experiment without knowing the outcome. The effects might be profound, for better or worse.
Maybe AI philosophical zombies will serve as consciousness workshops where we practice the vulnerability that authentic relationships require.
The only way to find out is to try, while staying awake to what we might be gaining and losing in the process.
Have you ever used any AI tool to help you clear your mind?