Is AI Consciousness Schrödinger's Cat?

Exploring the Possibility of AI Consciousness as Independent of Human Perception

Consciousness, as philosopher John Locke defined it, is “the perception of what passes in a man’s own mind.”

Although there are various other proposed definitions of consciousness, it wouldn’t be far-fetched to say that an individual experiences consciousness when they are aware of it. Having said that, I infer that you (the person reading this) are conscious and my laptop is not, because I trust you when you claim self-awareness, communicate your subjective experience of events, or simply command your attention to read this!

John Locke by John Smith, after Sir Godfrey Kneller, Bt, mezzotint, 1721 (Courtesy: National Portrait Gallery)

But why haven’t we trusted our laptops, mobile phones, LaMDA, ChatGPT, or other LLMs to be conscious even though some of them have claimed to be?

It turns out that when I trust you, I am really trusting my own ability to relate to your experience of events, whereas I have never encountered self-awareness in the being of a laptop, or any other inanimate entity.

This opens a window for a discussion of self-awareness, trust, and the introspective approaches to consciousness taken by the Indian schools of philosophy. Consciousness, then, is not just a passive state of awareness; it often involves deeper aspects of knowing, being, and experiencing reality.

However, the act of being aware of a thought is itself a new thought. By the time you cognize, recognize, and then perceive the original thought, a new thought has already emerged.

This leads us to the Higher-Order Thought (HOT) theory of consciousness. The theory posits that consciousness arises when a mental state becomes the object of another mental state. Essentially, a thought becomes conscious when there is a higher-order thought about that thought.

For example, you are conscious of seeing a tree not only because you see the tree but also because you are aware (on a higher order) that you are seeing the tree. This theory tries to explain the self-reflective nature of human consciousness.

One might wonder then, can a machine or AI be taught to be self-aware and become conscious?

Consciousness is more than self-awareness, as many other Theories of Consciousness suggest.

Alongside that, some other questions to consider include:

  • Can a machine be aware of its ‘being’ autonomously at random moments in time?

  • Can a machine ever experience emotions and have its emotional states impact itself and the environment around it unintentionally?

  • Can a machine ever experience altered states of consciousness?

  • Can the consciousness of a machine be born, evolve, and die?

  • Can machines have experiences, remember them, and feel subjective and evolving emotional states through the experience of memories themselves?

  • Can a machine have intentions autonomously?

Most of these questions fall into the category of Qualia (defined as instances of subjective, conscious experience) in the Philosophy of Mind. I’ll dive deeper into that some other day.

Whether a machine can have subjective conscious experiences will take a long time to understand. But another important question is:

  • Can a machine prove its self-awareness of its being in a manner that can be trusted?

While communicating experiences might become relatively easy with progress in Natural Language Processing, whether machines can be trusted is a far more difficult question to answer.

The problems of hallucinations and other inaccuracies in LLMs already create enough safety and trust issues.

So wouldn’t it be fair to say that the consciousness in machines depends on our consciousness of their consciousness?

Wouldn’t that make consciousness in machines a paradox? Is AI Consciousness Schrödinger's cat?

Schrödinger's cat thought experiment (no cats were harmed): In this analogy, a cat is placed in a sealed box with a radioactive atom, a Geiger counter, a hammer, and a vial of poison. If the atom decays, the Geiger counter triggers the hammer to break the vial, killing the cat. According to quantum mechanics, the radioactive atom is in a superposition of decayed and not decayed states. Consequently, the cat would be in a superposition of alive and dead states until someone opens the box and observes the outcome. Let’s explore the quantum approaches to consciousness some other day!
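For readers who like to see the formalism, here is a minimal sketch of the superposition the thought experiment describes, in standard quantum notation (an idealized two-outcome picture that ignores decoherence; it is an illustration, not part of the original analogy):

\[
\lvert \psi \rangle \;=\; \frac{1}{\sqrt{2}}\left( \lvert \text{decayed} \rangle \, \lvert \text{dead} \rangle \;+\; \lvert \text{not decayed} \rangle \, \lvert \text{alive} \rangle \right)
\]

Only when the box is opened does observation resolve this superposition into one definite outcome, which is exactly the role the article suggests human observers might play for machine consciousness.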

And if consciousness in AI cannot be independent of human consciousness of it, then won’t that make humans the medium of consciousness in AI?

What happens when the subject becomes the object? What happens when the master is the slave?

I’ll rest it here with a thought-provoking quote:

“The outcome is the same as the beginning only because the beginning is an end.”

Georg Wilhelm Friedrich Hegel, Phenomenology of Spirit

What are your thoughts on the independence of machine or AI consciousness?
