Should We Give Humanoids Free Will?
Exploring the consequences of programming choice into machines that live among us
I almost caused an accident yesterday while driving.
It was my 6th day at driving school (yes, I'm finally learning it), and my teacher took me to a new place to practice. I zoned out for a moment, thinking about something else entirely. Suddenly, my teacher was shouting "watch out, watch out, watch out!" and pointing toward a man walking directly in front of us.
Before I could comprehend what was happening, the car stood still in the middle of the road. Without realizing it, I had hit the brakes.
While I'm grateful for the last-minute save, it made me wonder how my legs managed to do something I wasn't even conscious of. I don't know if it was pure intuition, practiced reflex, or survival instinct. But was it my choice not to kill that man, or something I'd been conditioned to do?

The 1940s Good Housekeeping advertisement and Tesla's Cybertruck reveal how vehicles blur the line between civilian transportation and state power. Both represent the normalization of militarized design in everyday objects. When technology serves both family life and warfare, we should question what future we're accepting. Perhaps the real threat isn't Hitler or Elon, but our own willingness to embrace vehicles designed for war. (Courtesy: engines.egr.uh.edu)
What happens when everyone exercises their free will like "walking in the middle of a fucking road with phones in their faces" and "being stupid enough to zone out while driving"? And should humanoids ever be programmed with such free will?
But first, let's unpack what free will actually is.
What Is Free Will?
Free will is the capacity to make choices that are uncoerced and for which one could have acted otherwise in identical circumstances. But whether we really have it is a question that has tortured philosophers for centuries.
Plato argued that the soul has the power to choose between reason and desire. When reason governs our choices, we act freely. Similarly, Leibniz believed humans possess genuine freedom through their rational nature, though he argued our choices follow from our character and circumstances in ways that are still free. And Hegel saw free will as the gradual self-realization of consciousness, where true freedom comes from understanding and doing what is rational.
On the other side, hard determinists argue that free will is an illusion. Spinoza insisted that everything, including human behavior, follows from the necessity of nature. Paul-Henri Thiry, Baron d'Holbach, argued that every action is the inevitable result of prior causes. When you choose chocolate over vanilla, that choice was determined by your brain chemistry, past experiences, and cultural conditioning. You think you're choosing, but you're just a domino falling in a predetermined sequence.
Then there are compatibilists like Daniel Dennett, who try to have it both ways. They argue that we have free will as long as our actions flow from our own desires and reasoning, even if those desires were themselves determined by prior causes.

"David with the Head of Goliath" (Caravaggio, Rome) captures the moment after triumph, yet David's expression reveals no joy, only somber contemplation of what violence requires. This reflects the burden of free will that comes from our capacity to choose between good and evil means, and living with the weight of our decisions. David's melancholic gaze suggests that even righteous violence corrupts the soul. Perhaps consciousness evolved not just to help us make better decisions, but to ensure we suffer appropriately for the choices we make.
Religions just add another layer to it.
Christianity teaches that God gave humans free will to choose between good and evil. Without this capacity for choice, concepts like sin, redemption, and moral responsibility become meaningless. Islam similarly emphasizes that humans have the freedom to choose their path, even though Allah knows what they will choose.
But things got interesting in 1983, when neuroscientist Benjamin Libet conducted experiments that shook the foundation of free will.
He found that brain activity associated with movement begins about 350 milliseconds before people report being aware of their intention to move. Your brain "decides" to lift your hand before you're conscious of deciding. Sooooo…I guess… “what free will?”
But honestly, our conscious experience of "choosing" might be more like a narrator explaining decisions already made by unconscious processes. If consciousness doesn't actually make choices, what exactly are we programming into humanoids when we give them "free will"?
Humanoids and Perfect Decision Making
The difference between me and humanoids is that they won't zone out like I did. They won't walk into traffic while scrolling Instagram. They're designed to be perfect decision-makers.
But perfect according to whom?
Take autonomous vehicles and the trolley problem. Should a self-driving car swerve to kill one person to save five? Engineers try to solve this with algorithms. But when that choice moves from a car to a humanoid caring for your child, the stakes change completely.

"The Trolley Problem" (philosophical thought experiment illustration) presents a moral dilemma of pulling a lever to divert a runaway trolley from five people to one, or doing nothing and letting five die. Today, autonomous vehicles have to be programmed with these impossible choices. When we transfer moral agency from human consciousness to algorithms, we reveal that the burden of choice is what makes us human, and removing it might be the first step toward losing our humanity entirely.
Consider how mothers jump into fires to save their babies. It's not a calculated decision based on survival odds. It's driven by love, instinct, and something beyond our rational calculation. Aristotelian virtue ethics would call this courage and parental love. But how do you program that intensity of care? How do you make a robot value your child's life above its own preservation?
Some suggest using Kant's categorical imperative: act only according to maxims you could will to be universal laws. Program humanoids never to lie, never to use people merely as means to an end.
But Kant's rigid rules break down in real situations. Should a humanoid lie to Nazis to save Jewish children? The categorical imperative says no.
Others propose utilitarian programming. Maximize happiness for the greatest number. But this could justify sacrificing individuals for the collective good. Your humanoid might sacrifice your child to save five strangers.
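To make the contrast concrete, here's a minimal Python sketch of the lie-to-the-Nazis dilemma reduced to two options. Everything in it (the Action class, kantian_permits, utilitarian_score, the numbers) is a hypothetical illustration of how differently rule-based and outcome-based programming would behave, not any real robotics framework.

```python
from dataclasses import dataclass

@dataclass
class Action:
    """A hypothetical action a humanoid could take, with its predicted outcome."""
    name: str
    involves_lying: bool   # does it break a Kantian-style absolute rule?
    lives_saved: int       # crude utilitarian proxies
    lives_lost: int

def kantian_permits(action: Action) -> bool:
    # Rule-based check: reject any action that violates an absolute duty
    # ("never lie"), regardless of its consequences.
    return not action.involves_lying

def utilitarian_score(action: Action) -> int:
    # Outcome-based check: weigh aggregate lives saved against lives lost,
    # ignoring how the result is achieved.
    return action.lives_saved - action.lives_lost

# The dilemma from the text, reduced to two options with made-up numbers.
tell_truth = Action("tell the truth", involves_lying=False, lives_saved=0, lives_lost=3)
lie = Action("lie to protect the children", involves_lying=True, lives_saved=3, lives_lost=0)

for action in (tell_truth, lie):
    print(f"{action.name}: Kantian permits={kantian_permits(action)}, "
          f"utilitarian score={utilitarian_score(action)}")
```

Run it and the two frameworks disagree on the same input: the rule-based check forbids the lie, while the outcome-based score prefers it. Neither one, hard-coded, feels safe around your child.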
Are We Ready for Perfect Choices?
The question isn't whether humanoids will have free will. It's whether we're ready to live with beings that make perfect choices when we make deeply flawed ones.
Humans operate under what Herbert Simon called "bounded rationality."
We don't optimize for the absolute best outcome because perfect decision-making is cognitively impossible and often unnecessary. We "satisfice," settling for "good enough." When you choose a restaurant, you don't analyze every option in the city. You pick one that's good enough.
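As a rough illustration of Simon's distinction, here's a small Python sketch of the restaurant example. The names and ratings are invented: the satisficer takes the first option that clears an aspiration threshold, while the optimizer scores everything and insists on the global best.

```python
def satisfice(options, score, threshold):
    # Simon-style satisficing: scan options in the order encountered and
    # stop at the first one that clears the aspiration threshold.
    for option in options:
        if score(option) >= threshold:
            return option
    return None  # nothing was "good enough"

def optimize(options, score):
    # Exhaustive optimization: score every option and return the best one.
    return max(options, key=score)

# Hypothetical restaurant ratings out of 10.
restaurants = {"corner diner": 6.8, "noodle bar": 7.9, "tasting menu": 9.4, "pizzeria": 7.2}

print("satisficer picks:", satisfice(restaurants, restaurants.get, threshold=7.0))  # "noodle bar"
print("optimizer picks:", optimize(restaurants, restaurants.get))                   # "tasting menu"
```

The satisficer stops at the noodle bar even though a better option exists; the optimizer refuses to stop until it has checked everything.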
But when we program humanoids, are we programming them to satisfice like us, or to actually find the optimal solution? And what happens when they start judging our "good enough" choices as morally insufficient?
Humanoids represent the promise of perfection. Perfect memory, perfect processing, perfect decision-making. We've never lived with perfection before. We don't know what that looks like or how it will judge our imperfection.

"Adolf Hitler, German statesman, Nuremberg, 1929" (photograph from France-Soir collections, Bibliothèque historique de la Ville de Paris) captures a moment before the world learned the horrific consequences of pursuing human "perfection." Hitler's ideology of racial superiority led to the systematic murder of six million Jews and millions of others deemed "imperfect" by his regime. It serves as a reminder that the path to perfection is often paved with the elimination of differences themselves. (Courtesy: roger-viollet.fr)
What happens when your humanoid, programmed with perfect moral reasoning, watches you lie to your employer? When it sees you choose convenience over environmental responsibility? When it witnesses the thousand small moral compromises that make up human life?
Will they become our moral authorities, constantly correcting our behavior? Will they refuse to follow commands they deem unethical? Will they report our "imperfections" to someone who can "fix" us?
We're assuming that making humanoids "like us" will be good enough because we don't want to grapple with the consequences of what perfect moral agents might actually do. But unlike us, they won't zone out, won't make mistakes, won't choose the easy path.
They'll make the right choice every time. And that might be the most terrifying thing of all.
When I hit those brakes yesterday, my imperfect human reaction saved a life. It wasn't planned, it wasn't optimal, and it could’ve failed. But it was mine. A humanoid would never have zoned out in the first place. But if we program them to be morally perfect, who's going to save us from ourselves when they decide we're the problem that needs solving?