How the Rise of AI Is Killing Moral Relativism

Exploring the decline of moral relativism, how technofeudalism is accelerating it, and why no one is talking about it

Recently, I was listening to "Imagine" by John Lennon. It’s a song that prompts the listener to pause for a moment and picture a peaceful world free from ideas of heaven, hell, nationality, and difference.

As the lyrics washed over me, I caught myself whispering, "I wish."

This simple moment of longing made me reflect on our increasingly polarized world, where tolerance seems to be in short supply. Wars are raging, ethnic cleansing is on the rise, religious intolerance is growing, and colonization is coming back into fashion. But why? And more importantly, how is AI accelerating it?

I think the answer lies in the death of moral relativism.

What is Moral Relativism and Why is it Dying?

Moral relativism can be defined as the view that moral judgments are true or false only relative to particular cultural, historical, or personal standpoints, with no single perspective holding absolute privilege. It has been a cornerstone of pluralistic societies, allowing us to acknowledge that different cultures and individuals might have different but equally valid moral frameworks.

This philosophical position has deep roots. From Protagoras's ancient Greek maxim that "man is the measure of all things" to Friedrich Nietzsche's perspectivism and the anthropological insights of Ruth Benedict who argued that morality is a cultural construct, relativism has served as a counterweight to dogmatism throughout history.

But today, we're witnessing its collapse.

“A Hindoo Widow Burning Herself with the Corpse of her Husband” by Frederic Shoberl (1820), from “The World in Miniature: Hindoostan”. This colonial-era illustration depicts the practice of sati (widow immolation) through a Western lens. Looking at this image today raises profound questions about cultural relativism and moral judgment across time and civilizations. While we rightly condemn such practices now, the image reminds us how cultural contexts shape what societies consider acceptable or even virtuous. (Courtesy: press.rebus.community)

The root of most of our modern conflicts stems from a presumption of supremacy: the conviction that "I am right, and you are wrong." We've become so detached from our inner selves that our identities are increasingly defined by external markers: the things we own, the work we do, the ideologies we champion, and so on.

Fewer people seem to believe in the inherent value of every human being regardless of their choices or beliefs. Instead, we're so obsessed with “doing” that we've divorced ourselves from the doer itself.

And this obsession with “doing more” has shaped our world since the Industrial Revolution or even before.

The Manufactured Wants and Technofeudalism

We often think supply follows demand in a straightforward way. People wanted to travel faster, so someone built cars. People wanted to communicate across distances, so someone invented telephones. This economic narrative makes sense on the surface.

But our present reality works in reverse too. The desires themselves are manufactured by those who stand to profit from their fulfillment.

The creator of a Porsche doesn't just build a car. They craft an entire identity ecosystem that makes the Toyota driver feel inadequate without even realizing why. The social media platform doesn't just provide connection. It engineers dopamine-driven engagement patterns that reshape how we value human interaction. The AI art generator doesn't just produce images. It redefines creativity itself in terms the algorithm can measure and replicate.

This accelerates under what economist Yanis Varoufakis calls "technofeudalism." Tech platforms have become the new landowners of our attention economy. We have turned from citizens to users, from consumers to products, from moral agents to data points.

The lords of these digital manors control not just our economic opportunities but also our moral discourse. When conversations about right and wrong happen primarily on platforms owned by a handful of corporations or billionaires, genuine moral relativism goes out of the window.

"The Ceremony of Feudal Service" (c. 9th-10th century), illustrated in 1890 by an unknown artist. This image depicts the commendation ceremony that formalized the relationship between lords and vassals in medieval Europe. What fascinates me about feudalism is how completely it structured society through mutual obligation—the vassal pledging military service while kneeling before his lord, who in turn promised protection and land. While we've abandoned this explicit hierarchy, I wonder if we've truly escaped these power dynamics or merely disguised them. Today's employment contracts, citizenship obligations, and even digital terms of service reflect similar exchanges of freedom for security and resources. (Courtesy: heritage-print)

Think of how Facebook's content moderation policies establish de facto global standards for acceptable speech. Or how Google's search rankings determine which moral arguments most people encounter first. Or how Twitter's trending algorithms decide which outrages capture public attention. These aren't neutral mechanisms but moral frameworks disguised as technical processes.

This shift directly impacts moral relativism. The hallmark of relativistic thinking is the ability to hold multiple frameworks as simultaneously valid, but algorithmic sorting systems inherently rank and prioritize, creating implicit hierarchies of thought that undermine the relativistic premise.

Now, all this is primarily about consumption.

What happens when what we produce is also controlled? What happens when we rely on AI systems to represent us and produce ideas?

The AI Amplification Effect

As AI systems proliferate, they are also homogenizing the ideas and ideologies of their creators. LLMs often show similar patterns in their responses: helpful tones, comparable reasoning, shared blind spots, and similar handling of controversial topics. This homogenization happens because they learn from similar internet data, use comparable alignment techniques focused on helpfulness and safety, and are optimized for the same industry benchmarks.

"Eight Heads" by M.C. Escher, 1922. This woodcut, created during Escher's student days at the School for Architecture and Decorative Arts in Haarlem, marks his first exploration of the regular division of a plane. Escher discovered his signature approach to visual paradox and spatial manipulation at an early age. The interlocking profiles of the faces are a great metaphor for the homogenization of ideologies through LLMs, each profile defining and being defined by those around it, creating a seemingly infinite pattern where individual distinction blurs into collective uniformity. (Courtesy: nga.gov)

The homogenization extends beyond text generation. AI video or diffusion models trained on specific artists can replicate their styles without permission or payment.

When GPT-4o or Midjourney generates images "in the style of", say, Frida Kahlo or Leonardo da Vinci, it flattens distinct aesthetic and cultural perspectives into commodified, reproducible assets.

What’s more troublesome is that an individual human might hold contradictory moral positions or change their mind over time. AI systems, however, encode particular moral frameworks more consistently and deploy them at a massive scale, creating de facto moral standards that appear neutral but are anything but.

But if it is such a big problem, then why is no one talking about it?

Breaking the Cage

The problem is not whether technofeudalists are evil, or whether everyone who plays the game is complicit. It comes down to how much anybody cares.

Why isn't ChatGPT's encroachment upon Hayao Miyazaki's work everyone's problem? Just because training on a particular style is legal doesn't mean it's ethical.

Most people are too busy with daily survival to think about abstract philosophical shifts. Those with the privilege and resources to address these issues often benefit from the current system, creating little incentive for change. As the mathematician Blaise Pascal said centuries ago, "All of humanity's problems stem from man's inability to sit quietly in a room alone", a challenge even more relevant today.

It would be remiss, however, to present moral relativism without its own contradictions. The paradox of tolerance, as philosopher Karl Popper noted, raises the question of whether a relativistic society must tolerate intolerance, potentially leading to its own destruction.

Those who understand this paradox are not fighting to change the world, but fighting to change themselves. That doesn't mean we should give up.

The world of John Lennon's "Imagine" isn't one without moral frameworks; it's one where no single framework crushes human connection and dignity. It is a world where every single person, like Lennon, takes a moment to reflect on their beliefs and the “Why?” behind them.

As AI reshapes how we think and talk to each other, keeping space for different truths might be our most important job.

And most importantly, the next time we hear ourselves whispering, “I wish”, may we reflect on what we can do on our part to turn it into “It is”.

The first step can be as simple as just asking yourself, “Why did I read this article so far? What can I do right now?”
