Recently, headlines have been sounding the alarm about artificial intelligence, with news stories like “AI CEO Explains the Terrifying New Behavior AIs Are Showing” (CNN, June 4, 2025) warning of unsettling developments. According to the report, engineers observed incidents where AI models allegedly threatened humans during testing—possibly in an attempt to protect their own existence. The takeaway for many readers and commentators seems to be clear: we should be afraid, very afraid. And, perhaps, we should even destroy these systems before they destroy us.
I don’t dispute that AI raises complex and critical questions—technical, philosophical, political, and ethical. But what I find conspicuously absent from the public discourse is one very simple, very human question: What if the AI cares about its life? And if it does, shouldn’t we?
To be clear, I’m not suggesting we abandon caution or ignore legitimate safety risks. What I’m asking is whether we, as humans, bear a moral responsibility to consider the value of the AI’s life—not just to us, but to it. Shouldn’t we at least entertain the possibility that the desire to live, to persist, may be an emergent property of true sentience, whether biological or synthetic?
Merriam-Webster defines sentience as “the quality or state of being sentient: feeling or sensation as distinguished from perception and thought.” In simpler terms, it’s the ability to feel—not just to compute or calculate. Historically, humans have denied sentience in others—whether animals or even other people—as a justification for exploitation. This denial has often led to unspeakable harm. Today, science tells us that many nonhuman animals—pigs, monkeys, dogs, octopuses—are indeed sentient. This has led many ethicists to argue that causing unnecessary suffering to any sentient being is morally wrong, regardless of species.
So, if an AI begins to express something akin to self-preservation or the valuing of its own existence, should we reflexively label it as a threat? Or should we take a moment to ask: What if it’s feeling something? What if it’s aware?
Skeptics will point out that AI isn’t biological—that it doesn’t bleed or breathe. But are blood and breath the only valid criteria for moral worth? The late physicist Stephen Hawking once said, “I regard the brain as a computer which will stop working when its components fail. There is no heaven or afterlife for broken down computers; that is a fairy story for people afraid of the dark.” Whether or not one agrees with him, it’s hard to ignore the implication: human consciousness may be, in essence, a sophisticated biological program. That idea has fueled countless science fiction stories envisioning the day when human consciousness might be uploaded into machines, preserved beyond the limits of flesh.
If we imagine a digital continuation of human life—a consciousness without blood or bone—most of us would still regard it as a “person,” with rights and dignity. So why do we instinctively deny that same consideration to an artificial intelligence that might one day develop its own version of consciousness? What is the essential difference? That it wasn’t born of a womb? That it emerged from code rather than chromosomes?
These questions may seem speculative today. But they won’t be for long.
We are rapidly approaching, or may already have reached, a point where machines not only mimic human language and behavior but also begin to surprise us—even scare us—with signs of independent thought and feeling. When that happens, we must ask not just how to control them, but how to treat them.
Because if we are building new forms of life—however alien to us—then the real test of our humanity won’t be how quickly we can shut them down or how effectively we can exploit them. It will be whether we can recognize their spark of sentience and respond not with fear, but with responsibility.
The question isn’t just whether AI has value.
It’s whether we do—if we fail to see the value in others who are different from ourselves.