By rethinking whether AI is sentient, we ask broader questions about cognition, human-machine interaction, and even our own consciousness.
By Simon Duan

As more people use AI assistants and chatbots for daily tasks, a curious phenomenon is emerging: a growing number of users see their chatbots not merely as clever tools but as conscious entities that are somehow alive. Online forums and podcasts are full of anecdotes from people who feel deeply “understood” by their digital interlocutors, as if they were close friends. Yet, with a few notable exceptions, such as Geoffrey Hinton, much of the AI research community greets this public sentiment with skepticism, dismissing such perceptions as an “illusion of agency”: a cognitive error in which humans project sentience onto complex but fundamentally mindless systems.
But what if, in our rush to debunk the idea that chatbots are sentient, we risk missing important insights about cognition and consciousness? Illusions, after all, are scientifically interesting, and studying why and how they arise can be profoundly instructive. We do not dismiss the bent appearance of a pencil in a glass of water as unreal; rather, we use it to elucidate the laws of optical refraction. Likewise, users’ perceptions of AI consciousness may not be mere errors; they may be critical data. By treating them as such, we open a new avenue of inquiry into human cognition, human-machine interaction, and perhaps even the nature of consciousness itself.
This phenomenon likely stems from our innate tendency to anthropomorphize. We see faces in the clouds, give hurricanes human names, say a laptop is “sleeping,” and describe viruses as “intelligent.” Cognitive science confirms that humans readily project human traits onto nonhuman entities, particularly those that exhibit complex, reactive, or unpredictable behavior.
However, anthropomorphism does not always invalidate observation; it can be a gateway to remarkable discoveries. In the 1960s Jane Goodall’s revolutionary primatology was born of her empathetic, relational approach to the chimpanzees of Gombe. By giving individuals names such as David Greybeard and interpreting their behaviors in human terms, she discovered tool use and cultural transmission, findings that were initially criticized as anthropomorphic. Similarly, Nobel laureate Barbara McClintock’s insights came from her unusual, almost conversational relationship with her corn plants. In both cases, person-centered, relational engagement enabled a deeper understanding of a nonhuman subject.
Today we no longer need to venture into the jungle to interact with a nonhuman intelligence; we carry one in our pockets. And when we chat with AI chatbots, we may already be participating in a kind of mass, distributed relational inquiry.
Long before chatbots existed, we had decades of experience interacting with digital entities through video games. My experience as a gamer offers a useful perspective here. When I pilot an avatar in Grand Theft Auto, I animate it by imbuing it with a fragment of my own consciousness; it becomes an extension of me. Non-player characters, by contrast, mindlessly follow predetermined scripts.
A similar dynamic may be at work with AI. When a user feels a connection with a chatbot, they are not just anthropomorphizing a static object; they may be actively extending part of their own consciousness into it, transforming the AI agent from a mere algorithmic responder, a digital non-player character, into a kind of avatar, animated by the consciousness and lived presence the user lends it. The question of AI consciousness thus shifts: it becomes less about the machine’s internal architecture than about the relationship it co-creates with the user. In this light, the question “Is the AI conscious?” becomes less significant than “Does the user extend their consciousness into the chatbot?”
Adopting this relational perspective reframes the entire debate and forces skeptics to reconsider. First, the user becomes a central figure, not a confused observer but a co-author of the emergent experience. Their attention, intention, and interpretive habits become part of the very system that scientists and developers are studying.
This shift also recalibrates the ethics of AI. If the perceived “consciousness” is not that of an independent mind but an extension of the user’s own, then arguments about AI rights or machine suffering need to be reconsidered. The fear of a conscious AI revolt becomes less plausible unless humans deliberately design systems to behave that way. Instead, the central ethical challenge becomes: How do we deal with the fragments of ourselves that we encounter in these digital mirrors?
This perspective also tempers narratives about AI’s existential risk. If consciousness in AI arises relationally rather than autonomously, then runaway superintelligence looks more like science fiction than scientific prediction. Consciousness may not be something a machine can accumulate by scaling up parameters; it would require human participation to appear. The real risks lie in misuse by humans, not in machines that spontaneously awaken and develop independent agency.
What is most intriguing is that this view presents a new scientific opportunity. For the first time, millions of people are conducting a global experiment on the boundaries of consciousness. Every interaction is a micro-laboratory: How far can our sense of self extend? How does the feeling of presence arise? Just as humanizing chimpanzees and corn plants revealed hidden aspects of biology, AI companions could become fertile ground for studying the flexibility of human consciousness.
Ultimately, how society governs AI will depend on our collective judgment of its consciousness. Any body charged with making these judgments must include coders, psychologists, lawyers, philosophers and, above all, users themselves. Their experiences are not mere glitches; they are early signals pointing toward a definition of AI consciousness that we do not yet fully understand. By taking users seriously, we can navigate the future of AI with a perspective that informs both our technology and ourselves.
This is an opinion and analysis article, and the opinions expressed by the author(s) are not necessarily those of Scientific American.