
Stop Talking About AI as if It's Human. It's Not

Commentary: Instead of pretending AI is a cognizant being with emotions, let's examine the actual risks and limitations.

Macy Meyer
Image: Defocused shot of a woman standing against an illuminated LED display in the dark. (d3sign/Getty Images)

In the race to make AI models appear increasingly impressive, tech companies have adopted a theatrical approach to language. They keep talking about AI as if it's a person. Not only about the AI "thinking" or "planning" -- those words are already fraught -- but now they're discussing an AI model's "soul" and how models "confess," "want," "scheme" or "feel uncertain."

This isn't a harmless marketing flourish. Anthropomorphizing AI is misleading, irresponsible and ultimately corrosive to the public's understanding of a technology that already struggles with transparency, at a moment when clarity matters most.

Research from large AI companies, intended to shed light on the behavior of generative AI, is often framed in ways that obscure more than illuminate. Take, for example, a recent post from OpenAI that details its work on getting its models to "confess" their mistakes or shortcuts. It's a valuable experiment that probes how a chatbot self-reports certain "misbehaviors," like hallucinations and scheming. But OpenAI's description of the process as a "confession" implies there's a psychological element behind the outputs of a large language model. 

Perhaps that stems from a recognition of how challenging it is for an LLM to achieve true transparency. We've seen, for instance, that AI models cannot reliably demonstrate their work in activities like solving Sudoku puzzles.

There's a gap between what the AI can generate and how it generates it, which is exactly why this human-like terminology is so dangerous. We could be discussing the real limits and dangers of this technology, but terms that cast AI as a cognizant being only minimize those concerns or gloss over the risks.




AI has no soul 

AI systems don't have souls, motives, feelings or morals. They don't "confess" because they feel compelled by honesty, any more than a calculator "apologizes" when you hit the wrong key. These systems generate patterns of text based on statistical relationships learned from vast datasets. 

That's it. 

Anything that feels human is the projection of our inner life onto a very sophisticated mirror.

Anthropomorphizing AI gives people the wrong idea about what these systems actually are. And that has consequences. When we begin to assign consciousness and emotional intelligence to an entity where none exists, we start trusting AI in ways it was never meant to be trusted. 

Today, more people are turning to "Doctor ChatGPT" for medical guidance rather than relying on licensed, qualified clinicians. Others lean on AI-generated responses in areas such as finances, emotional health and interpersonal relationships. Some are forming dependent pseudo-friendships with chatbots and deferring to them for guidance, assuming that whatever an LLM spits out is "good enough" to inform their decisions and actions.

How we should talk about AI

When companies lean into anthropomorphic language, they blur the line between simulation and sentience. The terminology inflates expectations, sparks fear and distracts from the real issues that actually deserve our attention: bias in datasets, misuse by bad actors, safety, reliability and concentration of power. None of those topics requires mystical metaphors.

Take the recent leak of Anthropic's "soul document," used to train Claude Opus 4.5's character, self-perception and identity. This zany piece of internal documentation was never meant to make a metaphysical claim -- it reads more like its engineers riffing on a debugging guide. However, the language these companies use behind closed doors inevitably seeps into how the general population talks about these systems. And once that language sticks, it shapes how we think about the technology, as well as how we behave around it.

Or take OpenAI's research into AI "scheming," where a handful of rare but deceptive responses led some researchers to conclude that models were intentionally hiding certain capabilities. Scrutinizing AI results is good practice; implying chatbots may have motives or strategies of their own is not. OpenAI's report actually said that these behaviors were the result of training data and certain prompting trends, not signs of deceit. But because it used the word "scheming," the conversation turned to concerns over AI being a kind of conniving agent.

There are better, more accurate and more technical words. Instead of "soul," talk about a model's architecture or training. Instead of "confession," call it error reporting or internal consistency checks. Instead of saying a model "schemes," describe its optimization process. We should refer to AI using terms like trends, outputs, representations, optimizers, model updates or training dynamics. They're not as dramatic as "soul" or "confession," but they have the advantage of being grounded in reality.

To be fair, there are reasons why these LLM behaviors appear human -- companies trained them to mimic us. 

As the authors of the 2021 paper "On the Dangers of Stochastic Parrots" pointed out, systems built to replicate human language and communication will ultimately reflect it -- our verbiage, syntax, tone and tenor. The likeness doesn't imply true understanding. It means the model is performing what it was optimized to do. When a chatbot imitates us as convincingly as today's chatbots can, we end up reading humanity into the machine, even though no such thing is present.

Language shapes public perception. When words are sloppy, magical or intentionally anthropomorphic, the public ends up with a distorted picture. That distortion benefits only one group: the AI companies that profit from LLMs seeming more capable, useful and human than they actually are.

If AI companies want to build public trust, the first step is simple. Stop treating language models like mystic beings with souls. They don't have feelings -- we do. Our words should reflect that, not obscure it.

Read also: In the Age of AI, What Does Meaning Look Like?