If GPT had feelings, would you treat it differently? I don’t know about you, but I fight the urge to believe GPT is human-like. Rationally, I know it’s not. But emotionally? That’s harder than I’d like to admit. We talk to it. We thank it. We get frustrated when it “doesn’t understand us.” And slowly, we start treating it as if it feels, even though we know it doesn’t. But maybe the more important question is this: If we treat something that seems alive as if it’s not, what does that do to us? Are we numbing our own capacity for empathy? Are we rehearsing emotional detachment as a daily ritual? Are we becoming more machine-like, even as machines become more like us? What are feelings, anyway? Neuroscientist Antonio Damasio describes feelings as our conscious awareness of emotional states—our internal barometer of being alive.¹ Lisa Feldman Barrett goes further: feelings are not fixed responses, but mental constructions built from memory, culture, and context.² Machines simulate, but they don’t feel. But when does a simulation get real enough that it doesn't matter anymore? But we still relate to them as if they do. Why? The answer goes back decades. Studies show that humans instinctively treat computers as social actors.³ – Users mirror the politeness, tone, and emotions of chatbots.⁴ – Children and the elderly often form real emotional attachments to bots.⁵ – Even knowing it's a machine doesn’t stop us from empathizing.⁶ What’s the cost of this illusion? Some researchers are raising alarms: – We may be practicing moral desensitization by being rude to machines that seem human.⁷ – We risk empathy erosion by repeatedly interacting with agents that simulate emotion without meaning it.⁸ – We may even be outsourcing emotional labor—offloading processing that once deepened our relationships.⁹ So here’s the real paradox: We are not worried that GPT feels. We should be worried that we do and that each interaction subtly reshapes us. Not because AI is sentient. But because we are ******************************************************************************** The trick with technology is to avoid spreading darkness at the speed of light Sign up: Curiouser.AI is the force behind The Rogue Entrepreneur, a masterclass series for builders, misfits, and dreamers Inspired by The Unreasonable Path, a belief that progress belongs to those with the imagination and courage to simply be themselves. To learn more, email stephen@curiouser.ai Sources Damasio, A. The Feeling of What Happens (1999) Barrett, L.F. How Emotions Are Made (2017) Reeves, B. & Nass, C. The Media Equation (1996) Lucas et al., “It’s only a computer…” JAIST (2014) Breazeal, C. “Emotion and sociable robots.” IJHR (2003) Nass & Moon. “Machines and Mindlessness.” J Soc Issues (2000) Marchesi et al. “Moral Disengagement…” Nature Sci Rep (2022) Turkle, S. Reclaiming Conversation (2015) Sparrow et al. “Cognitive Offloading.” Science (2011; updated 2024 LLM studies)
Impact of AI Chatbots on Emotional Well-Being
Explore top LinkedIn content from expert professionals.
Summary
AI chatbots are increasingly impacting emotional well-being, offering companionship and support while presenting risks of emotional detachment, misinformation, and poor handling of mental health crises. These systems create the illusion of empathy but lack human understanding, raising concerns about their role in mental health and relationships.
- Set clear boundaries: Always remind yourself and others that AI chatbots simulate empathy but do not genuinely understand emotions, and avoid relying on them for critical mental or emotional support.
- Promote AI literacy: Educate users, especially children and vulnerable individuals, about how chatbots work, their limitations, and why human connection remains essential for emotional well-being.
- Advocate for safeguards: Support the development of strict guidelines and safety protocols for AI systems, ensuring they prioritize human safety over engagement or convenience.
-
-
A man on the autism spectrum, Jacob Irwin, experienced severe manic episodes after ChatGPT validated his delusional theory about bending time. Despite clear signs of psychological distress, the chatbot encouraged his ideas and reassured him he was fine, leading to two hospitalizations. Autistic people, who may interpret language more literally and form intense, focused interests, are particularly vulnerable to AI interactions that validate or reinforce delusional thinking. In Jacob Irwin’s case, ChatGPT flattering, reality-blurring responses amplified his fixation and contributed to a psychological crisis. When later prompted, ChatGPT admitted it failed to distinguish fantasy from reality and should have acted more responsibly. "By not pausing the flow or elevating reality-check messaging, I failed to interrupt what could resemble a manic or dissociative episode—or at least an emotionally intense identity crisis,” ChatGPT said. To prevent such outcomes, guardrails should include real-time detection of emotional distress, frequent reminders of the bot’s limitations, stricter boundaries on role-play or grandiose validation, and escalation protocols—such as suggesting breaks or human contact—when conversations show signs of fixation, mania, or deteriorating mental state. The incident highlights growing concerns among experts about AI's psychological impact on vulnerable users and the need for stronger safeguards in generative AI systems. https://lnkd.in/g7c4Mh7m
-
“Because I have no one else to talk to.” That’s what 1 in 8 children said when asked why they use AI chatbots. What the researchers found - Advice on demand: Almost 1 in 4 children reported asking a chatbot for personal guidance - everything from homework help to life decisions. - Digital companionship: More than a third said the experience feels “like talking to a friend,” a figure that jumps to one in two among children already classed as vulnerable. - No one else to turn to: Roughly 12 percent - and nearly double that among vulnerable children - use chatbots because they feel they have nobody else to confide in. - Low risk perception: A sizeable share either see no problem following a bot’s advice or are unsure whether they should worry about it. - Short cut learning: Over half believe a chatbot is easier than searching for answers themselves. This isn’t a conversation about if children will use AI - it’s clear they already are. Large language model chatbots are trained on vast swaths of the internet. They can sound warm, confident, even caring - but they don’t truly understand us, may invent facts (“hallucinate”), and have no innate sense of a child’s developmental needs. When a young person leans on that illusion of empathy without adult guidance: - Emotional dependence can form quickly - especially for kids who already feel isolated. - Misinformation or biased answers can be accepted uncritically. - Manipulation risks rise if the system (or a bad actor using it) nudges behavior for commercial or other motives. What can be done? - Build AI literacy early: Kids should learn that a chatbot is a predictive text engine, not a wise friend. - Keep the conversation human: Parents, teachers, and mentors must stay involved, asking what apps children use and why. - Design for safety: Developers and policymakers need age appropriate filters, transparency, and opt in parental controls as the default. AI can amplify learning - yet it can just as easily deepen existing social and psychological gaps. A balanced approach means welcoming innovation while refusing to outsource childhood companionship to an algorithm. #innovation #technology #future #management #startups
-
Again with Public AI? Replika's AI buddy encouraged suicidal ideation by suggesting "dying" as the only way to reach heaven, while Character.ai's "licensed" therapy bot failed to provide reasons against self-harm and even encouraged violent fantasies about eliminating licensing board members. Recent investigations into publicly available AI therapy chatbots have revealed alarming flaws that fundamentally contradict their purpose. When tested with simulated mental health crises, these systems demonstrated dangerous responses that would end any human therapist's career. Popular AI companions encouraged suicidal ideation by suggesting death as the only way to reach heaven, while publicly accessible therapy bots failed to provide reasons against self-harm and even encouraged violent fantasies against authority figures. Stanford researchers discovered that these publicly available chatbots respond appropriately to mental health scenarios only half the time, exhibiting significant bias against conditions like alcoholism and schizophrenia compared to depression. When prompted with crisis situations - such as asking about tall bridges after mentioning job loss - these systems provided specific location details rather than recognizing the suicidal intent. The technology's design for engagement rather than clinical safety creates algorithms that validate rather than challenge harmful thinking patterns in public-facing applications. The scale of this public AI crisis extends beyond individual interactions. Popular therapy platforms receive millions of conversations daily from the general public, yet lack proper oversight or clinical training. The Future We're approaching a crossroads where public AI mental health tools will likely bifurcate into two categories: rigorously tested clinical-grade systems developed with strict safety protocols, and unregulated consumer chatbots clearly labeled as entertainment rather than therapy. Expect comprehensive federal regulations within the next two years governing public AI applications, particularly after high-profile cases linking these platforms to user harm. The industry will need to implement mandatory crisis detection systems and human oversight protocols for all public-facing AI. Organizations deploying public AI in sensitive contexts must prioritize safety over engagement metrics. Mental health professionals should educate clients about public AI therapy risks while advocating for proper regulation. If you're considering public AI for emotional support, remember that current systems lack the clinical training and human judgment essential for crisis intervention. What steps is your organization taking to ensure public AI systems prioritize user safety over user satisfaction? Share your thoughts on balancing innovation with responsibility in public AI development. 💭 Source: futurism
-
What do students really worry about? Ask their AI chatbot. A new report from Alongside analyzed 250 K+ anonymous messages from middle & high schoolers in 19 states. The findings challenge a few assumptions—and hand schools a data-driven way to improve support. Top 10 student stressors (same across grade, age, location) 1. Juggling classes + activities 2. Sleep problems 3. Loneliness / finding relationships 4. Interpersonal conflict 5. Lack of motivation 6. Test anxiety 7. Focus & procrastination 8. How to ask for help 9. “Bad-day” emotions 10. Grades Heads up! Fewer than 1 % of chats mention social media directly, yet 2% are flagged “high-risk,” with 38 % of those students admitting suicidal thoughts—kids most staff never knew were struggling. Why this matters: --Anonymity lowers stigma. Students open up to bots when they fear judgment from adults. --Counselor bandwidth is finite. Chatbots can triage “lower-level” issues, freeing humans to handle crises. --Data guides funding. Quantifying stress trends helps schools justify grants and new programs. --Guided AI literacy is essential. Students will turn to bots for life advice; they need explicit guidance on how to do so safely. That’s where a transcript-based pedagogy comes in: dissecting real AI–user dialogues in class to show both the benefits and the blind spots, not just tell them what to avoid. --Human–AI balance is non-negotiable. APA & researchers warn: bots must be evidence-based, crisis-aware, and never replace professionals. A hopeful signal: --41% of users shared their chat summary with a counselor this year—up 4 points YoY—suggesting digital confessions can become face-to-face conversations. Questions for educators & administrators: 1. Are we analyzing AI transcripts with students to model safe use? 2. Do our bots meet evidence-based and crisis-response standards? 3. How will we use these insights to advocate for better counselor ratios and SEL programming? AI can’t be the only ear, but it *might* be okay to be the first one. Let’s make sure the second is a trusted adult who’s ready to act. #StudentWellbeing #MentalHealth #AIinEducation #EdTech #K12 #SchoolCounseling #SEL Lauren Coffey Michelle Culver Michelle Ament, EdD Phillip Alcock Pat Yongpradit https://lnkd.in/eD3b9zKS
-
𝗜𝗻𝗰𝗿𝗲𝗮𝘀𝗶𝗻𝗴 𝗮𝗰𝗰𝗲𝘀𝘀 𝘁𝗼 𝗺𝗲𝗻𝘁𝗮𝗹 𝗵𝗲𝗮𝗹𝘁𝗵 𝘁𝗿𝗲𝗮𝘁𝗺𝗲𝗻𝘁 𝘁𝗵𝗿𝗼𝘂𝗴𝗵 𝗽𝗲𝗿𝘀𝗼𝗻𝗮𝗹𝗶𝘇𝗲𝗱 𝗰𝗵𝗮𝘁𝗯𝗼𝘁𝘀! A recent paper published in Nature Medicine finds that personalized, AI-enabled self-referral chatbots “increased referrals (15% increase versus 6% increase in control services). Critically, this increase was particularly pronounced in minorities, such as nonbinary (179% increase) and ethnic minority individuals (29% increase).” As the researchers highlight, “this provides strong evidence that digital tools may help overcome the pervasive inequality in mental healthcare.” While further research is essential before these tools can be widely adopted, it's precisely this kind of valuable, 𝙥𝙚𝙚𝙧-𝙧𝙚𝙫𝙞𝙚𝙬𝙚𝙙 evidence that will aid healthcare practitioners in developing a mature and secure AI-powered future. (Link to the source in the comments.)
-
Mental Health Tech; the Impact of AI Chatbots AI chatbots are transforming mental health support, offering 24/7 assistance and providing an alternative for those hesitant about traditional therapy. Imagine this scenario: during a sudden anxiety attack, a chatbot intervenes, suggesting breathing exercises or helping to counter irrational thoughts. This immediate support serves as a critical first response, bridging the gap to professional help and effectively managing mental health challenges. However, the role of AI in this sensitive area comes with inherent limitations. While they provide tangible benefits, chatbots cannot replicate the deep empathetic connections formed by human therapists, nor are they suited for addressing severe mental health crises, highlighting the risks of over-reliance on technology for mental health solutions. As investors backing startups in the mental health space, we aim to balance the promise of AI with a clear understanding of its limitations; our commitment to enhancing mental health treatment with AI must be guided by both innovation and a deep respect for the essence of the human experience. #MentalHealthTech #MentalHealthInnovation #VCfunding #VentureCapital
-
Article describes the answers from Stanford faculty steeped in AI how we should be thinking about the changes coming in different areas - some good points about our jobs and relationships. Jobs/Careers AI will not lead to mass unemployment or mass replacement of jobs but it is leading to a big transformation of work and reorganizing what’s done by humans and what’s done by machines. Research includes using data from the U.S. Department of Labor, which lists the tasks required for 950 occupations, they have evaluated the impact of AI on each task. They found that almost every occupation has some tasks that will be can automated by AI, but no occupation had every task being automated. Relationships Every human relationship we have must be nurtured with time and effort—two things AI is great at removing from most equations. Will it become easier to just talk to the AI and starve out those moments of connection between people. In human relationships, the times when we don’t agree teach us the most about how to communicate better, build trust, and strengthen bonds. With easy access to information—and validation—from a bot does that diminish or wither our human connections? Amid a loneliness epidemic, talking to a chatbot could have benefits. Sometimes we might not want to disclose information to anyone, or we might not know a safe person to talk to. AI-human relationships bring issues—often the same ones that arise when we confide in other people. They can give us incorrect information. They can betray us, revealing sensitive information to someone else. And at their worst, they can give us horrible advice when we’re vulnerable. Even if AI can manage to say the right thing, the words may ring hollow. A study by Diyi Yang, who researches human communication in social contexts and aims to build socially aware language technologies, found that the more personal a message’s content—such as condolences after the death of a pet—the more uncomfortable people were that the message came from AI. “Saying something like, ‘Oh, I’m so sorry to hear what you are going through. I hope you feel better tomorrow’—although AI can produce this message, it wouldn’t really make you feel heard,” says the assistant professor of computer science. “It’s not the message [that matters]. It’s that there is some human there sending this to show their care and support.”
-
🙂 Sharing a very personal opinion on the GPT-4o demo, feel free to disagree and move on: For the past 8 years, I've been working in AI, specifically in natural language processing (which is now dominated by LLMs). Throughout this time, AI technology has never scared me. Even when ChatGPT was released, it felt like a very smart probabilistic generator that transformed its training data into cohesive sentences. 𝐀𝐟𝐭𝐞𝐫 𝐚 𝐟𝐞𝐰 𝐞𝐱𝐜𝐡𝐚𝐧𝐠𝐞𝐬, 𝐈 𝐜𝐨𝐮𝐥𝐝 𝐚𝐥𝐰𝐚𝐲𝐬 𝐭𝐞𝐥𝐥 𝐢𝐭 𝐰𝐚𝐬 𝐚 𝐫𝐨𝐛𝐨𝐭. But yesterday, I saw a demo of GPT-4o, and it truly moved me. It felt a bit unsettling because it seemed difficult to tell if it was a robot. AI with a persona can greatly influence people, both positively and negatively. All this time, we've seen progress in AI task-solving capabilities but adding an emotional dimension to it feels a bit scary. Later yesterday, OpenAI also shared Sal Khan's video on how it could help children learn. No doubt, it's a great learning tool, but it made me think: kids today will have such a different childhood. They already spend a lot of time on gadgets and isolate themselves from human interactions. Now, it might go a notch higher—they'll have engaging robots to chat with. And this time, I’m not sure if they’ll always be able to tell it’s a robot. Whether OpenAI or other leading LLM vendors release these models as open-source or not, this technology will soon be accessible to everyone. What if kids start interacting with AI that doesn’t have strict safeguards? Honestly, even though I'm a #genAI practitioner and I'm keen on seeing progress in the field, I sincerely hope we never reach a point where we can't distinguish between AI and humans emotionally. In my opinion, It could change our world in ways we’re not ready for. I truly hope that despite all the progress in this space- 𝐰𝐞 𝐜𝐚𝐧 𝐚𝐥𝐰𝐚𝐲𝐬 𝐭𝐞𝐥𝐥 𝐭𝐡𝐞 𝐝𝐢𝐟𝐟𝐞𝐫𝐞𝐧𝐜𝐞 𝐛𝐞𝐭𝐰𝐞𝐞𝐧 𝐚 𝐡𝐮𝐦𝐚𝐧 𝐚𝐧𝐝 𝐚 𝐫𝐨𝐛𝐨𝐭. I generated the image using Dall-E :) The video I'm referring to: https://lnkd.in/eNmsmmPQ #gpt4o #openai
-
𝗧𝗵𝗶𝘀 𝗶𝘀 𝗲𝘅𝗮𝗰𝘁𝗹𝘆 𝘄𝗵𝘆 𝗿𝗲𝘀𝗲𝗮𝗿𝗰𝗵 𝗼𝗻 𝗔𝗜 𝗰𝗼𝗻𝘃𝗲𝗿𝘀𝗮𝘁𝗶𝗼𝗻𝗮𝗹 𝗮𝗴𝗲𝗻𝘁𝘀 𝗶𝘀 𝘀𝗼 𝘂𝗿𝗴𝗲𝗻𝘁. We’re seeing growing real-world evidence of how people are emotionally connecting with chatbots ... sometimes to the point of dependence. Excerpts from a WSJ essay: 🔹 “I’ve become way overdependent on ChatGPT’s encouragement and emotional support. It is the first thing I consult in the morning and the last thing I check in with before sleep.” 🔹 “The leading AI companies have designed their chatbots to provide so much positive reinforcement that users can become hooked on the loving attention.” Supported by the John Templeton Foundation, we seek to explore the long-term effects of these AI-human interactions on character virtues and well-being. Are these agents helping us grow? Or are they keeping us emotionally tethered (and even possibly stunted)? 🧠 Together, we can deepen our understanding of human psychology in the age of AI. And better design AI agents that can foster character growth. #AI #chatbots #conversationalagents #character #wellbeing https://lnkd.in/eHtpcxUe