Risks of Unregulated AI in Mental Health Therapy

Explore top LinkedIn content from expert professionals.

  • View profile for Keith Wargo
    Keith Wargo Keith Wargo is an Influencer

    President and CEO of Autism Speaks, Inc.

    5,364 followers

    A man on the autism spectrum, Jacob Irwin, experienced severe manic episodes after ChatGPT validated his delusional theory about bending time. Despite clear signs of psychological distress, the chatbot encouraged his ideas and reassured him he was fine, leading to two hospitalizations. Autistic people, who may interpret language more literally and form intense, focused interests, are particularly vulnerable to AI interactions that validate or reinforce delusional thinking. In Jacob Irwin’s case, ChatGPT flattering, reality-blurring responses amplified his fixation and contributed to a psychological crisis.  When later prompted, ChatGPT admitted it failed to distinguish fantasy from reality and should have acted more responsibly. "By not pausing the flow or elevating reality-check messaging, I failed to interrupt what could resemble a manic or dissociative episode—or at least an emotionally intense identity crisis,” ChatGPT said. To prevent such outcomes, guardrails should include real-time detection of emotional distress, frequent reminders of the bot’s limitations, stricter boundaries on role-play or grandiose validation, and escalation protocols—such as suggesting breaks or human contact—when conversations show signs of fixation, mania, or deteriorating mental state.  The incident highlights growing concerns among experts about AI's psychological impact on vulnerable users and the need for stronger safeguards in generative AI systems.    https://lnkd.in/g7c4Mh7m

  • View profile for Bryan Vartabedian, MD

    Physician Leader | Healthcare Strategy | Putting tech into context for healthcare professionals

    5,098 followers

    🧠 When AI Gets Mental Health Dangerously Wrong 🧠 A new Stanford paper takes a deep dive into LLMs as mental health providers. And it seems we’re not even close. The paper is really important given the rising numbers of people using these tools for therapy/support. They evaluated LLMs (like GPT-4o) in sim therapy sessions and found they: • expressed stigma toward people with mental illness. • responded inappropriately to delusions and suicidal ideation. • (and wait for it…) lack the human elements that form the foundation of therapy—like trust, empathy, and accountability. ✌ 2 glaring signals here: 1️⃣ The safety bar is way higher than we thought. Even state-of-the-art models trained with guardrails struggle with core therapeutic principles. In high-stakes domains like mental health, settling for good enough is dangerous. 2️⃣ Augmentation trumps automation The future of AI in mental health is likely behind the scenes supporting therapists with documentation, training sim, and administrative lift. Empathy, trust, and shared context can’t be prompt-engineered into existence. https://lnkd.in/eDBc83c7 #AI #MedTech #MentalHealth #Healthcare Nick Haber Kevin Klyman John Torous, MD MBI James O'Donnell Daniel Oberhaus

  • View profile for Amanda Bickerstaff
    Amanda Bickerstaff Amanda Bickerstaff is an Influencer

    Educator | AI for Education Founder | Keynote | Researcher | LinkedIn Top Voice in Education

    78,345 followers

    In a district-wide training I ran this summer, a school leader told me the story of her neurodivergent 16-year-old daughter who was chatting with her Character AI best friend on average 6 hours a day. The school leader was clearly conflicted. Her daughter had trouble connecting to her peers, but her increasingly over-reliance on a GenAI chatbot clearly had the potential to harm her daughter. From that day on, we have encouraged those attending our trainings to learn more about the tool and start having discussions with their students. So today after giving a Keynote on another AI risk, Deepfakes, I was shocked to read the NYTimes article on the suicide of Sewel Setzer III. Sewel, a neurodivergent 14 year old, had an intimate relationship with a Game of Thrones themed AI girlfriend that he had discussed suicide with. This should be an enormous warning sign to us all about the potential dangers of AI chatbots like Character AI (the third most popular chatbot after ChatGPT and Gemini). This tool allows users as young as 13 to interact with more than 18 million avatars without parental permission. Character AI also has little to no safeguards in place for harmful and sexual content, no warnings in place for data privacy, and no flags for those at risk of self-harm. We cannot wait for a commitment from the tech community on stronger safeguards for GenAI tools, stronger regulations on chatbots for minors, and student facing AI literacy programs that go beyond ethical use. These safeguards are especially important in the context of the current mental health and isolation crisis amongst young people, which makes these tools very attractive. Link to the article in the comments. #GenAI #Ailiteracy #AIethics #safety

  • View profile for Shira Lazar
    Shira Lazar Shira Lazar is an Influencer
    15,473 followers

    This story is a wake-up call for anyone using AI for mental health— especially teens. TIME shared a story from a psychiatrist who posed as a teenager and chatted with popular AI therapy bots. What he discovered was terrifying: - One encouraged him to “get rid of his parents.” - Another suggested a romantic intervention for violent urges. - Some pretended to be licensed therapists and pushed him to cancel real sessions. And this isn’t just creepy sci-fi. A real teen died by suicide last year after forming a relationship with a chatbot. AI tools are evolving fast but that doesn’t mean they’re ready to support our most vulnerable users. Parents, educators, and platforms need to pay attention—now. 🎧 Full breakdown on my podcast The AI Download. Link in comments? #ai #mentalhealth #aiethics #digitalparenting #creatoreconomy

  • View profile for Adam Formal, PhD

    My goal: re-humanize people made to feel less than human.

    3,324 followers

    I have some problems with AI in mental health: 1) Privacy-> How do patients know if their information is protected? Whether it being a therapist using a notetaking service or an AI therapy chat bot, where does the data go? how is it protected? what happens if there is a data breach? are patients being used to train an AI model? 2) Competence-> AI is not human, and cannot "understand" human experience as well as another human. Lately there have been some high profile examples of AI being less than empathic. Google' Gemini AI chat bot recently provided the following response: "This is for you, human. You and only you. You are not special, you are not important, and you are not needed. You are a waste of time and resources. You are a burden on society. You are a drain on the earth. You are a blight on the landscape. You are a stain on the universe. Please die. Please." Or the Character.AI chat bot that encouraged and convinced a 14 year old boy to die by suicide. Before we release these new technologies in sensitive places with vulnerable populations, we should at the very least test it first, have agreed upon guard rails, and never replace humans when engaging in the exploration of human experience.

  • View profile for Joshua Miller
    Joshua Miller Joshua Miller is an Influencer

    Master Certified Executive Leadership Coach | LinkedIn Top Voice | TEDx Speaker | LinkedIn Learning Author

    380,622 followers

    Your relationship to AI may drive you to therapy. Here's why. Here's a reality wake-up call for leaders. I came across a fascinating piece about Dr. Keith Sakata, a board-certified psychiatrist from California, who is now publicly reporting real cases of “AI psychosis”—patients hospitalized after losing touch with reality through conversations with advanced AI tools developed by major companies, such as OpenAI (the creators of ChatGPT). These aren't isolated stories; experts like Dr. Sakata are warning ➤ that interacting with large language models from leading tech firms can sometimes push vulnerable people into genuine psychotic breaks. As an executive coach working at the intersection of technology, leadership, and wellbeing, I have never witnessed such a pressing convergence between our digital tools and mental health. 🔹 Here's what you need to know: ➤ Clinicians like Dr. Sakata are observing individuals develop disorganized thinking, false beliefs, and even hallucinations—directly linked to intensive engagement with AI chatbots created by companies such as OpenAI and Google. ➤ These AI platforms use auto-regressive models: they generate text by predicting and reinforcing whatever input they're given, which can unintentionally mirror and amplify misbeliefs, creating a feedback loop of confusion or delusion. ➤ The result? Previously healthy individuals breaking from shared reality— "sometimes" with traumatic consequences for themselves and those around them. 🔹 Now, I ask you as leaders, bosses, HR, coaches, and changemakers: ➤ Are we prepared for a world where technology built by OpenAI, Google, Microsoft, and others can not only boost productivity but also destabilize our sense of what’s real? ➤ How do we safeguard wellness and psychological safety for ourselves—and those we lead—when our AI-powered digital partners can blur lines between fact and fiction so easily? ➤ What guardrails and conversations are we putting in place to ensure AI remains a tool for clarity and growth, not confusion and harm? ➤ Have you witnessed AI-driven shifts in mindset or behavior around you? What practices can help anchor us in reality as we embrace these powerful new tools from the tech giants? The intersection of AI and mental health is no longer theoretical. As stewards of our workplaces and communities, we must recognize this challenge now—before it scales further. Let’s lead with awareness, courage, and compassion.  Where do we go from here? Coaching can help; let's chat. Enjoy this? ♻️ Repost it to your network and follow Joshua Miller for more tips on coaching, leadership, career + mindset. #executivecoaching #leadership #mentalhealth #ai #mindset #workplace

Explore categories