Social Impact Of AI

Explore top LinkedIn content from expert professionals.

  • View profile for Pascal BORNET

    #1 Top Voice in AI & Automation | Award-Winning Expert | Best-Selling Author | Recognized Keynote Speaker | Agentic AI Pioneer | Forbes Tech Council | 2M+ Followers ✔️

    1,500,935 followers

    🧠 Your Brain Is Quietly Paying a Price for Using ChatGPT

    We spend hours with LLMs like ChatGPT. But are we fully aware of what they’re doing to our brains?

    A new study from MIT delivers a clear message: The more we rely on AI to generate and structure our thoughts, the more we risk losing touch with essential cognitive processes — creativity, memory, and critical reasoning.

    📊 Key insight? When students wrote essays using GPT-4o, real-time EEG data showed a significant decline in activity across brain regions tied to executive control, semantic processing, and idea generation. When those same students later had to write without AI assistance, their performance didn’t just drop — it collapsed.

    🔬 What they did: 54 students wrote SAT-style essays across multiple sessions, while high-density EEG tracked information flow between 32 brain regions. Participants were split across three tools:
    → Solo writing (“Brain-only”)
    → Google Search
    → GPT-4o (LLM-assisted)
    In the final round, the groups switched: GPT users wrote unaided, and unaided writers used GPT (LLM→Brain and Brain→LLM).

    ⚡ What they found:
    → Neural dampening: Full reliance on the LLM led to the weakest fronto-parietal and temporal connectivity — signaling lighter executive function and shallower semantic engagement.
    → Sequence effects: Writers who began solo and then layered on GPT showed increased brain-wide activity — a sign of active cognitive engagement. The reverse group (starting with GPT) showed the lowest coordination and overused LLM-preferred vocabulary.
    → Memory failures: In their very first AI-assisted session, no GPT users could recall a single sentence they had just written — while most solo writers could.
    → Cognitive debt: Repeated LLM use led to narrower idea generation and reduced topic diversity — making recovery without AI more difficult.

    🌱 What does this mean for us? LLMs make content creation feel frictionless. But that very convenience comes at a cost: Diminished engagement. Lower memory. Narrower thinking.

    If we want to preserve intellectual independence and the ability to truly think, we need to use LLMs with intention.
    → Use them too soon, and the brain goes quiet.
    → Use them after thinking independently — and they amplify our output.

    ✨ Hybrid workflows are the way forward: Start with your own cognition, then apply LLMs to sharpen, not replace. The most irreplaceable kind of AI will always be Actual Intelligence.

    👉 Full study (with TL;DR + summary table): https://zurl.co/0hnox

  • View profile for Peter Walker

    Head of Insights @ Carta | Data Storyteller

    155,936 followers

    New movement - we delete AI comments when we find them.

    I'm unsure when this tipped over from mild to severe annoyance, but it's become an epidemic.

    For those of us building original content, the comments are (were?) the best part. Learning from intelligent people who push back on your thesis, or support it with new information, or offer a personal story that leads to another new idea - that's the good stuff.

    Meanwhile, AI comments jump into the conversation offering: "𝘎𝘳𝘦𝘢𝘵 𝘪𝘯𝘴𝘪𝘨𝘩𝘵𝘴 𝘧𝘰𝘳 𝘚𝘢𝘢𝘚 𝘧𝘰𝘶𝘯𝘥𝘦𝘳 𝘯𝘢𝘷𝘪𝘨𝘢𝘵𝘪𝘯𝘨 𝘥𝘪𝘭𝘶𝘵𝘪𝘰𝘯 𝘣𝘦𝘯𝘤𝘩𝘮𝘢𝘳𝘬𝘴! 🚀 𝘠𝘰𝘶𝘳 𝘦𝘹𝘱𝘦𝘳𝘪𝘦𝘯𝘤𝘦 𝘸𝘪𝘭𝘭 𝘤𝘦𝘳𝘵𝘢𝘪𝘯𝘭𝘺 𝘩𝘦𝘭𝘱 𝘰𝘵𝘩𝘦𝘳𝘴 𝘪𝘯 𝘵𝘩𝘦 𝘤𝘰𝘮𝘮𝘶𝘯𝘪𝘵𝘺, 𝘗𝘦𝘵𝘦𝘳 𝘞𝘢𝘭𝘬𝘦𝘳"

    Eloquent.

    I'm not even sure what these are supposed to achieve. If you use an AI comment tool, please comment (yourself this time) below and let me know what the goal is. Is it as basic as increasing your profile views as others in the thread accidentally click your profile icon? Because the comment above is not inspiring any action in any real reader.

    So from now on I'm just deleting them from any post I write. I invite you all to join me 🙏

  • View profile for Nicki Martin

    Scaling businesses with automation & AI. Systems and services to help you scale on auto. 5 x Founder. 2 x NED. Building Buzzio to 7 figures in 12 months. Follow my journey (+ free stuff) via link 👇🏼

    51,485 followers

    Not all engagement is created equal!

    Algo update! LinkedIn’s algorithm is now penalising accounts with lots of automated, AI-generated comments on their posts 🚫

    Instead of helping, these repetitive or irrelevant comments, which repeat your post back to you parrot-fashion, could actually be DAMAGING the reach of your favourite Creators, meaning you will see LESS of what you like in the feed! And they won't thank you for it!

    If you've been taught by some 'guru' that engagement always wins, and installed a Chrome extension or third-party tool to help you keep on top of it, please wipe that smug smile off your face! LinkedIn is on to you! You're damaging not only your own reach, but that of the people you have been attempting to build a robotic relationship with!

    Everyone knows I love AI - but the comments section is NOT the place for it!

    At You Need Nicki, we've always stood firm on the power of real, meaningful engagement. There are no shortcuts; you have to do the work. And we're super happy to see the LinkedIn algorithm favouring genuine interactions that drive value again. Thoughtful comments, authentic conversations, and real connections. This is what we help our clients focus on: quality over quantity, with engagement that builds genuine connections and opportunity.

    Genuine engagement is a springboard for real-life relationships - like the lovely Angie McQuillin here, who is one of the many LinkedIn connections I have since had the pleasure of meeting in person.

    Pro tip: If you spot AI-driven, empty comments on your posts, consider deleting or blocking them to protect your reach and maintain a high-value feed.

    How can you spot an AI comment on your post? Drop your thoughts in the comments 🤖

    #LinkedInTips #SocialMediaStrategy #MeaningfulEngagement #AlgoUpdate

  • View profile for Saanya Ojha

    Partner at Bain Capital Ventures

    73,360 followers

    A new study caused a bit of a stir last week - not because its conclusions were shocking, but because it sparked a conversation that feels older than AI itself.

    The study, titled 'Your Brain on ChatGPT', looked at what happens when people use large language models (LLMs) to help them write essays. They had three groups: one wrote the essays the old-fashioned way, with their actual brains; one used a search engine; one used ChatGPT. Then they flipped some of them around: people who’d gotten used to the AI had to write without it, and vice versa. Researchers tracked brain activity and text complexity (topic diversity, vocabulary, etc.).

    And what they found will probably not surprise you: when people used AI, their cognitive effort went down - because, well, that’s the point of AI. And when they tried to write without it after relying on it, they struggled more. The authors dubbed this effect “cognitive debt”: we offload mental work now, but pay the price later when we need those skills.

    Cue the headlines about the “dark side of AI.” A generation that can prompt but not think. Higher-order reasoning under siege. But of course, this isn’t new at all.
    ▪️ Wheels made us weaker walkers, but gave us speed, reach, and trade.
    ▪️ Calculators dulled mental arithmetic, but unlocked more complex math.
    ▪️ GPS softened our internal compass, but let us explore more freely.

    This isn’t a reason to panic. It’s a reason to be deliberate. You don’t blame the wheel for weaker legs - you go to the gym. You don’t stop using GPS - you go hiking to stay sharp. AI is no different. It takes away certain cognitive workouts, and it’s up to us to design new ones. Personally? I’d much rather choose yoga to stay fit than chase down my dinner.

    It’s easy to claim LLMs pose a greater risk because they touch higher-order reasoning and creativity. But that assumes these are static qualities, threatened by tools, rather than dynamic ones shaped by how we use tools. Every major technology has redefined what counts as “higher-order work.” The real risk isn’t creating a generation that can prompt but not think - it’s creating a culture that treats prompting and thinking as either-or, when in reality they’re intertwined.

    In my experience, LLMs free me to focus on substance over form. If they help with boilerplate or structure, that’s not decay - that’s leverage. It’s like moving from walking out of necessity to walking because you want to. AI lets us choose our cognitive workouts.

    And so the questions worth asking aren’t “Is AI bad?” but:
    ▪️ What parts of thinking are worth preserving through intentional effort?
    ▪️ What cognitive load are we ready to let go of to pursue bigger, better questions?

    After all, we didn’t ban wheels. We built cities. We didn’t burn calculators. We taught calculus. We didn’t kill search engines. We raised our standards for truth. We won’t fear AI. We’ll learn to make it our leverage.

  • View profile for Leanne Shelton ✨

    Author of ‘AI-Human Fusion’ | LinkedIn Top Voice – AI | Smart, Human-First Use of AI Tools for Marketing, Business Communications & Productivity | Training | Keynotes | Coaching

    7,715 followers

    LinkedIn is starting to feel like a networking event where everyone’s swapped their business cards for bots.

    Scroll for five seconds and you’ll see what I mean. There are auto-generated posts with zero soul. Generic leadership quotes or messages repackaged by ChatGPT. Comments like “Great insights!” that scream “AI wrote this for me!” And in some cases, there are bots commenting on content written by other bots. Ugh. 😅

    We’ve turned one of the most powerful platforms for professional human connection into an AI echo chamber – and I’m confused about why humans are letting it happen.

    Don’t get me wrong. I train companies to use AI tools like ChatGPT every day. I love what’s possible when the tech is used well. But what we’re seeing on LinkedIn right now isn’t smart AI use. We’re seeing humans outsourcing their voices to the machines. As a result, we’re losing the very thing that made this platform powerful in the first place – real people, sharing real ideas, with real impact.

    It’s not too late to turn this around. But first, we need to talk about the problem – and what we need to do about it. Read more of my musings in my article published by Mumbrella today - https://lnkd.in/gsxfe-jx

    LinkedIn coaches Kate Merryweather and Karen Tisdell and I cover this topic in more detail within 'AI-Human Fusion'. Grab your copy here - https://lnkd.in/gGGRyz5C

    #ai #linkedin #keepithuman

  • View profile for Christina Stathopoulos, MSc

    Data & AI Evangelist | Global Keynote Speaker & Award-Winning Educator | Making data & AI work for everyone, through a responsible lens! | Join my #bookaweekchallenge 📚

    94,669 followers

    Your brain on ChatGPT 👇

    Do you have a balanced relationship with AI? What does an over-reliance on (Gen)AI do to you? New studies are coming out to address this very question...

    Spoiler alert: It shouldn't be a surprise that the impact is detrimental. I've been saying this from early on. Just as your muscles atrophy when you don't exercise, your brain will atrophy if you don't put it to work.

    I see a new study by the Massachusetts Institute of Technology making the rounds all over socials. It explores the neural and behavioral consequences of LLM-assisted essay writing. The study split participants into 3 groups tasked with writing essays:
    🔹 those using ChatGPT (LLM group)
    🔹 those using search engines
    🔹 those without any digital assistance (brain-only group)

    They measured their brain activity using EEG and graded the quality and originality of the essays produced.

    The results? The ChatGPT group showed the weakest neural engagement, especially regarding memory, attention and executive control. Their essays were less original and more formulaic. And they were less able to recall their own work than the other groups.

    Then the researchers added a twist - roles were reversed! The ChatGPT group had to write essays without AI, and the brain-only group was allowed to use ChatGPT. So what do you think happened?... The ChatGPT group continued to show reduced brain activity. The brain-only group experienced a boost in neural engagement.

    Generative AI is still 'new' for us. These early studies only give us a glimpse into what may happen in the long term. Relying on tools like ChatGPT may initially feel convenient and productive, but there is a lingering cognitive impact. You're throwing your critical thinking and creativity out the door. What does it take to get that back? What if, down the line, you've offloaded all of your higher-order thinking skills to an AI for the last decade - can you engage in your own analytical, critical and creative thinking again?

    I'm a data and AI evangelist, so maybe you're surprised to read this post from me. But you shouldn't be, because I'm a RESPONSIBLE advocate for this tech. I'm heavily involved in higher education too, and I firmly believe we need more balanced use of AI in educational settings, and life in general, to preserve and enhance our cognitive functions. I use AI daily. But I also still engage in my own 'old school' activities like hand-written notes and reading only paperback books.

    Have you found your balance with AI?

    #artificialintelligence #generativeAI #genAI #neuroscience #LLMs

  • View profile for Romano Roth

    Global Chief of Cybernetic Transformation | Author of The Cybernetic Enterprise | Thought Leader | Executive Advisor | Keynote Speaker | Lecturer | Empowering Organizations through People, Process, Technology & AI

    16,487 followers

    😓 𝐂𝐨𝐧𝐯𝐞𝐧𝐢𝐞𝐧𝐜𝐞 𝐭𝐨𝐝𝐚𝐲. 𝐂𝐨𝐠𝐧𝐢𝐭𝐢𝐯𝐞 𝐝𝐞𝐛𝐭 𝐭𝐨𝐦𝐨𝐫𝐫𝐨𝐰. 𝐇𝐞𝐫𝐞’𝐬 𝐰𝐡𝐚𝐭 𝐲𝐨𝐮𝐫 𝐛𝐫𝐚𝐢𝐧 𝐥𝐨𝐨𝐤𝐬 𝐥𝐢𝐤𝐞 𝐨𝐧 𝐂𝐡𝐚𝐭𝐆𝐏𝐓. 😵💫

    A new MIT study titled “𝐘𝐨𝐮𝐫 𝐁𝐫𝐚𝐢𝐧 𝐨𝐧 𝐂𝐡𝐚𝐭𝐆𝐏𝐓: 𝐀𝐜𝐜𝐮𝐦𝐮𝐥𝐚𝐭𝐢𝐨𝐧 𝐨𝐟 𝐂𝐨𝐠𝐧𝐢𝐭𝐢𝐯𝐞 𝐃𝐞𝐛𝐭 𝐰𝐡𝐞𝐧 𝐔𝐬𝐢𝐧𝐠 𝐚𝐧 𝐀𝐈 𝐀𝐬𝐬𝐢𝐬𝐭𝐚𝐧𝐭 𝐟𝐨𝐫 𝐄𝐬𝐬𝐚𝐲-𝐖𝐫𝐢𝐭𝐢𝐧𝐠” reveals something many have suspected, but few have proven: 𝐮𝐬𝐢𝐧𝐠 𝐂𝐡𝐚𝐭𝐆𝐏𝐓 doesn’t just change how we write; it 𝐜𝐡𝐚𝐧𝐠𝐞𝐬 𝐡𝐨𝐰 𝐰𝐞 𝐭𝐡𝐢𝐧𝐤.

    𝐖𝐡𝐚𝐭 𝐭𝐡𝐞 𝐫𝐞𝐬𝐞𝐚𝐫𝐜𝐡𝐞𝐫𝐬 𝐝𝐢𝐝:
    🔹 54 students wrote SAT-style essays in four sessions while high-density EEG tracked activity across 32 brain regions.
    🔹 Three conditions:
    🔸 Brain-only (no tools)
    🔸 Google Search
    🔸 GPT-4o (ChatGPT)
    🔹 In Session 4, they flipped roles:
    🔸 Brain→LLM: former unaided writers used GPT
    🔸 LLM→Brain: GPT users had to write unaided

    𝐊𝐞𝐲 𝐅𝐢𝐧𝐝𝐢𝐧𝐠𝐬:
    🔹 𝐂𝐫𝐞𝐚𝐭𝐢𝐯𝐢𝐭𝐲 𝐨𝐟𝐟𝐥𝐨𝐚𝐝𝐞𝐝, 𝐧𝐞𝐭𝐰𝐨𝐫𝐤𝐬 𝐝𝐢𝐦𝐦𝐞𝐝: GPT-only use showed the weakest fronto-parietal and temporal brain connectivity, signs of shallow thinking and reduced executive control.
    🔹 𝐎𝐫𝐝𝐞𝐫 𝐦𝐚𝐭𝐭𝐞𝐫𝐬: Starting without AI and revising with it led to peak brain activation. Starting with GPT and switching later caused the lowest engagement and over-reliance on AI-favored phrasing.
    🔹 𝐌𝐞𝐦𝐨𝐫𝐲 𝐚𝐧𝐝 𝐨𝐰𝐧𝐞𝐫𝐬𝐡𝐢𝐩 𝐜𝐨𝐥𝐥𝐚𝐩𝐬𝐞: None of the GPT writers could quote their own essay sentences right after writing them. Solo writers had near-perfect recall.
    🔹 𝐂𝐨𝐠𝐧𝐢𝐭𝐢𝐯𝐞 𝐝𝐞𝐛𝐭 𝐚𝐜𝐜𝐮𝐦𝐮𝐥𝐚𝐭𝐞𝐬: Repeated GPT use led to narrower thinking and reduced linguistic diversity. Once the tool was removed, recovery was difficult.

    𝐖𝐡𝐲 𝐢𝐭 𝐦𝐚𝐭𝐭𝐞𝐫𝐬: This study introduces the idea of 𝐜𝐨𝐠𝐧𝐢𝐭𝐢𝐯𝐞 𝐝𝐞𝐛𝐭: every shortcut we take with AI today could 𝐝𝐮𝐥𝐥 our ability to 𝐥𝐞𝐚𝐫𝐧, 𝐫𝐞𝐦𝐞𝐦𝐛𝐞𝐫, and 𝐭𝐡𝐢𝐧𝐤 tomorrow.

    𝐖𝐚𝐧𝐭 𝐭𝐨 𝐬𝐭𝐚𝐲 𝐬𝐡𝐚𝐫𝐩? Don’t let ChatGPT be your starting point. Let it support, not replace, your thinking. Start with your own ideas. Then bring in AI for revision, clarity, or expansion. That’s how we keep the human in human creativity.

    🔗 Full study is in the comments.

    #AI #ChatGPT #Education #Neuroscience #Learning #Productivity

  • View profile for Jacob Morgan

    Keynote Speaker, Professionally Trained Futurist, & 6x Author. Founder of “Future Of Work Leaders” (Global CHRO Community). Focused on Leadership, The Future of Work, & Employee Experience

    153,517 followers

    A new MIT study just uncovered something startling:

    👉 People who used ChatGPT to help them write showed weaker neural activity, lower memory retention, and less ownership of their work. Meanwhile, those who wrote without AI had stronger, more distributed brain connectivity and produced more original, diverse content.

    This isn’t just about writing. It’s about what happens when we outsource too much thinking to machines.

    As a futurist, I believe AI will be one of the greatest tools of our era — but every tool comes with a trade-off. What we gain in efficiency, we risk losing in cognitive depth, originality, and human learning.

    The big question for leaders: How do we build organizations where AI amplifies human intelligence without replacing it?
    ✅ Use AI to spark ideas, not complete them.
    ✅ Let machines accelerate, but not automate, judgment.
    ✅ Preserve space for deep work and critical thinking.

    One of the things we talk about in my CHRO group, Future Of Work Leaders, is what happens when every employee at your company has access to AI tools and assistants. The danger is outsourcing our critical thinking and decision making.

    We’re entering the Era of Cognitive Offloading — where thinking itself is becoming optional. That’s not inherently bad. Offloading memory to writing gave rise to civilization. But when we offload too much cognition to AI, we risk becoming passive operators in a world that still requires active discernment.

    The future won’t reward those who simply use AI — it will reward those who partner with it while fiercely protecting their uniquely human edge: Curiosity. Judgment. Creativity. Self-awareness.

    If your team is using AI, the next question shouldn’t be “How fast can we go?” It should be: “What are we still choosing to do with our own minds?”

    We’re not in an AI race. We’re in a human cognition race — and how we use these tools will determine who thrives.

    #FutureOfWork #Leadership #AI #Neuroscience #MIT #Futurism #ChatGPT #CognitiveOffloading #management

  • View profile for Eric So

    MIT Sloan Distinguished Professor of Global Economics and Behavioral Science

    3,480 followers

    Your brain on AI: one of the first studies measuring what ChatGPT use does to our brain.

    MIT researchers tracked 54 people writing essays using ChatGPT, web search, or just their brains—while monitoring neural activity with EEG. The findings are striking:

    🧠 Brain connectivity weakened with more AI support. ChatGPT users showed the least neural engagement.

    🔍 Memory collapsed. 83% of ChatGPT users couldn't quote their own essays minutes later, vs. near-perfect recall without AI.

    ⚡ "Cognitive debt" accumulated. When ChatGPT users later wrote without AI, their brains showed weakened connectivity compared to those who practiced unassisted writing.

    🎨 Creativity declined. AI-assisted essays were statistically more uniform and less original.

    The twist: Strategic timing matters. Using AI after initial self-driven effort preserved better cognitive engagement than consistent AI use from the start.

    This isn't anti-AI—it's about understanding the trade-offs. While AI-generated essays scored well initially, participants showed signs of cognitive atrophy: diminished critical thinking, reduced memory encoding, and less ownership of their work.

    The takeaway: We need AI to enhance, not replace, human thinking as we integrate these powerful tools.

    Full study here: https://lnkd.in/e-6urMD8

    Note: This is a pre-print study awaiting peer review.

  • View profile for Dr. Radhika Dirks

    Global AI Advisor | Forbes 30 Women in AI to Watch | Artificial Intelligence Expert | PhD in Quantum Computing | Keynote Speaker

    15,182 followers

    I don’t worry about AI replacing jobs. I worry about it replacing thought. Especially in kids.

    If we’re not careful, we won’t raise the next generation of thinkers. We’ll raise a generation that never needed to.

    That’s my biggest fear with AI — not hallucinations, not jobs, not surveillance. It’s “cognitive atrophy” — a slow erosion of the very thing education is meant to build: the ability to think. And it’s already happening.

    I’ve been thinking about the recent MIT study ever since I read it — and sure, any decent scientist will say you can poke a million holes in any research. But this one hasn’t left my mind. They scanned the brains of students using ChatGPT to write essays, and it confirmed what I’ve feared for months.

    → Neural connectivity dropped from 79 to 42 — a 47% collapse. In some sessions and frequency bands, the drop was as large as 55%.
    → 83% couldn’t recall a single sentence they’d written — just minutes later.
    → Even when they stopped using AI, their brains stayed less engaged than those who never used it at all.

    The researchers called it "cognitive offloading." I call it something closer to erosion. Because the real risk here isn't academic. It's developmental.

    We're not just handing kids a writing tool. We're handing them a shortcut before they've built the mental muscles they'll need for life. What happens when your brain learns to complete a task — without thinking through it? You're not building reasoning. You're building a dependency. When you outsource the struggle — the reflection, the idea wrestling — you're left with something that reads like thought, but isn't.

    And if you're 12? 10? 5? You're not learning to write. You're learning to SKIP thinking.

    The scary part? It works. It gets the grade. It saves time. Which makes it even harder to challenge. But the teachers called the AI-assisted essays "soulless." That word haunts me.

    I'm not anti-AI. I work in this space. But I am deeply against what it's replacing — especially in education. We didn't get here by skipping the hard part. We got here by learning how to think. Slowly. Imperfectly. And sometimes painfully.

    What happens when the next generation never gets that chance?

    P.S. You can read the entire MIT study here - https://lnkd.in/gN82waAz

    #AI #technology #innovation #education
