Reasons Generative AI Utility Is Debated


Summary

Generative AI, a technology capable of creating content like text, images, and code, has sparked significant debate about its real-world applications and limitations. While it holds incredible potential, its challenges, such as reliability, ethical concerns, and integration difficulties, often overshadow its promises.

  • Define clear objectives: Before implementing generative AI, ensure that its use is tied to a specific, measurable business goal to avoid wasted resources and misaligned expectations.
  • Evaluate specialized solutions: Consider partnering with AI vendors offering tools tailored to your industry, as these often integrate better with existing workflows compared to generic platforms.
  • Address data quality: Focus on improving the quality and diversity of data inputs, as poor data can lead to inaccurate outputs and undermine your AI project’s success.
Summarized by AI based on LinkedIn member posts
  • View profile for Cesar Viana Teague

    GTM & AI Change Management, Transformation. Husband. Cook.

    19,362 followers

    How's it really going with Generative AI project success? 🤔 Different sources provide varying figures on project outcomes, indicating the complexity of implementation:

    - High pilot failure rate: An August 2025 MIT study, "The GenAI Divide: State of AI in Business 2025," found that 95% of enterprise generative AI pilot projects fail to deliver measurable business value.
    - Low production rate: A Gartner survey found that only 48% of AI projects, on average, make it from prototype to production. Similarly, one survey found that 88% of AI pilots never reach production.
    - Mixed ROI: While some sources report that most AI adopters see a positive return on investment (ROI), others state that between 70% and 85% of projects fail to meet their desired ROI.

    The key difference? In-house vs. external solutions. The success rate can depend on the approach an organization takes to development:

    - External solutions see higher success: The MIT study found that enterprises that partnered with specialized AI vendors had a 67% success rate. In contrast, those that attempted to build their projects entirely in-house succeeded only 33% of the time.
    - Specialized solutions beat general tools: While general-purpose tools like ChatGPT are good for individual use, they often fail in enterprise environments because they lack deep integration and adaptability to specific workflows. Specialized, vendor-built solutions generally have better integration frameworks.

    Reasons for generative AI project failures? 🤯 The problem is typically not the technology itself but the implementation strategy. Common reasons for failure include:

    - Unclear objectives: Many companies implement AI without a specific, measurable business problem to solve, confusing technology with strategy.
    - Poor integration: Generic tools often do not connect well with existing enterprise systems like Customer Relationship Management (CRM) or Enterprise Resource Planning (ERP), forcing manual workarounds that negate efficiency gains.
    - Ignoring back-office opportunities: The MIT study found that most generative AI budgets go to sales and marketing, while the most significant ROI is often found in less flashy back-office automation.
    - Lack of skilled talent: A shortage of in-house personnel to manage, integrate, and maintain AI solutions is a common barrier to success.
    - Poor data quality: Generative AI models are highly dependent on the quality and diversity of their training data. Biased, inconsistent, or low-quality data can lead to inaccurate outputs and project failure.

    Overall, it's important to do some strategic planning and change management for AI and IT change projects, to minimize the failure rates mentioned above! 🙌 #changemanagement #generativeai #strategicplanning
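The "poor data quality" failure mode described above is often detectable with mundane pre-flight checks before any model is involved. A minimal sketch in Python; the record fields and the report shape are hypothetical, for illustration only:

```python
# Minimal pre-flight data-quality checks of the kind that surface many of the
# "poor data" failures described above before a project commits resources.
# Field names ("text", "label") are hypothetical, chosen for illustration.

def data_quality_report(records: list[dict], required: list[str]) -> dict:
    total = len(records)
    # Records missing any required field produce unusable training examples.
    incomplete = sum(
        1 for r in records if any(not r.get(f) for f in required)
    )
    # Exact duplicates over-weight some patterns and bias model outputs.
    seen, dupes = set(), 0
    for r in records:
        key = tuple(sorted(r.items()))
        if key in seen:
            dupes += 1
        seen.add(key)
    return {"total": total, "incomplete": incomplete, "duplicates": dupes}

records = [
    {"ticket": "T1", "text": "Login fails", "label": "auth"},
    {"ticket": "T2", "text": "", "label": "billing"},          # missing text
    {"ticket": "T1", "text": "Login fails", "label": "auth"},  # duplicate
]
print(data_quality_report(records, required=["text", "label"]))
# → {'total': 3, 'incomplete': 1, 'duplicates': 1}
```

Even a report this crude gives a team a concrete go/no-go signal on their data before budget is spent on integration.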

  • View profile for Montgomery Singman
    Montgomery Singman is an Influencer

    Managing Partner @ Radiance Strategic Solutions | xSony, xElectronic Arts, xCapcom, xAtari

    26,754 followers

    Generative AI continues to generate excitement, but significant challenges are often overlooked. Reports from respected sources such as Harvard Business Review and Goldman Sachs highlight that current expectations may not align with reality. The technology, while promising, has limitations that need to be acknowledged and addressed.

    In May, Harvard Business Review discussed "AI's Trust Problem"; in June, Goldman Sachs raised doubts about whether the expected $1 trillion in AI investment will deliver substantial returns. Their concern: aside from developer efficiency, there may not be enough value to justify such massive spending, especially in the near term. Jim Covello, Goldman Sachs' head of global equity research, pointed out that replacing low-wage jobs with costly technology contradicts earlier tech transitions, which focused on improving efficiency and affordability.

    A recent analysis from Planet Money echoes this skepticism, listing “10 reasons why AI may be overrated.” Issues like hallucinations (when AI generates false or misleading information) and declining quality in AI-generated outputs raise concerns about its readiness for widespread use. A study by The Washington Post also examined what people ask AI chatbots about, revealing unexpected trends: along with common academic assistance, some topics raised ethical and personal concerns.

    🔍 Reality check: Generative AI can be impressive but often struggles with accuracy, leading to errors or hallucinations.
    💸 Investment risks: Financial experts question the value of massive investments in AI and whether the technology will offer enough returns in the short term.
    📉 Productivity vs. quality: While AI can increase productivity, particularly in coding, research shows that the quality of AI-generated code is often subpar.
    📚 Help with homework: Students turn to AI chatbots for homework help, but concerns arise when AI provides direct answers rather than guidance or learning support.
    ❓ Personal and sensitive queries: Many chatbot users ask about personal topics, including sex and relationships, which raises ethical questions about privacy and appropriate use.

    These points serve as a reminder that while generative AI is a powerful tool, it’s important to approach it with realistic expectations and a clear understanding of its current limitations. #GenerativeAI #AIEthics #AIRealityCheck #AIinEducation #TechInvestments #AIProductivity #AIChallenges #AIHomework #AIandSex #AIinConservation #AIFuture #AIHype

  • View profile for Stephen Klein

    Founder & CEO, Curiouser.AI | Berkeley Instructor | Building Values-Based, Human-Centered AI | LinkedIn Top Voice in AI

    67,605 followers

    10 Extraordinary Popular Delusions About Generative AI
    A "Reality Check" on Hype-as-a-Service (HaaS)

    While Generative AI will emerge as a triumph for humanity, in its current form it is essentially "monetized anxiety," oversold and misrepresented by the industry and many of its supporters.

    1. Job displacement is widespread. Reality: Research from the Chicago Federal Reserve shows that job displacement is tied to economic cycles, not AI. (Source: Chicago Federal Reserve)
    2. Gen AI is revolutionizing business efficiencies. Reality: According to Gartner, 70% of AI projects fail to meet expectations. Businesses are being sold promises of cost-cutting that rarely materialize. (Source: Gartner)
    3. GenAI is enterprise-ready. Reality: As reported by MIT Sloan, error rates and hallucinations in AI models continue to be significant barriers. Enterprises are struggling to trust AI for mission-critical tasks. (Source: MIT Sloan)
    4. Management consultants know AI. Reality: Harvard Business Review reports that most consultants have only recently started working with AI and are using outdated methodologies. (Source: Harvard Business Review)
    5. Early adopters are ahead of the game. Reality: McKinsey found that 80% of CEOs regret their rush to implement AI, with many projects failing to deliver real business value. (Source: McKinsey)
    6. Agents are real. Reality: MIT’s CSAIL debunked AI "agents" as little more than rebranded old tech with no real-world application. (Source: MIT CSAIL)
    7. GenAI is a boon for humanity. Reality: Oxford Economics warns that AI is contributing to cognitive decline, encouraging intellectual laziness instead of growth. (Source: Oxford Economics)
    8. GenAI is a real business. Reality: PitchBook reports that 85% of AI startups are unprofitable, sustained by endless investment rather than real business models. (Source: PitchBook)
    9. Closed-source AI is good for business. Reality: Forbes explains that closed-source systems lock businesses into costly, long-term agreements and stifle innovation. Open-source alternatives are growing 30% faster. (Source: Forbes)
    10. GenAI is good for young children. Reality: The American Academy of Pediatrics and NIH warn that excessive screen time, including AI exposure, can damage cognitive and social development in children. (Source: American Academy of Pediatrics, NIH)

    We need a strategic shift, one that prioritizes humanity, pragmatic business, and long-term value, not hype. (Sources in Comments)

    ********************************************************************************

    The trick with technology is to avoid spreading darkness at the speed of light.

    Stephen Klein is the Founder & CEO of Curiouser.AI, a Generative AI platform and values-based advisory firm focused on strategic coaching and individual and organizational competence. He also teaches AI Ethics at UC Berkeley. To learn more, visit curiouser.ai or connect on Hubble https://lnkd.in/gphSPv_e

  • View profile for Sreenath Reddy
    Sreenath Reddy is an Influencer

    LinkedIn Top Voice | AMC Practitioner | Delivering transformative analytics and ad optimization solutions for brands and agencies to win on Amazon and Walmart | CEO @ Intentwise | Educator | Love all things data and AI.

    11,490 followers

    A word of caution: attributing reasoning and logic to Generative AI (Gen AI) is a mistake. Gen AI is great at producing plausible output, but that output may not be accurate all the time. It works well when we are looking not for accuracy but for possibilities. For example, ask it to rewrite some text or give you a recipe, a tour plan, or a poem; it works fine. At least, it will appear so to us. But if you expect it to provide factually accurate information every time, it's not there (yet 😃).

    For example, when we turned on Gen AI-based responses on our live chat, it started to spew grammatically accurate but factually inaccurate information to our clients. There are techniques and approaches to improve accuracy, but it will not be 100%.

    Don't get me wrong: Gen AI is one of the most transformative technology innovations we will see in our lifetime (the Internet and mobile phones are among the others on my list). As my friend Hemant puts it, at the moment Gen AI is great at three things:
    🔹 Translation (example: text to code, language to language, format to format)
    🔹 Summarizing (example: extracting insights from reviews, call transcripts)
    🔹 Semantic search

    There is a lot that can be done with these three things. However, we should be clear-eyed about its limitations. Otherwise, it will disappoint or, worse, burn a big hole in your pocket 😃. Your thoughts? #generativeAI #amazonadvertising #walmartconnect
    ___________________
    Follow Me Here 👉 https://lnkd.in/gp3Q6H8B
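Of the three strengths listed in the post above, semantic search is the easiest to sketch: embed documents and a query as vectors, then rank by cosine similarity. The toy bag-of-words "embedding" below is a stand-in for a real embedding model (an assumption for illustration); the ranking logic is what carries over:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy stand-in for a real embedding model: a bag-of-words term vector.
    # In practice you would call an embedding model or library here.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def semantic_search(query: str, docs: list[str], top_k: int = 1) -> list[str]:
    # Rank documents by similarity to the query and return the top_k matches.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:top_k]

docs = [
    "How to reset your account password",
    "Quarterly revenue report for advertisers",
    "Troubleshooting login and password issues",
]
print(semantic_search("forgot my password", docs, top_k=2))
```

Note how this use case sidesteps the accuracy problem the post warns about: retrieval only ranks existing documents, so there is nothing for the model to fabricate.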

  • View profile for Shail Khiyara

    Top AI Voice | Founder, CEO | Author | Board Member | Gartner Peer Ambassador | Speaker | Bridge Builder

    31,284 followers

    𝟱 𝗛𝗔𝗥𝗗 𝗧𝗥𝗨𝗧𝗛𝗦 𝗠𝗖𝗞𝗜𝗡𝗦𝗘𝗬 𝗢𝗩𝗘𝗥𝗟𝗢𝗢𝗞𝗘𝗗 𝗔𝗕𝗢𝗨𝗧 𝗚𝗘𝗡𝗘𝗥𝗔𝗧𝗜𝗩𝗘 𝗔𝗜 𝗔𝗚𝗘𝗡𝗧𝗦

    McKinsey & Company says generative AI agents will transform industries, replacing workflows and driving new productivity. (Link to the report below.)

    ► 𝗕𝗟𝗜𝗡𝗗 𝗦𝗣𝗢𝗧𝗦
    But is the reality as close as they suggest? Here are five critical challenges they may have overlooked:
    1️⃣ 𝗧𝗲𝗰𝗵𝗻𝗶𝗰𝗮𝗹 𝗥𝗲𝗮𝗱𝗶𝗻𝗲𝘀𝘀: Cascading errors. Can AI agents reliably handle critical, high-stakes workflows like finance or healthcare?
    2️⃣ 𝗪𝗼𝗿𝗸𝗳𝗼𝗿𝗰𝗲 𝗜𝗺𝗽𝗮𝗰𝘁: Automation frees workers, but also displaces them. How do we reskill and retain institutional knowledge?
    3️⃣ 𝗥𝗢𝗜 𝗨𝗻𝗰𝗲𝗿𝘁𝗮𝗶𝗻𝘁𝘆: AI integration is costly. If agents still need significant human oversight, is the investment delivering enough value?
    4️⃣ 𝗧𝗿𝘂𝘀𝘁 𝗕𝗮𝗹𝗮𝗻𝗰𝗲: AI agents that “feel human” risk being over-trusted. Where’s the middle ground between trust and oversight?
    5️⃣ 𝗘𝘁𝗵𝗶𝗰𝗮𝗹 𝗥𝗶𝘀𝗸𝘀: Misaligned goals, unintended outputs, and misuse demand robust safeguards to avoid major risks.

    ► 𝗧𝗛𝗘 𝗣𝗥𝗢𝗗𝗨𝗖𝗧𝗜𝗩𝗜𝗧𝗬 𝗣𝗛𝗔𝗦𝗘 𝗣𝗥𝗢𝗕𝗟𝗘𝗠 – 𝗥𝗘𝗩𝗜𝗦𝗜𝗧𝗘𝗗
    Most AI tools today augment, not replace, human workflows. Even promising tools like AutoGPT are seeing mixed adoption. And yet, 𝗕𝗶𝗴 𝗧𝗲𝗰𝗵 𝗰𝗼𝗺𝗽𝗮𝗻𝗶𝗲𝘀 𝗹𝗶𝗸𝗲 𝗚𝗼𝗼𝗴𝗹𝗲, 𝗔𝗺𝗮𝘇𝗼𝗻, 𝗮𝗻𝗱 𝗠𝗶𝗰𝗿𝗼𝘀𝗼𝗳𝘁 𝗮𝗿𝗲 𝗯𝗲𝘁𝘁𝗶𝗻𝗴 𝗯𝗶𝗴 𝗼𝗻 𝗔𝗜 𝗮𝗴𝗲𝗻𝘁𝘀 𝘁𝗼 𝘁𝗿𝗮𝗻𝘀𝗳𝗼𝗿𝗺 𝘁𝗵𝗲𝗶𝗿 𝗼𝗽𝗲𝗿𝗮𝘁𝗶𝗼𝗻𝘀 𝗯𝘆 𝟮𝟬𝟮𝟱. While this indicates the competitive pressure to adopt agents quickly, 𝘁𝗵𝗲 𝘀𝗮𝗺𝗲 𝗰𝗵𝗮𝗹𝗹𝗲𝗻𝗴𝗲𝘀 𝗽𝗲𝗿𝘀𝗶𝘀𝘁: 𝗿𝗲𝗹𝗶𝗮𝗯𝗶𝗹𝗶𝘁𝘆, 𝘀𝗰𝗮𝗹𝗮𝗯𝗶𝗹𝗶𝘁𝘆, 𝗮𝗻𝗱 𝘁𝗿𝘂𝘀𝘁.

    ► 𝗔𝗖𝗖𝗢𝗥𝗗𝗜𝗡𝗚 𝗧𝗢 𝗧𝗛𝗘 World Economic Forum
    🚩 AI agents remain utility-focused, struggling with multi-agent orchestration in complex environments.
    🚩 Without robust governance, risks like goal misalignment and specification gaming remain significant.
    They work in low-stakes environments (e.g., customer service), but high-stakes adoption (healthcare, finance) is still far off.

    ⏏ 𝗪𝗛𝗬 𝗜𝗧 𝗠𝗔𝗧𝗧𝗘𝗥𝗦
    Generative AI agents are powerful tools, but the road from potential to practicality is far more complex than McKinsey suggests. The optimism around 2025 as a pivotal year for AI agents must be balanced with hard questions:
    ▪️ 𝗛𝗼𝘄 𝘄𝗶𝗹𝗹 𝗶𝗻𝗱𝘂𝘀𝘁𝗿𝗶𝗲𝘀 𝗺𝗮𝗻𝗮𝗴𝗲 𝗲𝘁𝗵𝗶𝗰𝗮𝗹 𝗿𝗶𝘀𝗸𝘀, 𝗿𝗲𝗴𝘂𝗹𝗮𝘁𝗼𝗿𝘆 𝗴𝗮𝗽𝘀, 𝗮𝗻𝗱 𝗱𝗲𝗰𝗶𝘀𝗶𝗼𝗻-𝗺𝗮𝗸𝗶𝗻𝗴 𝗯𝗶𝗮𝘀 𝗮𝘀 𝘁𝗵𝗲𝘀𝗲 𝘀𝘆𝘀𝘁𝗲𝗺𝘀 𝘀𝗰𝗮𝗹𝗲?
    ▪️ 𝗔𝗿𝗲 𝘄𝗲 𝗮𝘀𝗸𝗶𝗻𝗴 𝘁𝗵𝗲 𝘁𝗼𝘂𝗴𝗵 𝗾𝘂𝗲𝘀𝘁𝗶𝗼𝗻𝘀 𝗯𝗲𝗳𝗼𝗿𝗲 𝗿𝘂𝘀𝗵𝗶𝗻𝗴 𝗮𝗵𝗲𝗮𝗱?
    #AI #Automation #GenerativeAI #AIAgents #BigTech #AIAdoption

  • View profile for Eugina Jordan

    CEO and Founder, YOUnifiedAI | 8 granted patents / 16 pending | AI Trailblazer Award Winner

    41,254 followers

    Is generative AI the key to unprecedented productivity, or a cause of future mass unemployment?

    The Oliver Wyman Forum's report indicates that generative AI
    ✅ could contribute up to $20 trillion to global GDP by 2030
    ✅ could save 300 billion work hours annually.

    Yet, while 96% of employees believe AI can help in their current jobs, 60% are afraid it will automate them out of work, and 61% do not find it very trustworthy.

    The survey across 16 countries revealed:
    ✅ 55% of employees use generative AI weekly,
    ✅ but only 36% receive sufficient AI training from their employers.
    ✅ 40% of users would rely on AI for major financial decisions,
    ✅ and 30% would share more personal data for a better experience, despite their mistrust.

    Generative AI's impact is already significant: it could displace millions of jobs globally, with one-third of all entry-level roles at risk of automation. Meanwhile, junior employees armed with AI may replace their first-line managers, creating a vacuum in the job pyramid.

    𝐓𝐨 𝐦𝐚𝐱𝐢𝐦𝐢𝐳𝐞 𝐛𝐞𝐧𝐞𝐟𝐢𝐭𝐬, 𝐜𝐨𝐦𝐩𝐚𝐧𝐢𝐞𝐬 𝐦𝐮𝐬𝐭 𝐚𝐝𝐨𝐩𝐭 𝐚 𝐩𝐞𝐨𝐩𝐥𝐞-𝐟𝐢𝐫𝐬𝐭 𝐚𝐩𝐩𝐫𝐨𝐚𝐜𝐡, 𝐢𝐧𝐯𝐞𝐬𝐭𝐢𝐧𝐠 𝐢𝐧 𝐰𝐨𝐫𝐤𝐞𝐫 𝐭𝐫𝐚𝐢𝐧𝐢𝐧𝐠 𝐚𝐧𝐝 𝐬𝐮𝐩𝐩𝐨𝐫𝐭. 𝐓𝐡𝐢𝐬 𝐦𝐞𝐚𝐧𝐬 𝐜𝐫𝐞𝐚𝐭𝐢𝐧𝐠 𝐢𝐧𝐭𝐮𝐢𝐭𝐢𝐯𝐞 𝐩𝐫𝐨𝐜𝐞𝐬𝐬𝐞𝐬 𝐚𝐥𝐨𝐧𝐠𝐬𝐢𝐝𝐞 𝐭𝐞𝐜𝐡𝐧𝐨𝐥𝐨𝐠𝐲 𝐚𝐧𝐝 𝐚𝐝𝐝𝐫𝐞𝐬𝐬𝐢𝐧𝐠 𝐞𝐦𝐩𝐥𝐨𝐲𝐞𝐞 𝐜𝐨𝐧𝐜𝐞𝐫𝐧𝐬 𝐭𝐨 𝐚𝐯𝐨𝐢𝐝 𝐦𝐨𝐫𝐚𝐥𝐞 𝐝𝐞𝐜𝐥𝐢𝐧𝐞 𝐚𝐧𝐝 𝐢𝐧𝐜𝐫𝐞𝐚𝐬𝐞𝐝 𝐭𝐮𝐫𝐧𝐨𝐯𝐞𝐫.

    Here are some facts that caught my attention:
    ✅ In the healthcare sector, generative AI could save doctors three hours a day by 2030, enabling them to serve an additional 500 million patients annually.
    ✅ AI could democratize access to mental health support, potentially reaching 400 million new patients globally.
    ➡ Despite its potential, generative AI presents risks, including hallucinations, black-box logic, cyberattacks, and data breaches. Managing these risks requires a dynamic model of test, measure, and learn, with proactive involvement from business leaders, regulators, and consumers.

    𝐀𝐧𝐝 𝐰𝐡𝐚𝐭 𝐚𝐛𝐨𝐮𝐭 𝐜𝐫𝐞𝐚𝐭𝐢𝐯𝐢𝐭𝐲? The report highlights a significant potential for generative AI to enhance creativity. By automating routine and monotonous tasks, AI frees up time for workers to engage in more thoughtful and creative aspects of their jobs. This new productivity paradigm could redefine the value of work, emphasizing innovation and collaboration between humans and AI.

    ➡ However, there are concerns about originality and authenticity, as AI-generated content may blur the lines between human and machine creativity. As we stand at this pivotal juncture, HOW are we prepared to navigate the risks and rewards of generative AI? Or maybe it's a matter of WHEN. Let me know what data points in the report caught your attention and how you think they might evolve. ⬇

  • View profile for Arnaud Lucas

    CTO/VP Engineering | Lead High-Growth, Tech 1st, Customer-Focused Organizations | AI/GenAI/ML/Cloud | B2B/B2C | Drive Product & Revenue Growth | Scale Innovative, High Performing Teams | ex-TripAdvisor, ex-Wayfair

    5,550 followers

    When my 7th-grade daughter asked for help with a math problem, neither her teacher’s app, my own calculations, nor ChatGPT could align on the “right” answer. Surprisingly, the Generative AI (GenAI) confidently produced incorrect results—prompting her friend to defend them as truth. This encounter vividly illustrated a challenge in GenAI: 𝗵𝗮𝗹𝗹𝘂𝗰𝗶𝗻𝗮𝘁𝗶𝗼𝗻𝘀.

    GenAI, with its power to create text, images, and code, is reshaping industries. Yet, as we move from experimentation to real-world deployment, its tendency to hallucinate—producing false, irrelevant, or misleading outputs—poses real risks to trust, accuracy, and operational integrity.

    In my latest article, 𝗛𝗮𝗹𝗹𝘂𝗰𝗶𝗻𝗮𝘁𝗶𝗻𝗴 𝗠𝗮𝗰𝗵𝗶𝗻𝗲𝘀: 𝗨𝗻𝗱𝗲𝗿𝘀𝘁𝗮𝗻𝗱𝗶𝗻𝗴 𝗚𝗲𝗻𝗔𝗜 𝗘𝗿𝗿𝗼𝗿𝘀, I explore the types of hallucinations with 𝚛̲𝚎̲𝚊̲𝚕̲ ̲𝚎̲𝚡̲𝚊̲𝚖̲𝚙̲𝚕̲𝚎̲𝚜̲ and their implications: they are more than glitches; they are built into the design of GenAI systems.

    𝘞𝘩𝘢𝘵 𝘦𝘹𝘢𝘮𝘱𝘭𝘦𝘴 𝘰𝘧 𝘩𝘢𝘭𝘭𝘶𝘤𝘪𝘯𝘢𝘵𝘪𝘰𝘯𝘴 𝘢𝘳𝘦 𝘺𝘰𝘶 𝘦𝘯𝘤𝘰𝘶𝘯𝘵𝘦𝘳𝘪𝘯𝘨 𝘸𝘩𝘦𝘯 𝘶𝘴𝘪𝘯𝘨 𝘎𝘦𝘯𝘈𝘐? 𝘓𝘦𝘵’𝘴 𝘥𝘪𝘴𝘤𝘶𝘴𝘴 𝘪𝘯 𝘵𝘩𝘦 𝘤𝘰𝘮𝘮𝘦𝘯𝘵𝘴!

    #GenerativeAI #GenAI #TechLeadership #Innovation #ArtificialIntelligence #AIInnovation #AITrends #FutureOfWork #AIAdoption #AIEthics #TechStrategy #InnovationLeadership #AIInBusiness
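One practical response to the math-homework anecdote above is to never accept a model's arithmetic at face value: re-derive the answer deterministically and compare. A minimal sketch; `check_claim` and its plain-arithmetic input format are hypothetical, for illustration only:

```python
# A minimal guard against arithmetic hallucinations: re-derive the answer
# deterministically instead of trusting the model's stated result.
# `check_claim` and its input format are hypothetical, for illustration only.

def check_claim(expression: str, claimed: float, tol: float = 1e-9) -> bool:
    """Evaluate a pure-arithmetic expression and compare it to the claimed value."""
    allowed = set("0123456789+-*/(). ")
    if not set(expression) <= allowed:
        # Reject anything beyond plain arithmetic (no names, no calls).
        raise ValueError("only plain arithmetic is supported")
    actual = eval(expression)  # acceptable here: input restricted to digits/operators
    return abs(actual - claimed) <= tol

# A model might confidently claim that 3 * (4 + 5) = 21; the check catches it.
print(check_claim("3 * (4 + 5)", 27))  # True: correct answer
print(check_claim("3 * (4 + 5)", 21))  # False: hallucinated answer
```

The broader pattern, checking generated output against an independent, deterministic source of truth wherever one exists, applies equally to generated code (run the tests) and generated facts (retrieve the source document).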
