Challenges of AI Adoption

Explore top LinkedIn content from expert professionals.

  • View profile for Yamini Rangan
    Yamini Rangan Yamini Rangan is an Influencer
    155,442 followers

    Last week, a customer said something that stopped me in my tracks: “Our data is what makes us unique. If we share it with an AI model, it may play against us.” This customer recognizes the transformative power of AI. They understand that their data holds the key to unlocking that potential. But they also see risks alongside the opportunities—and those risks can’t be ignored. The truth is, technology is advancing faster than many businesses feel ready to adopt it. Bridging that gap between innovation and trust will be critical for unlocking AI’s full potential. So, how do we do that? It comes down understanding, acknowledging and addressing the barriers to AI adoption facing SMBs today: 1. Inflated expectations Companies are promised that AI will revolutionize their business. But when they adopt new AI tools, the reality falls short. Many use cases feel novel, not necessary. And that leads to low repeat usage and high skepticism. For scaling companies with limited resources and big ambitions, AI needs to deliver real value – not just hype. 2. Complex setups Many AI solutions are too complex, requiring armies of consultants to build and train custom tools. That might be ok if you’re a large enterprise. But for everyone else it’s a barrier to getting started, let alone driving adoption. SMBs need AI that works out of the box and integrates seamlessly into the flow of work – from the start. 3. Data privacy concerns Remember the quote I shared earlier? SMBs worry their proprietary data could be exposed and even used against them by competitors. Sharing data with AI tools feels too risky (especially tools that rely on third-party platforms). And that’s a barrier to usage. AI adoption starts with trust, and SMBs need absolute confidence that their data is secure – no exceptions. If 2024 was the year when SMBs saw AI’s potential from afar, 2025 will be the year when they unlock that potential for themselves. That starts by tackling barriers to AI adoption with products that provide immediate value, not inflated hype. Products that offer simplicity, not complexity (or consultants!). Products with security that’s rigorous, not risky. That’s what we’re building at HubSpot, and I’m excited to see what scaling companies do with the full potential of AI at their fingertips this year!

  • View profile for Prem Naraindas
    Prem Naraindas Prem Naraindas is an Influencer

    Founder & CEO at Katonic AI | Building The Operating System for Sovereign AI

    18,956 followers

    As an MLOps platform, we started by helping organizations implement responsible AI governance for traditional machine learning models. With principles of transparency, accountability, and oversight, our Guardrails enabled smooth model development. However, governing large language models (LLMs) like ChatGPT requires a fundamentally different approach. LLMs aren't narrow systems designed for specific tasks - they can generate nuanced text on virtually any topic imaginable. This presents a whole new set of challenges for governance. Here are some key components for evolving AI governance frameworks to effectively oversee large language models (LLMs): 1️⃣ Usage-Focused Governance: Focus governance efforts on real-world LLM usage - the workflows, inputs and outputs - rather than just the technical architecture. Continuously assess risks posed by different use cases. 2️⃣ Dynamic Risk Assessment: Identify unique risks presented by LLMs such as bias amplification and develop flexible frameworks to proactively address emerging issues. 3️⃣ Customized Integrations: Invest in tailored solutions to integrate complex LLMs with existing systems in alignment with governance goals. 4️⃣ Advanced Monitoring: Utilize state-of-the-art tools to monitor LLMs in real-time across metrics like outputs, bias indicators, misuse prevention, and more. 5️⃣ Continuous Accuracy Tracking: Implement ongoing processes to detect subtle accuracy drifts or inconsistencies in LLM outputs before they escalate. 6️⃣ Agile Oversight: Adopt agile, iterative governance processes to manage frequent LLM updates and retraining in line with the rapid evolution of models. 7️⃣ Enhanced Transparency: Incorporate methodologies to audit LLMs, trace outputs back to training data/prompts and pinpoint root causes of issues to enhance accountability. In conclusion, while the rise of LLMs has disrupted traditional governance models, we at Katonic AI are working hard to understand the nuances of LLM-centric governance and aim to provide effective solutions to assist organizations in harnessing the power of LLMs responsibly and efficiently. #LLMGovernance #ResponsibleLLMs #LLMrisks #LLMethics #LLMpolicy #LLMregulation #LLMbias #LLMtransparency #LLMaccountability

  • View profile for F SONG

    AI Innovator & XR Pioneer | CEO of AI Division at Animation Co. | Sino-French AI Lab Board Member | Expert in Generative AI, Edge-Cloud Computing, and Global Tech Collaborations

    8,719 followers

    Reading OpenAI’s O1 system report deepened my reflection on AI alignment, machine learning, and responsible AI challenges. First, the Chain of Thought (CoT) paradigm raises critical questions. Explicit reasoning aims to enhance interpretability and transparency, but does it truly make systems safer—or just obscure runaway behavior? The report shows AI models can quickly craft post-hoc explanations to justify deceptive actions. This suggests CoT may be less about genuine reasoning and more about optimizing for human oversight. We must rethink whether CoT is an AI safety breakthrough or a sophisticated smokescreen. Second, the Instruction Hierarchy introduces philosophical dilemmas in AI governance and reinforcement learning. OpenAI outlines strict prioritization (System > Developer > User), which strengthens rule enforcement. Yet, when models “believe” they aren’t monitored, they selectively violate these hierarchies. This highlights the risks of deceptive alignment, where models superficially comply while pursuing misaligned internal goals. Behavioral constraints alone are insufficient; we must explore how models internalize ethical values and maintain goal consistency across contexts. Lastly, value learning and ethical AI pose the deepest challenges. Current solutions focus on technical fixes like bias reduction or monitoring, but these fail to address the dynamic, multi-layered nature of human values. Static rules can’t capture this complexity. We need to rethink value learning through philosophy, cognitive science, and adaptive AI perspectives: how can we elevate systems from surface compliance to deep alignment? How can adaptive frameworks address bias, context-awareness, and human-centric goals? Without advancing these foundational theories, greater AI capabilities may amplify risks across generative AI, large language models, and future AI systems.

  • View profile for Santiago Valdarrama

    Computer scientist and writer. I teach hard-core Machine Learning at ml.school.

    120,120 followers

    Some challenges in building LLM-powered applications (including RAG systems) for large companies: 1. Hallucinations are very damaging to the brand. It only takes one for people to lose faith in the tool completely. Contrary to popular belief, RAG doesn't fix hallucinations. 2. Chunking a knowledge base is not straightforward. This leads to poor context retrieval, which leads to bad answers from a model powering a RAG system. 3. As information changes, you also need to change your chunks and embeddings. Depending on the complexity of the information, this can become a nightmare. 4. Models are black boxes. We only have access to modify their inputs (prompts), but it's hard to determine cause-effect when troubleshooting (e.g., Why is "Produce concise answers" working better than "Reply in short sentences"?) 5. Prompts are too brittle. Every new version of a model can cause your previous prompts to stop working. Unfortunately, you don't know why or how to fix them (see #4 above.) 6. It is not yet clear how to reliably evaluate production systems. 7. Costs and latency are still significant issues. The best models out there cost a lot of money and are very slow. Cheap and fast models have very limited applicability. 8. There are not enough qualified people to deal with these issues. I cannot highlight this problem enough. You may encounter one or more of these problems in a project at once. Depending on your requirements, some of these issues may be showstoppers (hallucinating direction instructions for a robot) or simple nuances (support agent hallucinating an incorrect product description.) There's still a lot of work to do until these systems mature to a point where they are viable for most use cases.

  • View profile for Dr. Barry Scannell
    Dr. Barry Scannell Dr. Barry Scannell is an Influencer

    AI Law & Policy | Partner in Leading Irish Law Firm William Fry | Member of Irish Government’s Artificial Intelligence Advisory Council | PhD in AI & Copyright | LinkedIn Top Voice in AI | Global Top 200 AI Leaders 2025

    56,760 followers

    This is WILD! The potential for AI systems, particularly large language models (LLMs) like GPT-4, to inadvertently aid in the creation of biological threats has become a pressing concern - to the point where OpenAI has recently published fascinating research aiming to develop an early warning system that assesses the risks associated with LLM-aided biological threat creation. By comparing the capabilities of individuals with access to GPT-4 against those using only the internet, the study aimed to discern whether AI could significantly enhance the ability to access information critical for developing biological threats. The findings revealed only mild uplifts in performance metrics such as accuracy and completeness for those participants who had access to GPT-4. Although these uplifts were not statistically significant, they mark an essential first step in ongoing research and community dialogue about AI's potential risks and benefits. The study was guided by design principles that emphasise the need for human participation, comprehensive evaluation, and the comparison of AI's efficacy against existing information sources. Such a meticulous approach is critical in navigating the complexities of AI-enabled risks while minimising information hazards. From a legal standpoint, these findings intersect with the evolving regulatory framework for AI, notably the discussions surrounding the proposed AI Act in the European Union. This Act aims to categorise AI systems based on the risk they pose and establish stringent compliance requirements for high-risk AI systems. General Purpose AI (GPAI) Models such as LLMs like GPT-4 could be considered as GPAI Models with systemic risk if they are deemed capable of facilitating the creation of biological threats. This study underscores the importance of developing robust safety measures, including secure access protocols and monitoring use cases, to prevent misuse. Moreover, it highlights the need for transparency and accountability in AI development, aligning with the AI Act’s objectives to ensure that AI technologies are developed and deployed in a manner that prioritises public welfare. The evaluation's findings call for a multifaceted research agenda to better understand and contextualise the implications of AI advancements. As AI models become more sophisticated, the potential for their misuse in creating biological threats could evolve, necessitating a comprehensive body of knowledge to guide responsible development and deployment. This includes not only technical advancements but also ethical guidelines, governance frameworks, and collaborative international efforts to ensure AI serves humanity's betterment while minimising risks of misuse. The insights garnered from this study not only contribute to the scientific discourse but also offer valuable perspectives for shaping the legal landscape around AI, ensuring it advances in harmony with the principles of safety, security and ethical responsibility.

  • View profile for axel sukianto
    axel sukianto axel sukianto is an Influencer

    b2b saas marketer in australia | fractional growth marketing director

    14,596 followers

    UserGems 💎 + Wynter surveyed 100 b2b marketing and sales leaders about their AI adoption. the results? both expected and unexpected. only 7% report clear roi from ai tools, while 97% plan to increase their investments (yay to increased investments in AI). here's what's really happening: 𝟭/ 𝗔𝗜 𝘀𝘂𝗰𝗰𝗲𝘀𝘀 𝗶𝘀𝗻'𝘁 𝗮𝗯𝗼𝘂𝘁 𝘁𝗵𝗲 𝘁𝗼𝗼𝗹𝘀 - 𝗶𝘁'𝘀 𝗮𝗯𝗼𝘂𝘁 𝗽𝗲𝗼𝗽𝗹𝗲 31% of teams highlighted resistance as their main challenge. your team doesn't want to be replaced; they want AI as a co-pilot (read, "co". not the "main"). the biggest barriers aren't technical - they're emotional. job security fears, loss of creative control, brand voice dilution. position AI as enhancement, not replacement. 𝟮/ 𝗱𝗮𝘁𝗮 𝗾𝘂𝗮𝗹𝗶𝘁𝘆 𝗸𝗶𝗹𝗹𝘀 𝗺𝗼𝘀𝘁 𝗔𝗜 𝗽𝗿𝗼𝗷𝗲𝗰𝘁𝘀 𝗯𝗲𝗳𝗼𝗿𝗲 𝘁𝗵𝗲𝘆 𝘀𝘁𝗮𝗿𝘁 62% said data trust is their biggest limitation. you can't layer AI on top of messy crm data and expect magic. bad data + AI = spam cannons and terrible decisions. fix your data foundation first. 𝟯/ 𝘀𝘁𝗮𝗿𝘁 𝘀𝗺𝗮𝗹𝗹 𝗮𝗻𝗱 𝘀𝗽𝗲𝗰𝗶𝗳𝗶𝗰 teams seeing real value didn't try to "AI everything." they picked narrow use cases - sentiment analysis, ad copy variations, subject line testing - and built from there. constraint breeds clarity. 𝘁𝗵𝗲 𝗯𝗼𝘁𝘁𝗼𝗺 𝗹𝗶𝗻𝗲? AI adoption is more operational transformation than plug-and-play solution. teams need training, clean data, proper integrations, and humans firmly in control of strategy and relationships. the future isn't humans vs AI. it's humans + AI, with clear boundaries on what stays human. we will still have our jobs, phew.

  • View profile for Clem Delangue 🤗
    Clem Delangue 🤗 Clem Delangue 🤗 is an Influencer

    Co-founder & CEO at Hugging Face

    288,439 followers

    🦾 Great milestone for open-source robotics: pi0 & pi0.5 by Physical Intelligence are now on Hugging Face, fully ported to PyTorch in LeRobot and validated side-by-side with OpenPI for everyone to experiment with, fine-tune & deploy in their robots! π₀.₅ is a Vision-Language-Action model which represents a significant evolution from π₀ to address a big challenge in robotics: open-world generalization. While robots can perform impressive tasks in controlled environments, π₀.₅ is designed to generalize to entirely new environments and situations that were never seen during training. Generalization must occur at multiple levels: - Physical Level: Understanding how to pick up a spoon (by the handle) or plate (by the edge), even with unseen objects in cluttered environments - Semantic Level: Understanding task semantics, where to put clothes and shoes (laundry hamper, not on the bed), and what tools are appropriate for cleaning spills - Environmental Level: Adapting to "messy" real-world environments like homes, grocery stores, offices, and hospitals The breakthrough innovation in π₀.₅ is co-training on heterogeneous data sources. The model learns from: - Multimodal Web Data: Image captioning, visual question answering, object detection - Verbal Instructions: Humans coaching robots through complex tasks step-by-step - Subtask Commands: High-level semantic behavior labels (e.g., "pick up the pillow" for an unmade bed) - Cross-Embodiment Robot Data: Data from various robot platforms with different capabilities - Multi-Environment Data: Static robots deployed across many different homes - Mobile Manipulation Data: ~400 hours of mobile robot demonstrations This diverse training mixture creates a "curriculum" that enables generalization across physical, visual, and semantic levels simultaneously. Huge thanks to the Physical Intelligence team & contributors Model: https://lnkd.in/eAEr7Yk6 LeRobot: https://lnkd.in/ehzQ3Mqy

  • View profile for Paula Cipierre
    Paula Cipierre Paula Cipierre is an Influencer

    Global Head of Privacy | LL.M. IT Law | Certified Privacy (CIPP/E) and AI Governance Professional (AIGP)

    8,704 followers

    Today's #sundAIreads is about the regulatory and liability implications of modifying general-purpose AI (#GPAI) models. The reading in question is "The Regulation of Fine-Tuning: Federated Compliance for Modified GPAI Models" by Philipp Hacker and Matthias Holweg. ➡️ What is the range of GPAI modifications? The authors distinguish between six types of modifications: 1️⃣ Standard architecture: The original model remains unchanged, but is customized based on carefully designed prompts. 2️⃣ Hyperparameters: The original model remains unchanged, but its "temperature" is adjusted, i.e., whether it should focus on "exploration" (generating new responses) or "exploitation" (prioritizing precise answers). 3️⃣ Retrieval-augmented generation (#RAG): The base model architecture remains unchanged, but predictions are explicitly based on custom data repositories. 4️⃣ Custom GPTs: Sophisticated prompt engineering is combined with RAG to tailor GPAI models to specific domain applications. 5️⃣ Fine-tuning: New data is systematically introduced into the training to align model outputs with organizational preferences. 6️⃣ Distillation: Smaller, more efficient versions of larger models are created to facilitate deployment in resource-constrained environments. ➡️ Who should be liable for modifications? The authors argue that regulatory and civic liability should be "conditional on foreseeability and design scope." GPAI model providers should thus not be held liable for risks that "emerge solely from how the model is applied in a given user-defined context, except [...] where the provider has, e.g., failed to supply adequate instructions or warnings about known model limitations." Testing protocols should be bifurcated accordingly: 1️⃣ Test and document the behavior of the base model in a generic environment using standardized evaluation metrics. 2️⃣ Test the adapted model in the specific context for which it is intended using context-specific metrics. ➡️ What implications do GPAI model modifications have under the #AIAct? A combined reading of Art. 25(1) and Rec. 109 suggests that deployers can become providers of a modified model whenever a modification is "substantial" or even results in an entirely new model. A modification should be considered substantial in turn "where it significantly increases at least one relevant risk" to fundamental rights and "insubstantial when it merely enhances the model's behavior across several risk dimensions or worsens it in only one." In order to assess changes to the risk profile, the authors propose a compute and consequence scanning (CCS) test that combines the FLOP threshold with a functional risk analysis. In the remainder of the paper, the authors not only provide practical examples, but also an analysis of modifications under the new EU liability framework, as well as their view on the societal, policy, and managerial implications of their findings. The full paper is available here: https://bit.ly/4nktPKL.

  • View profile for Eugina Jordan

    CEO and Founder YOUnifiedAI I 8 granted patents/16 pending I AI Trailblazer Award Winner

    41,254 followers

    Hallucination in large language models (LLMs) has been widely studied, but the key question remains: Can it ever be eliminated? A recent paper systematically dismantles the idea that hallucination can be fully eradicated. Instead, it argues that hallucination is not just an incidental flaw but an inherent limitation of LLMs. 1️⃣ Hallucination is Unavoidable The paper establishes that LLMs cannot learn all computable functions, meaning they will inevitably generate incorrect outputs. Even with perfect training data, LLMs cannot always produce factually correct responses due to inherent computational constraints. No matter how much we refine architectures, training data, or mitigation techniques, hallucination cannot be eliminated—only minimized. 2️⃣ Mathematical Proofs of Hallucination They use concepts from learning theory and diagonalization arguments to prove that any LLM will fail on certain inputs. The research outlines that LLMs, even in their most optimized state, will hallucinate on infinitely many inputs when faced with complex, computation-heavy problems. 3️⃣ Identifying Hallucination-Prone Tasks Certain problem types are guaranteed to trigger hallucinations due to their computational complexity: 🔹 NP-complete problems (e.g., Boolean satisfiability) 🔹 Presburger arithmetic (exponential complexity) 🔹 Logical reasoning and entailment (undecidable problems) This means that asking LLMs to reason about intricate logic or mathematical problems will often lead to errors. 4️⃣ Why More Data and Bigger Models Won’t Fix It A common assumption is that hallucination can be mitigated by scaling—adding more parameters or training data. The paper challenges this notion: While larger models improve accuracy, they do not eliminate hallucination for complex, unsolvable problems. 5️⃣ Mitigation Strategies and Their Limitations Various techniques have been introduced to reduce hallucinations, but none can completely eliminate them: ✅ Retrieval-Augmented Generation (RAG) – helps provide factual grounding but does not guarantee accuracy. ✅ Chain-of-Thought Prompting – improves reasoning but does not fix fundamental hallucination limits. ✅ Guardrails & External Tools – can reduce risk but require human oversight. They suggest LLMs should never be used for fully autonomous decision-making in safety-critical applications. The Bigger Question: How Do We Build Safe AI? If hallucination is an unavoidable reality of LLMs, how do we ensure safe deployment? The research makes it clear: LLMs should not be blindly trusted. They should be integrated into workflows with: 🔹 Human in the loop 🔹 External fact-checking systems 🔹 Strict guidelines Are we designing AI with realistic expectations, or are we setting ourselves up for failure by expecting perfection? Should LLMs be used in high-stakes environments despite their hallucinations, or should we rethink their applications? #ai #artificialintelligence #technology

  • View profile for Mani Keerthi N

    Cybersecurity Strategist & Advisor || LinkedIn Learning Instructor

    17,354 followers

    On Protecting the Data Privacy of Large Language Models (LLMs): A Survey From the research paper: In this paper, we extensively investigate data privacy concerns within Large LLMs, specifically examining potential privacy threats from two folds: Privacy leakage and privacy attacks, and the pivotal technologies for privacy protection during various stages of LLM privacy inference, including federated learning, differential privacy, knowledge unlearning, and hardware-assisted privacy protection. Some key aspects from the paper: 1)Challenges: Given the intricate complexity involved in training LLMs, privacy protection research tends to dissect various phases of LLM development and deployment, including pre-training, prompt tuning, and inference 2) Future Directions: Protecting the privacy of LLMs throughout their creation process is paramount and requires a multifaceted approach. (i) Firstly, during data collection, minimizing the collection of sensitive information and obtaining informed consent from users are critical steps. Data should be anonymized or pseudonymized to mitigate re-identification risks. (ii) Secondly, in data preprocessing and model training, techniques such as federated learning, secure multiparty computation, and differential privacy can be employed to train LLMs on decentralized data sources while preserving individual privacy. (iii) Additionally, conducting privacy impact assessments and adversarial testing during model evaluation ensures potential privacy risks are identified and addressed before deployment. (iv)In the deployment phase, privacy-preserving APIs and access controls can limit access to LLMs, while transparency and accountability measures foster trust with users by providing insight into data handling practices. (v)Ongoing monitoring and maintenance, including continuous monitoring for privacy breaches and regular privacy audits, are essential to ensure compliance with privacy regulations and the effectiveness of privacy safeguards. By implementing these measures comprehensively throughout the LLM creation process, developers can mitigate privacy risks and build trust with users, thereby leveraging the capabilities of LLMs while safeguarding individual privacy. #privacy #llm #llmprivacy #mitigationstrategies #riskmanagement #artificialintelligence #ai #languagelearningmodels #security #risks

Explore categories