80% of enterprise AI projects are draining your budget with zero ROI. And it's not the technology that's failing: it's the hidden costs no one talks about.

McKinsey's 2025 State of AI report reveals a startling truth: 80% of organizations see no tangible ROI impact from their AI investments. While your competitors focus on software licenses and computing costs, five hidden expenses are sabotaging your ROI:

1/ The talent gap:
↳ AI specialists command $175K-$350K annually.
↳ 67% of companies report severe AI talent shortages.
↳ 13% are now hiring AI compliance specialists.
↳ Only 6% have created AI ethics specialist roles.
When your expensive new hire discovers you lack the infrastructure they need to succeed, they will leave within 9 months.

2/ The infrastructure trap:
↳ AI workloads require 5-8x more computing power than projected.
↳ Storage needs can increase 40-60% within 12 months.
↳ Network bandwidth demands can surge unexpectedly.
What's budgeted as a $100K project suddenly demands $500K in infrastructure.

3/ The data preparation nightmare:
↳ Organizations underestimate data prep costs by 30-40%.
↳ 45-70% of AI project time is spent on data cleansing (trust me, I know).
↳ Poor data quality causes 30% of AI project failures, according to Gartner.
Your AI model is only as good as your data. And most enterprise data isn't ready for AI consumption.

4/ The integration problem:
↳ Legacy system integration adds 25-40% to implementation costs.
↳ API development expenses are routinely overlooked.
↳ 64% of companies report significant workflow disruptions.
No AI solution can exist in isolation. You have to integrate it with your existing tech stack, or it will create expensive silos.

5/ The governance burden:
↳ Risk management frameworks cost $50K-$150K to implement.
↳ New AI regulations emerge monthly across global markets.
Without proper governance, your AI becomes a liability, not an asset.

The solution isn't abandoning AI. It's implementing it strategically, with eyes wide open. Here's the 3-step framework we use at Avenir Technology to deliver measurable ROI:

Step 1: Define real success metrics:
↳ Link AI initiatives directly to business KPIs.
↳ Build comprehensive cost models that include the hidden expenses (a rough sketch follows this post).
↳ Establish clear go/no-go decision points.

Step 2: Build the foundation first:
↳ Assess and upgrade infrastructure before deployment.
↳ Create data readiness scorecards for each AI use case.
↳ Invest in governance frameworks from day one.

Step 3: Scale intelligently:
↳ Start with high-ROI, low-complexity use cases.
↳ Implement in phases, with reassessment at each stage.

Organizations following this framework see 3.2x higher ROI.

Ready to implement AI that produces real ROI? Let's talk about how Avenir Technology can help.

What AI implementation challenge are you facing? Share below.

♻️ Share this with someone who needs help implementing.
➕ Follow me, Ashley Nicholson, for more tech insights.
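To make Step 1 concrete, here is a minimal, hypothetical cost-model sketch in Python. The multipliers echo the ranges cited in the post; every line item, default value, and function name is an illustrative assumption, not Avenir Technology's actual model.

```python
# Illustrative sketch of a "comprehensive cost model" (Step 1 above).
# Multipliers mirror the ranges cited in the post; all figures are assumptions.

def total_ai_project_cost(
    software_budget: float,
    infra_multiplier: float = 5.0,          # AI workloads need 5-8x projected compute
    data_prep_underestimate: float = 0.35,  # data prep underestimated by 30-40%
    integration_overhead: float = 0.30,     # legacy integration adds 25-40%
    governance_cost: float = 100_000.0,     # risk frameworks run $50K-$150K
    specialist_salary: float = 250_000.0,   # AI specialists: $175K-$350K/yr
) -> float:
    """Roll the hidden costs into one number before the go/no-go decision."""
    infra = software_budget * infra_multiplier
    data_prep = software_budget * data_prep_underestimate
    integration = software_budget * integration_overhead
    return (software_budget + infra + data_prep + integration
            + governance_cost + specialist_salary)

# A "$100K project" under these assumptions:
print(f"${total_ai_project_cost(100_000):,.0f}")  # $1,015,000 all-in
```

Under these (assumed) defaults, the post's "$100K project that demands $500K in infrastructure" is actually a seven-figure commitment once talent and governance are counted.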
Challenges When Adopting New AI Frameworks
Explore top LinkedIn content from expert professionals.
Summary
Adopting new AI frameworks can present unique challenges, including technical, operational, and organizational hurdles. These obstacles often stem from the complexity of AI systems, hidden costs, and the evolving nature of governance and infrastructure needs.
- Understand the hidden costs: Account for expenses such as data preparation, system integration, and governance frameworks, which are often underestimated during planning.
- Prioritize robust infrastructure: Ensure that your organization’s infrastructure can support the high computational demands of AI workloads, including storage, bandwidth, and model deployment requirements.
- Establish governance early: Create clear AI governance frameworks to address regulatory compliance, risk management, and ethical concerns from the start.
This new white paper, "Steps Toward AI Governance," summarizes insights from the 2024 EqualAI Summit, cosponsored by RAND in Washington, D.C., in July 2024, where senior executives discussed AI development and deployment, challenges in AI governance, and solutions to these issues across government and industry sectors. Link: https://lnkd.in/giDiaCA3

The white paper outlines several technical and organizational challenges that impact effective AI governance.

Technical Challenges:
1) Evaluation of External Models: Difficulties arise in assessing externally sourced AI models due to unclear testing standards and limited development transparency, in contrast to in-house models, which can be customized and fine-tuned to fit specific organizational needs.
2) High-Risk Use Cases: Prioritizing the evaluation of high-risk AI use cases is challenging due to the diverse and unpredictable outputs of AI, particularly generative AI. Traditional evaluation metrics may not capture all vulnerabilities, suggesting a need for flexible approaches like red teaming.

Organizational Challenges:
1) Misaligned Incentives: Organizational goals often conflict with the resource-intensive demands of effective AI governance, particularly when it is not legally required. A lack of incentives for employees to raise concerns, and the absence of whistleblower protections, can lead to risks being overlooked.
2) Company Culture and Leadership: Establishing a culture that values AI governance is crucial but challenging. Effective governance requires authority and buy-in from leadership, including the board and C-suite executives.
3) Employee Buy-In: Employee resistance, driven by job-security concerns, complicates AI adoption, highlighting the need for targeted training.
4) Vendor Relations: Gaps in technical knowledge between companies and vendors create challenges in ensuring appropriate AI model evaluation and transparency.

Recommendations for Companies:
1) Catalog AI Use Cases: Maintain a centralized catalog of AI tools and applications, updated regularly to track usage and document specifications for risk assessment (a sketch of one possible catalog structure follows this post).
2) Standardize Vendor Questions: Develop a standardized questionnaire for vendors so that evaluations rest on consistent metrics, promoting better integration and governance in vendor relationships.
3) Create an AI Information Tool: Implement a chatbot or similar tool that gives employees clear, accessible answers to AI governance questions, drawing on diverse informational sources.
4) Foster Multistakeholder Engagement: Engage both internal stakeholders, such as C-suite executives, and external groups, including end users and marginalized communities.
5) Leverage Existing Processes: Use established organizational processes, such as crisis management and technical risk management, to integrate AI governance more efficiently into current frameworks.
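As an illustration of recommendation 1 (plus the vendor-questionnaire flag from recommendation 2), here is a minimal sketch of what a centralized use-case catalog could look like in code. The schema and field names are assumptions for illustration, not anything the white paper prescribes.

```python
# Minimal sketch of a centralized AI use-case catalog (recommendation 1).
# Field names are illustrative assumptions, not the white paper's schema.
from dataclasses import dataclass
from datetime import date

@dataclass
class AIUseCase:
    name: str
    owner: str
    model_source: str           # "in-house" or a vendor name
    risk_tier: str              # e.g. "high" for generative, customer-facing uses
    data_categories: list[str]  # what the model sees, for risk assessment
    last_reviewed: date
    vendor_questionnaire_done: bool = False  # ties in recommendation 2

catalog: list[AIUseCase] = [
    AIUseCase(
        name="support-ticket-summarizer",
        owner="cx-platform",
        model_source="vendor-llm",
        risk_tier="high",
        data_categories=["customer PII", "ticket text"],
        last_reviewed=date(2024, 7, 1),
    ),
]

# Surface high-risk entries that lack a completed vendor evaluation.
overdue = [u.name for u in catalog
           if u.risk_tier == "high" and not u.vendor_questionnaire_done]
print(overdue)  # -> ['support-ticket-summarizer']
```

Even a structure this simple lets a governance team query which high-risk uses are missing vendor evaluations, which is the point of keeping the catalog centralized and current.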
-
Working with AI sometimes feels like I've traveled back in time 20 years.

Every modern SaaS application is built on hundreds or thousands of packages. This has been a huge boon to development. Almost no one needs to write their own network library or database driver. We get to use high-level frameworks (React, Django, Poem, etc.) that deeply abstract away common problems. This all works because the abstractions are pretty mature. Like manufacturing before it, embracing "interchangeable parts" for software has enabled us to move quickly.

20 years ago, this wasn't the case. We hadn't figured out the patterns yet. The abstractions were immature, and things weren't built to work well together. Integrating stuff was hard. It required lots of hand-crafting to stitch individual components together, and we had to create many low-level components from scratch.

This is where we are with AI today. Abstractions are incredibly leaky. LLM performance varies dramatically even across versions of a model, let alone different models. Different jobs require completely different strategies. You might need to split tasks into easier subtasks (multiple LLM calls), apply different forms of RAG and context management, or do custom format conversion. Plain text as input and output is less than ideal for connecting components. Definitely not "interchangeable parts" yet.

Despite this, you see lots of AI application frameworks that claim to abstract away things like memory and RAG, and to let you easily swap models. They act like we are 20 years into building with LLMs, not just 2. It's a fantasy. Some frameworks have gotten traction on GitHub because they make it easy to build a quick prototype. But they fall apart when you try to build a real app with them, because those abstractions simply don't work. You can't "just" swap out a model - you almost certainly will have to do significant prompt iteration and possibly change your approach altogether (the sketch after this post shows how even a trivial swap leaks). How you approach RAG or memory is almost certainly going to be very application-specific. And so on.

You should use a framework because it makes hard things easy - like React does with state management, or a decent ORM does at preventing SQL injection. But most AI frameworks try to paper over the complexity. It doesn't work. It's still very hard to build a compelling end-user AI experience in 2025. Don't let the hype cycle convince you otherwise.
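To illustrate the leakiness the post describes, here is a hypothetical sketch: the same extraction task against two models, where both the prompt and the output parsing diverge. The model names, the prompts, and the `call_llm` callable are all assumptions, not any framework's API.

```python
# Sketch of why "just swap the model" leaks: in practice each model tends to
# need its own prompt and its own output parsing for the SAME task.
import json

PROMPTS = {
    # Hypothetical per-model prompt variants for one invoice-extraction task.
    # (Doubled braces escape the JSON example from str.format.)
    "model-a": 'Return only JSON in the form {{"total": <number>}}.\n\n{doc}',
    "model-b": (
        "You are a careful accountant. Think step by step, then on the final\n"
        "line output only the invoice total as a plain number.\n\n{doc}"
    ),
}

def extract_total(model: str, doc: str, call_llm) -> float:
    """Both the prompt AND the parsing differ per model: the abstraction leaks."""
    raw = call_llm(model, PROMPTS[model].format(doc=doc))
    if model == "model-a":
        return float(json.loads(raw)["total"])
    return float(raw.strip().splitlines()[-1])  # model-b: number on last line
```

Swapping "model-a" for "model-b" silently changes the prompting strategy, the output format, and the failure modes, which is exactly the work a one-line `model=` parameter in a framework pretends does not exist.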
-
AI in real-world applications is often just a small black box; the infrastructure surrounding it is vast and complex. As a product builder, you will spend a disproportionate amount of time dealing with architecture and engineering challenges. There is very little actual AI work in large-scale AI applications. Leading a team of outstanding engineers who are building an LLM product used by multiple enterprise customers, here are some lessons learned:

Architecture: Optimizing a complex architecture of dozens of services, where components are entangled and boundaries are blurred, is hard. Hire outstanding software engineers with solid CS fundamentals and train them on generative AI. The other way around rarely works.

UX Design: Even a perfect AI agent can look less than perfect due to a poorly designed UX. Not all use cases are created equal. Understand what the user journey will look like and what users are trying to achieve. Not every application needs to look like ChatGPT.

Cost Management: At a few cents per 1,000 tokens, LLMs may seem deceptively cheap. A single user query may involve dozens of inference calls, resulting in big cloud bills. Developing a solid understanding of LLM pricing, of the capabilities appropriate for your use case, and of the overall application architecture can help keep costs down.

Performance: Users are going to be impatient with your LLM application. Choosing the right number and size of chunks, a fine-tuned app architecture, and the appropriate model can reduce inference latency. Semantic caching of responses and streaming endpoints can help create a perception of low latency (a minimal caching sketch follows this post).

Data Governance: Data is still king. All the data problems from classic ML systems still hold. Failing to keep data secure and high quality causes all sorts of problems. Ensure proper access and quality controls, scrub PII well, and educate yourself on all applicable regulations.

AI Governance: LLMs can hallucinate and prompts can be hijacked. This can be a major challenge for an enterprise, especially in a regulated industry. Guardrails are critical for any customer-facing application.

Prompt Engineering: You will frequently find your LLMs providing answers that are incomplete, incorrect, or downright offensive. Spend a lot of time on prompt engineering and review prompts often. This is one of the biggest ROI areas.

User Feedback and Analytics: Users will tell you how they feel about the product through implicit (heatmaps, engagement) and explicit (upvotes, comments) feedback. Set up monitoring, logging, tracing, and analytics from the beginning.

Building enterprise AI products is more product engineering and problem solving than it is AI. Hire for engineering and problem-solving skills. This paper is a must-read for all AI/ML engineers building applications at scale.

#technicaldebt #ai #ml
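As an illustration of the semantic caching mentioned under Performance, here is a minimal sketch: reuse a previous response when a new query lands close enough in embedding space. The `embed` function and the 0.92 threshold are assumptions; a real system would typically use a vector store rather than a linear scan.

```python
# Minimal sketch of semantic caching: skip the inference call when a new
# query is close enough (in embedding space) to one already answered.
import math

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

class SemanticCache:
    def __init__(self, embed, threshold: float = 0.92):
        self.embed = embed          # any text -> vector function (assumption)
        self.threshold = threshold  # similarity cutoff (assumption)
        self.entries: list[tuple[list[float], str]] = []

    def get(self, query: str) -> str | None:
        qv = self.embed(query)
        best = max(self.entries, key=lambda e: cosine(qv, e[0]), default=None)
        if best and cosine(qv, best[0]) >= self.threshold:
            return best[1]          # cache hit: no LLM call, no token cost
        return None

    def put(self, query: str, response: str) -> None:
        self.entries.append((self.embed(query), response))
```

The same mechanism serves both points in the post: a hit removes an inference call (cost) and returns instantly (perceived latency), at the price of occasionally serving a stale or slightly mismatched answer, so the threshold needs tuning per use case.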
-
Most people look at 𝗔𝗜 𝗮𝗴𝗲𝗻𝘁𝘀 and assume they are just 𝘁𝗼𝗼𝗹𝘀 𝘄𝗶𝗿𝗲𝗱 𝘁𝗼 𝗮 𝗹𝗮𝗻𝗴𝘂𝗮𝗴𝗲 𝗺𝗼𝗱𝗲𝗹. It's easy to build a demo - maybe even a POC - that way. It's much harder to build something that lasts and scales.

The real work begins beyond that, below the surface, where systems need to coordinate, adapt, and operate safely in production environments. That's where most of the friction is, and the biggest hurdles: 𝗼𝗿𝗰𝗵𝗲𝘀𝘁𝗿𝗮𝘁𝗶𝗼𝗻, 𝗴𝗼𝘃𝗲𝗿𝗻𝗮𝗻𝗰𝗲, 𝗶𝗻𝗳𝗿𝗮𝘀𝘁𝗿𝘂𝗰𝘁𝘂𝗿𝗲, and 𝗹𝗼𝗻𝗴-𝘁𝗲𝗿𝗺 𝗺𝗮𝗶𝗻𝘁𝗮𝗶𝗻𝗮𝗯𝗶𝗹𝗶𝘁𝘆. And it's where 𝘮𝘰𝘴𝘵 𝘴𝘰𝘭𝘶𝘵𝘪𝘰𝘯𝘴 𝘧𝘢𝘭𝘭 𝘴𝘩𝘰𝘳𝘵 to this day.

And it is not just a technical challenge. It is about designing systems that let people - with or without a deep AI background - turn their ideas into agentic solutions without needing to assemble the whole stack themselves. To do that well, we believe five areas matter most:

• Technology – Agents must evolve, stay efficient, and meet enterprise requirements. That requires deep infrastructure, not surface-level wrappers.
• Tooling – Teams need tools that abstract complexity, reduce time-to-value, and work across levels of technical fluency.
• Governance – Trust, explainability, and compliance should be defaults, not afterthoughts.
• Infrastructure – Control matters. Systems should run where teams need them to, not just where a vendor dictates.
• Enablement – Adoption only happens when people feel confident building. Training, documentation, and real support are non-negotiable.

These are the areas we've chosen to invest in. At aiXplain, instead of chasing trends, we decided to build 𝘁𝗵𝗲 𝗳𝗼𝘂𝗻𝗱𝗮𝘁𝗶𝗼𝗻𝘀 𝗻𝗲𝗲𝗱𝗲𝗱 𝘁𝗼 𝘁𝗮𝗸𝗲 𝗔𝗜 𝗯𝗲𝘆𝗼𝗻𝗱 𝗱𝗲𝗺𝗼𝘀 𝗮𝗻𝗱 𝗶𝗻𝘁𝗼 𝗿𝗲𝗮𝗹 𝗱𝗲𝗽𝗹𝗼𝘆𝗺𝗲𝗻𝘁.
-
I've worked with a lot of finserv engineering leaders trying to figure out where AI fits in their SDLC. Almost every conversation starts the same way:

"Our devs are experimenting with Copilot and ChatGPT."
"We want to adopt AI, but we don't know where to start."

Neither did we at first. AI is everywhere. Exec teams are pushing for it. The potential upside is huge. But here's the problem: AI doesn't work the same way with existing software as it does vibe coding a new app. And what works for an individual doesn't work for teams of engineers. More than that, without organization-wide adoption, there are no speed gains.

But here's what I've learned: the blocker isn't the tools or even the team members (which it can often feel like). It's the lack of a system. The lack of standardization across the SDLC. The lack of process rigor. AI is being used the way Excel macros once were: individually, inconsistently, and invisibly.

The result? Here's what I keep seeing behind the scenes:

✅ Teams lack organizational playbooks – engineers using AI in isolation, PMs trying to vibe prompts, no shared standards; many on the team aren't using it at all.
✅ Tooling and process mismatch – fragmented tools, different use cases, and inconsistent results.
✅ Context debt – no structured documentation of the system, no vector stores, resulting in low-quality AI outputs across codebases.
✅ Lack of AI trust – concerns about AI missing critical business logic, introducing bugs, or failing quality checks stunt broader adoption.
✅ Time wasted – without a clear roadmap to org-wide adoption, individual AI training just diverts resources from the roadmap with little to show for it.
✅ Leadership pressure – leaders see the potential but don't have a way to scale wins beyond individual contributors.

So, how do you change? The orgs that are doing this right follow a clear path:

📌 Stage 1: Experimental. Engineers try Copilot or GPT on side projects. It's fun, but isolated. No measurement. No reusability.
📌 Stage 2: Standardization. Prompt libraries emerge. The org starts agreeing on how AI supports code, tests, and PRs. Manual usage becomes repeatable.
📌 Stage 3: Systemization. Individual agents are embedded in workflows, e.g. a pull request bot that uses your standardized AI prompt (a sketch follows this post). Systems talk to systems. Humans supervise.
📌 Stage 4: Autonomous Coordination. Agents hand off to each other. One agent's output is another's input. Humans handle exceptions and the parts of the SDLC that AI can't automate.

The hard part? Getting from Stage 2 to Stage 3. It takes enforced consistency: one process, defined org-wide. Without that, there are limited efficiency gains, and AI can't move from tool to teammate.

I've come to believe this: if your org hasn't defined how agents participate in your SDLC, you haven't adopted AI. You've adopted experimentation.

Thoughts? What's working or not? 👇

#AI #AgenticEngineering #DevEx #SoftwareDevelopment #EngineeringLeadership
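One hedged sketch of the Stage 2 to Stage 3 jump described above: a prompt template kept in a shared library, reused by a PR bot in CI instead of being ad-libbed by each engineer. The template text and the `call_llm` client are illustrative assumptions, not a specific tool's API.

```python
# Sketch of Stage 2 -> Stage 3: one org-wide prompt template (Stage 2),
# then a PR bot that reuses it inside the pipeline (Stage 3).
# The template text and `call_llm` client are illustrative assumptions.

REVIEW_PROMPT = """\
You are our code reviewer. Apply the team's standards:
1. Flag missing tests for changed business logic.
2. Flag secrets, PII handling, and unvalidated inputs.
3. Suggest fixes as unified diffs; do not rewrite unrelated code.

Diff to review:
{diff}
"""

def review_pull_request(diff: str, call_llm) -> str:
    """Stage 3: the same standardized prompt runs in CI on every PR."""
    return call_llm(REVIEW_PROMPT.format(diff=diff))

# In Stage 2, REVIEW_PROMPT lives in a shared prompt library that engineers
# invoke manually; Stage 3 just moves this call into the pipeline, so every
# PR gets the same review and the usage is measurable instead of invisible.
```

The template itself is the standardization artifact; wiring it into CI is what turns individual, invisible usage into a system that humans supervise.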
-
AI Adoption: Reality Bites

After speaking with customers across various industries yesterday, one thing became crystal clear: there's a significant gap between AI hype and implementation reality. While pundits on X buzz about autonomous agents and sweeping automation, the business leaders I spoke with are struggling with fundamentals: getting legal approval, navigating procurement processes, and addressing privacy, security, and governance concerns.

What's more revealing is the counterintuitive truth emerging: organizations with the most robust digital transformation experience often face greater AI adoption friction. Their established governance structures, originally designed to protect, now create labyrinthine approval processes that nimbler competitors can sidestep.

For product leaders, the opportunity lies not in selling technical capability, but in designing for organizational adoption pathways. Consider:

- Prioritize modular implementations that can pass through governance checkpoints incrementally rather than requiring all-or-nothing approvals.
- Create "governance-as-code" frameworks that embed compliance requirements directly into product architecture (see the sketch after this post).
- Develop value metrics that measure time-to-implementation, not just end-state ROI.
- Lean into understandability and transparency as part of your value proposition.
- Build solutions that address the career risk stakeholders face when championing AI initiatives.

For business leaders, it's critical to internalize that the most successful AI implementations will come not from the organizations with the most advanced technology, but from those who reinvent the adoption processes themselves. Those who recognize that AI requires governance innovation, not just technical innovation, will unlock sustainable value while others remain trapped in endless proof-of-concept cycles.

What unexpected adoption hurdles are you encountering in your organization? I'd love to hear perspectives beyond the usual technical challenges.
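A small sketch of the "governance-as-code" idea from the list above: compliance requirements expressed as executable checks that gate deployment incrementally in CI, rather than in one all-or-nothing review. The specific rules and manifest fields here are illustrative assumptions.

```python
# Sketch of "governance-as-code": compliance requirements written as
# executable checks that gate deployment, so approval happens incrementally
# in the pipeline rather than in one all-or-nothing review.
# The rules and manifest fields are illustrative assumptions.

RULES = [
    ("model_card_present", lambda m: bool(m.get("model_card"))),
    ("pii_scrubbing_on",   lambda m: m.get("pii_scrubbing") is True),
    ("human_review_path",  lambda m: m.get("escalation_contact") is not None),
]

def governance_gate(manifest: dict) -> list[str]:
    """Return the rules this deployment fails; an empty list means it may ship."""
    return [name for name, check in RULES if not check(manifest)]

print(governance_gate({"model_card": "docs/model_card.md", "pii_scrubbing": True}))
# -> ['human_review_path']  (no escalation contact yet, so the deploy is blocked)
```

Because each rule is checked independently, a team can clear checkpoints one at a time and see exactly which requirement is blocking them, which is the incremental pathway the post argues for.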
-
Today's update on #GenAI adoption and enterprise #scaling raises some critical issues around infrastructure and cybersecurity that often get ignored amid the shiny glitz of ever-evolving foundation models and their fancy valuations.

---

AI Enterprise Scaling: The Infrastructure Reality Check

Beneath the hype, the foundational pillars of enterprise AI (infrastructure, strategy, and security) are cracking under the strain of real-world deployment, preventing organizations from capturing the promised value.

Update 1: The Infrastructure Preparedness Crisis
Challenge: Critical infrastructure gaps are leaving enterprises unprepared for AI workloads.
Details: A Cisco analysis reveals that only 13% of enterprises are fully prepared to support AI at scale. The issue is not a lack of ambition but a fundamental architectural mismatch: most data centers were not designed for the GPU-dense, data-hungry pipelines that demand high-throughput, low-latency traffic across heterogeneous stacks.
Source: https://lnkd.in/gCUDVtV4

Update 2: The Strategic ROI Disconnect
Challenge: A massive perception gap on AI strategy is undermining ROI.
Details: Research shows that while 73% of executives believe their AI approach is strategic, only 47% of the workforce agrees. This disconnect suggests enterprises are misapplying AI to "old" problems instead of targeting the "dark" business processes, the historically invisible, manual workflows where automation can unlock true value.
Source: https://lnkd.in/gUbTuJtR

Update 3: Security Governance in the Dark
Challenge: Pervasive visibility and control gaps are exposing firms to major AI-driven risks.
Details: A staggering 90% of enterprises are unprepared for AI-driven cyberattacks. This is compounded by the fact that only 21% of organizations have visibility into all the AI tools being used, and 77% lack AI-specific security practices to protect their models, pipelines, and data from compromise.
Source: https://lnkd.in/graipKgU

Key Takeaway: The path to scalable AI is not paved with better models, but with foundational redesigns of infrastructure, strategy, and security to match the complex operational reality of the enterprise.

---

In my upcoming book on the Cognitive Chasm, I build on my research by addressing the "how" of GenAI adoption: how enterprises can systematically adopt GenAI and avoid falling into the #cognitivechasm that seems rampant in the industry, where a "95% failure rate" seems to have been accepted as the de facto constant of cognitive adoption. As I often joke in my talks, most industries, not just companies, would get outlawed if they had even a 20% failure rate. Think of an airline that says 20% of its flights don't land or reach some other destination... would you ever travel with them?
-
A thought-provoking conversation between Aravind Srinivas (Founder, Perplexity) and Ali Ghodsi (CEO, Databricks) during a recent Perplexity Business Fellowship session, offering deep insights into the practical realities and challenges of AI adoption in enterprises.

TL;DR:
1. Reliability is crucial but challenging: Enterprises demand consistent, predictable results. Despite impressive model advancements, ensuring reliable outcomes at scale remains a significant hurdle.
2. Semantic ambiguity in enterprise data: Ali pointed out that understanding enterprise data, often riddled with ambiguous terms (does "C" mean Calcutta or California?), is a substantial ongoing challenge, necessitating extensive human oversight to resolve.
3. Synthetic data & customized benchmarks: Given limited proprietary data, synthetic data generation and custom benchmarks are key to improving AI reliability. Yet creating those benchmarks accurately remains complex and resource-intensive.
4. Strategic AI limitations: Ali expressed skepticism about AI's current ability to automate high-level strategic tasks like CEO decision-making, given their complexity and the nuanced human judgment required.
5. Incremental productivity, not fundamental transformation: AI significantly enhances productivity in straightforward tasks (HR, sales, finance) but struggles to transform complex, collaborative activities such as aligning product strategies and managing roadmap priorities.
6. Model fatigue and inference-time compute: Despite rapid model improvements, Ali highlighted the phenomenon of "model fatigue," where incremental model updates are perceived as less impactful despite real underlying progress.
7. Human-centric coordination still essential: Even at Databricks, AI hasn't yet addressed core challenges around human collaboration, politics, and organizational alignment. Human intuition, consensus-building, and negotiation remain central.

Overall, the key enterprise challenges Ali highlighted are:
- Quality and reliability of data.
- Evals: yardsticks for determining whether the system is working well. We still need better evals.
- Extremely high-quality data for a specific domain and use case is hard to come by; synthetic data plus evals are key.

The path forward with AI is filled with potential, but it's clearly still a journey with many practical challenges to navigate.
-
In the past few months, while experimenting with it myself on the side, I've worked with a variety of companies to assess their readiness for implementing #GenerativeAI. The pattern is striking: people are drawn to the allure of Gen AI for its elegant, rapid answers, but then often stumble over age-old hurdles during implementation. The importance of robust #datamanagement is evident. Foundational capabilities are not merely helpful but essential, and neglecting them can endanger a company's reputation and business sustainability when training Gen AI models. Data still matters.

⚠️ Gen AI systems are generally advanced and complex, requiring large, diverse, high-quality datasets to function optimally. One of the foremost challenges is therefore maintaining data quality. The old adage "garbage in, garbage out" holds true in the context of #GenAI. Just as in any other AI use case or business process, the quality of the data fed into the system directly impacts the quality of the output (a minimal quality-gate sketch follows this post).

💾 Another significant challenge is managing the sheer volume of data needed, especially for those who wish to train their own Gen AI models. While off-the-shelf models may require less data, custom training demands vast amounts of data and substantial processing power. This has a direct impact on the infrastructure and energy required: generating a single image can consume as much energy as fully charging a mobile phone.

🔐 Privacy and security concerns are paramount, as many Gen AI applications rely on sensitive #data about individuals or companies. Consider personalizing communications, which cannot be done effectively without personal details about the intended recipient. In Gen AI, the link between input data and outcomes is less explicit than in other predictive models, particularly those with clearly defined dependent variables. This lack of transparency can make it challenging to understand how and why specific outputs are generated, complicating efforts to ensure #privacy and #security. It can also cause ethical problems when the training data contains biases.

🌐 Most Gen AI applications have specific data integration demands, as they require synthesizing information from a variety of sources. For instance, a Gen AI system designed for market analysis might need to integrate data from social media, financial reports, news articles, and consumer-behavior studies. Integrating these disparate datasets not only demands the right technological solutions but also raises complexities around data compatibility, consistency, and processing efficiency.

In the next few weeks, we'll unpack these challenges in more detail, but for those who can't wait, here's the full article ➡️ https://lnkd.in/er-bAqrd
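As a small illustration of the "garbage in, garbage out" point above, here is a hypothetical quality gate: cheap checks run before a document enters a Gen AI training or retrieval pipeline. The thresholds, the ordering of checks, and the PII pattern are assumptions; production pipelines would use far more thorough detectors.

```python
# Minimal sketch of a "garbage in, garbage out" gate: cheap quality checks
# applied before a document enters a Gen AI training/retrieval pipeline.
# Thresholds and the single email-based PII pattern are illustrative assumptions.
import re

EMAIL_PATTERN = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def passes_quality_gate(doc: str) -> tuple[bool, str]:
    if EMAIL_PATTERN.search(doc):
        return False, "possible PII: scrub before ingestion"
    if len(doc.split()) < 20:
        return False, "too short to carry signal"
    printable_ratio = sum(c.isprintable() for c in doc) / max(len(doc), 1)
    if printable_ratio < 0.95:
        return False, "likely encoding damage"
    return True, "ok"

docs = [
    "Contact jane@example.com for the Q3 numbers.",          # rejected: PII
    "A" * 500,                                               # rejected: no signal
    "Quarterly revenue grew modestly across all regions. " * 5,  # passes
]
for d in docs:
    print(passes_quality_gate(d))
```

Even checks this crude catch a surprising share of the problems described above before they reach the model, which is far cheaper than debugging a biased or leaky model afterward.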