After years working on AI, here’s what I’ve seen work, and what doesn’t, in enterprises making AI real. If you want to move beyond pilots and into production:

𝟭/ 𝗔𝗜 𝗶𝘀 𝗮 𝘀𝘆𝘀𝘁𝗲𝗺𝘀 𝗽𝗿𝗼𝗯𝗹𝗲𝗺, 𝗻𝗼𝘁 𝗮 𝗺𝗼𝗱𝗲𝗹 𝗽𝗿𝗼𝗯𝗹𝗲𝗺
The most successful deployments aren’t about plugging in the latest model. They’re about orchestrating models inside secure, privacy-preserving workflows, with clear ownership and deterministic behavior. Build compound systems:
- Think orchestration layers, not chat interfaces
- Handle PII internally; only send safe inputs to models
- Keep business logic and computation on your end

𝟮/ 𝗗𝗮𝘁𝗮 𝗽𝗿𝗶𝘃𝗮𝗰𝘆 𝗶𝘀𝗻’𝘁 𝗼𝗽𝘁𝗶𝗼𝗻𝗮𝗹, 𝗶𝘁’𝘀 𝘁𝗵𝗲 𝗳𝗼𝘂𝗻𝗱𝗮𝘁𝗶𝗼𝗻
No matter how good the model is, if you don’t design for privacy from day one, you’ll stall out before production. You need systems where nothing sensitive ever touches the LLM, especially if it’s a third-party API call. That’s the bar.
✅ Local pre-processing
✅ Sensitive-data detection using internal SLMs
✅ The model only sees what it needs, never raw data

𝟯/ 𝗠𝗼𝗱𝗲𝗹-𝗮𝗴𝗻𝗼𝘀𝘁𝗶𝗰 = 𝗹𝗼𝗻𝗴-𝘁𝗲𝗿𝗺 𝗹𝗲𝘃𝗲𝗿𝗮𝗴𝗲
When leading AI platform strategy, we always aimed to be multi-model and multi-cloud. Why? Because the performance gap between top models is closing, and pricing, licensing, and latency really matter.

𝟰/ 𝗠𝘂𝗹𝘁𝗶-𝗮𝗴𝗲𝗻𝘁 𝘀𝘆𝘀𝘁𝗲𝗺𝘀 𝗮𝗿𝗲 𝘄𝗵𝗲𝗿𝗲 𝘁𝗵𝗲 𝗺𝗮𝗴𝗶𝗰 𝗵𝗮𝗽𝗽𝗲𝗻𝘀
We’re seeing real traction with agentic designs. I’ve recommended teams deploy internal AI agents that:
- Extract, validate, and match data
- Trigger downstream actions
- Work in autonomous flows, with humans in the loop only at the end
This isn’t science fiction. It’s happening now in real workflows.

𝟱/ 𝗟𝗮𝘁𝗲𝗻𝗰𝘆 𝗮𝗻𝗱 𝗰𝗼𝘀𝘁 𝗮𝗿𝗲 𝗲𝗻𝘁𝗲𝗿𝗽𝗿𝗶𝘀𝗲 𝗸𝗶𝗹𝗹𝗲𝗿𝘀
You can have the smartest model in the world, but if it takes too long or costs too much, it won’t make it past your CFO or your ops team. I always advise teams to:
- Benchmark for latency and accuracy
- Monitor token costs like cloud spend
- Stay lean, especially in customer-facing apps
Don’t get distracted by the model-of-the-month.
The real differentiator? How you integrate AI into your systems.
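The "handle PII internally, only send safe inputs to models" pattern above can be sketched as a small local pre-processor that scrubs text before any third-party API call. The patterns and placeholder labels below are illustrative assumptions, not a production-grade PII detector:

```python
import re

# Hypothetical local pre-processor: scrub common PII patterns before any
# text leaves your infrastructure. Patterns here are illustrative only;
# real deployments would use a vetted detection library or internal SLM.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def scrub(text: str) -> str:
    """Replace detected PII with typed placeholders like <EMAIL>."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"<{label}>", text)
    return text

safe = scrub("Contact jane.doe@example.com or 555-867-5309, SSN 123-45-6789.")
# Only `safe` would ever be passed to the external model call.
```

The key design choice is that the scrubbing runs entirely on your side; the model never sees the raw input, which is what makes third-party APIs viable for sensitive workflows.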
How to Utilize Multiple AI Models in Organizations
Explore top LinkedIn content from expert professionals.
Summary
Organizations can achieve more by combining multiple AI models, allowing them to use various specialized capabilities to solve complex tasks and improve workflows. This approach emphasizes integrating different AI tools effectively, ensuring security, performance, and scalability across operations.
- Build an orchestration system: Create a robust framework that integrates various AI models to handle specific tasks, ensuring they work together seamlessly while safeguarding data privacy.
- Match models with tasks: Assign the right model to each job, such as using faster, simpler models for basic tasks and more advanced models for complex problem-solving.
- Monitor and refine: Continuously track model performance, costs, and outputs to ensure they align with organizational goals and make improvements as needed.
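The "monitor and refine" point can start as simply as treating token usage like cloud spend. A minimal sketch, with made-up per-1K-token prices:

```python
from collections import defaultdict

# Illustrative per-1K-token prices; real prices vary by provider and model.
PRICE_PER_1K = {"small-model": 0.0005, "large-model": 0.01}

class CostTracker:
    """Accumulate token counts per model and report spend in dollars."""

    def __init__(self):
        self.tokens = defaultdict(int)

    def record(self, model: str, tokens: int) -> None:
        self.tokens[model] += tokens

    def spend(self) -> dict:
        return {m: round(t / 1000 * PRICE_PER_1K[m], 6)
                for m, t in self.tokens.items()}

tracker = CostTracker()
tracker.record("small-model", 12_000)
tracker.record("large-model", 3_000)
# tracker.spend() -> {"small-model": 0.006, "large-model": 0.03}
```

In practice this kind of ledger feeds dashboards and alerts, so cost regressions surface the same way a runaway cloud bill would.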
-
AI Field notes: models are awesome in isolation, but the superpower of AI is in combining models so they become greater than the sum of their parts. Let's dive in.

⚙️ Foundation models are one of the most important software components of the next 100 years. These remarkable models are best thought of as reasoning and integration engines. Combining these models has compounding effects, like sparks in a firework display.

⚡️ The spark: taken in isolation, each foundation model has a sweet spot. Some excel at natural language tasks, summarization, or handling different languages; others are really fast; others are super affordable; some work really well on text; others excel at understanding images, whiteboards, sketches, or speech.

📊 Bedrock is the rocket: picking the right model for the right use case makes the difference between a successful prototype and an impactful, bottom-line-moving new feature, product, or process (it's why we make this super easy to automate in Bedrock).

🎆 Combined, you get fireworks. The combination of foundation models, each with its own sweet spot, isn't just additive; it's a force multiplier of capability. An AI system comprising multiple models can tap into all of these sweet spots at once, and the result is greater than the sum of its parts.

☀️ An AI system of sufficiently advanced capability won't just benefit from the compounding effect of these abilities; it will be uniquely enabled by them. Two quick examples.

1️⃣ Imagine a legal team automating document analysis and preparation. It could combine a powerful, deep model to understand legal texts, PDFs, and diagrams; a fast model to automate the generation of routine legal docs; and a low-cost model to refine output from the other models for consistency of style, language, and tone. The result would be a faster, more efficient way to process, say, legal contracts, or to understand the risk of old life insurance policies.
2️⃣ Or a smart-city traffic management app that combines powerful models to analyze traffic patterns from images, fast models to render the results in real time, and models balancing intelligence and speed to coordinate short-term traffic management strategies based on current and projected conditions. The result would be more efficient routing of traffic, or of emergency vehicles, at peak commute times.

While the sparks are exciting up close, from the business's perspective the fireworks are the show. It's why you see model providers like Anthropic launching model families (Haiku, Sonnet, and Opus), each with a unique spark, which combined together, or with models from other providers, lead to amazing results. Exciting times. #aws #genai #ai
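The legal-team example above boils down to chaining models with different sweet spots. A minimal sketch, where `call_model` is a hypothetical stub standing in for real provider SDK calls, and the model names are invented for illustration:

```python
# Compound-system sketch: a deep model extracts, a fast model drafts,
# a low-cost model polishes. All model names are hypothetical.
def call_model(model: str, prompt: str) -> str:
    # Placeholder: in production this would invoke a real client,
    # e.g. a Bedrock or other provider SDK call.
    return f"[{model}] {prompt}"

def analyze_contract(document: str) -> str:
    # Step 1: deep, powerful model understands the raw legal text.
    clauses = call_model("deep-reader", f"Extract key clauses: {document}")
    # Step 2: fast model drafts the routine output.
    draft = call_model("fast-drafter", f"Summarize risks: {clauses}")
    # Step 3: cheap model normalizes style, language, and tone.
    return call_model("cheap-styler", f"Normalize tone: {draft}")
```

Each stage could be swapped independently as models improve, which is exactly the leverage a multi-model design buys you.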
-
If you are a #bank and serious about #genAI, then your architecture should include an AI gateway. It is a multi-model world out there: banks might use OpenAI for one set of use cases and #FinBert for another, and will likely end up using 30+ models from a variety of vendors. Instead of developing infrastructure around each model, an #aigateway is a management layer across all models that helps you consistently enforce security, privacy, and output control while tracking performance. It can also help integrate models while orchestrating requests. An AI gateway can route simple retrieval requests to a smaller, faster, and cheaper model, while complex requests that require deeper analysis can be routed to an enhanced reasoning model. Banks should consider a financial-services-specific gateway from someone like Dynamo AI, a market leader such as Kong Inc., or a hyperscaler such as IBM, Microsoft, or Amazon Web Services (AWS). #AI #ITStrategy #banking #bankperformance #banktechnology
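The routing behavior an AI gateway provides can be sketched as a simple rule: cheap retrieval-style requests go to a small model, analysis-heavy requests to a reasoning model. The keyword heuristic and model names below are illustrative assumptions, not any vendor's actual API:

```python
# Toy gateway routing rule. Real gateways use classifiers, policies,
# and per-tenant config; this just shows the shape of the decision.
ANALYSIS_HINTS = ("why", "compare", "assess", "risk", "explain")

def route(request: str) -> str:
    """Pick a backend model based on request complexity."""
    text = request.lower()
    if len(text.split()) > 50 or any(h in text for h in ANALYSIS_HINTS):
        return "reasoning-model"
    return "small-fast-model"

route("What is the branch routing number?")        # -> "small-fast-model"
route("Assess the credit risk of this portfolio")  # -> "reasoning-model"
```

The same choke point is where a gateway would also apply security, privacy, and output-control policies uniformly across all 30+ models.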
-
SAP isn’t building AI. It’s assembling a Superteam.

One AI model isn’t enough. 𝗦𝗔𝗣 𝘂𝘀𝗲𝘀 14. Here’s how.

Co-created with Alejandro Fernandez 🚀 - the mind behind SAP’s AI playbook.

This one’s your backstage pass to how SAP handpicks its smartest models from the world's top AI labs.

1. 𝗦𝗔𝗣 𝗠𝗮𝗻𝗮𝗴𝗲𝗱 𝗠𝗼𝗱𝗲𝗹𝘀
➔ Mistral-Instruct
↳ Used for light, fast answers
➔ Meta LLaMA
↳ Helps with research and summarization
➔ IBM Granite
↳ Powers small chatbot tasks
➔ NVIDIA LLaMA
↳ Focused on embedding text inside applications

2. 𝗔𝗪𝗦 (𝗔𝗺𝗮𝘇𝗼𝗻 𝗕𝗲𝗱𝗿𝗼𝗰𝗸)
➔ Claude 3
↳ Long and polite responses via Bedrock
➔ Amazon Titan
↳ Used in SAP orchestration for embeddings
➔ Amazon Nova
↳ Fast, private AI in preview mode

3. 𝗔𝘇𝘂𝗿𝗲 (𝗠𝗶𝗰𝗿𝗼𝘀𝗼𝗳𝘁)
➔ GPT-4o
↳ Smart, fast, sees and speaks
➔ GPT-4
↳ Used for reasoning and complex tasks
➔ GPT-3.5
↳ SAP-managed for quick answers

4. 𝗚𝗼𝗼𝗴𝗹𝗲 𝗖𝗹𝗼𝘂𝗱 (𝗩𝗲𝗿𝘁𝗲𝘅 𝗔𝗜)
➔ Gemini 1.5 Pro
↳ Multimodal, used for orchestration
➔ Gemini 1.5 Flash
↳ Used for user-level tasks with faster output

5. 𝗗𝗲𝗲𝗽𝗦𝗲𝗲𝗸 (𝗢𝗽𝗲𝗻 𝗦𝗼𝘂𝗿𝗰𝗲)
➔ DeepSeek AI
↳ Good for NLP and coding
➔ DeepSeek R
↳ Handles multilingual content, code, and docs

This is how SAP builds intelligence:
Not by building everything.
By orchestrating what already works best.

P.S. If your company is moving to SAP BTP or exploring GenAI in SAP, this is the map you’ll want to keep. Let’s talk

♻️ 𝗦𝗮𝘃𝗲 𝘁𝗵𝗶𝘀 𝗽𝗼𝘀𝘁 𝗮𝗻𝗱 𝘀𝗵𝗮𝗿𝗲 𝗶𝘁 𝘄𝗶𝘁𝗵 𝘆𝗼𝘂𝗿 𝘁𝗲𝗮𝗺 𝗯𝗲𝗳𝗼𝗿𝗲 𝘀𝘁𝗮𝗿𝘁𝗶𝗻𝗴 𝘆𝗼𝘂𝗿 𝗻𝗲𝘅𝘁 𝗔𝗜 𝗽𝗿𝗼𝗷𝗲𝗰𝘁 𝗶𝗻 𝗦𝗔𝗣