Most people think of RAG (Retrieval-Augmented Generation) as:

𝘘𝘶𝘦𝘳𝘺 → 𝘝𝘦𝘤𝘵𝘰𝘳 𝘋𝘉 → 𝘓𝘓𝘔 → 𝘈𝘯𝘴𝘸𝘦𝘳

But that’s just step one. In 2025, we’re seeing a shift toward 𝗔𝗴𝗲𝗻𝘁𝗶𝗰 𝗥𝗔𝗚 systems, where LLMs don’t just retrieve and respond, but also 𝗿𝗲𝗮𝘀𝗼𝗻, 𝗽𝗹𝗮𝗻, 𝗮𝗻𝗱 𝗮𝗰𝘁.

The core idea:
→ A query is embedded and used to fetch relevant chunks from a vector DB.
→ An 𝗔𝗜 𝗔𝗴𝗲𝗻𝘁 uses those chunks to craft context-aware prompts.
→ It can also invoke external tools:
• Web Search
• APIs
• Internal Databases

This unlocks workflows that are:
• Dynamic
• Context-aware
• Action-oriented

It's not just answering: it's deciding 𝘄𝗵𝗮𝘁 𝘁𝗼 𝗱𝗼 𝗻𝗲𝘅𝘁 (a minimal sketch of this loop follows below).

Toolkits like 𝗟𝗮𝗻𝗴𝗚𝗿𝗮𝗽𝗵, 𝗖𝗿𝗲𝘄𝗔𝗜, 𝗚𝗼𝗼𝗴𝗹𝗲 𝗔𝗗𝗞, and 𝗔𝘂𝘁𝗼𝗚𝗲𝗻 are making this architecture practical for real-world systems.

What tools or techniques are 𝘺𝘰𝘂 using to take your LLM apps beyond static chatbots?
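Here is a minimal, runnable sketch of that loop in Python. The embedding model, vector DB, LLM call, and tools are all stubbed out as placeholders; none of these names come from a specific library, and in practice you would swap in your own stack or a framework such as LangGraph or AutoGen where the comments indicate.

```python
# Minimal agentic-RAG loop (illustrative only): retrieve, then let the "agent"
# decide whether to answer directly or call a tool first. embed/vector_search/llm
# are stand-ins for whatever stack you actually use.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Chunk:
    text: str
    score: float

def embed(query: str) -> list[float]:
    # Placeholder: swap in a real embedding model.
    return [float(ord(c)) for c in query[:8]]

def vector_search(query_vec: list[float], k: int = 3) -> list[Chunk]:
    # Placeholder: swap in a real vector DB (FAISS, pgvector, Pinecone, ...).
    return [Chunk(text="(retrieved chunk)", score=0.9)][:k]

# Tool registry the agent can call in addition to retrieval.
TOOLS: dict[str, Callable[[str], str]] = {
    "web_search": lambda q: f"(web results for {q!r})",
    "internal_db": lambda q: f"(db rows matching {q!r})",
}

def llm(prompt: str) -> str:
    # Placeholder: swap in your LLM client. A canned reply keeps the sketch runnable.
    return "ANSWER: based on the retrieved context ..."

def agentic_rag(query: str, max_steps: int = 3) -> str:
    context = [c.text for c in vector_search(embed(query))]
    for _ in range(max_steps):
        prompt = (
            "Context:\n" + "\n".join(context) +
            f"\n\nQuestion: {query}\n"
            "Reply either 'TOOL:<name>:<args>' to gather more, or 'ANSWER:<text>'."
        )
        reply = llm(prompt)
        if reply.startswith("TOOL:"):
            _, name, args = reply.split(":", 2)
            context.append(TOOLS[name](args))   # act, then loop again
        else:
            return reply.removeprefix("ANSWER:").strip()
    return "Gave up after max_steps."

print(agentic_rag("What changed in our Q3 churn numbers?"))
```

The key difference from plain RAG is the loop: the model's reply can be an action (call a tool, fetch more context) rather than a final answer.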
Chat-Oriented Programming (CHOP) Trends for 2025
Summary
Chat-oriented programming (CHOP) is an emerging approach where conversational AI agents interact with users, automate tasks, and collaborate with other agents, moving beyond simple chatbot responses. Trends for 2025 highlight autonomous agents that reason, plan, and act across complex workflows, powered by new protocols and architectures, making AI a central part of business operations.
- Explore agent systems: Look into AI agents that can automate tasks, collaborate with other agents, and access external tools to streamline your company's workflows.
- Update team skills: Encourage your engineering and product teams to learn prompt design and work with chat-based coding tools to stay ahead in software development.
- Plan security measures: Begin addressing operational and governance concerns to safely integrate chat-oriented agents with sensitive data and company systems.
2025 is the Year of Anthropic's MCP and Google's A2A.

Everyone's talking about AI agents, but few understand the protocols that power them. 2025 is witnessing two pivotal standards that aren't competitors, but complementary layers in the AI infrastructure:

𝗠𝗖𝗣 (𝗠𝗼𝗱𝗲𝗹 𝗖𝗼𝗻𝘁𝗲𝘅𝘁 𝗣𝗿𝗼𝘁𝗼𝗰𝗼𝗹) by Anthropic
• Creates vertical connections between applications and AI models
• Flow: Application → Model → External Tools/Data
• Solves context window limitations and standardizes tool access
• Think of it as the nervous system connecting your brain to your body's tools

𝗔𝟮𝗔 (𝗔𝗴𝗲𝗻𝘁-𝘁𝗼-𝗔𝗴𝗲𝗻𝘁 𝗣𝗿𝗼𝘁𝗼𝗰𝗼𝗹) by Google
• Enables horizontal communication between independent AI agents
• Flow: Agent ↔ Agent (peer-to-peer)
• Solves agent interoperability and complex multi-specialist workflows
• Think of it as the language that lets different experts collaborate on your behalf

Beyond the technicalities, each protocol has its own core strengths.

𝗪𝗵𝗲𝗻 𝘁𝗼 𝘂𝘀𝗲 𝗠𝗖𝗣:
• Building document Q&A systems
• Creating code assistance tools
• Developing personal data assistants
• Needing fine-grained control over context

𝗪𝗵𝗲𝗻 𝘁𝗼 𝘂𝘀𝗲 𝗔𝟮𝗔:
• Orchestrating multi-agent workflows
• Automating cross-department processes
• Creating agent marketplaces
• Building distributed problem-solving systems

Both protocols are gaining significant traction:

𝗠𝗖𝗣 𝗘𝗰𝗼𝘀𝘆𝘀𝘁𝗲𝗺:
• Backed by major LLM providers (Anthropic, OpenAI, Google)
• Strong developer tooling and SDKs
• Focus on model-tool integration
• Open source with growing community support

𝗔𝟮𝗔 𝗘𝗰𝗼𝘀𝘆𝘀𝘁𝗲𝗺:
• 50+ enterprise partners at launch
• Emphasis on business workflow integration
• Strong multimodal capabilities
• Built for enterprise-grade applications

Top AI solutions integrate both MCP and A2A to maximize their potential (a conceptual sketch follows below):
• Use MCP to give your models access to tools and data
• Use A2A to orchestrate collaboration between specialized agents
• Think in layers: model-tool integration AND agent-agent communication

Over to you: which AI-agent tasks do you think would benefit most from the A2A protocol over MCP?
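To make the "layers" framing concrete, here is a plain-Python conceptual sketch, not the actual MCP or A2A SDKs; every class and method name below is an illustrative assumption rather than either protocol's real API. The only point is the shape: tool access is vertical inside one agent, delegation is horizontal between agents.

```python
# Conceptual layering sketch, NOT the official MCP or A2A SDKs.
# "ToolServer" plays the MCP-like role (vertical: agent -> tools/data);
# "Agent.send" plays the A2A-like role (horizontal: agent <-> agent).
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class ToolServer:                      # vertical layer: expose tools to one agent
    tools: dict[str, Callable[[str], str]] = field(default_factory=dict)

    def call(self, name: str, args: str) -> str:
        return self.tools[name](args)

@dataclass
class Agent:                           # horizontal layer: independent agents exchanging tasks
    name: str
    server: ToolServer
    peers: dict[str, "Agent"] = field(default_factory=dict)

    def send(self, peer: str, task: str) -> str:
        # Horizontal delegation: hand a task to another specialist agent.
        return self.peers[peer].handle(task)

    def handle(self, task: str) -> str:
        # Vertical tool use: each agent answers with its own tools.
        tool = next(iter(self.server.tools))        # trivial routing for the sketch
        return f"{self.name}: {self.server.call(tool, task)}"

# Wire up two specialists, then let one delegate to the other.
research = Agent("research", ToolServer({"web_search": lambda q: f"results for {q!r}"}))
finance  = Agent("finance",  ToolServer({"ledger":     lambda q: f"ledger rows for {q!r}"}))
research.peers["finance"] = finance

print(research.handle("Q3 market trends"))           # own tools (MCP-style, vertical)
print(research.send("finance", "Q3 spend by team"))  # delegate to a peer (A2A-style, horizontal)
```

In a real system the vertical layer would be an MCP client/server pair and the horizontal layer would be A2A task exchange, but the division of responsibilities stays the same.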
-
𝗧𝗟;𝗗𝗥: NeurIPS 2025 marks the definitive shift from "Chat" to "Autonomy." The research signals a split reality for the enterprise: generic models are converging into a commoditized "Artificial Hivemind," leaving proprietary data as your only real moat. However, the upside is massive. New "Gated Attention" architectures are redefining inference efficiency, while breakthroughs in 1,000-layer Deep RL are finally unlocking agents capable of navigating complex, long-horizon enterprise workflows without getting stuck.

NeurIPS is around the corner, and I wanted to highlight some trends based on the best papers (https://lnkd.in/ejp6vEjD).

𝟯 𝗣𝗮𝗽𝗲𝗿𝘀 (𝗮𝗻𝗱 𝘁𝗵𝗲𝗺𝗲𝘀) 𝗬𝗼𝘂 𝗡𝗲𝗲𝗱 𝘁𝗼 𝗞𝗻𝗼𝘄

𝟭. 𝗧𝗵𝗲 𝗗𝗶𝗳𝗳𝗲𝗿𝗲𝗻𝘁𝗶𝗮𝘁𝗶𝗼𝗻 𝗖𝗿𝗶𝘀𝗶𝘀
• 𝗣𝗮𝗽𝗲𝗿: 𝗔𝗿𝘁𝗶𝗳𝗶𝗰𝗶𝗮𝗹 𝗛𝗶𝘃𝗲𝗺𝗶𝗻𝗱: The Open-Ended Homogeneity of Language Models
• 𝗧𝗵𝗲 𝗦𝗶𝗴𝗻𝗮𝗹: Models trained on synthetic data and each other’s outputs are suffering from "inter-model homogeneity." They are converging on the same "average" answers.
• 𝗧𝗵𝗲 𝗘𝗻𝘁𝗲𝗿𝗽𝗿𝗶𝘀𝗲 𝗥𝗲𝗮𝗹𝗶𝘁𝘆: If you rely on a vanilla wrapper around GPT, Claude, or Gemini, your business logic is becoming a commodity.

𝟮. 𝗧𝗵𝗲 𝗡𝗲𝘄 𝗔𝗿𝗰𝗵𝗶𝘁𝗲𝗰𝘁𝘂𝗿𝗲 𝗦𝘁𝗮𝗻𝗱𝗮𝗿𝗱
• 𝗣𝗮𝗽𝗲𝗿: Gated Attention for Large Language Models (Qwen Team)
• 𝗧𝗵𝗲 𝗦𝗶𝗴𝗻𝗮𝗹: By adding a simple "gate" to attention heads, we can stabilize training at massive scales and prevent "attention sinks."
• 𝗧𝗵𝗲 𝗘𝗻𝘁𝗲𝗿𝗽𝗿𝗶𝘀𝗲 𝗥𝗲𝗮𝗹𝗶𝘁𝘆: This is the update for your self-hosted inference. Models using Gated Attention (like Qwen3-Next) can offer significantly better performance per dollar. (A simplified sketch of the gating idea follows at the end of this post.)

𝟯. 𝗧𝗵𝗲 𝗔𝗴𝗲𝗻𝘁𝗶𝗰 𝗨𝗻𝗹𝗼𝗰𝗸
• 𝗣𝗮𝗽𝗲𝗿: 1000 Layer Networks for Self-Supervised RL
• 𝗧𝗵𝗲 𝗦𝗶𝗴𝗻𝗮𝗹: We used to think RL couldn't scale in depth like LLMs. This paper shows we can train 1,000-layer RL networks using self-supervised contrastive learning.
• 𝗧𝗵𝗲 𝗘𝗻𝘁𝗲𝗿𝗽𝗿𝗶𝘀𝗲 𝗥𝗲𝗮𝗹𝗶𝘁𝘆: This enables L5 autonomous agents: agents that can navigate complex ERP/CRM workflows without getting stuck in loops.

𝗔𝗰𝘁𝗶𝗼𝗻𝘀 𝗳𝗼𝗿 𝗖𝗧𝗢𝘀 𝗮𝗻𝗱 𝗖𝗔𝗜𝗢𝘀
𝟭. 𝗣𝗶𝘃𝗼𝘁 𝘁𝗼 "𝗗𝗮𝘁𝗮 𝗜𝗻𝗷𝗲𝗰𝘁𝗶𝗼𝗻": Go beyond prompt engineering with context and data engineering. Focus even more on RAG and fine-tuning pipelines that inject your proprietary data to break the "Hivemind" average.
𝟮. 𝗔𝗱𝗼𝗽𝘁 𝗚𝗮𝘁𝗲𝗱 𝗠𝗼𝗱𝗲𝗹𝘀: When evaluating open-weights models for 2026, mandate "Gated Attention" architectures to lower your long-term inference TCO.
𝟯. 𝗣𝗶𝗹𝗼𝘁 𝗗𝗲𝗲𝗽 𝗥𝗟: Move your "agent" pilots beyond simple tool use. Start testing self-supervised RL on internal workflows to build agents that learn from your experts' corrections.
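For readers who want to see what a "gate" on attention looks like in code, here is a simplified sketch in PyTorch: a learned, input-conditioned sigmoid gate scales the attention output elementwise before the output projection. This is an illustration of the general idea only, not the exact design from the Qwen paper.

```python
# Simplified sketch of "gated attention": a learned sigmoid gate, conditioned on
# the input, scales the attention output elementwise before the output projection.
# Illustrative only; the published architecture differs in its details.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedSelfAttention(nn.Module):
    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        assert d_model % n_heads == 0
        self.n_heads, self.d_head = n_heads, d_model // n_heads
        self.qkv = nn.Linear(d_model, 3 * d_model)
        self.gate = nn.Linear(d_model, d_model)   # produces the per-element gate
        self.out = nn.Linear(d_model, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        B, T, D = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        # Reshape to (B, heads, T, d_head) for standard multi-head attention.
        shape = (B, T, self.n_heads, self.d_head)
        q, k, v = (t.view(*shape).transpose(1, 2) for t in (q, k, v))
        attn = F.scaled_dot_product_attention(q, k, v, is_causal=True)
        attn = attn.transpose(1, 2).reshape(B, T, D)
        # Gate: sigmoid(W_g x) in (0, 1), applied elementwise to the attention
        # output, letting heads "turn themselves down"; this is the mechanism the
        # post credits with reducing attention sinks and stabilizing training.
        gated = torch.sigmoid(self.gate(x)) * attn
        return self.out(gated)

# Quick shape check.
layer = GatedSelfAttention(d_model=64, n_heads=4)
y = layer(torch.randn(2, 10, 64))
print(y.shape)  # torch.Size([2, 10, 64])
```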
-
𝗧𝗟;𝗗𝗥: 𝗡𝗲𝘃𝗲𝗿 𝗺𝗶𝗻𝗱 𝘁𝗵𝗲 𝗔𝗜 𝗮𝗴𝗲𝗻𝘁 𝗵𝘆𝗽𝗲 𝗱𝗿𝗶𝘃𝗲𝗻 𝗯𝘆 𝘁𝗵𝗲 𝗰𝗼𝗼𝗹 𝗸𝗶𝗱𝘀. 𝗘𝘀𝘁𝗮𝗯𝗹𝗶𝘀𝗵𝗲𝗱 𝗦𝗮𝗮𝗦 𝗰𝗼𝗺𝗽𝗮𝗻𝗶𝗲𝘀 𝗮𝗿𝗲 𝗾𝘂𝗶𝗲𝘁𝗹𝘆 𝗴𝗼𝗶𝗻𝗴 𝗮𝗹𝗹-𝗶𝗻 𝗼𝗻 𝗮𝗴𝗲𝗻𝘁𝘀 𝗳𝗼𝗿 𝟮𝟬𝟮𝟱.

Everyone's buzzing all day about building agents and exposing MCP interfaces, but honestly, it felt to me that the immediate product impact would be driven mostly by vibe-coders and the "cool startups." 😎

To understand how real it is, I ran a quick survey with a few dozen of our customers. Not just AI startups, but mainly established B2B SaaS companies, spread quite evenly across company sizes, funding stages, industries, and geographies.

Surprisingly, 𝗺𝗼𝗿𝗲 𝘁𝗵𝗮𝗻 𝟯𝟳% (!!) 𝗵𝗮𝘃𝗲 𝗔𝗜 𝗮𝗴𝗲𝗻𝘁𝘀 𝗮𝘀 𝗽𝗮𝗿𝘁 𝗼𝗳 𝘁𝗵𝗲𝗶𝗿 𝟮𝟬𝟮𝟱 𝗽𝗿𝗼𝗱𝘂𝗰𝘁 𝗿𝗼𝗮𝗱𝗺𝗮𝗽. 𝗔𝗺 𝗜 𝘁𝗵𝗲 𝗼𝗻𝗹𝘆 𝗼𝗻𝗲 𝗮𝗺𝗮𝘇𝗲𝗱 𝗵𝗲𝗿𝗲? These companies are mature, backed by serious investors, with real customers and revenue streams. They definitely have something to lose if the agent trend is just hype and won't have business impact anytime soon… or worse, they could face real issues around security, scalability, or simply a bad user experience.

Next, I asked our customers what kind of agent products they're planning to develop. Five main 𝗽𝗿𝗼𝗱𝘂𝗰𝘁 𝘁𝘆𝗽𝗲𝘀 emerged clearly:

𝟭. 𝗚𝗵𝗼𝘀𝘁 𝗔𝗴𝗲𝗻𝘁 (𝟯𝟳%): Invisible agents automating tasks behind the scenes, like an analytics insights agent pushing valuable analysis notifications over Slack.
𝟮. 𝗖𝗼-𝗣𝗶𝗹𝗼𝘁 (𝟯𝟱%): Real-time helpers living inside your apps, providing instant suggestions and chat-like guidance as you work (e.g., in-app chatbot helpers).
𝟯. 𝗦𝗼𝗹𝗼 𝗔𝗴𝗲𝗻𝘁 (𝟭𝟭%) (𝘸𝘪𝘵𝘩 𝘱𝘭𝘢𝘯𝘴 𝘢𝘳𝘰𝘶𝘯𝘥 𝘴𝘦𝘤𝘰𝘯𝘥 𝘱𝘳𝘰𝘥𝘶𝘤𝘵𝘴): Independent task-focused agents, often personalized with names, like your friendly project management agent.
𝟰. 𝗠𝗮𝗿𝗸𝗲𝘁𝗽𝗹𝗮𝗰𝗲 𝗔𝗴𝗲𝗻𝘁 (𝟭𝟮%): Discoverable and pluggable agents available via centralized marketplaces, such as the Instacart "AskInsta" agent on the OpenAI GPT Store.
𝟱. 𝗘𝗻𝘁𝗲𝗿𝗽𝗿𝗶𝘀𝗲 𝗢𝗦 (𝟱%): Integrated ecosystems of specialized agents for organizational tasks, like an SDR agent you onboard into your Slack or Glean environment.

How does this line up with your company's roadmap or the industry trends you're observing? I'd love to hear your thoughts.

#AI #AIagents #AgenticIdentity #SaaS #B2B #ProductManagement #EnterpriseTech #TechTrends
-
I have been lucky enough to talk to a lot of smart folks in the codegen space recently: startups, enterprises, VCs, etc. I was asked a thoughtful question today that I figured would be good to share: what three things should a development team be doing to prepare for assistive or agentic code generation tools?

Here are my three suggestions:

1. Invest in understanding the state of the art with assistive tools. Copilot is cool, but Cursor is fricken magic. You probably want to find a way to use Cursor if you can make it work for you. These capabilities get better the more integrated the tool is into your environment. This is a space you are going to need to keep abreast of, so carve off dedicated resources to get and stay smart.

2. Training is imperative. This isn't a "build it and they will come" moment. As amazing as the capabilities are, curiosity isn't free, especially in an organization that is under execution strain.
 i) Update your training resources and agenda to include content on how to not only use codegen tooling (e.g., being security aware) but to make it really shine and get the most out of it. Create incentives for use of the tools. Create space for people to play (GenAI hackathons, etc.).
 ii) The optimal skill profile of your engineering team is going to change. Everyone should have some familiarity with prompt engineering, understand the patterns of GenAI systems, and so on. You should also recognize that curious generalists with codegen tools (and general GenAI chops) are going to have an edge over traditional specialists (e.g., the pure-play front-end engineer) in this new world. Help your team become curious generalists powered by assistive and agentic tools.

3. Start paving the way for an agentic future. While assistive tools are incredibly powerful, the IDE is not going to be the right point of integration for value creation in the future. Agents are going to need access to code, docs, data, and potentially production system control planes, and they will be well situated to operate asynchronously. CHOP (chat-oriented programming) is going to become a thing. Start paying attention *now* to what operational, security, policy, governance, and other concerns need to be worked through before allowing agentic systems access to sensitive systems. There is tremendous power there, but there be dragons, too. Start thinking about this now: it is going to be a heavy lift, but the value creation will be disproportionate, and you will be outcompeted if you fall badly behind.

What are your three things?