Enterprises have always seen AI as a specialized element of a business process, not as some at-everyone's-elbow assistant. That's what enterprises call an "agent" AI model. Maybe it interacts directly with people, maybe with software in a workflow, or even with other agents, but it nearly always interacts with enterprise data. That's different from the online chat form of AI that everyone uses, and the impact of the difference is really important to enterprises, their network traffic, and their budgets.

AI agents are, in many ways, a chain of things. The first way is that unlike online generative AI or even enterprise-trained chatbots, agents are likely to be based on pre-trained "foundation models," so their data appetites are ongoing rather than concentrated in the model-creating process. Agents may be more like software components than like generative AI services, but they aren't like software in the way they get to your data, and that means software-centric processes for assessing traffic impact, cost impact, governance, and security will need some extra attention.

A programmer doesn't build reads and writes into AI models the way they do with software. In fact, there aren't any read and write commands in an AI agent at all. With AI agents, the Model Context Protocol (MCP) provides the linkage to company data, and only indirectly, through an MCP server and "tools" that integrate the data with the model. The model accesses the tool, running on the server, and the tool accesses the data. Change either server or toolkit, and you potentially change data access in the "what it is," "where it is," and "how much" dimensions, creating our second "chain" behavior for AI agents. An MCP tool is kind of insidious, enterprises say.
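That model-to-server-to-tool-to-data chain can be made concrete with a minimal, hypothetical sketch. Everything here (the class names, the "lookup_order" tool, the sample records) is illustrative and is not taken from any real MCP SDK; the point is only the shape of the chain, in which the model never touches the data directly.

```python
# Hypothetical sketch of the agent -> MCP server -> tool -> data chain.
# All names (OrderDatabase, McpServer, lookup_order) are illustrative only.

class OrderDatabase:
    """Stands in for the enterprise data store the tool reaches."""
    def __init__(self):
        self.rows = {"A-100": {"status": "shipped"}, "A-101": {"status": "open"}}

    def read(self, order_id):
        return self.rows.get(order_id)

class McpServer:
    """The server exposes named tools; an agent can only call what is registered."""
    def __init__(self):
        self._tools = {}

    def register_tool(self, name, fn):
        self._tools[name] = fn

    def call_tool(self, name, **kwargs):
        if name not in self._tools:
            raise KeyError(f"tool {name!r} not offered by this server")
        return self._tools[name](**kwargs)

# Wire the chain: the tool is a proxy for the database.
db = OrderDatabase()
server = McpServer()
server.register_tool("lookup_order", lambda order_id: db.read(order_id))

# The agent's only path to the data is the tool on the server.
result = server.call_tool("lookup_order", order_id="A-100")
print(result)  # {'status': 'shipped'}
```

Note that swapping the server or the tool silently changes what data the agent can reach, which is exactly the "change either server or toolkit" risk described above.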
It's a proxy for a set of capabilities that include database access, event access, and even updates and actions. Give a worker access to an agent that, in turn, has access to an MCP server and its tools, and you give that worker both permission and capability to do whatever the tools allow. Workers don't even realize they're doing it, because how the model uses the tools, and the data, is effectively inside the proverbial black box.

The chain linking AI agents to data makes assessing the data traffic impact of agent use a challenge, all the more so because it's very unlikely that you'll use a single tool or even a single MCP server. AI agent data policies should be set first by what server a given model can use, then by what tools the server offers, and finally by what data a tool can access. That way, you can ensure that a given AI agent doesn't end up with accidental access to data that's restricted, and that you don't offer that access to people who shouldn't have it.

The chain analogy is critical here. Realistic uses of AI agents will require core database access; what can possibly make an AI business case that isn't tied to a company's critical data? The four critical elements of these applications (the agent, the MCP server, the tools, and the data) are all dragged along with each other, and traffic on the network is the linkage in the chain.

How much traffic is generated? Here, enterprises had another surprise. Enterprises told me that their initial view of their AI hosting was an "AI cluster" with a casual data link to their main data center network. With AI agents, they now see smaller AI servers actually installed within their primary data centers, and all the traffic AI creates, within the model and to and from it, now flows on the data center network. Vendors who told enterprises that AI networking would have a profound impact are proving correct.
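The server-then-tool-then-data ordering of policy can itself be sketched as a layered lookup, where a request fails at the first missing link in the chain. This is a hypothetical illustration; the agent names, server names, and policy table are invented for the example and do not reflect any real product's policy engine.

```python
# Hypothetical three-layer policy check: server -> tool -> data.
# Agent, server, tool, and dataset names are all illustrative.

POLICY = {
    "claims-agent": {
        "servers": {
            "claims-mcp": {
                "tools": {
                    # This agent gets read access only; no update tools appear.
                    "read_claim": {"datasets": {"claims_db"}},
                }
            }
        }
    }
}

def is_allowed(agent, server, tool, dataset):
    """Walk the chain; any missing link denies the request."""
    servers = POLICY.get(agent, {}).get("servers", {})
    tools = servers.get(server, {}).get("tools", {})
    datasets = tools.get(tool, {}).get("datasets", set())
    return dataset in datasets

print(is_allowed("claims-agent", "claims-mcp", "read_claim", "claims_db"))   # True
print(is_allowed("claims-agent", "claims-mcp", "read_claim", "payroll_db"))  # False
```

Ordering the checks this way means a restricted dataset stays unreachable even if a tool that could touch it is accidentally registered on some other server.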
You can run a query or perform a task with an agent and have that task parse an entire database of thousands or millions of records. Someone unaware of what an agent application implies in terms of data usage can easily create as much traffic as a whole week's normal access-and-update activity would. Enough, enterprises say, to impact network capacity and the QoE of other applications. And, enterprises remind us, if that traffic crosses in or out of the cloud, the cloud costs could skyrocket. About a third of the enterprises said that issues with AI agents generated enough traffic to create local congestion on the network, or a blip in cloud costs large enough to trigger a financial review.

MCP tool use by agents is also a major security and governance headache. Enterprises point out that MCP standards haven't always required strong authentication, and they also say that since a tool can actually update things, it's possible for a malformed (or hacked) tool to contaminate, fabricate, or delete data. To avoid this, enterprises recommend that AI agents not have access to tools that can update data or take action in the real world, unless there's considerable oversight of tool and agent design to ensure the agents don't go rogue.

Review and design are the key to controlling the other issues, too. Traffic issues can be mitigated by careful placement of AI agent models. Since AI agents are less demanding than the huge LLMs used for online generative AI, you can distribute the agent hosts, even rack them with traditional servers, including the servers that control the databases the agents will use. It does mean, say a majority of enterprises, that the data center network topology and capacity should be reviewed to ensure it can handle the additional traffic AI will generate. None of the enterprises thought AI agents would require InfiniBand versus Ethernet, though, which is good news for enterprise data center network planners, and for vendors.
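The recommendation that agents not see update-capable tools at all can be enforced at tool-registration time rather than per request. A minimal sketch, assuming a hypothetical catalog where each tool is flagged by whether it writes; the tool names and the flag are invented for illustration.

```python
# Hypothetical sketch of the read-only recommendation: strip update-capable
# tools from what an agent is offered. Tool names and flags are illustrative.

ALL_TOOLS = {
    "read_orders":   {"writes": False},
    "search_claims": {"writes": False},
    "update_order":  {"writes": True},   # mutates data: needs human oversight
    "delete_claim":  {"writes": True},   # destructive: never auto-exposed
}

def tools_for_agent(catalog, allow_writes=False):
    """Return only the tools an agent may be offered."""
    return {name: spec for name, spec in catalog.items()
            if allow_writes or not spec["writes"]}

safe = tools_for_agent(ALL_TOOLS)
print(sorted(safe))  # ['read_orders', 'search_claims']
```

Filtering at registration means a hacked or malformed prompt can't reach a write path the agent was never given, rather than relying on the model to decline to use one.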
The next step, enterprises say, is to design your MCP tools to require strong authentication and provide some protection against runaway prompts that could explode traffic and cost. The best way to do this may be to avoid exposing large databases to agents used by non-professionals, which can be done by not providing MCP tools for this sort of access. Since some workers may need access, and can justify the cost and risk, that means having role-specific MCP servers that reflect the level of access and the database types a class of workers should be empowered to use.

Some enterprises also suggest that avoiding "chat agents," meaning AI agents used in interactive mode, is a good strategy. They've built GUIs in front of agents to present a selective interface that doesn't include things that shouldn't be done, and that can even apply policy controls to what's selected based on the user's sign-on.

Nobody thinks that simply educating workers is likely to control traffic and cost without some special steps, but it's worth noting that none of the enterprises see tighter controls as a problem, and all accept that some traffic and even cloud cost impact can be justified. The enterprise view of AI is now, and has always been, that its value can't be unlocked unless it's used to analyze a company's own information, historical, real-time, or both. They're accommodating what they've learned about the risks of agent AI because they're committed to the benefits.
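The role-specific-server idea reduces to a simple mapping at sign-on: the user's role selects which MCP server, and therefore which tools and databases, their agent can reach. A minimal sketch under invented assumptions; the role names, server names, and mapping are all hypothetical.

```python
# Hypothetical role-to-server mapping applied at sign-on.
# Roles and server names are illustrative, not from any real deployment.

ROLE_TO_SERVER = {
    "analyst":  "analytics-mcp",   # read-only reporting tools only
    "adjuster": "claims-mcp",      # claims lookup plus limited updates
    "guest":    None,              # no agent access at all
}

def server_for(role):
    """Resolve the MCP server a role is entitled to, or None for no access."""
    return ROLE_TO_SERVER.get(role)

print(server_for("analyst"))  # analytics-mcp
print(server_for("guest"))    # None
print(server_for("intern"))   # None (unknown roles default to no access)
```

Defaulting unknown roles to no access is the conservative choice the article's enterprises describe: access is granted per class of worker, never assumed.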
The question, then, is whether the vendors are committed, too, and enterprises don't think they really are. "We have plenty of people who are willing to sell us AI pieces, but not many who want to offer AI solutions to business problems," one big enterprise told me. Enterprises are working through the application of AI, via agents, but they'd move faster and more efficiently with some help, so maybe AI proponents should be thinking more about how to provide it.