IBM TechXchange 2025: Why Going Back to Data Basics is the Only Real News
By David Linthicum
IBM’s TechXchange event this week, as expected, drew in a crowd of technologists, analysts, and a parade of business suits eager to hear the next big thing in enterprise tech. The high-production staging, the bold statements about “enterprise AI readiness,” the flurry of partner logos—all familiar, all textbook IBM. And yet, as I followed the sessions and dug through the headlines, one thought kept recurring: it’s all… very much the same. More companies, more partner announcements, more talk about AI assistants and developer productivity. The cycle continues, and the spectacle is routine.
When you’ve seen as many of these events as I have, you recognize the pattern. The bulk of the “news” is partnerships—this time with Anthropic, among others. The message: we’re open, we’re ecosystem-friendly, we’re ready for multimodal AI in the enterprise. But let’s be honest: if you’re a CIO sitting anywhere outside of the Valley echo chamber, you’re asking, “What does that mean for me? Why should I care?”
The answer is, you probably shouldn’t—not about the partnerships, anyway.
The Glengarry Glen Ross moment this week was not a new LLM, not a surprise acquisition, not some “breakthrough” demo. It was a wave of announcements implicitly admitting the hard truth almost everyone in enterprise IT is quietly aware of but rarely states aloud: the hype train around AI is running on empty unless you address your data.
Let’s rewind a little. Over the last two years, the entire enterprise AI crowd has been in a mad dash to “get AI in production.” Boards demanded results, budgets were flooded with pilot projects, and consultancies churned out templates and playbooks promising quick wins. Now, the reckoning. According to MIT and other researchers, roughly 95% of these first-wave enterprise AI projects are failing to deliver meaningful value. The common denominator in these failures? Not the model, not the compute power, not even a lack of talent (though that certainly comes up). It’s data. More specifically, bad, messy, siloed, ill-understood data.
IBM, in a move that may seem understated to the press and perhaps even “not innovative” to industry pundits, is now saying the quiet part out loud: we need to get back to basics with data. The “big” announcements this week—new tooling in watsonx for data integration, a more extensive governance framework (AgentOps), a “free developer edition” of their data platform—none of these are things that will get you stage space at Dreamforce. They are not the kind of superficial product drops that go viral on LinkedIn. But they do hint at a much-needed course correction for enterprises swimming in the debris left behind by AI hype.
Let’s pick apart what this really means. For the past eighteen months, most enterprises mistook AI adoption for a race to throw as many vendor models and chatbot solutions at the wall as possible to see what sticks. Fast-forward to 2025, and the cracks in this approach are visible everywhere. Whether it’s hallucinating LLMs giving wrong financial recommendations or pilot automation projects grinding to a halt when they bump up against data they can’t trust, the lesson is crystal clear: there is no shortcut. Without clean, meaningful, well-governed data, AI is not just ineffective—it’s dangerous and costly.
Some might look at IBM’s return to “data roots” as a lack of innovation, an admission that the company is stuck in the past—after all, didn’t we solve data warehousing and governance a decade ago? The answer, of course, is no. What we have are fragmented lakes and warehouses, hybrid cloud spaghetti, and a culture that rarely treats data as a true enterprise asset. IBM’s strength, if nothing else, is its institutional memory. It knows the game of the mainframe and of mission-critical workloads. Maybe that means it can get away with reminders about “data quality” and “data meaning” that would fall flat coming from a SaaS startup. But it doesn’t mean it’s sexy.
And that’s actually the point. It’s not sexy, but it is necessary.
Repositioning watsonx away from just sleeker AI models toward foundational data plumbing is not going to win IBM any awards from the “innovation” crowd. But it might just be what saves the next wave of enterprise AI projects. The new “Agentic Data Integration,” the governance capabilities, the focus on lineage and auditability—all of that is, structurally, IBM telling enterprise customers: slow down and fix your data before you try to scale your AI. If you ignore this, it won’t matter how many partnerships you have or what assistant sits in your workflow.
This brings us to another point that’s worth emphasizing—these announcements are more signal than noise for those who care to listen. If you’re in the business of making AI work at scale, you should be focused on data integration, data quality, and data meaning. Not the model du jour, not the latest multi-cloud management gadget, but the boring, deeply unglamorous work of understanding your own data estate.
Many executives I talk to are just beginning to realize that the “innovation” they need is not the technology their vendors want to sell—they need to reinvent their stance on data stewardship. Look around: the organizations seeing success with AI are the ones who spent the last decade doubling down on their data architecture, not the ones jumping on the latest framework. IBM’s announcements are a reminder, however thinly veiled, that this is the only game in town that matters.
Of course, IBM still padded its agenda with the usual suspects: more partnerships, more solutions, more developer productivity talk. None of these are novel. Every tech event these days tries to sell the idea of “ecosystem.” You could fill a bingo card with the repeated buzzwords: “trust,” “productivity,” “innovation,” “seamless integration with x.” At this point, I’d wager most enterprise CIOs are more interested in a case study of someone actually making their AI work, from soup to nuts, than another logo on the slide deck.
So where does this leave us after TechXchange 2025? Ironically, the only truly important announcement at this gathering—the one with staying power—is IBM quietly doubling down on the unglamorous side of enterprise tech. It isn’t bringing a new AI model or making a splashy acquisition. It’s telling the market that unless you start tackling the fundamentals of your data estate, your AI journey is going nowhere.
IBM’s message may not make headlines, and it probably won’t move their stock price much in the short term. But for customers and practitioners trying to turn AI from a science project into a profit center, it’s a welcome note of honesty. Clean up your data first. Build a foundation before you experiment. That’s not headline-grabbing, but it is how real digital transformation actually happens.
And that, I’d argue, was the only real news this week. Everything else is just window dressing.
The fun part of this is that this will be the most-read review of the event, and I was not even there. 😇