VMware’s shift to real-world telemetry data reveals a significant gap between recommended and actual resource usage.

Enterprises running VMware’s vSAN storage platform may have spent significantly more on hardware than necessary. The company has acknowledged that its long-standing sizing recommendations were based on synthetic testing that did not reflect real-world conditions.

VMware revealed that analysis of telemetry data from thousands of production vSAN clusters showed that “vSAN clusters use much less RAM than expected” and “may use fewer CPU resources than expected.”

“Hardware guidance for vSAN has historically been derived from synthetic testing,” Pete Koehler, product marketing engineer at VMware, wrote in a blog post announcing the revised specifications. “While useful, synthetic tests do not reflect the characteristics of real world workloads and the behavior of the storage system.”

Based on these findings, the company has revised its specifications for ReadyNode – server configurations certified by VMware for use in vSAN deployments – downward, in some cases dramatically. In the latest guidance, RAM requirements for storage clusters are reduced by up to 67%, while the minimum CPU core count falls by up to 33%. For HCI clusters, memory requirements are reduced by up to 50%, according to the blog post.

To put that in perspective: the highest-performing storage cluster profile previously required 768GB of RAM per host; the new guidance recommends a minimum of 256GB. The smallest profile drops from 256GB to 128GB, with cores reduced from 24 to 16, the post added.

VMware claimed the revised specifications could save enterprises in the “five figures” per host, with “cascading savings” across distributed storage clusters through reduced licensing, power, cooling, and rack space requirements.

Scale of the over-investment

While VMware is framing the announcement as a cost-saving opportunity, analysts said it raises uncomfortable questions for organisations that invested heavily in infrastructure based on the company’s previous guidance.

“This isn’t a simple case of minor over-engineering,” said Sanchit Vir Gogia, chief analyst and CEO at Greyhound Research. “Enterprises that followed VMware’s earlier vSAN guidance invested heavily in infrastructure that far exceeded what real-world workloads required. These weren’t theoretical missteps — they resulted in racks of over-provisioned memory and underutilised compute sitting idle across data centres globally.”

Charlie Dai, VP and principal analyst at Forrester, agreed the impact is substantial. “VMware’s previous inflated guidance may translate to substantial infrastructure costs. Therefore, this new guidance will be very impactful to reduce unnecessary capital spend.”

Broadcom did not immediately respond to a request for comment.

Why did it take so long?

The scale of the gap between recommended and actual resource usage raises a more pointed question: VMware has collected telemetry data from customer environments for years, so why did the company not identify and address this sooner?

“The telemetry was there. What was missing was the mechanism — and the will — to act on it,” said Gogia. “For years, customers flagged that production clusters weren’t coming close to the resource ceilings VMware prescribed. Still, the official sizing remained unchanged.”

Dai said the delay reflects a systemic issue that extends beyond VMware.
“The delay reflects a gap between lab-based validation and production realities. This raises broader concerns: vendor sizing guidance often prioritises risk avoidance over cost efficiency, so CIOs should validate recommendations against real workload data.”

VMware framed its previous approach as appropriately cautious, designed to ensure vSAN “has sufficient hardware to meet the desired performance expectations under the most extreme circumstances.” Yet the gap between synthetic test requirements and actual production needs proved substantial enough that the company overhauled its entire ReadyNode certification framework.

Timing raises eyebrows

The admission also came at an awkward moment for VMware’s parent company, Broadcom, which has faced intense criticism over licensing changes that drove up costs for many enterprise customers following its $69 billion acquisition in late 2023. Broadcom’s restructuring bundled VMware products into broader suites and shifted customers toward subscription models, prompting some organisations to evaluate alternatives, including Nutanix, OpenStack, and Proxmox.

Against this backdrop, analysts noted the convenient timing of VMware’s announcement.

“It’s hard to ignore the timing,” said Gogia. “VMware’s recalibration lands in the thick of customer unease around Broadcom’s licensing changes. Reducing hardware requirements effectively reshapes the cost narrative and makes the VMware stack more palatable just as many CIOs are exploring exit strategies.”

Dai agreed. “Amid backlash over Broadcom licensing and rising interest in Nutanix and OpenStack, VMware needs to signal cost optimisation. While this improves TCO, enterprises should view it as both a technical adjustment and a competitive retention play.”

However, Gogia cautioned that reduced hardware costs do not address the core concerns driving customer unease. “Trimming back hardware requirements may ease the cost burden, but it doesn’t resolve the deeper unease around how VMware’s licensing is structured, how prices might evolve, or what the long-term product direction truly looks like. It’s a welcome course correction, not a reset button.”

What should CIOs do now?

Given these complexities, what should enterprises currently running or planning vSAN deployments actually do with this information?

“This update should prompt every CIO running — or planning — a vSAN deployment to take a fresh look at their infrastructure strategy,” said Gogia. “CIOs should prioritise forward-looking application of the new sizing model, use it to influence upcoming contracts, and avoid the temptation to reengineer stable clusters mid-cycle unless there’s a compelling case.”

Dai recommended a similar approach. “For existing deployments, CIOs should evaluate whether hardware can be repurposed or scaled down in refresh cycles. For new projects, apply revised specs to avoid overprovisioning. More broadly, they should embed telemetry-driven sizing into virtualization strategy to prevent similar inefficiencies across platforms.”

Both analysts emphasised that the lessons extend beyond VMware. “This is also a wake-up call for CIOs and architects: vendor guidance cannot be followed blindly,” said Gogia. “Internal telemetry, context-specific modelling, and continuous validation must now take centre stage in infrastructure planning.”
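Acting on that advice does not require much tooling. The sketch below is a minimal, illustrative example (not drawn from VMware’s guidance) that uses the open-source pyVmomi SDK to pull per-host memory and CPU utilisation from vCenter and compare it against a sizing target; the vCenter address, credentials, and the 256GB and 16-core targets are placeholders based on the new minimums cited above, and a production script would need proper certificate validation and error handling.

```python
# Illustrative sketch: compare actual per-host utilisation (via vCenter) against
# an assumed sizing target. Hostname, credentials, and targets are placeholders.
import ssl

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

TARGET_RAM_GB = 256   # assumed target: new minimum for the largest storage profile
TARGET_CORES = 16     # assumed target: new minimum cores for the smallest profile

ctx = ssl._create_unverified_context()  # lab use only; validate certs in production
si = SmartConnect(host="vcenter.example.com", user="readonly@vsphere.local",
                  pwd="********", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        hw = host.summary.hardware
        qs = host.summary.quickStats
        ram_total_gb = hw.memorySize / 1024**3        # memorySize is in bytes
        ram_used_gb = qs.overallMemoryUsage / 1024    # overallMemoryUsage is in MB
        cpu_total_mhz = hw.cpuMhz * hw.numCpuCores
        cpu_used_pct = 100.0 * qs.overallCpuUsage / cpu_total_mhz
        print(f"{host.name}: {ram_used_gb:.0f}/{ram_total_gb:.0f} GB RAM in use "
              f"({100.0 * ram_used_gb / ram_total_gb:.0f}%), "
              f"{hw.numCpuCores} cores at {cpu_used_pct:.0f}% CPU; "
              f"{ram_total_gb - TARGET_RAM_GB:+.0f} GB and "
              f"{hw.numCpuCores - TARGET_CORES:+d} cores versus target")
finally:
    Disconnect(si)
```

A snapshot like this only captures a point in time; sizing decisions should draw on utilisation history over weeks or months, which is precisely the kind of production telemetry VMware says informed its revised guidance.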