Expanded DRAM and HBM output is expected to relieve infrastructure bottlenecks for hyperscalers and data centers amid rising AI demand.

Samsung Electronics has said it will begin mass-producing its most advanced memory chips next year to meet surging AI demand, a shift analysts say could help clear the supply bottlenecks that have slowed cloud and enterprise network upgrades.

“Demand for HBM4 is also projected to increase, and the Company plans to proactively respond with capacity expansion in 1c [DRAM],” Samsung said in a statement. “It will also concentrate on expanding sales of other high-value-added products, such as DDR5, LPDDR5x, and high-density QLC SSDs, to meet demand for AI applications.”

Samsung said it will continue to prioritize server demand for HBM3E and high-density enterprise SSDs in the fourth quarter, citing strong orders from AI and conventional data centers. The announcement follows rival SK Hynix’s bullish forecast of an extended “super-cycle” in the memory market, after that company said it had sold out all of its high-bandwidth chip production for next year.

Ramping up output of high-bandwidth memory (HBM) and DRAM could ease the component shortages that have slowed network and data center expansion, helping avoid delays in infrastructure upgrades across hyperscale and enterprise environments. A rebound in DRAM and HBM production could also restore predictability to hardware procurement cycles and reduce costs for high-throughput systems.

The company confirmed that its latest-generation HBM3E chips are now being shipped to “all related customers,” a possible sign that supply to major AI chipmakers such as Nvidia is stabilizing. With mass production of HBM4 expected next year, Samsung could eventually help relieve pressure on the broader enterprise infrastructure ecosystem, from cloud providers building new AI clusters to data center operators seeking to expand switching and storage capacity.
Samsung’s Foundry division also plans to begin operating its new 2nm fab in Taylor, Texas, in 2026 and to supply HBM4 base dies, a move that could further stabilize component availability for US cloud and networking infrastructure providers.

Easing the memory chokehold

Easing DRAM and NAND lead times will unlock delayed infrastructure projects, particularly among hyperscalers, according to Manish Rawat, semiconductor analyst at TechInsights.

“As component availability improves from months to weeks, deferred server and storage upgrades can transition to active scheduling,” Rawat said. “Hyperscalers are expected to lead these restarts, followed by large enterprises once pricing and delivery stabilize. Improved access to high-density memory will also drive faster refresh cycles and higher-performance rack designs, favoring denser server configurations. Procurement models may shift from long-term, buffer-heavy strategies to more agile, just-in-time or spot-buy approaches.”

Samsung’s expanded role as a “meaningful volume supplier” of HBM3E 12-high DRAM will also be crucial for hyperscalers planning their 2026 AI infrastructure rollouts, according to Danish Faruqui, CEO of Fab Economics.

“Without Samsung’s contribution, most hyperscaler ASIC programs, including Google’s TPU v7, AWS’s Trainium 3, and Microsoft’s in-house accelerators, were facing one- to two-quarter delays due to the limited HBM3E 12-high supply from SK Hynix,” Faruqui said. “These products form the backbone of next-generation AI data centers, and volume ramp-up depends directly on Samsung’s ability to deliver.”

Other analysts agree that the timing is pivotal, as the bottleneck in AI infrastructure shifts from GPUs to memory. The rapid pace of AI model training and inference has moved performance constraints beyond compute to memory bandwidth and density, making diversification of supply essential.
“The industry doesn’t want another Nvidia-like situation where a single supplier becomes a chokepoint,” said Pareekh Jain, CEO at Pareekh Consulting. “By encouraging Samsung to scale production alongside SK Hynix, AI chipmakers are ensuring multiple sources of supply so that memory doesn’t become the next bottleneck.”

Limits to memory relief

Even with Samsung’s capacity expansion, analysts caution that the global supply of advanced memory chips is unlikely to keep pace with soaring AI demand. The surge in HBM production is already straining overall DRAM capacity, and the imbalance may persist well into 2026.

Shrish Pant, director analyst at Gartner, said any notion of oversupply is unrealistic given current manufacturing constraints. “Any meaningful oversupply of DRAM and NAND in 2026 is highly unlikely, as HBM production now consumes more than a quarter of total DRAM wafer capacity,” Pant said. “Even though Samsung’s ramp-up will help significantly, it still won’t be enough to meet expected demand.”

Pant noted that memory continues to be “the silent enabler of AI data centers,” constraining both the speed and scale of AI and high-bandwidth networking infrastructure.

Where impact will emerge

While Samsung’s capacity expansion is unlikely to eliminate supply constraints, analysts say its effects will begin to surface unevenly across segments of the data center ecosystem.

“The first visible relief from Samsung’s additional capacity will likely appear in AI-focused data center deployments, followed by AI-networking infrastructure, where higher bandwidth memory is now the limiting factor rather than compute,” Faruqui said.

Pant said core cloud data centers built on traditional infrastructure will likely be the biggest beneficiaries of stabilized DRAM and NAND supply and pricing, as hyperscalers and operators continue to optimize their non-AI spending through 2026 and 2027.
“Once DRAM and NAND prices stabilize, a second wave of growth will come from targeted edge deployments by CDNs, telcos, and regional providers focused on latency-sensitive and data-sovereign workloads but progressing more slowly due to distributed site constraints,” Rawat said. “AI networking upgrades will occur selectively and colocated within major data centers and AI campuses, with broader enterprise-wide network refreshes deferred until clearer demand signals and cost recovery paths emerge.”