Opening the NVLink ecosystem is a ‘masterstroke’ that leaves Nvidia firmly in control, says analyst.

Arm is hedging its AI networking bets, joining Nvidia’s NVLink Fusion ecosystem in addition to the rival UALink consortium, which it joined last year. The CPU designer announced at SC25, the International Conference for High Performance Computing, Networking, Storage, and Analysis, that it would join Nvidia and its existing ecosystem partners Intel, Fujitsu, and Qualcomm.

NVLink Fusion, announced at Computex in May, lets partners build what Nvidia describes as semi-custom AI infrastructure, but only using the NVLink computing fabric that Nvidia first introduced over a decade ago.

“Some partners want to mix different CPUs and accelerator technologies for specialized use cases,” said Dion Harris, senior director of HPC and AI infrastructure solutions at Nvidia. “NVLink Fusion enables hyperscalers and custom ASIC builders to leverage Nvidia’s rack-scale architecture to rapidly deploy custom silicon,” Harris said during a media briefing. “Arm is integrating NVLink IP so that their customers can build CPU SoCs to connect to Nvidia GPUs. The addition of Arm gives customers more options for specialized, semi-custom infrastructure.”

Open control

However, Sanchit Vir Gogia, CEO of Greyhound Research, said, “Arm joining NVLink Fusion is being celebrated as a moment of openness, but the architectural consequences are more nuanced. Fusion allows hyperscalers and national labs to plug non-Nvidia CPUs and accelerators into Nvidia GPUs via a coherent interconnect, but with one rule: the connection must terminate on Nvidia’s fabric. This creates the impression of flexibility while reinforcing a center of gravity that remains exclusively under Nvidia’s control.”

He added, “By bringing Arm, Fujitsu, Qualcomm, and custom silicon builders into the fold, Nvidia is ensuring that even heterogeneous designs remain Nvidia-coherent. It is a strategic masterstroke. Competitors can innovate around the edges, but not around the backbone.”

All four of these partners are already part of the more than 80-member UALink consortium, founded in 2024, which released a new industry standard in April defining a low-latency, high-bandwidth interconnect for communication between accelerators and switches in AI computing pods. The UALink 1.0 specification enables the connection of up to 1,024 accelerators within an AI pod, “delivering the open standard interconnect for next-generation AI cluster performance,” the consortium said in its announcement. Nvidia is not a member of the group.