Arm backs both sides in UALink vs NVLink battle for bandwidth

News
Nov 17, 2025

Opening the NVLink ecosystem is a ‘masterstroke’ that leaves Nvidia firmly in control, says analyst.


Arm is hedging its AI networking bets, joining Nvidia’s NVLink Fusion ecosystem in addition to the rival UALink consortium that it joined last year.

The CPU designer announced at SC25, the International Conference for High Performance Computing, Networking, Storage, and Analysis, that it would join Nvidia and the ecosystem's existing partners Intel, Fujitsu, and Qualcomm.

NVLink Fusion, announced at Computex in May, lets partners build what Nvidia describes as semi-custom AI infrastructure, but only using the NVLink computing fabric that Nvidia first introduced over a decade ago.

“Some partners want to mix different CPUs and accelerator technologies for specialized use cases,” said Dion Harris, senior director, HPC and AI infrastructure solutions at Nvidia.

“NVLink Fusion enables hyperscalers and custom ASIC builders to leverage Nvidia’s rack-scale architecture to rapidly deploy custom silicon,” Harris said during a media briefing. “Arm is integrating NVLink IP so that their customers can build CPU SoCs to connect to Nvidia GPUs. The addition of Arm gives customers more options for specialized, semi-custom infrastructure.”

Open control

However, Sanchit Vir Gogia, CEO of Greyhound Research, said, “Arm joining NVLink Fusion is being celebrated as a moment of openness, but the architectural consequences are more nuanced. Fusion allows hyperscalers and national labs to plug non-Nvidia CPUs and accelerators into Nvidia GPUs via a coherent interconnect, but with one rule: the connection must terminate on Nvidia’s fabric. This creates the impression of flexibility while reinforcing a center of gravity that remains exclusively under Nvidia’s control.”

He added, “By bringing Arm, Fujitsu, Qualcomm, and custom silicon builders into the fold, Nvidia is ensuring that even heterogeneous designs remain Nvidia-coherent. It is a strategic masterstroke. Competitors can innovate around the edges, but not around the backbone.”

All four of these partners already belong to the UALink consortium, founded in 2024, which has more than 80 members and in April released a new industry standard defining a low-latency, high-bandwidth interconnect for communication between accelerators and switches in AI computing pods. The UALink 1.0 specification enables the connection of up to 1,024 accelerators within an AI pod, “delivering the open standard interconnect for next-generation AI cluster performance,” the consortium said in its announcement. Nvidia is not a member of the group.

Lynn Greiner

Lynn Greiner has been interpreting tech for businesses for over 20 years and has worked in the industry as well as writing about it, giving her a unique perspective into the issues companies face. She has both IT credentials and a business degree.

Lynn was most recently Editor in Chief of IT World Canada. Earlier in her career, Lynn held IT leadership roles at Ipsos and The NPD Group Canada. Her work has appeared in The Globe and Mail, Financial Post, InformIT, and Channel Daily News, among other publications.

She won a 2014 Excellence in Science & Technology Reporting Award sponsored by National Public Relations for her work raising the public profile of science and technology and contributing to the building of a science and technology culture in Canada.
