By putting optics in silicon, CPO promises dramatic boosts in speed while lowering power requirements, if it can meet reliability expectations and outlast competing approaches.

The proliferation of artificial intelligence workloads is creating concern about the ability of data center networks to keep up with demand. One part of the solution is co-packaged optics (CPO), which involves incorporating optical technology more deeply into data center network switches. CPO promises not only to support the higher speeds that AI workloads demand but also to reduce power consumption – a crucial factor in all data centers.

It's early days for CPO, and not all switch vendors are on board with the idea that it's the best path forward. But many experts say CPO is a viable solution to the nearly insatiable demand for data center network speed and capacity that AI is creating.

What is co-packaged optics?

Traditionally, data center switches connected to copper network cables via a network interface card (NIC). With the transition to fiber-optic networking, the NIC must include a transceiver that uses a digital signal processor (DSP) to translate electrical signals to optical signals for transmission over fiber-optic cables (and vice versa).

CPO eliminates the transceiver and DSP, instead embedding the electrical-to-optical translation onto the switch application-specific integrated circuit (ASIC). In other words, all the electrical-to-optical translations take place on a chip, in silicon. It's then possible to connect an optical cable directly to the switch and get a continuous optical stream from the switch ASIC to the fiber cable – no conversions required.

This is not just theoretical. TSMC, the world's largest semiconductor manufacturing company, has a process for building CPO-capable chips and is working with vendors including Nvidia and Broadcom to deploy them, says Scott Wilkinson, lead analyst with the consultancy Cignal AI.
Indeed, Nvidia has already announced optical switches that support CPO.

Why do we need co-packaged optics?

The development of CPO is timely given the extreme demands AI is placing on data centers and data center networks. In the leaf-spine architecture that has dominated large data centers for more than a decade, servers within a rack typically connect via copper cables to a top-of-rack (ToR) switch. The ToR switches then connect to spine switches, which form the backbone of the data center network. That backbone is usually fiber-optic, which means the ToR switches need optical connections to it. Optical NICs forge those connections. So, every ToR switch that needs to connect to the fiber-optic backbone needs a NIC with an optical transceiver. In an AI data center, that's a lot of connections, meaning a lot of NICs.

"We're talking about tens and hundreds and thousands of racks," says Gilad Shainer, senior vice president of networking at Nvidia, which in March announced a series of optical switches that support CPO.

What's more, AI workloads are not necessarily confined to a single server as traditional workloads are, nor even to a single rack. Copper cables work well to connect servers within a rack (or within a chassis) because the distances are short. But copper can't support the speeds and low latency required to ship data over the longer distances between racks, Shainer says.

How CPO reduces data center power consumption

The result is that AI data center networks have far more optical cables than their traditional predecessors. That contributes significantly to the power consumption of those data centers, because it takes power to convert electrical signals into light, and to power the lasers that create that light. What's more, each transition a signal must go through, where electricity is converted to light or vice versa, introduces some amount of signal loss.
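To get a feel for why those conversions add up, here is a rough back-of-the-envelope sketch. All figures in it are illustrative assumptions, not vendor-published numbers: the rack count, uplink count, and per-transceiver wattage are hypothetical, and the 3.5x reduction factor is the vendor claim discussed in this article.

```python
# Back-of-the-envelope estimate of pluggable-optics power in a leaf-spine
# fabric. Every number here is an illustrative assumption, not a vendor spec.

RACKS = 1000                # hypothetical AI cluster size
UPLINKS_PER_TOR = 32        # assumed optical uplinks per top-of-rack switch
WATTS_PER_PLUGGABLE = 15.0  # assumed draw of one 800G pluggable transceiver
CPO_REDUCTION_FACTOR = 3.5  # vendor-claimed networking power reduction

def interconnect_power(racks: int, uplinks: int, watts_each: float) -> float:
    """Total transceiver power on the ToR-to-spine links.

    Each link needs a transceiver at both ends (ToR side and spine side),
    hence the factor of 2.
    """
    return racks * uplinks * watts_each * 2

pluggable_w = interconnect_power(RACKS, UPLINKS_PER_TOR, WATTS_PER_PLUGGABLE)
cpo_w = pluggable_w / CPO_REDUCTION_FACTOR

print(f"Pluggable optics: {pluggable_w / 1000:.0f} kW")
print(f"CPO (assuming a 3.5x reduction): {cpo_w / 1000:.0f} kW")
```

Even with these made-up inputs, the pattern is the point: transceiver power scales with the number of optical links, which is exactly what explodes in an AI data center.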
So, you need enough power to ensure the signal can pass through multiple transitions, Shainer says. "In a large AI factory, the power consumption of the optical network becomes a big number. It can be almost 10% of the compute capacity," he says. By eliminating all those transceivers, CPO can reduce the power requirements dedicated to networking by at least 3.5 times, he says.

That may be overstating things, but the savings are significant, according to Zeus Kerravala, founder and principal analyst with ZK Research. "CPO can reduce interconnect power by 60% or 70% in some cases," he says. "So that can save quite a bit per switch versus pluggable modules."

How co-packaged optics support ever-increasing speeds

Speed is another issue. As compute demand continues to increase, it will drive demand for faster speeds within a rack or chassis – so-called scale-up networks (versus connections between racks, known as scale-out). At some point, we'll see scale-up networks that require 400G/lane. A "lane" refers to the capacity of a given laser. An 800G module, for example, may have eight lasers running at 100G each, so eight lanes.

"When we go to 1.6 terabit, which is what's coming out right now, most of those are 200G per lane. And then the next generation would be 400G per lane," Wilkinson says. "You'll get to the point when those connections can't be copper anymore. They have to be optics to get the density that you need."

How reliable is co-packaged optics?

Another issue is reliability, which is paramount in any data center. Here, the jury is still out. We don't yet have CPO deployed at scale, so nobody can say for sure how reliable it will be. But there's plenty of well-reasoned conjecture.

When it announced its CPO-capable switches, Nvidia said they would improve resiliency by 10 times at scale compared to previous switch generations. Several factors contribute to this claim, including the fact that the optical switches require four times fewer lasers, Shainer says.
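The lane arithmetic Wilkinson describes, and the laser-count angle behind the reliability claims, both come down to simple division: module throughput equals lane count times lane rate, and each lane traditionally means another laser. A minimal sketch, using the example figures quoted above:

```python
# Lane arithmetic for optical modules: module speed = lanes x lane rate.
# The speeds below are the examples quoted in the article; faster lanes
# mean fewer lanes (and, traditionally, fewer lasers) per module.

def lanes_needed(module_gbps: int, lane_gbps: int) -> int:
    """Number of lanes required to hit a given module speed."""
    assert module_gbps % lane_gbps == 0, "module speed must divide evenly"
    return module_gbps // lane_gbps

print(lanes_needed(800, 100))   # 800G module at 100G/lane -> 8 lanes
print(lanes_needed(1600, 200))  # 1.6T at 200G/lane -> still 8 lanes
print(lanes_needed(1600, 400))  # next-gen 400G/lane -> only 4 lanes
```

The same math shows why fewer, shared lasers matter for reliability: fewer lasers per unit of bandwidth means fewer components that can fail.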
Whereas the laser source was previously part of the transceiver, the optical engine is now incorporated onto the ASIC, allowing multiple optical channels to share a single laser. Additionally, in Nvidia's implementation, the laser source is located outside the switch. "We want to keep the ability to replace a laser source in case it has failed and needs to be replaced," he says. "They are completely hot-swappable, so you don't need to shut down the switch."

Nonetheless, you may often hear that when something fails in a CPO box, you need to replace the entire box. That may be true if the failure is in the photonics engine embedded in silicon inside the box. "But they shouldn't fail that often. There are not a lot of moving parts in there," Wilkinson says. While he understands the argument about failures, he doesn't expect it to pan out as CPO gets deployed. "It's a fallacy," he says.

There's also a simple workaround to the resiliency issue, which hyperscalers are already talking about, Kerravala says: overbuild. "Have 10% more ports than you need, or 5%," he says. "If you lose a port because the optic goes bad, you just move it and plug it in somewhere else."

Which vendors are backing co-packaged optics?

The list of vendors that have or plan to have CPO offerings is not long, unless you include various component players like TSMC. But among the major switch vendors, here's a rundown:

Broadcom has been making steady progress on CPO since 2021. It is now shipping its third-generation offering, the Tomahawk 6 – Davisson (TH6-Davisson), "to early access customers and partners." Developed in conjunction with Micas Networks, TSMC, HPE, and others, the TH6-Davisson is an Ethernet switch supporting 102.4 Tbps of optically enabled switching capacity.

In March 2025, Nvidia announced new photonics switches that include support for CPO, likewise developed with partners including TSMC.
The Nvidia Spectrum-X Photonics switches support a total throughput of 400 Tbps in various port configurations, while the liquid-cooled Nvidia Quantum-X Photonics switches provide 144 ports of 800 Gbps InfiniBand based on 200 Gbps SerDes. No word yet on exactly when they'll be shipping other than "next year," with the InfiniBand models debuting first, according to Shainer.

Cisco is proceeding cautiously with its optics strategy, apparently due at least in part to the reliability issue. It demoed a CPO switch in 2023 but has yet to make a formal announcement. Bill Gartner, senior vice president and general manager of Cisco's optical systems and optics business, recently told Network World: "The thing that I think we have to be cautious about as an industry is that CPO package assembly will have 1,000 or more optics connections, and that means fiber attached to pieces of silicon. And I think the industry is going to go through a learning curve of making that a very high yield and very highly reliable."

How does linear pluggable optics compare to CPO?

Arista, you may have noticed, is absent from that CPO list. It is backing a competing technology, linear pluggable optics (LPO). LPO is similar in concept to CPO but with a distinct difference: It does not embed optics at the chip level. Instead, as its name implies, it uses the familiar pluggable format. It eliminates the DSP on the transceiver, instead using linear transimpedance amplifiers (TIAs) to drive optical signals. The net result, proponents say, is that LPO can connect to standard switch ports, is easily replaceable, and delivers much of the same power efficiency as CPO.

In short, LPO is trying to do the same thing as CPO, but removing the DSP creates issues, Wilkinson says. The DSP in traditional transceivers compensates for distortions in the electrical link from the switch, across a circuit board, through a connector, and into the pluggable optic, he says.
There is a DSP on the switch as well, and LPO assumes that DSP will be good enough to compensate for any distortions. "In some cases that may be true, but the variability across devices and networks makes it impossible to guarantee in all cases," he says. "With CPO, that problem goes away because the electrical link between the switch and the optical engines is vanishingly small."

Vijay Vusirikala, a distinguished leader for AI systems at Arista, has a different view. The underlying optics – namely, the number of lasers and the silicon photonics technology – are the same for LPO and CPO, he says (via email). Consequently, the two technologies deliver the same benefits in terms of power consumption. In fact, LPO may draw less power at higher speeds, such as 1600G. The difference lies in the packaging: on-chip vs. pluggable.

"However, the difference in packaging has a dramatic impact on serviceability. If an LPO optics [module] fails, it can easily be replaced in a matter of minutes," Vusirikala says. "With CPO, a failure of the optics requires replacing the entire switch, which typically takes hours."

What's more, pluggable optics modules as a whole have an edge in terms of maturity, with some 100 million units expected to ship next year, whereas CPO isn't likely to ship in volume until 2027, he says.

What about co-packaged copper vs. CPO?

Yet another technology, co-packaged copper (CPC), is similar in concept to CPO but co-packages copper cabling with the ASIC instead of optics. "CPC is only going to work where copper works, so it's only going to be for short distances – like the scale-up, inside-the-AI-node kind of application," Wilkinson says. "Eventually copper is going to run out of gas, and you've got to go to optics."

Kerravala agrees. "The use cases for CPC are more legacy interconnects, small enterprise racks, and short reach," he says. But CPO has a big edge in terms of power consumption.
With CPC, "you wind up with a bigger footprint and using more power," he says.

Should enterprises care about co-packaged optics?

Given the competing technologies and use cases, enterprise users may well ask, "Do I need to care about co-packaged optics?" Clearly, the technology is for high-end data center switches, so it's mainly a hyperscaler play – but not entirely. "If you are a large enterprise and have rack upon rack upon rack of switches, and they're all basically doing the same thing, this might be interesting to you," Wilkinson says.

Kerravala says CPO is not urgent, which is just as well given the limited options available today. "But as enterprises get into building their own private AI and [high-performance computing] clusters, then it's worth taking a look at," he says. "The enterprise would really need to understand their AI roadmap and what they're going to build vs. what they're going to run in the cloud."

Nvidia's argument comes down to this: If you're building a data center, you might as well use CPO. "I would expect that everyone building data centers would love using CPO to reduce power consumption, increase compute capacity, reduce the cost, and improve the resiliency of the data center," Shainer says.