Everyone is watching the GPU. They should be looking at the cable.
While Nvidia’s Blackwell chips grab the headlines, the bottleneck for the next AI generation isn’t silicon—it’s connectivity. Enter 1.6 Terabit Ethernet (1.6TbE). Currently being finalized by the IEEE, this standard isn’t just an upgrade; it’s a necessary double-time march for data center throughput.
Why the rush? “Trillion-parameter plus” models. Training beasts like the hypothetical GPT-6 requires treating thousands of GPUs as one giant brain. That brain needs a nervous system. Pre-standard hardware started shipping in late 2024, with mass adoption slated for 2026. If this plumbing doesn’t arrive on time, the scaling laws driving AI hit a physical wall. Hard.
The Bottleneck: Why 800G is No Longer Enough
Training LLMs is a “scale-out” nightmare. It’s not about one fast chip; it’s about 50,000 of them talking at once. When GPU A shouts a vector to GPU B across the hall, that signal hits optical transceivers and switches.
Right now, 800 Gigabit Ethernet (800GbE) is the standard. It’s failing. For dense clusters like Nvidia’s GB200 NVL72, 800G forces architects to sprawl—more cables, more switch ports, more latency. 1.6T solves this by doubling the speed of the “lanes” inside the cable. Same physical footprint, double the data.
224G SerDes: The Physics Shift
You can’t just bundle more copper and call it a day. 1.6T demands new signal physics. The key here is the 224 Gbps SerDes (Serializer/Deserializer).
Old 800G networks used eight lanes of 100 Gbps. The new IEEE 802.3dj standard keeps the eight lanes but cranks them to 200 Gbps each. Both generations rely on PAM4 (Pulse Amplitude Modulation 4-level), which packs two bits of data into every pulse; the jump to 200 Gbps comes from roughly doubling the symbol rate on each lane.
Running each lane at a raw 224 Gbps (roughly 200 Gbps of usable data after encoding and FEC overhead) hits that 1.6 Terabit mark using existing form factors—OSFP and QSFP-DD. This backward compatibility is the killer feature. Technicians can upgrade bandwidth without ripping out the server racks.
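To make the lane arithmetic concrete, here is a minimal back-of-the-envelope sketch in plain Python. The figures are nominal and ignore FEC and encoding overhead:

```python
# Nominal lane math for the 8 x 200G configuration described above.
PAM4_BITS_PER_SYMBOL = 2  # PAM4 encodes 2 bits in every pulse

def aggregate_gbps(lanes: int, lane_gbps: float) -> float:
    """Total throughput of a module with the given number of lanes."""
    return lanes * lane_gbps

def symbol_rate_gbaud(lane_gbps: float) -> float:
    """Symbol (baud) rate a PAM4 lane needs to carry lane_gbps."""
    return lane_gbps / PAM4_BITS_PER_SYMBOL

print(aggregate_gbps(8, 100))   # 800G era: 800
print(aggregate_gbps(8, 200))   # 1.6T era: 1600
print(symbol_rate_gbaud(200))   # ~100 GBd per lane, before overhead
```

The takeaway: the modulation stays the same, so the entire generational jump comes from signaling faster on the same eight lanes.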
Comparison of Ethernet Generations
| Feature | 400G Ethernet | 800G Ethernet | 1.6T Ethernet |
|---|---|---|---|
| Primary Era | 2018–2022 | 2022–2025 | 2025–2027 |
| SerDes Speed | 50 Gbps PAM4 | 100 Gbps PAM4 | 200 Gbps PAM4 |
| Lane Configuration | 8 × 50G | 8 × 100G | 8 × 200G |
| Typical Switch Chip | Broadcom Tomahawk 4 | Broadcom Tomahawk 5 | Broadcom Tomahawk 6 |
| Primary Workload | Cloud / Video Streaming | GenAI (GPT-4 era) | Next-Gen AI (GPT-6 era) |
The Silicon Giants: Broadcom and Marvell

Silicon manufacturers are currently fighting a war over the DSPs and ASICs needed to manage this traffic.
Broadcom fired the first shot with the Tomahawk 6. This ASIC delivers 102.4 Tbps of switching capacity. In plain English? A single chip can shuttle millions of simultaneous 4K streams. It supports 64 ports of 1.6T Ethernet, flattening the network hierarchy and cutting the number of “hops” data takes between GPUs.
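The port count falls straight out of the chip’s aggregate capacity. A simplified sketch (real switches can also split their SerDes into other configurations, so treat this as an approximation):

```python
# Radix sketch: how many front-panel ports a switch ASIC can expose.
# Simplified model: ports = aggregate capacity / per-port speed.

def port_count(switch_tbps: float, port_tbps: float) -> int:
    """Number of ports of a given speed a switch chip can offer."""
    return round(switch_tbps / port_tbps)

print(port_count(102.4, 1.6))   # Tomahawk 6 as 64 ports of 1.6T
print(port_count(102.4, 0.8))   # ...or 128 ports of 800G
print(port_count(51.2, 0.8))    # Tomahawk 5: 64 ports of 800G
```

Doubling the chip while doubling the port speed keeps the radix at 64, which is exactly what lets architects keep the same topology while halving the hop count per bit.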
Marvell counters with the Nova 2 optical DSP. This chip sits inside the cable module, cleaning up the electrical signal before it is converted to light. Built on a 5nm process, it’s designed to survive the intense heat of high-speed transceivers—a critical trait when cooling already eats roughly 40% of a data center’s power budget.
The Heat Problem: Enter Linear Pluggable Optics (LPO)

Speed generates heat. A massive chunk of data center energy is burned just converting signals from electricity to light and back. For 1.6T to make financial sense, the industry needs Linear Drive Pluggable Optics (LPO).
Traditional modules use a DSP to retime and clean up the signal before sending it. LPO rips that DSP out entirely, relying on the switch ASIC (like the Tomahawk 6) to maintain signal integrity. Less hardware in the cable means less heat—cutting power consumption by up to 50% per port.
This isn’t a luxury; it’s survival. With AI clusters projected to consume gigawatts, shaving watts off a transceiver saves millions in OpEx. It allows for denser GPU packing without melting the rack.
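The OpEx argument can be sketched with rough numbers. Everything below is an illustrative assumption (module wattage, port count, electricity price), not a vendor figure; only the “up to 50% per port” savings claim comes from the text above:

```python
# Rough OpEx sketch for dropping the DSP from each transceiver.
# All constants are illustrative assumptions, not vendor specs.

DSP_MODULE_WATTS = 25.0   # assumed power of a conventional 1.6T module
LPO_SAVINGS = 0.50        # "up to 50% per port" claim
PORTS = 100_000           # a large AI cluster's worth of optical ports
USD_PER_KWH = 0.08        # assumed industrial electricity price

watts_saved = DSP_MODULE_WATTS * LPO_SAVINGS * PORTS
kwh_per_year = watts_saved / 1000 * 24 * 365

print(f"{watts_saved / 1e6:.2f} MW saved")            # 1.25 MW
print(f"${kwh_per_year * USD_PER_KWH:,.0f} per year") # ~$876,000
```

Under these assumptions a single transceiver decision is worth over a megawatt of continuous load—which is why “shaving watts” scales to millions in OpEx.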
Looking Forward
1.6T Ethernet is the quiet infrastructure enabling the next AI wave. Models get the press; the wires do the work. Moving exabytes of data between GPUs demands both massive bandwidth and microsecond-scale latency—and that is what this plumbing delivers.
As 1.6T rolls out through 2026, engineers are already eyeing the 3.2 Terabit horizon. But for now, the 224G SerDes lane is the physical limit of commercial deployment. When GPT-6 arrives, it won’t just be a triumph of code. It will be a victory for the optical engineering that keeps the hive mind connected.