AI drives more Ethernet, everywhere

by Pelican Press

Spirent Communications, a provider of test and assurance services for devices and networks, has published its inaugural report on the high-speed Ethernet (HSE) market and the impact of artificial intelligence (AI) advancements on datacentre, telecom and enterprise networking. The report finds that AI’s influence cannot be overstated: it is radically transforming datacentres and interconnects, surpassing the impact of traditional cloud applications.

The report, The future of high-speed Ethernet across data center, telecom, and enterprise networking, looked at key drivers and market impacts, including insights from over 340 HSE engagements supported by Spirent in the past year.

Among the key findings was that HSE port shipments continued to accelerate over the past year: suppliers shipped more than 70 million HSE ports in 2023, and volume is expected to explode to more than 240 million ports between 2024 and 2026.

Running ahead of traditional demand curves, the market is already looking to 1.6T Ethernet to pursue AI-driven opportunities as soon as next year, accelerating the ramp-up to higher speeds. The report anticipates that 800G, while still gaining traction, will soon be complemented by 1.6T Ethernet to meet near-term needs, as AI models grow in size and complexity and demand ever more bandwidth.

Spirent said the survey shows that AI is changing the datacentre and the interconnect ecosystem around it, making it necessary to rearchitect networks to support new performance and scalability requirements. As a result, the data revealed, the market will continue to see rapid migration to 400/800G and beyond.

Spirent added that an AI fabric requires new testing approaches. Traditionally, AI datacentre performance testing has required test cases configured to generate AI workloads on real servers, which it called “an extremely expensive” undertaking. As a result, new, cost-efficient ways of stress-testing AI datacentre networking are being adopted that instead emulate realistic xPU workload traffic.
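To illustrate what such emulation can involve, the toy sketch below (not Spirent’s tooling; all names and parameters are hypothetical) generates the flow schedule of a ring all-reduce collective across a set of emulated xPUs, the kind of synthetic workload pattern that can be replayed against a fabric without any real servers.

```python
# Illustrative sketch only: a toy generator of collective-style flow
# schedules for an emulated AI fabric. Names and sizes are hypothetical.

from dataclasses import dataclass

@dataclass
class Flow:
    src: int      # emulated xPU sending data
    dst: int      # emulated xPU receiving data
    bytes_: int   # payload size for this step
    step: int     # collective step index

def ring_allreduce_flows(num_xpus: int, tensor_bytes: int) -> list[Flow]:
    """Model ring all-reduce traffic: each of the 2*(N-1) steps moves
    roughly tensor_bytes / N between neighbouring xPUs on the ring."""
    chunk = tensor_bytes // num_xpus
    flows = []
    for step in range(2 * (num_xpus - 1)):
        for src in range(num_xpus):
            dst = (src + 1) % num_xpus
            flows.append(Flow(src, dst, chunk, step))
    return flows

if __name__ == "__main__":
    # Example: 8 emulated xPUs exchanging a 1 GiB gradient tensor.
    flows = ring_allreduce_flows(num_xpus=8, tensor_bytes=1 << 30)
    total = sum(f.bytes_ for f in flows)
    print(f"{len(flows)} flows, {total / 2**30:.1f} GiB injected into the fabric")
```

A schedule like this can be fed to a traffic generator to stress a fabric with AI-shaped bursts, rather than standing up racks of accelerators for every test run.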

Another key finding was that while hyperscalers are migrating to 800G, enterprises are not waiting on future developments to make progress, and telecom operators need to throw out traditional playbooks to meet customers where they are in ambitious deployment cycles.

Uptake of AI inference will drive growth in edge capacity. Significant amounts of AI traffic will sit at the edge, prompting the need for early capacity upgrades in access and transport networks. Forecasts suggest edge locations could require additional capacity, with far-edge sites needing 25-50G speed-grade upgrades, mid-edge sites 100-200G, and near-edge sites 400G, potentially with a faster refresh cycle to 800G.

Remote Direct Memory Access over Converged Ethernet (RoCEv2) was highlighted as a crucial enabler of high-performance, low-latency networking, allowing direct memory access between devices over standard Ethernet. The report highlights the growing adoption of RoCEv2 in back-end datacentres for AI interconnect fabrics.
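For readers unfamiliar with the protocol, the simplified sketch below shows the framing idea behind RoCEv2: InfiniBand transport headers carried inside a UDP datagram (destination port 4791) over ordinary IP and Ethernet. Field values here are arbitrary examples, and the ICRC and most optional headers are omitted.

```python
# Minimal sketch of RoCEv2 framing: an InfiniBand Base Transport Header (BTH)
# carried in a UDP datagram on port 4791 over ordinary IP/Ethernet.
# Simplified for illustration; values are arbitrary examples.

import struct

ROCEV2_UDP_DPORT = 4791  # IANA-assigned UDP port for RoCEv2

def build_bth(opcode: int, dest_qp: int, psn: int, pkey: int = 0xFFFF) -> bytes:
    """Pack a 12-byte InfiniBand Base Transport Header."""
    se_m_pad_tver = 0   # solicited event / migration / pad count / version bits
    ackreq_resv = 0     # ack-request bit plus reserved bits
    return struct.pack(
        "!BBHB3sB3s",
        opcode,
        se_m_pad_tver,
        pkey,
        0,                          # reserved byte
        dest_qp.to_bytes(3, "big"), # 24-bit destination queue pair
        ackreq_resv,
        psn.to_bytes(3, "big"),     # 24-bit packet sequence number
    )

# Example: an RC RDMA WRITE Only opcode (0x0A) addressed to queue pair 0x17
# with packet sequence number 1.
bth = build_bth(opcode=0x0A, dest_qp=0x17, psn=1)
print(len(bth), "byte BTH would follow the UDP header on port", ROCEV2_UDP_DPORT)
```

Because the transport rides on plain UDP/IP, RoCEv2 traffic can traverse standard routed Ethernet fabrics, which is part of what makes it attractive for back-end AI interconnects.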

Commenting on the report’s findings, Aniket Khosla, vice-president of wireline product management at Spirent, said: “As the market focuses on the power and promise of AI, there is tremendous pressure to move faster, push the boundaries of speed and relentlessly pursue every competitive edge available in the market. AI is driving an inflection point in the market, and there is strong demand to understand and get ahead of the trends driving this.”


