On the 22nd (local time), NVIDIA announced NVIDIA Spectrum-XGS Ethernet, a scale-across networking technology that unites distributed data centers into gigascale “AI super factories.” As demand for AI skyrockets, individual data centers are hitting the power and capacity limits of a single facility, and further growth means expanding beyond it. Existing Ethernet network infrastructure, however, is constrained by high latency, jitter, and unpredictable performance over long distances. Spectrum-XGS Ethernet, a new addition to the NVIDIA Spectrum-X Ethernet platform, eliminates these constraints by introducing scale-across infrastructure. It serves as a third pillar of AI computing alongside scale-up and scale-out, extending the platform’s performance and scale to interconnect multiple distributed data centers into large-scale AI factories capable of gigascale intelligence. “The AI industrial revolution is upon us, and large-scale AI factories are essential infrastructure,” said Jensen Huang, founder and CEO of NVIDIA.
“NVIDIA Spectrum-XGS Ethernet adds scale-across capabilities to our scale-up and scale-out capabilities, enabling us to connect data centers across cities, countries, and continents into massive gigascale AI superfactories.” Spectrum-XGS Ethernet is fully integrated into the Spectrum-X platform and features algorithms that dynamically adapt the network to the distance between data center facilities. With advanced, distance-adjusted congestion control, precise latency management, and end-to-end telemetry, Spectrum-XGS Ethernet nearly doubles the performance of the NVIDIA Collective Communications Library (NCCL), accelerating multi-GPU and multi-node communication and delivering predictable performance across geographically distributed AI clusters. The result is a network fully optimized for long-distance connections, enabling multiple data centers to operate as a single AI factory.
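NVIDIA has not published the details of the Spectrum-XGS congestion-control algorithm, but the distance sensitivity it addresses can be illustrated from first principles: the amount of data a sender must keep in flight to saturate a link grows with round-trip time. The minimal Python sketch below is illustrative only; the 800 Gb/s line rate, the ~200 km/ms fibre propagation figure, the 0.1 ms processing delay, and the helper function names are assumptions for the example, not product specifications.

```python
# Illustrative sketch (not NVIDIA's implementation): why congestion control
# must account for the distance between facilities. The bandwidth-delay
# product — data that must be in flight to keep a link full — grows linearly
# with round-trip time, so a LAN-tuned window collapses throughput once
# sites are hundreds of kilometres apart.

SPEED_OF_LIGHT_FIBER_KM_PER_MS = 200.0  # ~2/3 of c in optical fibre (assumed)

def round_trip_time_ms(distance_km: float, processing_ms: float = 0.1) -> float:
    """Estimate RTT from one-way fibre distance plus a fixed processing delay."""
    return 2 * distance_km / SPEED_OF_LIGHT_FIBER_KM_PER_MS + processing_ms

def bandwidth_delay_product_mb(link_gbps: float, rtt_ms: float) -> float:
    """Data (MB) that must be in flight to keep the link saturated."""
    return link_gbps * 1e9 / 8 * (rtt_ms / 1e3) / 1e6

if __name__ == "__main__":
    link_gbps = 800  # assumed per-port line rate, for illustration only
    for distance_km in (0.1, 10, 100, 1000):
        rtt = round_trip_time_ms(distance_km)
        bdp = bandwidth_delay_product_mb(link_gbps, rtt)
        print(f"{distance_km:>7.1f} km  RTT {rtt:6.2f} ms  "
              f"in-flight needed {bdp:9.2f} MB")
```

At campus distances the link needs on the order of 10 MB in flight, while at 1,000 km it needs roughly a gigabyte, and any loss or jitter at that scale stalls far more data. That is the regime that distance-adjusted congestion control and precise latency management are intended to handle.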
CoreWeave, a pioneer in the hyperscale space, is among the companies adopting this new infrastructure and will be one of the first to connect its data centers with Spectrum-XGS Ethernet. The Spectrum-X Ethernet networking platform delivers 1.6x the bandwidth density of off-the-shelf Ethernet to multi-tenant hyperscale AI factories, including the world’s largest AI supercomputers. Composed of NVIDIA Spectrum-X switches and NVIDIA ConnectX-8 SuperNICs, it delivers seamless scalability, ultra-low latency, and breakthrough performance to companies building the future of AI. The announcement follows NVIDIA’s earlier networking innovations, including the NVIDIA Spectrum-X and NVIDIA Quantum-X silicon photonics network switches, which will enable AI factories to connect millions of GPUs across sites while reducing energy consumption and operational costs.
Source: Yahoo