NTT Docomo Business Inc. (formerly NTT Communications) announced a world first: high-speed data transfer between servers distributed across different data centers, achieved by combining long-distance 800G-ZR connections with a proprietary tool that uses RDMA technology (hereinafter, the RDMA transfer tool).
The demonstrated technology simplifies network configuration across distributed data centers, enabling high-speed data transfer while reducing power consumption and operational costs. It allows geographically distant data centers to be used as if they were a single facility, enabling flexible data center utilization.

800G-ZR is attracting attention as an efficient inter-data-center connection technology, not only for its high-capacity 800Gbps transmission but also for its ability to absorb the rapidly increasing traffic volume between data centers. As server interfaces evolve to 100G and 400G, 800G-ZR can carry this aggregated traffic over long distances with low latency. Furthermore, the technology is implemented in a compact module that can be inserted directly into routers and switches, simplifying network configuration and reducing power consumption and operational costs.

RDMA (Remote Direct Memory Access) is a mechanism that transfers data by directly accessing the memory of the destination server. Writing data directly from the NIC, without passing through the CPU, enables high-speed data transfer.
While RDMA suffers degraded transfer quality when used over long distances, the proprietary RDMA transfer tool achieves high-speed data transfers even over long distances while minimizing CPU resource consumption. In this demonstration, by combining 800G-ZR, one of the technological components of the IOWN APN, with the RDMA transfer tool, the company achieved a world first: 800Gbps-class high-bandwidth connections and high-speed data transfers between multiple servers in separate data centers. Compared with conventional technology, the time required for a 1600GB data transfer fell from approximately 389 seconds to approximately 68 seconds, to as little as one-sixth of the original. CPU utilization likewise fell from approximately 20% to approximately 5%, to as little as one-fifth. This represents a major step toward the high-speed, low-load data processing infrastructure required in the AI era. Furthermore, achieving 800Gbps inter-data-center connectivity further improves data center processing efficiency, contributing to flexible resource utilization and stronger inter-site collaboration.
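As a quick sanity check, the figures reported above imply the following speedup and effective throughput (a back-of-the-envelope calculation using only the numbers quoted in the article):

```python
# Back-of-the-envelope check of the figures reported above:
# a 1600 GB transfer drops from ~389 s to ~68 s, CPU use from ~20% to ~5%.
transfer_gbits = 1600 * 8            # 1600 GB expressed in gigabits

old_s, new_s = 389, 68
speedup = old_s / new_s              # ~5.7x, i.e. "as little as one-sixth"
old_gbps = transfer_gbits / old_s    # effective throughput before
new_gbps = transfer_gbits / new_s    # effective throughput after

print(f"speedup: {speedup:.1f}x")
print(f"effective throughput: {old_gbps:.0f} Gbps -> {new_gbps:.0f} Gbps")
```

Note that the effective application-level throughput this yields for the single measured transfer is distinct from the 800Gbps capacity of the underlying link; the article does not state how many parallel transfers shared the connection.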
Performance nearly equivalent to a single data center was confirmed in a simulated two-site environment 3,000 km apart.

Zhang Xiaojing, an NTT Docomo Business Evangelist responsible for evaluating and verifying IOWN's computing technology, highlighted the growing demand AI places on data centers: "Even with GPT-3 a few years ago, the standard was 512 NVIDIA H100 GPUs. Meanwhile, typical servers carry around eight GPUs each, so multiple GPU servers are increasingly deployed side by side." He cited high computational power and parallel processing capability, high power consumption and heat generation, high-speed interconnects and large storage capacity, flexible scalability, and operational management as essential characteristics of AI GPU infrastructure. Constraints such as power density per rack, cooling capacity, and floor load limits have made distributing data centers a "necessity," while business continuity and disaster recovery have also made it a "choice." The Ministry of Internal Affairs and Communications' view on AI infrastructure likewise cites data center decentralization as a way to achieve watt-bit collaboration.

NTT Docomo Business announced its "AI-Centric ICT Platform" in June 2025 and is working on decentralizing data centers using the IOWN APN. Eitetsu Noyama, section chief of the IOWN Promotion Office at the NTT Docomo Business Innovation Center, explained the update to "GPU over APN," which applies IOWN to distributed data centers. Toward that goal, the company conducted the world's first generative-AI training demonstration in a distributed data center in October 2024 and built a three-site distributed GPU data center in March 2025.
In this experiment, AI model training time was measured between two sites simulating an ultra-long distance of 3,000 km. Pre-training of an LLM (tsuzumi 7B) was performed using four NVIDIA H100 Tensor Core GPUs across two nodes. Training across the APN-connected distributed data centers took approximately 1.07 times as long as at a single data center, achieving nearly equivalent performance; training across Internet-connected distributed data centers took approximately 5.10 times as long, confirming the effectiveness of APN-based distribution.

Yasuhiro Kimura, manager of the IOWN Promotion Office at the NTT Docomo Business Innovation Center, explained the 800G-ZR and RDMA transfer tools used in the demonstration. While 400Gbps networks began to spread around 2021, he explained, the high-capacity, high-speed communications required by generative AI and other technologies have driven the evolution of network equipment, and 800Gbps is now drawing attention. 800G-ZR is a transmission standard enabling long-distance, high-capacity, high-speed optical communication at 800Gbps; compared with earlier equipment, it fits in a smaller module that can be inserted directly into routers and switches. RDMA, meanwhile, enables high-speed data transfer by directly accessing the destination server's memory; direct memory access without CPU intervention achieves high-speed communications.
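The training-time ratios reported for the 3,000 km experiment can be put in perspective with a short calculation (using only the ratios quoted above; absolute training times were not disclosed):

```python
# Training-time ratios reported for the 3,000 km two-site experiment
# (tsuzumi 7B pre-training on 4x NVIDIA H100 across two nodes),
# expressed relative to a single data center (= 1.00).
single_dc = 1.00
via_apn = 1.07        # distributed over the IOWN APN
via_internet = 5.10   # distributed over the Internet

apn_overhead_pct = (via_apn - single_dc) / single_dc * 100
apn_advantage = via_internet / via_apn

print(f"APN overhead vs. single site: {apn_overhead_pct:.0f}%")
print(f"APN path vs. Internet path: ~{apn_advantage:.1f}x faster")
```

In other words, the APN adds only about 7% overhead over 3,000 km, while the Internet path is nearly five times slower than the APN path for the same workload.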
NTT Docomo Business developed a tool incorporating two features: parallelization of connections and an increased data volume per transfer. Experiments using the RDMA transfer tool showed transfer time cut to as little as one-sixth, traffic volume reaching approximately eight times the bandwidth, and CPU utilization cut to as little as one-fifth. By leveraging the respective strengths of 800G-ZR and the RDMA transfer tool, the team confirmed their usefulness in distributed data centers, improving the efficiency of GPU cluster environments, simplifying network operations, and enabling flexible resource utilization. Based on the results of this demonstration, NTT Docomo Business plans to further expand the possibilities of GPU clusters in data centers connected via the IOWN APN, and to begin offering a GPU over APN verification environment to customers in fiscal 2026.
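As an illustration only (the actual RDMA transfer tool is proprietary and its internals undisclosed), the two ideas described above, parallel connections and larger per-transfer chunks, can be sketched with ordinary TCP sockets standing in for RDMA queue pairs:

```python
# Conceptual sketch, NOT the NTT tool: split a payload into large chunks and
# round-robin them across several parallel connections, reassembling by offset
# on the receiving side (as an RDMA write would place data directly in memory).
import socket
import threading

CHUNK = 1 << 16   # 64 KiB per transfer: larger chunks mean fewer round trips
N_CONN = 4        # number of parallel connections (stand-ins for queue pairs)

def recv_exact(conn, n):
    """Read exactly n bytes, or fewer if the peer closed the connection."""
    data = b""
    while len(data) < n:
        part = conn.recv(n - len(data))
        if not part:
            break
        data += part
    return data

def serve(listener, buf):
    """Accept one connection; write each (offset, length, data) frame into buf."""
    conn, _ = listener.accept()
    with conn:
        while True:
            hdr = recv_exact(conn, 8)
            if len(hdr) < 8:
                break                    # peer closed: this stream is done
            off = int.from_bytes(hdr[:4], "big")
            ln = int.from_bytes(hdr[4:], "big")
            buf[off:off + ln] = recv_exact(conn, ln)  # disjoint offsets per frame

def parallel_send(payload):
    """Send payload over N_CONN parallel connections; return the reassembled copy."""
    buf = bytearray(len(payload))
    listeners, threads = [], []
    for _ in range(N_CONN):
        s = socket.socket()
        s.bind(("127.0.0.1", 0))
        s.listen(1)
        listeners.append(s)
        t = threading.Thread(target=serve, args=(s, buf))
        t.start()
        threads.append(t)
    conns = [socket.create_connection(s.getsockname()) for s in listeners]
    # Round-robin the chunks across the parallel connections; the offset header
    # lets the receiver place out-of-order arrivals correctly.
    for i, off in enumerate(range(0, len(payload), CHUNK)):
        chunk = payload[off:off + CHUNK]
        frame = off.to_bytes(4, "big") + len(chunk).to_bytes(4, "big") + chunk
        conns[i % N_CONN].sendall(frame)
    for c in conns:
        c.close()
    for t in threads:
        t.join()
    for s in listeners:
        s.close()
    return bytes(buf)

payload = bytes(range(256)) * 4096   # 1 MiB test payload
assert parallel_send(payload) == payload
```

Real RDMA bypasses the receiving CPU entirely via the NIC; the sketch only mirrors the transfer-scheduling side of the design, where multiple streams keep a long, high-bandwidth path full despite per-connection latency limits.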
Source: Yahoo