Japanese technology giant NEC Corporation has launched a new Composable Disaggregated Infrastructure Solution designed to support distributed AI computing, offering data centers and enterprises greater flexibility and efficiency in how computing resources like CPUs and GPUs are allocated. The launch marks an important step in advancing Japan's infrastructure for high‑performance AI workloads.
What NEC’s Disaggregated Platform Does
“NEC’s disaggregated platform uses the company’s proprietary ‘ExpEther’ virtualization technology, which abstracts computing resources such as CPUs and GPUs from their host racks and dynamically distributes them across a data center network,” a spokesperson explained. “This allows operators to allocate computing power based on real-time requirements.”
Key features include:
Flexible resource allocation: CPUs and GPUs can be pooled and assigned to workloads on demand rather than being tied to specific servers, as sketched below.
Reduced capital and operating expenditures: Minimizing over-provisioning and idle hardware cuts both capital and operating costs across the data center.
High‑speed connectivity: The solution uses NEC’s ExpEther boards with 100 Gb/s Ethernet optical fiber links to maintain low‑latency transmission between distributed components.
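To make the idea concrete, here is a minimal, purely illustrative Python sketch of a composable resource pool. The `Device` and `ResourcePool` names are hypothetical and do not reflect NEC's actual software; they simply show how CPUs and GPUs drawn from a shared pool can be composed into a temporary "server" for a job and then returned to the pool.

```python
from dataclasses import dataclass, field

@dataclass
class Device:
    kind: str          # "cpu" or "gpu"
    device_id: str
    in_use: bool = False

@dataclass
class ResourcePool:
    """Toy model of a composable pool: devices belong to the pool, not to fixed servers."""
    devices: list[Device] = field(default_factory=list)

    def allocate(self, kind: str, count: int) -> list[Device]:
        """Hand out idle devices of the requested kind, regardless of which rack they sit in."""
        free = [d for d in self.devices if d.kind == kind and not d.in_use]
        if len(free) < count:
            raise RuntimeError(f"only {len(free)} idle {kind}(s) available")
        granted = free[:count]
        for d in granted:
            d.in_use = True
        return granted

    def release(self, granted: list[Device]) -> None:
        """Return devices to the shared pool once the workload finishes."""
        for d in granted:
            d.in_use = False

# Example: compose a temporary "server" with 2 GPUs for an inference job.
pool = ResourcePool([Device("gpu", f"gpu-{i}") for i in range(4)] +
                    [Device("cpu", f"cpu-{i}") for i in range(8)])
job_gpus = pool.allocate("gpu", 2)
print([d.device_id for d in job_gpus])   # ['gpu-0', 'gpu-1']
pool.release(job_gpus)
```

In a real disaggregated system the "pool" spans physical racks connected over a high-speed fabric such as the ExpEther links described above, but the allocation logic follows the same pattern: grant idle devices on demand, reclaim them when the job ends.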
NEC has already tested the technology with partners, including Osaka University, and at its Inzai City data center, with verification results to be published in stages.
Why This Matters for AI and Data Centers
As AI workloads, especially large generative models and analytics systems, continue to grow in scale and complexity, traditional monolithic server architectures struggle to manage resources efficiently. Operators frequently have to provision servers for peak demand, leaving hardware underutilized during normal operation. Disaggregation addresses this by decoupling compute resources from fixed servers so they can be shared and allocated flexibly.
This reflects broader trends in distributed computing: platforms such as NVIDIA Dynamo apply similar disaggregation concepts to scale AI inference across hundreds or thousands of GPUs, routing work to wherever it can be done most efficiently.
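As a rough illustration of that routing idea (assuming a simple greedy scheduler, not the actual algorithm used by NVIDIA Dynamo or NEC), the following Python sketch sends each incoming request to whichever GPU worker currently has the least outstanding work:

```python
import heapq

def route_requests(requests, workers):
    """Assign each request to the worker with the least outstanding work.

    `workers` maps worker name -> current queue depth. This is a toy greedy
    scheduler for illustration only.
    """
    heap = [(load, name) for name, load in workers.items()]
    heapq.heapify(heap)
    assignments = []
    for req in requests:
        load, name = heapq.heappop(heap)       # least-loaded worker
        assignments.append((req, name))
        heapq.heappush(heap, (load + 1, name)) # one more request queued there
    return assignments

print(route_requests(["r1", "r2", "r3"], {"gpu-a": 0, "gpu-b": 2, "gpu-c": 1}))
# [('r1', 'gpu-a'), ('r2', 'gpu-a'), ('r3', 'gpu-c')]
```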
For NEC, the new platform positions the company as a key provider of next‑generation data center infrastructure, a critical capability for organizations rolling out AI services at scale, whether cloud providers, research institutions, or enterprises running compute clusters.
Impact on Japan’s Tech and AI Ecosystem
- Strengthening Domestic AI Infrastructure
Japan’s AI research and industrial sectors, from robotics and autonomous systems to biotech and manufacturing, require robust, scalable computing infrastructure. NEC’s platform offers a domestic solution that reduces dependency on foreign hardware stacks and gives local operators more flexible resource management.
- Supporting Distributed AI Adoption
Distributed AI, where models and workloads span multiple servers or locations, benefits from infrastructure that can dynamically allocate resources. NEC's platform supports needs in areas such as edge AI, where compute is spread across many nodes, and cloud AI services, which must scale elastically.
- Boosting Japan’s Data Center Competitiveness
Data centers play an integral role in Japan's digital economy, enabling cloud services, fintech, and the digitalization of industries. Flexible, modular solutions deliver cost savings and better performance, strengthening Japan's competitiveness in the global market.
Business and Industry Implications
Cloud providers can leverage disaggregated compute to provide more granular service tiers and potentially lower costs for customers.
Enterprises running private clouds or HPC clusters can make better use of existing hardware with lower operational overhead.
AI start-ups and research institutions gain much-needed infrastructure that can support heavy compute loads flexibly, without requiring expensive dedicated hardware for every project.
The strategy also fits a wider industry shift toward software‑defined infrastructure, in which compute, storage, and networking layers are decoupled to allow more fluid management and optimization.
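As a loose illustration of that decoupling, a software-defined request might describe each layer independently and leave it to an orchestrator to bind them together. The structure below is hypothetical and intentionally simplified, not a real NEC or industry API:

```python
# Hypothetical, illustrative only: each layer is specified on its own,
# rather than being implied by the purchase of a fixed server.
composition_request = {
    "compute": {"cpus": 16, "gpus": 4},
    "storage": {"capacity_gb": 2048, "tier": "nvme"},
    "network": {"bandwidth_gbps": 100, "latency_class": "low"},
}

def describe(req: dict) -> str:
    """Render the request as a one-line summary an operator might log."""
    c, s, n = req["compute"], req["storage"], req["network"]
    return (f"{c['cpus']} CPUs + {c['gpus']} GPUs, "
            f"{s['capacity_gb']} GB {s['tier']} storage, "
            f"{n['bandwidth_gbps']} Gb/s {n['latency_class']}-latency fabric")

print(describe(composition_request))
```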
Looking Ahead
NEC will begin deploying its disaggregated solution in Japan, with plans to extend it to other industries over time. As organizations increasingly adopt AI, technologies like NEC's that balance efficiency, scalability, and affordability are set to play a pivotal role in how AI systems are built and operated.
By enabling more efficient use of computing resources, NEC's composable architecture could accelerate distributed AI projects not only in Japan but internationally, helping drive innovation in cloud computing, edge services, intelligent automation, and scientific research.


