AVEVA and NVIDIA are partnering on something specific: a digital twin setup for large-scale AI factories that works across the full lifecycle, not just in theory.
The plan is to bring AVEVA’s engineering and operations software into the NVIDIA Omniverse DSX Blueprint. From there, they are building both physical and digital modules that can be deployed in large data center environments. Think AI factories running at gigawatt scale. The approach follows EPC-style (engineering, procurement, and construction) execution, so design, build, and operate all stay connected instead of being handled in silos.
What AVEVA is bringing in is its industrial stack, especially the CONNECT platform and its digital twin capabilities. The idea is to simulate, design, and optimize everything before it goes live, which cuts down the time it takes to get these AI facilities up and running. At the same time, they are trying to squeeze more efficiency out of GPUs, which is where most of the cost and performance pressure sits.
This is not a two-company play. Schneider Electric and ETAP are also part of the setup. Together, they are covering design, simulation, construction, and operations in one connected flow.
On the product side, AVEVA is plugging multiple tools into the system. Engineering now supports OpenUSD-based assets, so teams can reuse and redesign faster. Asset Information Management acts as the single source of truth across the lifecycle. Process Simulation focuses on liquid cooling networks, which are critical for high-density AI setups. The PI System pulls IT and OT data together and is expected to connect with NVIDIA’s NV-Tesseract model for real-time analysis at scale.
Then there is the operations side. AVEVA’s control and operations platforms bring electrical, mechanical, and safety systems into one place, making it easier to monitor performance, catch issues early, and keep these data-heavy environments stable.
Step back and this is where things are heading. AI infrastructure is getting too complex to manage in pieces. This kind of integration is less about adding new tools and more about making everything work together without slowing down deployment.