NVIDIA announced the expansion of its AI platform, NVIDIA DGX Cloud Lepton, which features a global computing marketplace connecting developers building agentic and physical AI applications with GPUs available from a growing network of cloud providers.
Mistral AI, Nebius, Nscale, Fluidstack, Hydra Host, Scaleway and Together AI are now offering NVIDIA Blackwell and other NVIDIA architecture GPUs in the marketplace, expanding regional access to high-performance computing. AWS and Microsoft Azure are the first major cloud providers to join DGX Cloud Lepton. They join CoreWeave, Crusoe, Firmus, Foxconn, GMI Cloud, Lambda and Yotta Data Services in the marketplace.
To make accelerated computing more accessible to the global AI community, Hugging Face is introducing Training Cluster as a Service. The new service is integrated with DGX Cloud Lepton, seamlessly connecting AI researchers and developers building foundation models to the NVIDIA computing ecosystem.
NVIDIA is working with leading European venture capital firms Accel, Elaia, Partech and Sofinnova Partners to provide DGX Cloud Lepton Marketplace credits to their portfolio companies to help startups access accelerated computing resources and scale development in the region.
“DGX Cloud Lepton connects European developers to a global AI infrastructure,” said Jensen Huang, founder and CEO of NVIDIA. “By working with regional partners, we are building a network of AI factories where developers, researchers and companies can scale local innovation into global growth.”
DGX Cloud Lepton unifies cloud AI services and GPU resources from across NVIDIA’s computing ecosystem onto a single platform, simplifying access to reliable, high-performance GPU resources within a given region. This allows developers to keep their data local, supporting data governance and sovereign AI requirements.
Additionally, by integrating with the NVIDIA software suite, including NVIDIA NIM™ and NeMo™ microservices and NVIDIA Cloud Functions, DGX Cloud Lepton streamlines and accelerates every stage of AI application development and deployment at scale. The marketplace works with the universal NIM microservice to support a wide range of large language models, including the most popular open LLM architectures and more than 1 million models hosted publicly and privately on Hugging Face.
For cloud providers, DGX Cloud Lepton includes management software that continuously monitors GPU health in real time and automates root-cause analysis. This minimizes manual intervention and reduces downtime, streamlining providers’ operations and ensuring customers have access to more reliable high-performance computing.
SOURCE: PRTimes