NVIDIA announced new open models, frameworks and AI infrastructure for physical AI, and showcased general-purpose robots from its global partners across industries.
The new NVIDIA technologies speed up workflows across the robot development lifecycle, advancing next-generation robotics, including “specialist-generalist” robots that can quickly learn a wide range of tasks.
Global leaders such as Boston Dynamics, Caterpillar, Franka Robotics, Humanoid, LG Electronics and NEURA Robotics are unveiling new AI-powered robots using the NVIDIA robotics stack.
“Robotics’ ChatGPT moment has arrived. Breakthroughs in Physical AI – models that understand, reason and plan actions based on the real world – are opening up entirely new application areas,” said Jensen Huang, founder and CEO of NVIDIA. “NVIDIA’s full stack of Jetson robotics processors, CUDA, Omniverse and open Physical AI models will enable our global partner ecosystem to revolutionize many industries with AI-driven robotics.”
New open models advance robot learning and reasoning
Evolving today's robots, which are expensive, single-task and difficult to program, into “specialist-generalist” robots with reasoning capabilities requires enormous capital and expertise to build the underlying foundation models.
NVIDIA is building a family of open models that enable developers to avoid resource-intensive pre-training and instead focus on building the next generation of AI robots and autonomous machines. These new models, available on Hugging Face, include (a short download sketch follows the list):
- NVIDIA Cosmos™ Transfer 2.5 and NVIDIA Cosmos Predict 2.5 – Open, fully customizable world models that enable physics-based synthetic data generation and robot policy evaluation in simulation for physical AI.
- NVIDIA Cosmos Reason 2 – An open reasoning vision language model (VLM) that enables intelligent machines to see, understand and act in the physical world the way humans do.
- NVIDIA Isaac™ GR00T N1.6 – An open reasoning vision-language-action (VLA) model built for humanoid robots. It enables full-body control and leverages NVIDIA Cosmos Reason for improved reasoning and contextual understanding.
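For developers, pulling these open checkpoints is a standard Hugging Face download. The sketch below uses the huggingface_hub library; the repository IDs are placeholders, not confirmed names, so check NVIDIA's Hugging Face organization page for the actual repos.

```python
# Minimal sketch: downloading open model weights from Hugging Face with
# huggingface_hub. The repo IDs below are placeholders -- consult
# huggingface.co/nvidia for the real repository names.
from huggingface_hub import snapshot_download

MODELS = [
    "nvidia/Cosmos-Transfer-2.5",   # placeholder repo ID
    "nvidia/Cosmos-Predict-2.5",    # placeholder repo ID
    "nvidia/Cosmos-Reason-2",       # placeholder repo ID
    "nvidia/GR00T-N1.6",            # placeholder repo ID
]

for repo_id in MODELS:
    # Downloads (or reuses a cached copy of) every file in the model repo.
    local_path = snapshot_download(repo_id=repo_id)
    print(f"{repo_id} -> {local_path}")
```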
Franka Robotics, NEURA Robotics, and Humanoid use GR00T-enabled workflows to simulate, train, and validate new robot behaviors. Salesforce uses a combination of Agentforce, Cosmos Reason, and NVIDIA’s Blueprint for video search and summarization to analyze video data captured by its robots, cutting incident resolution times in half.
LEM Surgical is using NVIDIA Isaac for Healthcare and Cosmos Transfer to train the autonomous arm of its Dynamis surgical robot, powered by NVIDIA Jetson AGX Thor™ and Holoscan. XRLabs is using Thor and Isaac for Healthcare for its exoscope and other surgical scopes, bringing real-time AI analytics and guidance to surgeons.
A new open-source simulation and computing framework for robotics development
Scalable simulation is essential for training and evaluating robots, but current workflows remain fragmented and difficult to manage. Benchmarking is often manual and difficult to scale, and end-to-end pipelines require complex orchestration across disparate computing resources.
NVIDIA today released a new open-source framework on GitHub that simplifies complex pipelines and accelerates the transition from research to real-world deployment.
NVIDIA Isaac Lab-Arena is an open-source framework, available on GitHub, that provides a collaborative system for large-scale robot policy evaluation and benchmarking in simulated environments. The evaluation and task layers were designed in close collaboration with Lightwheel. Isaac Lab-Arena connects with industry-leading benchmarks such as LIBERO and RoboCasa to standardize testing and ensure robot skills are robust and reliable before deployment on physical hardware.
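To make the idea of large-scale policy evaluation concrete, the sketch below shows the kind of rollout loop such a framework automates across many simulated environments and seeds. It is illustrative only: it uses a Gymnasium-style interface with a toy environment and a random policy as stand-ins, not Isaac Lab-Arena's actual API.

```python
# Illustrative only: a Gymnasium-style rollout loop of the kind a policy
# evaluation framework automates at scale. The environment ID, random policy
# and success criterion are stand-ins, not Isaac Lab-Arena's actual API.
import gymnasium as gym

NUM_EPISODES = 10

env = gym.make("CartPole-v1")  # stand-in for a simulated manipulation task
successes = 0

for _ in range(NUM_EPISODES):
    obs, info = env.reset()
    terminated = truncated = False
    episode_return = 0.0
    while not (terminated or truncated):
        action = env.action_space.sample()  # stand-in for a trained robot policy
        obs, reward, terminated, truncated, info = env.step(action)
        episode_return += reward
    successes += episode_return >= 195  # toy success threshold for this stand-in task

print(f"success rate: {successes / NUM_EPISODES:.0%}")
env.close()
```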
NVIDIA OSMO is a cloud-native orchestration framework that unifies robotics development into a single, easy-to-use command center. OSMO accelerates development cycles by enabling developers to define and execute workflows such as synthetic data generation, model training, and software-in-the-loop testing across heterogeneous computing environments, from workstations to multiple cloud instances.
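As a rough illustration of what such a multi-stage workflow looks like, the sketch below expresses a data-generation, training and software-in-the-loop testing pipeline as plain Python data structures. The stage names, commands and resource labels are hypothetical and do not reflect OSMO's actual workflow specification.

```python
# Hypothetical sketch of a multi-stage robotics workflow of the kind an
# orchestrator such as OSMO manages. The Stage dataclass, stage names and
# resource labels are illustrative, not OSMO's real interface.
from dataclasses import dataclass, field

@dataclass
class Stage:
    name: str
    command: str                   # container entrypoint for this stage
    resources: str                 # e.g. "1x workstation GPU" or "8x cloud GPUs"
    depends_on: list[str] = field(default_factory=list)

workflow = [
    Stage("generate_data", "python gen_synthetic.py", "8x cloud GPUs"),
    Stage("train_policy", "python train.py", "8x cloud GPUs",
          depends_on=["generate_data"]),
    Stage("sil_test", "python run_sil_tests.py", "1x workstation GPU",
          depends_on=["train_policy"]),
]

for stage in workflow:
    print(f"{stage.name}: needs {stage.resources}, runs after {stage.depends_on}")
```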
SOURCE: PRTimes

