TIER IV, the pioneering force behind the world’s first open-source software for autonomous driving, is proud to announce a strategic collaboration with Carnegie Mellon University (CMU), whose work dating back to 1984 makes it widely regarded as a birthplace of autonomous vehicles, to realize the new Level 4+ autonomy concept. Together, they aim to advance scalability, explainability, and safety through a hybrid architecture that combines data-centric AI approaches with best practices in robotics, while also unlocking the potential of embodied AI to improve transparency and traceability in decision-making.
This collaboration is further strengthened through Safety21, the US Department of Transportation’s National University Transportation Center for Safety, led by CMU Professor Raj Rajkumar. TIER IV has joined Safety21’s Advisory Council, promoting the value of open-source software through Autoware*, which serves as the foundation for state-of-the-art research and development that addresses the trade-offs between safety and user experience in autonomous driving systems.
Traditional Level 4 autonomy has been built on robotics methods such as probabilistic estimation and machine learning, relying on hand-crafted behavioral rules, predefined high-definition maps, and localized data sets to coordinate core functions such as sensing, localization, perception, planning, and control. Autoware originated from this architecture and has been successfully deployed in autonomous driving systems around the world.
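The modular architecture described above can be sketched as a simple pipeline. This is an illustrative toy, not Autoware code: every class, function, and value below is a hypothetical stand-in for the sensing, localization, perception, planning, and control stages named in the article.

```python
from dataclasses import dataclass

# Hypothetical, simplified sketch of the classic modular Level 4 pipeline:
# sensing -> localization -> perception -> planning -> control.
# All names are illustrative assumptions, not Autoware APIs.

@dataclass
class VehicleState:
    x: float = 0.0
    y: float = 0.0
    speed: float = 0.0

def sense() -> dict:
    # Stand-in for LiDAR/camera/radar drivers.
    return {"lidar": [], "camera": None}

def localize(sensor_data: dict, hd_map: dict) -> VehicleState:
    # Probabilistic estimation against a predefined high-definition map;
    # here just a placeholder pose.
    return VehicleState(x=10.0, y=2.0, speed=5.0)

def perceive(sensor_data: dict) -> list:
    # Machine-learned detection of surrounding objects (mocked here).
    return [{"type": "pedestrian", "distance_m": 12.0}]

def plan(state: VehicleState, objects: list, rules: dict) -> list:
    # Hand-crafted behavioral rules decide the maneuver; output is waypoints.
    if any(o["distance_m"] < rules["stop_distance_m"] for o in objects):
        return [(state.x, state.y)]  # stop in place
    return [(state.x + 5.0, state.y), (state.x + 10.0, state.y)]

def control(state: VehicleState, waypoints: list) -> dict:
    # Track the first waypoint with a trivial command.
    target = waypoints[0]
    return {"steer": 0.0, "accelerate": target[0] > state.x}

# One tick of the pipeline:
data = sense()
state = localize(data, hd_map={})
objects = perceive(data)
waypoints = plan(state, objects, rules={"stop_distance_m": 15.0})
command = control(state, waypoints)
```

The point of the modular design is that each stage has an inspectable interface, which is part of what the hybrid architecture aims to preserve when combining it with data-centric AI.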
The new Level 4+ autonomy concept, advocated through this collaboration, represents an intermediate step between SAE J3016 Level 4 and Level 5. It remains within the Level 4 classification in terms of human roles, but incorporates key aspects of Level 5 system features. As a result, the vehicle can operate under virtually all conditions by flexibly expanding its operational design domains (ODDs) to cover previously unencountered scenarios.
The Level 4+ system features do not require the human to take over dynamic driving tasks (DDT). However, they may leverage additional information provided from outside the system, as part of strategic functions, to dynamically respond to environmental changes within the target operational domain (TOD). Meanwhile, the system continues to control tactical and operational functions. In this framework, the system retains full responsibility for safety assurance, even when external strategic input influences its behavior. For example, a human may provide guidance that adjusts waypoint planning at runtime to help the system align its behavior with both the defined ODD and the TOD.
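The division of labor described above can be sketched in a few lines: an external strategic hint may adjust waypoint planning at runtime, but the system validates it against the ODD and retains responsibility for safety. All names and values here (`odd_contains`, `plan_waypoints`, the road/speed fields) are illustrative assumptions, not part of any published interface.

```python
# Hypothetical sketch of Level 4+ human-in-the-loop strategic guidance.
# The system, not the human, decides whether the guidance is safe to accept.

ODD = {"max_speed_mps": 15.0, "allowed_roads": {"A", "B"}}

def odd_contains(waypoint: dict) -> bool:
    # Tactical-level check: guidance is only usable if it keeps the
    # vehicle inside its operational design domain.
    return (waypoint["road"] in ODD["allowed_roads"]
            and waypoint["speed_mps"] <= ODD["max_speed_mps"])

def plan_waypoints(strategic_hint):
    default_route = [{"road": "A", "speed_mps": 10.0}]
    if strategic_hint is None:
        return default_route
    # Accept the external guidance only if every waypoint stays in the ODD;
    # otherwise fall back, since safety assurance remains with the system.
    if all(odd_contains(wp) for wp in strategic_hint):
        return strategic_hint
    return default_route

# A human suggests rerouting onto road "B" within the ODD speed limit...
accepted = plan_waypoints([{"road": "B", "speed_mps": 12.0}])
# ...while a hint that exceeds the ODD speed limit is rejected by the system.
rejected = plan_waypoints([{"road": "B", "speed_mps": 25.0}])
```

The design choice this illustrates is that strategic input widens what the vehicle can handle without ever transferring the dynamic driving task, or safety responsibility, back to the human.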
Emerging end-to-end AI models, a key variant of data-centric AI approaches, are promising for realizing Level 4+ autonomy, particularly when integrated with rule-based systems and human-in-the-loop strategies. However, they also present critical challenges, including high data requirements, limited explainability in decision-making, and difficulties in establishing robust safety assurance. Because it is often unclear how such models generalize learned behaviors or what influences their outputs, ensuring trustworthy real-world deployment remains a key hurdle.
SOURCE: PRNewsWire