Advantech Co., Ltd. announced the launch of the MIC-742-AT Robotics Development Kit, featuring the NVIDIA Jetson Thor module and supporting the NVIDIA Holoscan platform. Designed for next-generation robotics and physical AI applications, the kit combines Jetson Thor’s AI processing power with integrated support for the NVIDIA Holoscan platform. This lets robots perform real-time inference and ultra-low-latency sensor processing at the edge, enabling them to “see, understand, and act.”
Next-generation AI computing for robotics
Powered by an NVIDIA Jetson Thor module, the MIC-742-AT delivers up to 2,070 FP4 TFLOPS of AI performance and 128GB of high-bandwidth LPDDR5X memory. Built for the high reliability that industrial applications demand, the system consumes only 150W of power, and its wide operating temperature range of -10°C to 60°C and compact design make it suitable for space-constrained environments. It also supports 8 channels of GMSL 2.0 for high-speed, low-latency sensor connectivity, supporting the development of advanced recognition systems. These features make the platform well suited to high-performance edge AI applications such as humanoid robots, autonomous mobile robots (AMRs), and surgical robots.
Next-generation sensor fusion and integration solutions
Responding to industry trends such as sensor standardization and Ethernet-based packet transmission, Advantech has adopted the NVIDIA Holoscan platform and NVIDIA GPUDirect RDMA, which allows sensor data to be streamed directly to GPU memory without passing through the CPU, achieving low-latency data processing while avoiding bottlenecks. This results in faster AI inference and real-time decision-making.
Read also: Takenaka, NTT Docomo, Asratec develop Spatial ID Robot System
In addition, Advantech offers the MIC-FG-HSBA1, which incorporates a Holoscan Sensor Bridge, and the EKI-2712X-SPE switch accessory for developers who require real-time synchronization of multiple sensors. With these, data from stereo cameras, LiDAR, radar, ultrasonic sensors, IMUs, and other sensors can be synchronized with sub-millisecond precision at low latency and high bandwidth. This enables multimodal data processing with precise timing, significantly improving the accuracy of AI recognition and the reliability of robotic decision-making.
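The idea of fusing multiple sensor streams on a common timebase can be illustrated with a minimal, self-contained Python sketch. This is a hypothetical illustration only, not the Holoscan Sensor Bridge API: it groups timestamped samples from several streams into fused frames whenever every stream has a sample within a 0.5 ms tolerance of a reference stream.

```python
from dataclasses import dataclass

@dataclass
class Sample:
    sensor: str   # e.g. "camera", "imu" (illustrative names)
    t: float      # timestamp in seconds
    data: object  # payload, omitted here

def fuse(streams: dict, tol: float = 0.0005) -> list:
    """Build fused frames: for each sample of the first (reference) stream,
    find the nearest-in-time sample of every other stream and keep the
    frame only if all matches fall within `tol` seconds (0.5 ms here).
    Hypothetical sketch, not Advantech's or NVIDIA's implementation."""
    ref_name = next(iter(streams))  # first stream acts as the reference clock
    frames = []
    for ref in streams[ref_name]:
        frame = {ref_name: ref}
        for name, samples in streams.items():
            if name == ref_name:
                continue
            # nearest-neighbour match by timestamp
            best = min(samples, key=lambda s: abs(s.t - ref.t))
            if abs(best.t - ref.t) <= tol:
                frame[name] = best
        if len(frame) == len(streams):  # keep only fully matched frames
            frames.append(frame)
    return frames

# Usage: two camera frames, three IMU samples; both camera timestamps
# have an IMU sample within 0.5 ms, so two fused frames result.
cam = [Sample("camera", 0.0000, None), Sample("camera", 0.0333, None)]
imu = [Sample("imu", 0.0002, None), Sample("imu", 0.0100, None),
       Sample("imu", 0.0334, None)]
frames = fuse({"camera": cam, "imu": imu})
```

In a real deployment this matching happens in hardware or in the streaming layer; the sketch only shows why a shared, sub-millisecond timebase is the prerequisite for reliable multimodal fusion.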
Source: PR Times