Konnect-linK Co., Ltd. has officially begun research and development of AI that operates directly within IoT devices and industrial machinery, also known as “physical AI,” at its subsidiary, “Konnect Frontier Lab.”
Konnect-linK has traditionally promoted the implementation of AI in on-premise environments and edge devices, working in the so-called edge AI domain where perception and decision-making are performed on the device side without relying on the cloud. Now, we have officially begun research and development to expand this to physical AI.
This press release announces that our group has completed the verification and knowledge-integration phases of an in-house research and development project, underway for over a year, that examined how AI utilization, foundational technologies, and IoT sensor devices fit the physical domain, and has now moved into the social implementation phase.
At our subsidiary Konnect Frontier Lab, we research and develop all processes for receiving, interpreting, judging, and outputting perceptual and non-perceptual information, construct appropriate original architectures, and integrate them into physical media (robots, industrial machinery, mobility devices, drones, sensor devices, infrastructure equipment, etc.). Our goal is to evolve from “AI at a distance” to “AI residing in objects” and then to “AI that acts in real space.”
This will accelerate the social implementation of “AI that truly works on-site,” where AI can perceive the situation on-site, make instantaneous decisions, and take direct action in the real world through robots and machines.
Furthermore, we plan to conduct joint demonstration experiments with multiple companies by the end of the next fiscal year (end of April 2027) to create concrete use cases and verify their feasibility for actual implementation.
Background: Why “Physical AI” now?
Up until now, AI has primarily operated in data centers and has been a “thinking and answering” entity. It reads text, analyzes images, and generates optimal responses—but its scope of activity has often been confined to screens.
The change that is happening now is a transformation in which AI will perceive the real world and directly interact with it. AI will understand the situation in front of it through cameras, read the environment from sensors, and reflect the results of that judgment in the physical space as the movements of robots and machines: this is physical AI.
This change has immense significance in areas where “what is happening on the ground” holds essential value: Japan’s core industries such as manufacturing, logistics, construction, healthcare, and nursing care. Japan is projected to face a labor shortage of approximately 7 million people by 2030*1, and it is no longer possible to maintain operations solely through manpower. The introduction of physical AI, where AI understands the situation on the ground and takes on some of the work through machines and robots, is becoming a realistic solution to these structural challenges.
Furthermore, for AI to function in the real world, it needs to operate close to the site itself. Cloud-based processing makes it difficult to react immediately to events occurring right in front of the machine, and it limits real-time control. Physical AI, by its very nature, is premised on perception, judgment, and control being completed on the device at the actual location.
SOURCE: PRTimes


