NRI Secure Technologies has launched “AI Yellow Team,” a service that analyzes potential threats to AI agent systems and implements security measures. The service helps ensure security from the system design stage onward through risk analysis based on international guidelines, the latest threat trends, and proprietary knowledge the company has cultivated through its security assessments.
The company already offers “AI Red Team,” a service that performs pre-release security assessments of AI-based systems, and “AI Blue Team,” a service that monitors security after release. With the launch of AI Yellow Team, it now supports the construction and operation of highly secure systems at every stage. In recent years, there has been a growing trend toward linking AI models with external data sources and tools, and toward having multiple AI agents work together.
Meanwhile, new threats specific to AI agents have emerged, including attacks that exploit an agent’s autonomous execution capabilities to gradually override the objectives it is intended to fulfill, and attacks that abuse combinations of legitimate authority and functionality.

During the design stage, the AI Yellow Team’s experts follow four steps: interviews, visualization, threat analysis, and security countermeasure proposals. They identify and analyze threats that may arise during system development and operation, evaluate the validity of existing security measures, and propose appropriate solutions. Drawing on international guidelines, the latest trends, and the company’s AI security knowledge, the team can address the latest attack scenarios while helping developers understand potential threats and prioritize security measures.
Source: Yahoo