GMO Cybersecurity by Ierae has launched an ‘AI Agent Penetration Testing’ service that basically treats your AI like a live target. Not theory. Not checklist audits. Actual white-hat hackers simulating real attacks on AI agents, chatbots, and RAG systems inside enterprise environments.
The current issue they deal with now becomes more critical than before. Companies are rushing to plug AI into workflows, data systems, and external tools. The same accessibility that provides AI users with beneficial features creates security threats for them. Think data leaks through prompt injection, over-permissioned agents doing things they shouldn’t, or AI becoming an entry point for wider system compromise.
Their approach is simple in idea, harder in execution. Understand how the business uses AI, recreate attack scenarios, and test it in a real environment using company systems. The focus is not just on the model, but on permissions, integrations, and data access. That’s where most of the risk actually sits.
Also Read: Check Point and Google Cloud strengthen AI agent security
There is also a broader shift playing out here. Traditional security testing was built for apps and networks. AI breaks that model. Now security needs to think like attackers who target behavior, context, and decision-making.
Bottom line, AI adoption is moving faster than AI security. Services like this are trying to close that gap before it turns into a bigger problem.


