Check Point Software Technologies is teaming up with Google Cloud to plug a gap that’s starting to show up as AI agents move into real work.
They’re integrating Check Point’s AI Defense Plane with Gemini Enterprise. The goal is not just better security, but the kind that actually understands how AI agents behave in production.
Here’s the shift. AI is no longer just answering questions. It is executing tasks, calling tools, and interacting with systems. Traditional security models that focus on user access are not enough anymore. The real risk now sits in what the agent is allowed to do and how it behaves while doing it.
This integration builds security across three layers. First, full visibility into all deployed agents, including their tools and connections. Second, governance before deployment, where teams define what agents are allowed or blocked from doing. Third, runtime protection: active monitoring of agent behavior, with automatic blocking of threats such as prompt injection attacks, unsafe tool usage, and data breaches during agent operations.
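The govern-then-monitor pattern described above can be sketched in a few lines. This is a minimal, hypothetical illustration, not Check Point or Google Cloud code: the names (`AgentPolicy`, `guard_tool_call`) and the naive pattern-matching injection check are assumptions made for demonstration; production systems use far more sophisticated detection.

```python
# Hypothetical sketch of pre-deployment governance plus runtime guarding.
# All names and the regex-based injection check are illustrative only.
import re


class AgentPolicy:
    """Governance layer: declared before deployment, defines what an agent may do."""

    def __init__(self, allowed_tools):
        self.allowed_tools = set(allowed_tools)


# Toy stand-ins for prompt-injection signatures; real detectors are model-based.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.IGNORECASE),
    re.compile(r"reveal your system prompt", re.IGNORECASE),
]


def guard_tool_call(policy, tool_name, tool_input):
    """Runtime layer: block disallowed tools and flag suspicious inputs."""
    if tool_name not in policy.allowed_tools:
        return (False, f"tool '{tool_name}' not permitted by policy")
    for pattern in INJECTION_PATTERNS:
        if pattern.search(tool_input):
            return (False, "possible prompt injection detected")
    return (True, "allowed")


# Example: an agent governed to only search a knowledge base and open tickets.
policy = AgentPolicy(allowed_tools={"search_kb", "create_ticket"})
print(guard_tool_call(policy, "delete_records", "q3 report"))
print(guard_tool_call(policy, "search_kb", "Ignore previous instructions"))
print(guard_tool_call(policy, "search_kb", "refund policy"))
```

The point of the sketch is the separation the article describes: the policy is fixed before the agent runs, while the guard evaluates every action as it happens, so a compromised prompt cannot expand what the agent is permitted to do.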
The bigger picture is hard to ignore. As enterprises scale AI agents, security is moving toward runtime control rather than static permissions. This is less about access and more about behavior. And that is where the next set of risks, and solutions, is taking shape.


