CrowdStrike has announced the general availability of Falcon AI Detection and Response, or AIDR, expanding its Falcon platform to cover a fast growing problem. Securing AI systems not just where they run, but how they think, respond, and act.
As enterprises roll out AI across development teams and everyday employee workflows, new attack surfaces are showing up. The biggest one is the interaction layer. Prompts, responses, and agent actions are now targets. Attackers use hidden instructions, prompt injection, and jailbreak techniques to manipulate outcomes or pull sensitive data. In simple terms, prompts are becoming the new malware.
Also Read: Contract Minister Adds MCP Support and Pulls Legal Work into Natural Language
Falcon AIDR applies the same detection and response model CrowdStrike popularized with endpoint security to AI environments. It provides real time protection across AI development and usage, covering data, models, agents, identities, infrastructure, and interactions. The platform offers visibility into how AI is being used, blocks known prompt injection techniques, enforces usage policies, and prevents sensitive data from leaking into models or external systems.
The move reflects a broader shift in security. AI is no longer a side experiment. It is production infrastructure. CrowdStrike is betting that enterprises will need centralized, runtime control over AI behavior before misuse becomes the next large scale breach category.

