MONO BRAIN Co., Ltd. has released ‘Model Security Range’ as an open source project. The idea is not theory. It is hands-on testing. You get an environment where AI systems are intentionally made vulnerable, and you try to break them in controlled ways.
This setup focuses on real attack patterns that are already showing up in production. The security threats include prompt injection and misuse of tool permissions and data contamination. You can test the risks by running experiments to see which components of the system fail.
They work to solve an existing problem in their current research. AI security today is messy. Teams cannot easily reproduce attacks, and everyone tests things differently. So results do not match, and fixes are hard to validate. This system tries to standardize that. Setup, attack execution, and recovery are all defined clearly so anyone can run the same test and compare outcomes.
Also Read: TCS Japan partners Illumio to focus on Cyber Containment
It also covers multiple AI setups. Not just one model type. You can test across RAG systems, agents, OCR pipelines, and machine learning models. The scenarios include things like leaking sensitive data through prompt injection and abusing over-permissioned tools and conducting indirect attacks through file uploads and training data poisoning.
This document serves developers and security teams and researchers who work on building or auditing AI systems. The system supports various use cases which include pre-release testing internal red team exercises and training activities.
The bigger picture is simple. AI threats are evolving faster than traditional security practices. Tools like this are trying to close that gap by making security testing more practical and repeatable.


