A new approach to understanding and operationalizing AI risks
To secure AI systems, organizations first need a comprehensive understanding of the unique AI threat landscape. To this end, we've introduced a reimagined AI security taxonomy: the Integrated AI Security and Safety Framework, a unified, lifecycle-aware structure that organizes AI security and safety risks across modalities, agents, pipelines, and the broader ecosystem.
The taxonomy is organized by attacker objective and attack technique for intuitive use. It serves as an operationalization framework, integrating AI-specific security threats, content and harm risks associated with AI inputs and outputs, and supply chain and agentic threats into a single structure that reflects the interconnected nature of modern AI risks. We will update the taxonomy regularly to keep pace with the evolving threat landscape.
We developed this framework to help the AI and security communities navigate AI security and safety threats. Each entry includes descriptions, examples, and mappings to the AI security standards we co-developed alongside: NIST Adversarial Machine Learning, MITRE ATLAS, and the OWASP Top 10 for LLM and GenAI Applications and Top 10 for Agentic Applications.
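To make the organizing scheme concrete, here is a minimal sketch of how an entry in a taxonomy like this might be modeled: keyed by attacker objective and attack technique, with mappings to external standards. The class, field names, and example values are illustrative assumptions, not the framework's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class TaxonomyEntry:
    """Hypothetical taxonomy entry: objective + technique + standard mappings."""
    objective: str        # attacker objective, e.g. "Model manipulation"
    technique: str        # attack technique pursued under that objective
    description: str
    standard_mappings: dict[str, str] = field(default_factory=dict)

# Illustrative entry; identifiers shown are examples, not an official mapping.
entry = TaxonomyEntry(
    objective="Model manipulation",
    technique="Prompt injection",
    description="Adversarial input that overrides model or system instructions.",
    standard_mappings={
        "MITRE ATLAS": "AML.T0051",
        "OWASP LLM Top 10": "LLM01",
    },
)

def entries_for_objective(entries: list[TaxonomyEntry], objective: str) -> list[TaxonomyEntry]:
    """Filter entries by attacker objective, mirroring objective-first navigation."""
    return [e for e in entries if e.objective == objective]

print(entries_for_objective([entry], "Model manipulation")[0].technique)
```

Organizing entries by objective first lets defenders ask "what is the attacker trying to achieve?" before drilling into specific techniques and the standards that document them.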