When your CISO mentions “AI security” in the next board meeting, what exactly do they mean? Are they talking about protecting your AI systems from attacks? Using AI to catch hackers? Preventing employees from leaking data to an unapproved AI service? Ensuring your AI doesn’t produce harmful outputs?
The answer might be “all of the above,” and that’s precisely the problem.
AI has become deeply embedded in enterprise operations. As a result, the intersection of “AI” and “security” has become increasingly complex and confusing. The same terms are used to describe fundamentally different domains with distinct objectives, leading to miscommunication that can derail security strategies, misallocate resources, and leave critical gaps in protection. We need a shared understanding and a shared language.
Jason Lish (Cisco’s Chief Information Security Officer) and Larry Lidz (Cisco’s VP of Software Security) co-authored this paper with me to help address this challenge head-on. Together, we introduce a five-domain taxonomy designed to bring clarity to AI security conversations across enterprise operations.
The Communication Challenge
Consider this scenario: your executive team asks you to present the company’s “AI security strategy” at the next board meeting. Without a common framework, each stakeholder may walk into that conversation with a very different interpretation of what’s being asked. Is the board asking about:
- Protecting your AI models from adversarial attacks?
- Using AI to enhance your threat detection?
- Preventing data leakage to external AI services?
- Providing guardrails for AI output safety?
- Ensuring regulatory compliance for AI systems?
- Defending against AI-enabled or AI-generated cyber threats?

This ambiguity leads to very real organizational problems, including:
- Miscommunication in executive and board discussions
- Misaligned vendor evaluations that compare apples to oranges
- Fragmented security strategies with dangerous gaps
- Resource misallocation focusing on the wrong objectives
Without a shared framework, organizations struggle to accurately assess risks, assign accountability, and implement comprehensive, coherent AI security strategies.
The Five Domains of AI Security
We propose a framework that organizes the AI-security landscape into five clear, intentionally distinct domains. Each addresses different concerns, involves different threat actors, requires different controls, and typically falls under different organizational ownership. The domains are:
- Securing AI
- AI for Security
- AI Governance
- AI Safety
- Responsible AI
Each domain addresses a distinct category of risk and is designed to be used in conjunction with the others to create a comprehensive AI security strategy.
These five domains don’t exist in isolation; they reinforce and depend on one another and must be intentionally aligned. Learn more about each domain in the paper, which is intended as a starting point for industry dialogue, not a prescriptive checklist. Organizations are encouraged to adapt and extend the taxonomy to their specific contexts while preserving the core distinctions between domains.
Framework Alignment
Just as the NIST Cybersecurity Framework provides a common language for the domains of cybersecurity without removing the need for more detailed frameworks such as NIST SP 800-53 and ISO 27001, this taxonomy is not meant to work in isolation from more detailed frameworks; rather, it provides a common vocabulary across the industry.
As such, the paper builds on Cisco’s Integrated AI Security and Safety Framework recently introduced by my colleague Amy Chang. It also aligns with established industry frameworks, such as the Coalition for Secure AI (CoSAI) Risk Map, MITRE ATLAS, and others.
The intersection of AI and security is not a single problem to solve, but a constellation of distinct risk domains, each requiring different expertise, controls, and organizational ownership. By aligning these domains with organizational context, organizations can:
- Communicate precisely about AI security concerns without ambiguity
- Assess risk comprehensively across all relevant domains
- Assign accountability clearly to the right teams
- Invest strategically rather than reactively

