Cybersecurity Trend: AI Governance Regulations Are Creating New Security Roles
The EU AI Act, NIST AI RMF, and emerging state-level AI regulations are creating demand for professionals who can assess, audit, and secure AI systems. This intersection of AI governance and cybersecurity is producing new career paths.
Founder, DecipherU. Ed.D. Learning Sciences (University of Miami), MBA Marketing, M.S. OLL (Barry University), M.S. Applied AI in progress (Northeastern University).
The European Union's AI Act (Regulation 2024/1689), which entered into force in August 2024, is the first major regulatory framework specifically governing artificial intelligence. The Act classifies AI systems by risk level (unacceptable, high, limited, minimal) and imposes security, transparency, and governance requirements on high-risk systems. These include cybersecurity requirements: high-risk AI systems must be resilient to attacks, and providers must implement risk management throughout the system lifecycle.
In the United States, NIST published the AI Risk Management Framework (AI RMF 1.0) in January 2023, followed by Executive Order 14110 on Safe, Secure, and Trustworthy AI in October 2023. While less prescriptive than the EU AI Act, these frameworks establish expectations for AI security assessment, red-teaming of AI systems, and governance structures.
For cybersecurity careers, the convergence of AI governance and security creates several new role profiles. AI security engineers assess the security of machine learning pipelines, including training data integrity, model poisoning resistance, and adversarial robustness. AI red team members test AI systems using techniques from both traditional penetration testing and ML-specific attack methods (data poisoning, model inversion, prompt injection). AI governance analysts map AI systems against regulatory requirements and organizational risk tolerance.
Brundage et al. (2020) outlined the security implications of malicious AI use, establishing the threat model that now informs both regulation and career demand. Their work identified three categories of AI security concern: using AI to enhance existing attacks, exploiting vulnerabilities in AI systems, and autonomous AI systems that operate outside their intended boundaries.
The skills required span cybersecurity fundamentals (access control, encryption, monitoring), machine learning concepts (model training, inference, evaluation), and governance frameworks (EU AI Act risk classifications, NIST AI RMF categories). No single existing certification covers this full scope, though CompTIA SecAI+ targets the intersection directly and CISSP has added AI-relevant content to its Common Body of Knowledge.
For career planning, professionals who build AI security skills now are positioning for a field that will grow significantly as regulations take effect and enforcement begins. The EU AI Act's compliance deadlines extend through 2027, creating a multi-year demand curve for AI governance and security expertise.
The 2024-2028 period represents the formative years for AI security as a distinct career specialization. Early entrants will shape the standards, tooling, and best practices that define the field.
Verifiable Predictions
AI security engineer becomes a distinct job title at 20% of large enterprises by 2027
EU AI Act enforcement creates measurable demand for AI auditors by 2027
AI red teaming services market exceeds $1B by 2028
Related Cybersecurity Resources
Related Career Guides
Related Salary Guides
References
- European Parliament and Council (2024). Regulation (EU) 2024/1689 laying down harmonised rules on artificial intelligence (AI Act). Official Journal of the European Union.
- NIST (2023). Artificial Intelligence Risk Management Framework (AI RMF 1.0). National Institute of Standards and Technology. 10.6028/NIST.AI.100-1
- Brundage, M., Avin, S., Clark, J., et al. (2018). The malicious use of artificial intelligence: Forecasting, prevention, and mitigation. arXiv preprint. 10.48550/arXiv.1802.07228
This trend analysis represents original research and interpretation by DecipherU. Predictions are based on publicly available data and cited academic sources. Actual outcomes may differ. This content is for educational purposes and does not constitute investment, career, or financial advice.
The EU AI Act, NIST AI RMF, and emerging state-level AI regulations are creating demand for professionals who can assess, audit, and secure AI systems. This intersection of AI governance and cybersecurity is producing new career paths. Check the related career guides above for specific role-level implications.
This analysis covers the 2024-2028 period. DecipherU reviews and updates trend articles monthly. The article includes 3 verifiable predictions that will be tracked and updated as events unfold.
Based on this trend, relevant certifications include comptia-secai, cissp. Visit our certification guides for current pricing, exam format, and ROI analysis.
Sources
- European Parliament and Council (2024) — Regulation (EU) 2024/1689 laying down harmonised rules on artificial intelligence (AI Act). Official Journal of the European Union
- NIST (2023) — Artificial Intelligence Risk Management Framework (AI RMF 1.0). National Institute of Standards and Technology
- Brundage, M., Avin, S., Clark, J., et al. (2018) — The malicious use of artificial intelligence: Forecasting, prevention, and mitigation. arXiv preprint
Get cybersecurity career insights delivered weekly
Join cybersecurity professionals receiving weekly intelligence on threats, job market trends, salary data, and career growth strategies.
Get Cybersecurity Career Intelligence
Weekly insights on threats, job trends, and career growth.
Unsubscribe anytime. More options