Cybersecurity career intelligence
Get weekly cybersecurity career intelligence
© 2026 Bespoke Intermedia LLC
Founded by Julian Calvo, Ed.D. · Cybersecurity career intelligence · Est. 2024
Cybersecurity for AI · Premium course
A 12-week cybersecurity engineering path for the practitioners building, operating, and securing production AI systems. It maps to the Northeastern M.S. Applied AI specializing in Cybersecurity credential, the same convergence the curriculum describes.
AI Security Engineering is a 12-week cybersecurity course for engineers who own the security posture of AI systems in production. The curriculum sits at the convergence of cybersecurity and applied AI: prompt injection defense, AI red teaming, AI evaluation for safety, AI infrastructure hardening, training data security, model security, AI supply chain integrity, AI incident response, and AI compliance engineering against the EU AI Act, NIST AI RMF, and ISO/IEC 42001. Every module pairs primary-source standards with one practical lab so the work transfers from reading to reviewable artifacts in a portfolio.
The course is opinionated about source quality. NIST AI RMF and the NIST Generative AI Profile (NIST AI 600-1) anchor the governance content. The OWASP LLM Top 10 and OWASP ML Top 10 anchor the application security content. MITRE ATLAS anchors the adversarial mindset. The EU AI Act consolidated text and ISO/IEC 42001 anchor the regulatory work. Anthropic, OpenAI, and Google DeepMind safety and red-teaming publications anchor the practitioner methods. No vendor white papers without primary-source backing. No copying of any source text. Citations are provided so you can read the originals yourself.
This path was written by Julian Calvo, Ed.D., M.S., who is completing the Master of Science in Applied AI specializing in Cybersecurity at Northeastern University. The course outcome is a documented AI security architecture you can put in front of a hiring panel for AI security engineer, AI red teamer, AI infrastructure security engineer, or AI governance engineer roles. It does not promise a job. It gives you a 60 to 80 hour body of evidence that you understand how AI systems break and how to make them less likely to break.
Week 01 · 6h · 6 lessons
Frame the field. Set the threat model categories you will use across the course and the primary-source standards that govern AI security work in 2026.
Practical lab. Lab 1: Diagram an end-to-end AI product of your choice and label every component with its threat category and the controls you would expect to find in a mature deployment.
Reading.
Week 02 · 7h · 7 lessons
Build the defenses that matter most for current LLM products: input filtering, output validation, context isolation, and tool-use guardrails.
Practical lab. Lab 2: Implement a layered prompt injection defense for a sample LLM application using input filtering, structured prompting, and output validation, then break it with five attack patterns.
Reading.
Week 03 · 7h · 7 lessons
Build the adversarial mindset and the working test suite. Learn how AI red teams operate at major labs and what a credible red-team report looks like.
Practical lab. Lab 3: Run a structured red team against a public LLM chatbot of your choice, write a 1500-word report with at least five findings mapped to OWASP LLM categories, and reproduce each finding from the report alone.
Reading.
Week 04 · 6h · 6 lessons
Eval design is the engineering discipline that turns red teams from one-off events into a continuous quality signal. Build the evals that matter for safety properties.
Practical lab. Lab 4: Design and ship a safety eval suite of at least 50 prompts across five categories, run it against two different LLMs, and write a one-page comparison.
Reading.
Week 05 · 6h · 6 lessons
Secure the platforms that train and serve models. Apply cloud security fundamentals to the specifics of GPU clusters, model registries, and inference endpoints.
Practical lab. Lab 5: Author a hardened AWS, Azure, or GCP architecture for a hypothetical AI inference service, including IAM, network isolation, secrets, logging, and abuse detection, and write a one-page threat model that justifies each control.
Reading.
Week 06 · 6h · 6 lessons
The data layer is where many AI failures begin. Cover poisoning detection, PII handling, differential privacy fundamentals, and the lineage you need for compliance.
Practical lab. Lab 6: Audit a public dataset for PII and poisoning indicators, document findings in a 1000-word writeup, and propose three controls a team would apply before training on it.
Reading.
Week 07 · 6h · 6 lessons
Treat the model itself as an asset under attack. Cover extraction defenses, watermarking, secure serving patterns, and adversarial example testing.
Practical lab. Lab 7: Run a small-scale model extraction attack against a public model with documented permission (or in your own sandbox), write up the attack cost and recommended controls.
Reading.
Week 08 · 6h · 6 lessons
AI products inherit risk from pretrained models, datasets, and ML libraries. Audit the supply chain the way mature software teams audit dependencies.
Practical lab. Lab 8: Audit a real Hugging Face model and its dependencies, produce a one-page provenance and risk report, and propose hardening steps you would apply before production use.
Reading.
Week 09 · 6h · 6 lessons
AI systems fail in ways traditional IR runbooks do not cover. Build the detection, triage, and post-incident analysis patterns your team will actually use.
Practical lab. Lab 9: Author a complete AI incident response runbook for a hypothetical jailbreak that affects 10,000 users, including detection, triage, containment, recovery, and post-incident steps.
Reading.
Week 10 · 7h · 6 lessons
Translate the EU AI Act, NIST AI RMF, and ISO/IEC 42001 into engineering controls. Compliance work is engineering work when it stays close to the system.
Practical lab. Lab 10: Pick a hypothetical high-risk AI product, classify it under the EU AI Act, map its controls to NIST AI RMF subcategories, and document gaps a team would close before launch.
Reading.
Week 11 · 6h · 6 lessons
Apply privacy by design to AI products. Cover right-to-be-forgotten in trained models, federated learning fundamentals, and the GDPR Article 22 requirements for automated decisions.
Practical lab. Lab 11: Author a privacy engineering plan for an AI product that processes user data, including DPIA outline, retention design, and a right-to-be-forgotten workflow.
Reading.
Week 12 · 7h · 6 lessons
Synthesize the 12 weeks into a portfolio artifact. Design and document a complete AI security architecture for a hypothetical AI deployment.
Practical lab. Lab 12 (Capstone): Submit a complete AI security architecture document covering threat model, controls at each layer, eval suite, incident runbook, and compliance mapping for a hypothetical high-risk AI product.
Reading.
Capstone
Design and document a complete AI security architecture for a hypothetical AI deployment that will trigger EU AI Act high-risk classification. The capstone integrates every prior week into a single portfolio artifact: threat model and risk register, prompt injection defenses, structured red team plan, safety evaluation suite, infrastructure controls, data governance and lineage, model security and supply chain controls, AI incident response runbook, and compliance mapping against the EU AI Act, NIST AI RMF, and ISO/IEC 42001. The artifact is the document you will share with hiring panels for AI security engineer roles.
Authored by
Founder, DecipherU
Founder, DecipherU. Ed.D. Learning Sciences. M.S. Applied AI specializing in Cybersecurity at Northeastern. Career intelligence for the AI economy.