Cybersecurity career intelligence
Get weekly cybersecurity career intelligence
© 2026 Bespoke Intermedia LLC
Founded by Julian Calvo, Ed.D. · Cybersecurity career intelligence · Est. 2024
Applied AI · Premium flagship course
A 12-week premium Applied AI flagship covering production AI engineering: prompt engineering at scale, embeddings, RAG, evaluation, agents, fine-tuning, multimodal, deployment, cost economics, and AI safety. Cybersecurity convergence woven throughout. The course references the Northeastern M.S. Applied AI specializing in Cybersecurity credential and weaves cybersecurity convergence (NIST AI RMF, OWASP LLM Top 10, MITRE ATLAS) into every module because production AI in 2026 cannot ship without addressing prompt injection, data exfiltration, and excessive agency.
AI Engineering Mastery is a 12-week premium Applied AI flagship course for software engineers, ML engineers, and security engineers building production AI systems in 2026 and 2027. The curriculum sequences twelve weekly modules across the production AI craft: AI engineering tooling and the 2026 landscape, prompt engineering at production scale, embeddings and vector databases, retrieval-augmented generation patterns, evaluation frameworks, multi-agent architectures, fine-tuning and model selection, multimodal AI applications, production deployment patterns, cost optimization and inference economics, AI safety and responsible deployment, and a capstone production AI system. Cybersecurity convergence is woven throughout because production AI systems cannot ship without addressing prompt injection, data exfiltration, model abuse, and supply chain risk. The course is built on primary sources only: Anthropic, OpenAI, Google DeepMind, and Meta AI official engineering documentation; NIST AI Risk Management Framework (AI 100-1) and the Generative AI Profile (AI 600-1); OWASP LLM Top 10; MITRE ATLAS for adversarial threats; and peer-reviewed academic research from arXiv on retrieval, evaluation, agent design, and inference optimization. Generic AI engineering advice without primary-source backing is excluded. Authored by Julian Calvo, Ed.D. in Learning Sciences with the M.S. Applied AI specializing in Cybersecurity in progress at Northeastern University. Every module pairs reading with a hands-on artifact the learner produces and adds to a production AI portfolio. The capstone is a production AI system shipped against a documented evaluation set with cost, latency, and quality budgets and a documented threat model.
The course follows the dependency order of the production AI craft rather than a textbook chapter order. Week 1 grounds the learner in the 2026 AI engineering landscape and tools so every later module has a tool and a target. Weeks 2 through 11 walk the production AI lifecycle in dependency order: prompts, embeddings, retrieval, evaluation, agents, fine-tuning, multimodal, deployment, cost, safety. Week 12 integrates the work into a production AI system with a documented threat model. Pedagogically the design draws on Kolb's experiential learning cycle (1984) and Bandura's self-efficacy theory (1997): every module sequences a concept, primary-source readings, a hands-on artifact, and a written reflection note. Evidence quality is opinionated. Architecture claims are anchored to AI lab official engineering documentation, peer-reviewed research, or named production case studies disclosed by the operating company. The cybersecurity convergence is anchored to NIST AI RMF, NIST AI 600-1 (Generative AI Profile), OWASP LLM Top 10, and MITRE ATLAS.
Week 01 · 6h · 4 topics
The Applied AI engineering landscape in 2026, the canonical tool stack (model providers, orchestration, vector stores, evaluation, observability, gateway), the cybersecurity surface every AI engineer owns, and a written tooling decision document the learner returns to in every later module.
Learning objectives.
Topics.
Assessment: 5 questions · 360 minutes total
Week 02 · 6h · 5 topics
Production prompt engineering, structured output discipline, prompt caching economics, prompt evaluation harnesses, prompt injection defense, and the prompt portfolio document the engineer ships.
Learning objectives.
Topics.
Assessment: 5 questions · 360 minutes total
Week 03 · 6h · 4 topics
Embedding model selection, dimensionality and storage tradeoffs, vector store comparison (pgvector, Qdrant, Pinecone, Weaviate, Turbopuffer), hybrid search (vector plus keyword), and the embedding decision document the engineer ships.
Learning objectives.
Topics.
Assessment: 5 questions · 360 minutes total
Week 04 · 6h · 4 topics
RAG architectures (naive, advanced, modular), chunking strategies, query rewriting and HyDE, contextual retrieval, citation and grounding, and the RAG cybersecurity surface (data exfiltration, retrieval poisoning).
Learning objectives.
Topics.
Assessment: 5 questions · 360 minutes total
Week 05 · 6h · 4 topics
Eval-first development, ground truth construction, LLM-as-judge patterns, regression evals, online evals (production), evaluation cybersecurity (eval poisoning), and the evaluation harness the engineer ships.
Learning objectives.
Topics.
Assessment: 5 questions · 360 minutes total
Week 06 · 6h · 4 topics
Single-agent vs multi-agent design, agent loops, tool use patterns, planner-executor and orchestrator-worker, agent observability, agent cybersecurity (excessive agency, tool abuse), and the agent the engineer ships.
Learning objectives.
Topics.
Assessment: 5 questions · 360 minutes total
Week 07 · 6h · 4 topics
When fine-tuning earns its cost, dataset construction, parameter-efficient methods (LoRA, QLoRA), preference tuning (DPO, RLHF), open-weights options (Llama, Mistral), and the model selection decision the engineer ships.
Learning objectives.
Topics.
Assessment: 5 questions · 360 minutes total
Week 08 · 6h · 4 topics
Vision (image understanding, OCR, charts), audio (speech-to-text, text-to-speech, voice agents), document AI (PDFs, tables), video, multimodal evaluation, and multimodal cybersecurity (image-borne prompt injection).
Learning objectives.
Topics.
Assessment: 5 questions · 360 minutes total
Week 09 · 6h · 5 topics
Gateway architecture, multi-provider routing, streaming, retries and timeouts, rate limiting, observability (traces, metrics, logs), feature flags, canaries, and the deployment runbook the engineer ships.
Learning objectives.
Topics.
Assessment: 5 questions · 360 minutes total
Week 10 · 6h · 4 topics
The unit economics of AI features, model tier routing, prompt caching at scale, batch inference, quantization, distillation, on-device options, and the cost reduction playbook the engineer ships.
Learning objectives.
Topics.
Assessment: 5 questions · 360 minutes total
Week 11 · 6h · 4 topics
NIST AI RMF in practice, OWASP LLM Top 10 application controls, content moderation, jailbreak defense, abuse monitoring, red-teaming, model card and system card discipline, and the responsible deployment runbook the engineer ships.
Learning objectives.
Topics.
Assessment: 5 questions · 360 minutes total
Week 12 · 6h · 4 topics
Integrate the eleven prior weeks into a production AI system: scoped task, six-layer architecture, evaluation harness, gateway with policy, observability, cost model, threat model, system card, and rollout plan. The capstone is the work that earns the certificate.
Learning objectives.
Topics.
Assessment: 5 questions · 360 minutes total
Capstone
The capstone integrates the eleven prior weekly artifacts (tooling decision, prompt portfolio, embedding decision, RAG pipeline, evaluation harness, agent, fine-tune decision, multimodal feature, deployment runbook, cost model, AI safety controls) into a single production AI system. The system has a scoped task, a six-layer architecture, an evaluation harness with capability, robustness, and behavioral layers (50 plus examples each, runs in CI), a gateway with policy, observability with cost and quality dimensions, a documented threat model mapped to OWASP LLM Top 10 and MITRE ATLAS, a system card, a cost model, and a rollout plan with feature flags, canaries, and rollback criteria. The capstone is graded against three named failure modes: no eval harness with ground truth, no threat model addressing prompt injection plus two other LLM Top 10 risks, and no observability with rollback. A passing capstone earns the DecipherU AI Engineering Mastery certificate of completion.
Authored by
Founder, DecipherU
Founder, DecipherU. Ed.D. Learning Sciences. M.S. Applied AI specializing in Cybersecurity at Northeastern. Career intelligence for the AI economy.
Companion courses
AI Engineering Mastery teaches the deeper production AI craft. AI Career Transition teaches the engineering transition arc into AI roles. AI Product Management teaches scoping AI features, evaluation methodology, and authoring AI product specs that ship. The two companion courses are $397 each and live in the Applied AI foundation catalog.