Why cybersecurity is unusually constructionist
Seymour Papert published Mindstorms in 1980[1]. The book introduced constructionism as a pedagogy distinct from constructivism. Constructivism, which Papert inherited from Piaget[7], holds that learners build knowledge through active engagement with problems. Constructionism narrows the claim: the most durable learning happens when learners build external artifacts that other people can see, touch, critique, and build on. A LEGO robot. A Logo procedure. A working computer program. The artifact is not a demonstration of the learning. The artifact is where the learning lives.
Cybersecurity is almost perfectly suited to constructionist design. Every SIEM query a learner writes is an artifact. Every home lab environment is an artifact. Every detection rule, every pull request, every incident write-up is an artifact that exposes reasoning to other people. The field already runs on a portfolio economy. Hiring managers read GitHub repositories before they read resumes. Senior engineers evaluate candidates by asking them to explain a specific commit or a specific investigation, not by quizzing them on vocabulary. The field does not need to import constructionist principles. It needs to apply them deliberately rather than stumble into them accidentally.
Papert and Harel's 1991 essay, Situating Constructionism[2], articulated why the theory matters for adults as well as children. The artifact-building requirement is not developmental. It is cognitive. Adults who build artifacts consolidate learning the way children do, and adults who do not build artifacts fail to consolidate learning even when they feel fluent in conversation about it. The cybersecurity career changer who has read ten chapters of a Security+ textbook without building a single home lab is in exactly the failure mode Papert and Harel described: she has absorbed content but constructed nothing, and her understanding will not hold up to the first ambiguous alert.
What most cybersecurity learners build, and why it fails
The typical self-taught cybersecurity learner follows a predictable path. She buys a Security+ voucher. She completes TryHackMe or HackTheBox rooms. She acquires a home lab, usually three or four Windows and Linux VMs with Active Directory. She writes lab notes in Markdown and posts them on a personal site or in a GitHub repository. If she is disciplined, she produces between ten and thirty write-ups in six months. She then applies for SOC Analyst Tier 1 roles and often fails at the recruiter screen.
The failure mode is instructive. The write-ups demonstrate that she completed lab rooms. They do not demonstrate reasoning. A TryHackMe write-up that narrates the solution path but does not explain why she considered and rejected alternative paths is a constructionist artifact in Papert's[1] sense, but it is a weak one. Kafai and Resnick's[3] edited volume made the same observation across many classroom studies: artifacts whose construction does not externalize the reasoning behind the design decisions produce weaker learning than artifacts that do. The discipline is not in building. It is in building with reflection visible inside the artifact.
Ritchhart, Church, and Morrison[9] articulated the same principle in the making-thinking-visible literature. Artifacts that do not expose the underlying cognitive moves provide neither the learner nor the evaluator with evidence of competence. A hiring manager reading a write-up that says "I opened Metasploit, selected the exploit module, and got a shell" has no evidence that the learner could navigate a situation where Metasploit did not already have the module. A hiring manager reading a write-up that says "I considered two enumeration paths, rejected the first because the banner showed a patched version, followed the second through three failed hypotheses before finding the misconfigured service, and here is what I learned about when to pivot" has evidence the learner can function under ambiguity.
The difference between the two write-ups is not effort. It is structure. The first write-up is a narrative of actions. The second is a constructionist artifact that makes reasoning visible. Constructionism prescribes the second. Most cybersecurity learners produce the first, because nobody told them which kind of artifact would carry them through a recruiter screen.
Jonassen's template for the cybersecurity lab environment
David Jonassen's 1999 chapter in Reigeluth's second volume of Instructional-Design Theories and Models[6] gave constructivist designers a practical template. Jonassen argued that a well-designed constructivist learning environment has six components: a problem or project at the center, related cases that extend the learner's frame of reference, information resources available when the learner needs them, cognitive tools that amplify thinking, conversation and collaboration tools, and social and contextual supports that scaffold the whole.
The cybersecurity home lab as typically built fails on at least four of Jonassen's six components. There is a problem in the center: a misconfigured AD environment, for example. There may be cases available through TryHackMe rooms or HackTheBox scenarios. But the information resources are scattered across Reddit posts and Medium articles with variable reliability, the cognitive tools are generic developer tools without domain-specific scaffolding, the conversation and collaboration tools are not part of the lab at all (the learner studies alone), and the social and contextual supports are absent (no mentor, no community, no scaffolded progression path).
A constructionist cybersecurity lab environment designed to Jonassen's template would look different. It would have a problem at the center, yes. It would also have a curated case set that extends beyond the specific scenario to show the learner how practitioners have faced variations on this problem in real engagements. It would have an information resource layer that the learner can consult without breaking her flow, scoped to the specific problem she is working on. It would have cognitive tools that scaffold the specific sub-skills the problem demands, such as query templates for SIEM construction or ATT&CK navigation aids for detection mapping. It would have a conversation layer in which the learner can discuss her reasoning with peers or mentors as she works. And it would have explicit social and contextual support in the form of scheduled check-ins, community membership, and a clear map of where her current problem sits in a progression.
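A cognitive tool of the kind described above can be as simple as a parameterized query scaffold. The sketch below is illustrative only: the function name, the Splunk-style SPL syntax, and the index name are assumptions that vary by environment (Event ID 4625 is the Windows failed-logon event). The point is that the scaffold lets the learner think about the hunt's parameters instead of the raw query syntax.

```python
# Hypothetical sketch of a Jonassen-style "cognitive tool": a query scaffold
# that lets a learner parameterize a SIEM hunt rather than write raw query
# syntax from scratch. Splunk-style SPL; index and field names are assumptions.

def failed_logon_hunt(index: str, threshold: int, window: str) -> str:
    """Render a Splunk-style query for failed Windows logons (EventCode 4625)."""
    return (
        f"index={index} EventCode=4625 earliest=-{window} "
        f"| stats count by src_ip, user "
        f"| where count > {threshold} "
        f"| sort -count"
    )

# The learner tunes the parameters; the scaffold carries the query structure.
print(failed_logon_hunt(index="wineventlog", threshold=20, window="24h"))
```

A handful of scaffolds like this, one per common hunt pattern, is a plausible concrete form for the cognitive-tool layer.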
The DecipherU knowledge graph and AI Career Coach are two components of the Jonassen-template approach. The graph provides the information-resource layer scoped to cybersecurity career decisions. The Coach provides the cognitive-tool layer. But the full implementation also requires the social-contextual layer that DecipherU cannot provide on its own. A cybersecurity learner who wants a Jonassen-complete environment should combine the graph and the Coach with at least one live community membership, at least one mentor relationship, and a scheduled check-in cadence. The research is unequivocal on this point.
Vygotsky's ZPD applied to cybersecurity lab selection
Vygotsky's zone of proximal development[4] is the difference between what a learner can do alone and what she can do with expert guidance. Wood, Bruner, and Ross[5] extended the idea with the concept of scaffolding, the specific support structures an expert provides so the learner can operate at the top of her ZPD without collapse. The two concepts together provide a practical tool for selecting cybersecurity labs at the appropriate level.
Consider a learner whose background is IT support. She knows TCP/IP reasonably well. She has configured Active Directory in a small office. She has not worked with SIEM tools. She has not done incident response. A TryHackMe beginner Linux fundamentals room is below her ZPD. A multi-day HackTheBox pro lab on Active Directory exploitation chains is above her ZPD, and the gap is wide enough that scaffolding will not close it without a mentor working alongside her. The sweet spot is a structured scenario involving Windows event logs, Sysmon, and a SIEM she has not used before, where her existing AD knowledge is load-bearing and the SIEM interaction is the stretch task. She will finish the scenario. She will learn the new tool. She will build a write-up that demonstrates reasoning at the edge of her capability, which is exactly what Ericsson's[13] deliberate practice framework prescribes.
The practical implication for cybersecurity career changers is that lab selection should be a deliberate ZPD diagnosis, not a progression-tree default. The TryHackMe and HackTheBox progression trees are designed well, but they are calibrated to a generic learner. Every adult career changer has a specific adjacent-skill profile, and that profile determines which lab is the sweet spot for her. A sysadmin, a software developer, a compliance professional, and a teacher all enter cybersecurity with different adjacent skills and therefore different ZPDs. The lab that is perfectly calibrated for the sysadmin is boring for the software developer and crushing for the teacher.
A structured ZPD diagnosis before lab selection takes twenty minutes and produces a six-month plan that outperforms a generic progression tree by a substantial margin in the adult-learning literature[14]. The DecipherU readiness assessments are one instrument for producing the diagnosis. A skilled mentor conversation is another. The written output of either approach becomes an artifact itself, which the learner can revisit as her skills grow to measure her own progression against her earlier self.
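The diagnosis can be made concrete as a scoring exercise. The Python sketch below is a toy model, not a validated instrument: the skill names, the 0-to-1 proficiency levels, and the 0.2 target stretch are all illustrative assumptions. What it captures is the logic of the IT-support example above: reject labs that demand nothing new and labs whose demands sit far above the learner's current skills.

```python
# A toy sketch of a ZPD diagnosis, not a validated instrument. Skill names,
# proficiency levels (0.0-1.0), and the 0.2 target stretch are illustrative
# assumptions. "Stretch" is how far a lab's demands exceed current skill.

def zpd_fit(learner: dict[str, float], lab: dict[str, float]) -> float:
    """Average stretch: how far each demanded skill exceeds the learner's level."""
    gaps = [max(0.0, need - learner.get(skill, 0.0)) for skill, need in lab.items()]
    return sum(gaps) / len(gaps)

def pick_lab(learner: dict[str, float], labs: dict[str, dict[str, float]]) -> str:
    """Pick the lab whose average stretch lands closest to a 0.2 sweet spot."""
    return min(labs, key=lambda name: abs(zpd_fit(learner, labs[name]) - 0.2))

# The IT-support learner from the example above: strong networking and AD,
# no SIEM or incident-response experience yet.
it_support = {"tcpip": 0.7, "active_directory": 0.6, "siem": 0.0, "ir": 0.0}
labs = {
    "linux_fundamentals": {"tcpip": 0.2},                          # below ZPD
    "sysmon_siem_triage": {"active_directory": 0.5, "siem": 0.4},  # the stretch task
    "ad_exploit_chain":   {"active_directory": 0.9, "ir": 0.8},    # above ZPD
}
print(pick_lab(it_support, labs))  # → sysmon_siem_triage
```

The written output of a real diagnosis would of course be richer than three numbers, but the selection logic is the same: load-bearing adjacent skills plus one stretch component.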
The portfolio as construction, not collection
The word portfolio has been hollowed out. In most cybersecurity career-advice sources, a portfolio is described as a collection of completed labs, published write-ups, or GitHub commits. In the constructionist sense[1][2], a portfolio is not a collection. It is an artifact in its own right, which the learner constructs from her other artifacts, with its own design choices and its own reasoning visible at the surface level. The difference is not aesthetic. It shows up in hiring outcomes.
A portfolio designed as a collection, displayed as a chronological list of twenty lab write-ups, forces the hiring manager to construct the portfolio's meaning on her own. She may not bother. A portfolio designed as a constructionist artifact, with an introduction that explains the learner's trajectory, a curated selection of artifacts showing progression across specific cybersecurity sub-skills, cross-links that demonstrate how the learner connected ideas from different labs, and a concluding reflection on what she learned and what she plans next, is itself evidence of the reasoning the field values. The portfolio, at that level, is a Ritchhart-style[9] making-thinking-visible artifact. The learner's thinking is on the page, not only in the underlying work.
Spence's signaling theory[10] predicts that this kind of constructed portfolio functions as a strong signal because it is costly for a low-ability candidate to fake. The ability to produce the kind of reflective narrative a well-designed portfolio requires cannot be acquired in a weekend from AI-generated templates. The signal is therefore robust. A hiring manager who reads a constructed portfolio and a collection portfolio side by side extracts different information from each, and both her own reports and the labor-market data point the same way: the constructed portfolio produces more offers.
The DecipherU Cybersecurity Career Transition course, grounded in Knowles, Mezirow, Kolb, Bandura, Vygotsky, and Dreyfus, explicitly asks the learner to treat her portfolio as a designed artifact. Module four of that course is dedicated to Spence's signaling theory and Ritchhart's making-thinking-visible framework. The learner completes the module by producing a portfolio structure, not a portfolio of labs. She builds the labs separately. What she builds in the module is the scaffolding that organizes the labs and exposes her reasoning to anyone who reads the portfolio.
Bandura's self-efficacy layer in the constructionist design
Constructionist learning environments also happen to be well-designed for self-efficacy engineering, which matters for adult cybersecurity career changers whose domain-specific confidence is low at the start. Bandura[11] identified four sources of domain-specific self-efficacy: mastery experiences, vicarious experiences, verbal persuasion, and physiological regulation. Constructionist artifact-building produces mastery experiences by design. A learner who has built twelve lab write-ups and assembled them into a portfolio has twelve mastery experiences she can point to as evidence of capability. She has, in Bandura's terms, constructed her own self-efficacy rather than waiting for someone else to confer it on her.
Vicarious experiences are provided by the cybersecurity community layer. Watching other learners progress through labs and portfolios, reading their write-ups, participating in Discord or BSides events, produces the second source. A cybersecurity learner who studies alone misses this source entirely. A learner who participates in two or three community spaces accumulates vicarious experiences at a rate that strengthens self-efficacy across months.
Verbal persuasion from a credible mentor or peer produces the third source. Bandura's research is specific on this point: the persuader must be credible in the learner's eyes for the effect to occur. Generic verbal encouragement, such as motivational content or generic chatbot affirmations, does not produce the self-efficacy effect. A senior practitioner saying she thinks the learner's detection write-up shows promising judgment produces the effect. The difference is real, and the Bandura research has replicated it across many domains.
Physiological regulation, the fourth source, is engineerable through study-session discipline. The learner who runs 90-minute focused sessions with deliberate breaks before frustration crosses into panic maintains a physiological state compatible with learning. The learner who grinds through three-hour sessions without regulation accumulates cortisol and produces a physiological state associated with learning-resistant arousal. The constructionist lab environment that incorporates explicit break discipline outperforms the unstructured environment over time.
A practical specification for a constructionist cybersecurity curriculum
Pulling together the literature in this essay produces a practical specification. A constructionist cybersecurity curriculum for adult career changers would include the following components.
One. A deliberate ZPD diagnosis before lab selection. The output is a six-month lab plan calibrated to the learner's adjacent skills and target role, produced either by a DecipherU readiness assessment or by a mentor conversation, and revisited monthly.
Two. A lab environment built to Jonassen's[6] six-component template. Problem at the center, curated cases, information resources, cognitive tools, conversation layer, and social-contextual support. The last two components require live community membership and scheduled check-ins, which the learner must arrange separately from any software platform.
Three. A weekly cadence matched to Kolb's[8] four-phase experiential learning cycle. Concrete experience (lab hours), reflective observation (written reflection in prose, minimum 150 words per lab), abstract conceptualization (one research source per week connected to the lab experience), active experimentation (modifying the next lab based on the reading).
Four. A portfolio that is a constructed artifact, not a collection. The portfolio has an introduction that narrates the learner's trajectory, a curated selection of twelve to twenty artifacts across specific cybersecurity sub-skills, cross-links that demonstrate integration, and a reflective concluding section. The portfolio is a Ritchhart[9] making-thinking-visible artifact at the meta-level.
Five. Self-efficacy engineering across Bandura's[11] four sources, with a weekly audit that asks the learner to identify mastery experiences produced, vicarious experiences consumed, verbal persuasion received from credible sources, and physiological state during study sessions. The audit takes twenty minutes per week and outperforms motivational content by substantial margins in the adult-learning literature.
Six. Deliberate practice on specific sub-skills following Ericsson, Krampe, and Tesch-Römer[13]. Twelve iterations per key artifact type, with feedback at each iteration, before the artifact is submitted as portfolio evidence. The twelfth iteration is typically where competence emerges.
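The weekly audit in item five lends itself to a simple structured record. The sketch below is a minimal illustration with hypothetical field names for Bandura's four sources; the value of structuring it this way is that a neglected source shows up as an explicit gap rather than a vague feeling.

```python
# A minimal sketch of the weekly self-efficacy audit from item five. Field
# names are illustrative assumptions mapped to Bandura's four sources:
# mastery, vicarious experience, verbal persuasion, physiological regulation.

from dataclasses import dataclass, field

@dataclass
class WeeklyAudit:
    mastery: list[str] = field(default_factory=list)     # artifacts completed
    vicarious: list[str] = field(default_factory=list)   # peer work studied
    persuasion: list[str] = field(default_factory=list)  # credible feedback received
    calm_sessions: int = 0                               # regulated 90-minute sessions
    total_sessions: int = 0

    def gaps(self) -> list[str]:
        """List which of the four sources went unfed this week."""
        out = []
        if not self.mastery:
            out.append("no mastery experience this week")
        if not self.vicarious:
            out.append("no vicarious experience this week")
        if not self.persuasion:
            out.append("no credible feedback this week")
        if self.total_sessions and self.calm_sessions < self.total_sessions:
            out.append("some sessions ran past the regulation break")
        return out

week = WeeklyAudit(mastery=["sysmon lab write-up"], calm_sessions=3, total_sessions=4)
for gap in week.gaps():
    print(gap)
```

Twenty minutes with a record like this, once a week, is the whole discipline the specification asks for.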
This specification is not proprietary. It is an application of peer-reviewed work that has been available for decades. The reason it does not dominate cybersecurity career-transition practice is that most adult learners encounter the field through industry training products that were not designed on these principles. The products are not negligent. They are optimized for different outcomes (certification pass rates, lab completion counts) than the outcome constructionism predicts matters most, which is durable competence visible in constructed artifacts.
A closing note for cybersecurity hiring managers
Hiring managers who want to evaluate adult career changers fairly have a symmetric responsibility. A portfolio designed to the specification above produces different evidence than a collection portfolio, and evaluators who read both kinds without distinguishing them lose the signal value that careful candidates are trying to provide. Schön's[12] reflective-practitioner framework applies to evaluators as well as to candidates. The hiring manager who reads a candidate's portfolio reflectively, asks about specific design choices she notices, and calibrates her inferences accordingly extracts more information than the manager who scans the top of the portfolio for certifications and discards the rest.
The asymmetry in current practice is that candidates invest in deliberate artifact construction and evaluators discount the investment. Correcting the asymmetry requires hiring pipelines that give constructed portfolios the weight the signaling theory[10] says they deserve. The firms that adjust first will access candidate pools that remain invisible to their competitors, at least until the practice spreads. This is the operational implication of taking constructionist pedagogy seriously at the hiring end, and it is what the evidence predicts will happen to cybersecurity hiring as the adult career-changer pipeline continues to grow.
References
- [1] Papert, S. (1980). Mindstorms: Children, computers, and powerful ideas. Basic Books.
- [2] Papert, S., & Harel, I. (1991). Situating constructionism. In I. Harel & S. Papert (Eds.), Constructionism (pp. 1-11). Ablex Publishing.
- [3] Kafai, Y. B., & Resnick, M. (Eds.). (1996). Constructionism in practice: Designing, thinking, and learning in a digital world. Lawrence Erlbaum.
- [4] Vygotsky, L. S. (1978). Mind in society: The development of higher psychological processes. Harvard University Press.
- [5] Wood, D., Bruner, J. S., & Ross, G. (1976). The role of tutoring in problem solving. Journal of Child Psychology and Psychiatry, 17(2), 89-100. https://doi.org/10.1111/j.1469-7610.1976.tb00381.x
- [6] Jonassen, D. H. (1999). Designing constructivist learning environments. In C. M. Reigeluth (Ed.), Instructional-design theories and models (Vol. 2, pp. 215-239). Lawrence Erlbaum.
- [7] Piaget, J. (1970). Genetic epistemology. Columbia University Press.
- [8] Kolb, D. A. (1984). Experiential learning: Experience as the source of learning and development. Prentice-Hall.
- [9] Ritchhart, R., Church, M., & Morrison, K. (2011). Making thinking visible: How to promote engagement, understanding, and independence for all learners. Jossey-Bass.
- [10] Spence, M. (1973). Job market signaling. The Quarterly Journal of Economics, 87(3), 355-374. https://doi.org/10.2307/1882010
- [11] Bandura, A. (1977). Self-efficacy: Toward a unifying theory of behavioral change. Psychological Review, 84(2), 191-215. https://doi.org/10.1037/0033-295X.84.2.191
- [12] Schön, D. A. (1983). The reflective practitioner: How professionals think in action. Basic Books.
- [13] Ericsson, K. A., Krampe, R. T., & Tesch-Römer, C. (1993). The role of deliberate practice in the acquisition of expert performance. Psychological Review, 100(3), 363-406. https://doi.org/10.1037/0033-295X.100.3.363
- [14] Knowles, M. S., Holton, E. F., & Swanson, R. A. (2015). The adult learner: The definitive classic in adult education and human resource development (8th ed.). Routledge. https://doi.org/10.4324/9781315816951