< ciso brief />

All news with #threat modeling tag

6 articles

Deterministic vs Agentic AI in Security Validation

🔒 AI adoption is now a boardroom expectation, and Pentera’s AI Security and Exposure Report 2026 reports that every CISO surveyed already uses AI across their organization. The piece argues that fully agentic systems, while powerful and adaptive, introduce probabilistic variability that undermines repeatable, measurable security validation. A hybrid approach—deterministic orchestration for consistent attack chains combined with AI for adaptive payloads and environmental interpretation—provides guardrails while preserving realism. This anchoring enables reliable retesting and continuous exposure validation without sacrificing contextual intelligence.
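The hybrid pattern the article describes can be sketched as a deterministic orchestrator that runs a fixed attack chain and delegates only payload selection to an adaptive component (stubbed here so reruns stay comparable). All names and structure are illustrative, not Pentera’s implementation:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class StepResult:
    name: str
    succeeded: bool

def choose_payload(context: dict) -> str:
    """Stand-in for the adaptive AI component that tailors the payload
    to the environment. A deterministic stub keeps retests comparable."""
    return "payload-linux" if context.get("os") == "linux" else "payload-generic"

def run_chain(steps: list[Callable[[dict], StepResult]], context: dict) -> list[StepResult]:
    """Deterministic orchestration: fixed step order, fixed stopping rule."""
    results = []
    for step in steps:
        result = step(context)
        results.append(result)
        if not result.succeeded:
            break  # repeatable failure handling enables reliable retesting
    return results

def recon(ctx: dict) -> StepResult:
    return StepResult("recon", True)

def exploit(ctx: dict) -> StepResult:
    return StepResult(f"exploit:{choose_payload(ctx)}", True)

print([r.name for r in run_chain([recon, exploit], {"os": "linux"})])
# → ['recon', 'exploit:payload-linux']
```

Because the chain’s order and stopping rule are fixed, two runs against the same environment produce directly comparable results, while the AI-driven step still adapts to what it observes.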
read more →

A Taxonomy of Cognitive Security and Reality Pentesting

🧠 Bruce Schneier highlights K. Melton’s recent framework on cognitive security, cognitive hacking, and “reality pentesting.” Melton organizes cognition into five architectural layers—sensory interface, neurocompiler, mind kernel, the mesh, and cultural substrate—and shows how fast, unconscious processes (Kahneman’s System 1) create exploitable backdoors. The taxonomy frames human perception as an IT-like attack surface and suggests practical implications for testing, defense, and threat modeling.
read more →

Adapting Threat Modeling for AI Applications at Scale

🛡️ The Microsoft Security Blog explains why threat modeling must be retooled for AI systems, noting that probabilistic behavior and complex input spaces require reasoning about ranges of likely outcomes rather than single execution paths. It identifies three core drivers—nondeterminism, instruction-following bias, and system expansion through tools and memory—which expand the attack surface and introduce human-centered risks like erosion of trust. The post advises starting from assets, mapping untrusted inputs, setting clear “never do” boundaries, and embedding architectural mitigations, observability, and response plans to limit blast radius and sustain trust.
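The post’s advice (map untrusted inputs, set explicit “never do” boundaries) can be sketched as a policy gate evaluated before any tool call executes. The tool names and input sources below are hypothetical, not from the post:

```python
# "Never do" boundaries: actions denied regardless of how they are requested.
NEVER_DO = {"delete_records", "send_external_email"}

# Mapped untrusted inputs: content the model consumed but the operator didn't vet.
UNTRUSTED_SOURCES = {"web_page", "user_upload"}

def allow_tool_call(tool: str, input_source: str) -> bool:
    """Architectural mitigation: deny boundary violations outright, and
    deny any tool call whose triggering input came from an untrusted source."""
    if tool in NEVER_DO:
        return False
    if input_source in UNTRUSTED_SOURCES:
        return False
    return True

print(allow_tool_call("search_docs", "operator"))   # → True
print(allow_tool_call("search_docs", "web_page"))   # → False
print(allow_tool_call("delete_records", "operator")) # → False
```

Enforcing the boundary outside the model, rather than in the prompt, is what limits blast radius when instruction-following bias is exploited.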
read more →

New Paradigm for Training Secure Software Engineers

🔒 As AI-assisted coding reshapes software delivery, security training must move from line-by-line vulnerability spotting to cultivating system-level judgment. Automated tools will increasingly catch common issues, but developers must learn threat modeling, identify unsafe assumptions in AI-generated code, and understand which automated gates require human review. Effective programs are bite-sized, hands-on, and embedded in toolchains, using contextual guardrails and micro-learning to teach in the flow of work.
read more →

The Promptware Kill Chain: A Framework for AI Threats

🛡️ The authors present a seven-step “promptware kill chain” to reframe prompt injection as a multistage malware paradigm targeting modern LLM-based systems. They describe how Initial Access can be direct or indirect—via web pages, emails, shared documents, or multimodal inputs—and how LLMs’ lack of separation between data and executable instructions enables escalation. The paper catalogs stages from jailbreaking and reconnaissance to persistence, C2, lateral movement, and harmful Actions on Objectives, urging defenses that assume initial compromise and break the chain at later steps.
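The “assume initial compromise, break the chain later” posture can be sketched as a stage model where defenses concentrate on post-access stages. The stage names are reconstructed from the summary above, not quoted from the paper:

```python
from enum import Enum, auto

class PromptwareStage(Enum):
    """Seven-step promptware kill chain (names as summarized above)."""
    INITIAL_ACCESS = auto()
    JAILBREAKING = auto()
    RECONNAISSANCE = auto()
    PERSISTENCE = auto()
    COMMAND_AND_CONTROL = auto()
    LATERAL_MOVEMENT = auto()
    ACTIONS_ON_OBJECTIVES = auto()

# Assume Initial Access will sometimes succeed; gate the later stages instead.
GATED_STAGES = {
    PromptwareStage.PERSISTENCE,
    PromptwareStage.COMMAND_AND_CONTROL,
    PromptwareStage.LATERAL_MOVEMENT,
    PromptwareStage.ACTIONS_ON_OBJECTIVES,
}

def should_block(stage: PromptwareStage) -> bool:
    """True when a detected behavior maps to a stage the defense interrupts."""
    return stage in GATED_STAGES

print(should_block(PromptwareStage.PERSISTENCE))  # → True
print(should_block(PromptwareStage.JAILBREAKING)) # → False
```

Mapping observed behaviors onto stages this way makes “break the chain at later steps” an enforceable policy rather than a slogan.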
read more →

Threat Modeling Your Digital Life Under Authoritarianism

🔒 The article argues that personal threat modeling must adapt as governments increasingly combine their extensive administrative records with corporate surveillance data. It details what kinds of government-held data exist, how firms augment those records, and the distinct dangers of targeted versus mass surveillance. Practical mitigations are discussed—encryption, scrubbing accounts, burner devices—and the piece stresses that every defensive choice is a trade-off tied to individual goals.
read more →