The Promptware Kill Chain: A Framework for AI Threats
🛡️ The authors present a seven-step “promptware kill chain” to reframe prompt injection as a multistage malware paradigm targeting modern LLM-based systems. They describe how Initial Access can be direct or indirect—via web pages, emails, shared documents, or multimodal inputs—and how LLMs’ lack of separation between data and executable instructions enables escalation. The paper catalogs stages from jailbreaking and reconnaissance to persistence, C2, lateral movement, and harmful Actions on Objective, urging defenses that assume initial compromise and break the chain at later steps.
