LLM-Generated Passwords Are Structurally Predictable
🔐 Two independent research efforts, from Irregular and Kaspersky, demonstrate that modern LLMs produce passwords that are structurally predictable and carry far less effective entropy than they appear to. Models often repeat the same strings across sessions and conform to human-like patterns that fool standard strength meters. Autonomous coding agents are embedding these credentials in configuration files and repositories, and conventional secret scanners cannot detect them. Organizations should audit codebases, rotate suspect credentials, and require a cryptographically secure RNG for every generated secret.
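To make the recommendation concrete, here is a minimal sketch of a secret generator built on a CSPRNG, using Python's standard `secrets` module. The character pool and 20-character length are illustrative choices, not a prescription from the research; the point is that entropy comes from the OS randomness source and is exactly quantifiable, unlike an LLM's output.

```python
import math
import secrets
import string

# Illustrative pool: 94 printable ASCII characters (letters, digits, punctuation).
ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length: int = 20) -> str:
    """Draw each character independently via secrets.choice (CSPRNG-backed)."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

def entropy_bits(length: int, pool_size: int = len(ALPHABET)) -> float:
    """Entropy of a uniformly random string: length * log2(pool_size)."""
    return length * math.log2(pool_size)

password = generate_password(20)
print(password)
print(f"{entropy_bits(20):.1f} bits")  # ~131 bits for 20 chars over a 94-char pool
```

A 20-character password over this pool carries about 131 bits of guaranteed entropy, a figure an auditor can verify from the code alone, whereas an LLM-produced string of the same length may be dramatically weaker despite passing a strength meter.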
