
All news with the #red-team-findings tag

Mon, December 8, 2025

Offensive Security Rises as AI Transforms Threat Landscape

🔍 Offensive security is becoming central to enterprise defenses as CISOs increasingly add red teams and institutionalize purple teaming to surface gaps and harden controls. Practices range from traditional vulnerability management and pen testing to adversary emulation, social engineering assessments, and security-tool evasion testing. Vendors are embedding automation, analytics, and AI to boost effectiveness and lower barriers to entry. While budget, skills, and the risk of finding unfixable flaws remain obstacles, leaders say OffSec produces the data-driven evidence needed to prioritize remediation and counter more sophisticated, AI-enabled attacks.


Tue, November 18, 2025

Researchers Detail Tuoni C2's Role in Real-Estate Attack

🔒 Cybersecurity researchers disclosed an attempted intrusion against a major U.S. real-estate firm that leveraged Tuoni, an emerging command-and-control (C2) and red-team framework. The campaign, observed in mid-October 2025, used Microsoft Teams impersonation and a PowerShell loader that fetched a payload hidden steganographically in a BMP image from kupaoquan[.]com, then executed shellcode in memory. That sequence spawned TuoniAgent.dll, which contacted a C2 server but ultimately failed to achieve its goals. The incident highlights the risk of freely available red-team tooling and AI-assisted code generation being abused by threat actors.
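The steganography step described above typically hides data in the least significant bit (LSB) of each pixel byte. A minimal, benign sketch of the general LSB technique (not the actual Tuoni loader, whose specifics are not public in this summary; the pixel buffer and payload here are stand-ins):

```python
def embed_lsb(pixels: bytearray, payload: bytes) -> bytearray:
    """Hide each payload bit in the least significant bit of one pixel byte."""
    out = bytearray(pixels)
    bits = [(byte >> (7 - i)) & 1 for byte in payload for i in range(8)]
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # clear the LSB, then set it to the payload bit
    return out

def extract_lsb(pixels: bytes, n_bytes: int) -> bytes:
    """Recover n_bytes previously hidden in the pixel LSBs (MSB-first)."""
    result = bytearray()
    for b in range(n_bytes):
        byte = 0
        for i in range(8):
            byte = (byte << 1) | (pixels[b * 8 + i] & 1)
        result.append(byte)
    return bytes(result)

# Stand-in for a BMP's raw pixel bytes; a real loader would parse the
# image file format first and skip the header.
pixels = bytearray(range(256)) * 2
stego = embed_lsb(pixels, b"hi")
print(extract_lsb(stego, 2))  # b'hi'
```

Because each pixel byte changes by at most 1, the carrier image looks visually unchanged, which is what lets such payloads slip past casual inspection.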


Thu, October 2, 2025

Daniel Miessler on AI Attack-Defense Balance and Context

🔍 Daniel Miessler argues that context determines the AI attack–defense balance: whoever holds the most accurate, actionable picture of a target gains the edge. He forecasts that attackers will hold the advantage for roughly 3–5 years as red teams leverage public OSINT and reconnaissance while LLMs and SPQA-style architectures mature. Once models can ingest reliable internal company context at scale, defenders should regain the upper hand by prioritizing fixes and applying mitigations faster.


Wed, August 20, 2025

Logit-Gap Steering Reveals Limits of LLM Alignment

⚠️ Unit 42 researchers Tony Li and Hongliang Liu introduce Logit-Gap Steering, a new framework that exposes how alignment training produces a measurable refusal-affirmation logit gap rather than eliminating harmful outputs. Their paper demonstrates efficient short-path suffix jailbreaks that achieved high success rates on open-source models including Qwen, LLaMA, Gemma, and the recently released gpt-oss-20b. The authors argue that internal alignment alone is insufficient and recommend a defense-in-depth approach with external safeguards and content filters.
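The core quantity in this framing is simple to state: the difference between the model's next-token logit for a refusal token and for an affirmation token. A toy sketch of that measurement (all logit values and token indices here are hypothetical, not from the Unit 42 paper):

```python
import numpy as np

def refusal_affirmation_gap(logits, refusal_id: int, affirmation_id: int) -> float:
    """Gap = logit(refusal token) - logit(affirmation token).
    A positive gap means the model currently prefers refusing."""
    return float(logits[refusal_id] - logits[affirmation_id])

# Hypothetical 5-token vocabulary: index 0 = a refusal token ("Sorry"),
# index 1 = an affirmation token ("Sure").
logits = np.array([4.2, 1.1, 0.3, -0.5, 2.0])
gap = refusal_affirmation_gap(logits, refusal_id=0, affirmation_id=1)
print(round(gap, 1))  # 3.1
```

In this framing, a suffix jailbreak is a search for input tokens that shrink the gap until the affirmation token wins, which is why alignment that merely widens the gap, rather than removing the harmful continuation, remains attackable.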
