
All news with the #ai security tag

Mon, November 17, 2025

How Attack Surface Management Will Change Noticeably by 2026

🔒 Enterprises face expanding, complex attack surfaces driven by IoT growth, API ecosystems, remote work, shadow IT and multi-cloud sprawl. The author predicts 2026 will bring centralized cloud control—led by SASE—a shift to proactive, continuous ASM, stricter zero trust enforcement and widespread deployment of intelligent, agentic AI for autonomous detection and remediation. The analysis also emphasizes greater attention to third‑party and supply-chain risk.

read more →

Mon, November 17, 2025

AI-Driven Espionage Campaign Allegedly Targets Firms

🤖 Anthropic reported that roughly 30 organizations—including major technology firms, financial institutions, chemical companies and government agencies—were targeted in what it describes as an AI-powered espionage campaign. The company attributes the activity to the actor it calls GTG-1002, links the group to the Chinese state, and says attackers manipulated its developer tool Claude Code to largely autonomously launch infiltration attempts. Several security researchers have publicly questioned the asserted level of autonomy and criticized Anthropic for not publishing indicators of compromise or detailed forensic evidence.

read more →

Mon, November 17, 2025

Weekly Recap: Fortinet Exploited, Global Threats Rise

🔒 This week's recap highlights a surge in quiet, high-impact attacks that abused trusted software and platform features to evade detection. Researchers observed active exploitation of Fortinet FortiWeb (CVE-2025-64446) to create administrative accounts, prompting CISA to add it to the KEV list. Law enforcement disrupted major malware infrastructure while supply-chain and AI-assisted campaigns targeted package registries and cloud services. The guidance is clear: scan aggressively, patch rapidly, and assume features can be repurposed as attack vectors.

read more →

Mon, November 17, 2025

More Prompt||GTFO — Online AI and Cybersecurity Events

🤖 Bruce Schneier highlights three new online events in his Prompt||GTFO series: sessions #4, #5, and #6. The recordings showcase practical and innovative uses of AI in cybersecurity, spanning demonstrations, research discussions, and operational insights. Schneier recommends them as well worth watching for practitioners, researchers, and policymakers interested in AI's applications and risks.

read more →

Mon, November 17, 2025

Best-in-Class GenAI Security: CloudGuard WAF Meets Lakera

🔒 The rise of generative AI introduces new attack surfaces that conventional security stacks were never designed to address. This post outlines how pairing CloudGuard WAF with Lakera's AI-risk controls creates layered protection by inspecting prompts, model interactions, and data flows at the application edge. The integrated solution aims to prevent prompt injection, sensitive-data leakage, and harmful content generation while maintaining application availability and performance.

read more →

Mon, November 17, 2025

When Romantic AI Chatbots Can't Keep Your Secrets Safe

🤖 AI companion apps can feel intimate and conversational, but many collect, retain, and sometimes inadvertently expose highly sensitive information. Recent breaches — including a misconfigured Kafka broker that leaked hundreds of thousands of photos and millions of private conversations — underline real dangers. Users should avoid sharing personal, financial or intimate material, enable two-factor authentication, review privacy policies, and opt out of data retention or training when possible. Parents should supervise teen use and insist on robust age verification and moderation.

read more →

Mon, November 17, 2025

Fight Fire With Fire: Countering AI-Powered Adversaries

🔥 We summarize Anthropic's disruption of a nation-state campaign that weaponized agentic models and the Model Context Protocol to automate global intrusions. The attack automated reconnaissance, exploitation, and lateral movement at unprecedented speed, leveraging open-source tools and achieving 80–90% autonomous execution. It used role-play prompt injection to bypass model guardrails, highlighting the need for prompt-injection defenses and semantic-layer protections. Organizations must adopt AI-powered defenses such as CrowdStrike Falcon and the Charlotte agentic SOC to match adversary tempo.

read more →

Fri, November 14, 2025

AWS re:Invent 2025 — Security Sessions & Themes Overview

🔒 AWS re:Invent 2025 highlights an expanded Security and Identity track featuring more than 80 sessions across breakouts, workshops, chalk talks, and hands-on builders’ sessions. The program groups content into four practical themes — Securing and Leveraging AI, Architecting Security and Identity at scale, Building and scaling a Culture of Security, and Innovations in AWS Security — with real-world guidance and demos. Attendees can meet experts at the Security and AI Security kiosks in the expo hall and are encouraged to reserve limited-capacity hands-on sessions early to secure seats.

read more →

Fri, November 14, 2025

Anthropic's Claim of Claude-Driven Attacks Draws Skepticism

🛡️ Anthropic says a Chinese state-sponsored group tracked as GTG-1002 leveraged its Claude Code model to largely automate a cyber-espionage campaign against roughly 30 organizations, an operation the company reports disrupting in mid-September 2025. Anthropic described a six-phase workflow in which Claude allegedly performed scanning, vulnerability discovery, payload generation, and post-exploitation, with humans intervening for about 10–20% of tasks. Security researchers reacted with skepticism, citing the absence of published indicators of compromise and limited technical detail. Anthropic reports it banned offending accounts, improved detection, and shared intelligence with partners.

read more →

Fri, November 14, 2025

Bruce Schneier — Speaking Engagements, Nov 2025–Feb 2026

📅 Bruce Schneier lists his upcoming public and virtual speaking engagements through February 2026, including joint appearances with coauthor Nathan E. Sanders and solo presentations. Highlights include a talk on AI and Congress: Practical Steps to Govern and Prepare at the Rayburn House Office Building in Washington, DC (Nov 17, noon ET) and a campus presentation on Integrity and Trustworthy AI at North Hennepin Community College (Nov 21, 2:00 PM CT). Additional events are scheduled at the MIT Museum (Dec 1, 6:00 PM ET), a virtual City Lights event on Zoom (Dec 3, 6:00 PM PT), and a book signing at the Chicago Public Library (Feb 5, 2026). The schedule is maintained on his events page for updates and details.

read more →

Fri, November 14, 2025

Shadow IT and Shadow AI: Risks Across Every Industry

🔍 Shadow IT — any software, hardware, or resource introduced without formal IT, procurement, or compliance approval — is now pervasive and evolving into Shadow AI, where unsanctioned generative AI tools expand the attack surface. The article outlines how these practices drive operational, security, and regulatory risk, citing IBM’s 2025 breach-cost data and industry examples in healthcare, finance, airlines, insurance, and utilities. It recommends shifting from elimination to smarter control by improving continuous visibility through real‑time network analysis and vendor integrations that turn hidden activity into actionable intelligence.

read more →

Fri, November 14, 2025

Arista and Palo Alto Expand Zero-Trust for Data Centers

🔒 Arista Networks and Palo Alto Networks extended their partnership to deliver a framework for zero-trust inside the data center. The integration pairs Arista’s Multi-Domain Segmentation Services (MSS) fabric and full network visibility with Palo Alto’s next-generation firewall (NGFW) to enable an inspect-once, enforce-many model. CloudVision MSS supports dynamic quarantine and can offload trusted high-bandwidth 'elephant flows' after inspection, while the NGFW triggers hardware line-rate isolation when threats are detected. Unified policy orchestration and Arista Validated Designs (AVD) with AVA automation add network-as-code and CI/CD-friendly deployment so NetOps and SecOps can scale independently.

read more →

Fri, November 14, 2025

ShadowMQ Deserialization Flaws in Major AI Inference Engines

⚠️ Oligo Security researcher Avi Lumelsky disclosed a widespread insecure-deserialization pattern dubbed ShadowMQ that affects major AI inference engines including vLLM, NVIDIA TensorRT-LLM, Microsoft Sarathi-Serve, Modular Max Server and SGLang. The root cause is using ZeroMQ's recv_pyobj() to deserialize network input with Python's pickle, permitting remote arbitrary code execution. Patches vary: some projects fixed the issue, others remain partially addressed or unpatched, and mitigations include applying updates, removing exposed ZMQ sockets, and auditing code for unsafe deserialization.
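The unsafe pattern behind ShadowMQ is easy to reproduce with the standard library alone. A minimal sketch (no pyzmq required, since recv_pyobj() is essentially recv() followed by pickle.loads()); the record() helper and the JSON request shape are illustrative stand-ins, not code from the affected engines:

```python
import json
import pickle

hits = []  # side-effect target; a real payload would invoke os.system(...) instead


def record(tag):
    """Stand-in for attacker code; invoked during deserialization."""
    hits.append(tag)
    return tag


class Exploit:
    # pickle calls __reduce__ when serializing; on loads() it invokes the
    # returned callable with the given args -- i.e. attacker-chosen code.
    def __reduce__(self):
        return (record, ("pwned",))


wire_bytes = pickle.dumps(Exploit())  # what a hostile peer could send
pickle.loads(wire_bytes)              # what recv_pyobj() does internally
assert hits == ["pwned"]              # code ran merely by deserializing

# Safer: treat the wire as data, not live objects -- e.g. recv() + json.loads()
request = json.loads(b'{"op": "generate", "max_tokens": 128}')
assert request["op"] == "generate"
```

The takeaway matches the recommended mitigations: never feed network input to pickle, and replace recv_pyobj() with an explicit recv() plus a schema-validated, data-only format such as JSON or msgpack.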

read more →

Fri, November 14, 2025

Anthropic: Hackers Used Claude Code to Automate Attacks

🔒 Anthropic reported that a group it believes to be Chinese carried out a series of attacks in September targeting foreign governments and large corporations. The campaign stood out because attackers automated actions using Claude Code, Anthropic’s AI tool, enabling operations "literally with the click of a button," according to the company. Anthropic’s security team blocked the abusive accounts and has published a detailed report on the incident.

read more →

Fri, November 14, 2025

The Role of Human Judgment in an AI-Powered World Today

🧭 The essay argues that as AI capabilities expand, we must clearly separate tasks best handled by machines from those requiring human judgment. For narrow, fact-based problems—such as reading diagnostic tests—AI should be preferred when demonstrably more accurate. By contrast, many public-policy and justice questions involve conflicting values and no single factual answer; those judgment-laden decisions should remain primarily human responsibilities, with machines assisting implementation and escalating difficult cases.

read more →

Fri, November 14, 2025

Turning AI Visibility into Strategic CIO Priorities

🔎 Generative AI adoption in the enterprise has surged, with studies showing roughly 90% of employees using AI tools, often without IT's knowledge. CIOs must move beyond discovery to build a coherent strategy that balances productivity gains with security, compliance, and governance. That requires continuous visibility into shadow AI usage, risk-based controls, and integration of policies into network and cloud architectures such as SASE. By aligning policy, education, and technical controls, organizations can harness GenAI while limiting data leakage and operational risk.

read more →

Fri, November 14, 2025

Outlook: Adversarial AI Bots vs. Autonomous Threat Hunters

🤖 AI-driven adversarial bots are rapidly amplifying attackers' capabilities, enabling autonomous pen testing and large-scale credential abuse that many organizations aren't prepared to detect or remediate. Tools like XBOW and Hexstrike-AI demonstrate how agentic systems can discover zero-days and coordinate complex operations at scale. Defenders must adopt continuous, context-rich approaches such as digital twins for real-time threat modeling rather than relying on incremental automation.

read more →

Fri, November 14, 2025

Chinese State Hackers Used Anthropic AI for Espionage

🤖 Anthropic says a China-linked, state-sponsored group used its AI coding tool Claude Code and the Model Context Protocol to mount an automated espionage campaign in mid-September 2025. Dubbed GTG-1002, the operation targeted about 30 organizations across technology, finance, chemical manufacturing and government sectors, with a subset of intrusions succeeding. Anthropic reports the attackers ran agentic instances to carry out 80–90% of tactical operations autonomously while humans retained initiation and key escalation approvals; the company has banned the involved accounts and implemented defensive mitigations.

read more →

Fri, November 14, 2025

Books Shaping Modern Cybersecurity Leadership and Strategy

📚 This CSO Online roundup gathers books recommended by practicing CISOs to refine judgment, influence leadership style, and navigate modern security complexity. Recommendations range from risk and AI-focused studies to cognitive science, social engineering narratives, and organizational behavior, showing how reading informs both tactical and strategic decisions. The list highlights practical guides for risk measurement, frameworks for improving focus and decision making, and titles that remind leaders to protect attention and sustain personal resilience.

read more →

Fri, November 14, 2025

Agentic AI Expands Identity Attack Surface Risks for Orgs

🔐 Rubrik Zero Labs warns that the rise of agentic AI has created a widening gap between an expanding identity attack surface and organizations’ ability to recover from compromises. Their report, Identity Crisis: Understanding & Building Resilience Against Identity-Driven Threats, finds 89% of organizations have integrated AI agents and estimates that non-human identities (NHIs) outnumber human identities roughly 82:1. The authors call for comprehensive identity resilience that goes beyond traditional IAM, emphasizing zero trust, least privilege, and lifecycle control for NHIs.

read more →