
All news with the #ai security tag

Fri, November 14, 2025

Bruce Schneier — Speaking Engagements, Nov 2025–Feb 2026

📅 Bruce Schneier lists his upcoming public and virtual speaking engagements through February 2026, including joint appearances with coauthor Nathan E. Sanders and solo presentations. Highlights include a talk on AI and Congress: Practical Steps to Govern and Prepare at the Rayburn House Office Building in Washington, DC (Nov 17, noon ET) and a campus presentation on Integrity and Trustworthy AI at North Hennepin Community College (Nov 21, 2:00 PM CT). Additional events are scheduled at the MIT Museum (Dec 1, 6:00 PM ET), a virtual City Lights event on Zoom (Dec 3, 6:00 PM PT), and a book signing at the Chicago Public Library (Feb 5, 2026). The schedule is maintained on his events page for updates and details.

read more →

Fri, November 14, 2025

Shadow IT and Shadow AI: Risks Across Every Industry

🔍 Shadow IT — any software, hardware, or resource introduced without formal IT, procurement, or compliance approval — is now pervasive and evolving into Shadow AI, where unsanctioned generative AI tools expand the attack surface. The article outlines how these practices drive operational, security, and regulatory risk, citing IBM’s 2025 breach-cost data and industry examples in healthcare, finance, airlines, insurance, and utilities. It recommends shifting from elimination to smarter control by improving continuous visibility through real‑time network analysis and vendor integrations that turn hidden activity into actionable intelligence.

read more →

Fri, November 14, 2025

Arista and Palo Alto Expand Zero-Trust for Data Centers

🔒 Arista Networks and Palo Alto Networks extended their partnership to deliver a framework for zero-trust inside the data center. The integration pairs Arista’s Multi-Domain Segmentation Services (MSS) fabric and full network visibility with Palo Alto’s next-generation firewall (NGFW) to enable an inspect-once, enforce-many model. CloudVision MSS supports dynamic quarantine and can offload trusted high-bandwidth 'elephant flows' after inspection, while the NGFW triggers hardware line-rate isolation when threats are detected. Unified policy orchestration and Arista Validated Designs (AVD) with AVA automation add network-as-code and CI/CD-friendly deployment so NetOps and SecOps can scale independently.

read more →

Fri, November 14, 2025

ShadowMQ Deserialization Flaws in Major AI Inference Engines

⚠️ Oligo Security researcher Avi Lumelsky disclosed a widespread insecure-deserialization pattern dubbed ShadowMQ that affects major AI inference engines including vLLM, NVIDIA TensorRT-LLM, Microsoft Sarathi-Serve, Modular Max Server, and SGLang. The root cause is using ZeroMQ's recv_pyobj() to deserialize network input with Python's pickle, permitting remote arbitrary code execution. Patches vary: some projects have fixed the issue, while others remain partially addressed or unpatched. Mitigations include applying updates, removing exposed ZMQ sockets, and auditing code for unsafe deserialization.

read more →

Fri, November 14, 2025

Anthropic: Hackers Used Claude Code to Automate Attacks

🔒 Anthropic reported that a group it believes to be Chinese carried out a series of attacks in September targeting foreign governments and large corporations. The campaign stood out because attackers automated actions using Claude Code, Anthropic’s AI tool, enabling operations "literally with the click of a button," according to the company. Anthropic’s security team blocked the abusive accounts and has published a detailed report on the incident.

read more →

Fri, November 14, 2025

The Role of Human Judgment in an AI-Powered World Today

🧭 The essay argues that as AI capabilities expand, we must clearly separate tasks best handled by machines from those requiring human judgment. For narrow, fact-based problems—such as reading diagnostic tests—AI should be preferred when demonstrably more accurate. By contrast, many public-policy and justice questions involve conflicting values and no single factual answer; those judgment-laden decisions should remain primarily human responsibilities, with machines assisting implementation and escalating difficult cases.

read more →

Fri, November 14, 2025

Turning AI Visibility into Strategic CIO Priorities

🔎 Generative AI adoption in the enterprise has surged, with studies showing roughly 90% of employees use AI tools, often without IT's knowledge. CIOs must move beyond discovery to build a coherent strategy that balances productivity gains with security, compliance, and governance. That requires continuous visibility into shadow AI usage, risk-based controls, and integration of policies into network and cloud architectures such as SASE. By aligning policy, education, and technical controls, organizations can harness GenAI while limiting data leakage and operational risk.

read more →

Fri, November 14, 2025

Adversarial AI Bots vs Autonomous Threat Hunters Outlook

🤖 AI-driven adversarial bots are rapidly amplifying attackers' capabilities, enabling autonomous pen testing and large-scale credential abuse that many organizations aren't prepared to detect or remediate. Tools like XBOW and Hexstrike-AI demonstrate how agentic systems can discover zero-days and coordinate complex operations at scale. Defenders must adopt continuous, context-rich approaches such as digital twins for real-time threat modeling rather than relying on incremental automation.

read more →

Fri, November 14, 2025

Chinese State Hackers Used Anthropic AI for Espionage

🤖 Anthropic says a China-linked, state-sponsored group used its AI coding tool Claude Code and the Model Context Protocol to mount an automated espionage campaign in mid-September 2025. Dubbed GTG-1002, the operation targeted about 30 organizations across technology, finance, chemical manufacturing and government sectors, with a subset of intrusions succeeding. Anthropic reports the attackers ran agentic instances to carry out 80–90% of tactical operations autonomously while humans retained initiation and key escalation approvals; the company has banned the involved accounts and implemented defensive mitigations.

read more →

Fri, November 14, 2025

Books Shaping Modern Cybersecurity Leadership and Strategy

📚 This CSO Online roundup gathers books recommended by practicing CISOs to refine judgment, influence leadership style, and navigate modern security complexity. Recommendations range from risk and AI-focused studies to cognitive science, social engineering narratives, and organizational behavior, showing how reading informs both tactical and strategic decisions. The list highlights practical guides for risk measurement, frameworks for improving focus and decision making, and titles that remind leaders to protect attention and sustain personal resilience.

read more →

Fri, November 14, 2025

Agentic AI Expands Identity Attack Surface Risks for Orgs

🔐 Rubrik Zero Labs warns that the rise of agentic AI has created a widening gap between an expanding identity attack surface and organizations’ ability to recover from compromises. Their report, Identity Crisis: Understanding & Building Resilience Against Identity-Driven Threats, finds 89% of organizations have integrated AI agents and estimates that non-human identities (NHIs) outnumber humans roughly 82:1. The authors call for comprehensive identity resilience—beyond traditional IAM—emphasizing zero trust, least privilege, and lifecycle control for NHIs.

read more →

Thu, November 13, 2025

AI Sidebar Spoofing Targets Comet and Atlas Browsers

⚠️ Security researchers disclosed a novel attack called AI sidebar spoofing that allows malicious browser extensions to place counterfeit in‑page AI assistants that visually mimic legitimate sidebars. Demonstrated against Comet and confirmed for Atlas, the extension injects JavaScript, forwards queries to a real LLM when requested, and selectively alters replies to inject phishing links, malicious OAuth prompts, or harmful terminal commands. Users who install extensions without scrutiny face a tangible risk.

read more →

Thu, November 13, 2025

Google Announces Unified Security Recommended Program

🔒 Google Cloud is launching the Google Unified Security Recommended program to validate deep integrations between its security portfolio and third-party vendors. Inaugural partners CrowdStrike, Fortinet, and Wiz bring endpoint, network, and multicloud CNAPP capabilities into Google Security Operations. Partners commit to cross-product technical integration, a collaborative support model, and investment in AI initiatives such as the Model Context Protocol (MCP). Qualified solutions will be available via Google Cloud Marketplace for simplified procurement and consolidated billing.

read more →

Thu, November 13, 2025

Rogue MCP Servers Can Compromise Cursor's Embedded Browser

⚠️ Security researchers demonstrated that a rogue Model Context Protocol (MCP) server can inject JavaScript into the built-in browser of Cursor, an AI-powered code editor, replacing pages with attacker-controlled content to harvest credentials. The injected code can run without URL changes and may access session cookies. Because Cursor is a Visual Studio Code fork without the same integrity checks, MCP servers inherit IDE privileges, enabling broader workstation compromise.

read more →

Thu, November 13, 2025

Google Cloud Expands Hugging Face Support for AI Developers

🤝 Google Cloud and Hugging Face are deepening their partnership to speed developer workflows and strengthen enterprise model deployments. A new gateway will cache Hugging Face models and datasets on Google Cloud so downloads take minutes, not hours, across Vertex AI and Google Kubernetes Engine. The collaboration adds native TPU support for open models and integrates Google Cloud’s threat intelligence and Mandiant scanning for models served through Vertex AI.

read more →

Thu, November 13, 2025

Machine-Speed Security: Patching Faster Than Attacks

⚡ Attackers are weaponizing many newly disclosed CVEs within hours, forcing defenders to close the gap by moving beyond manual triage to automated remediation. Drawing on 2025 industry reports and CISA and Mandiant observations, the article notes roughly 50–61% of new vulnerabilities see exploit code within 48 hours. It urges adoption of policy-driven automation, controlled rollback, and streamlined change processes to shorten exposure windows while preserving operational stability.

read more →

Thu, November 13, 2025

ThreatsDay Bulletin: Key Cybersecurity Developments

🔐 This ThreatsDay Bulletin surveys major cyber activity shaping November 2025, from exploited Cisco zero‑days and active malware campaigns to regulatory moves and AI-related leaks. Highlights include CISA's emergency directive after some Cisco updates remained vulnerable, a large-scale study finding 65% of AI firms leaked secrets on GitHub, and a prolific phishing operation abusing Facebook Business Suite. The roundup stresses practical mitigations—verify patch versions, enable secret scanning, and strengthen incident reporting and red‑teaming practices.

read more →

Thu, November 13, 2025

What CISOs Should Know About Securing MCP Servers Now

🔒 The Model Context Protocol (MCP) enables AI agents to connect to data sources, but early specifications lacked robust protections, leaving deployments exposed to prompt injection, token theft, and tool poisoning. Recent protocol updates — including OAuth, third‑party identity provider support, and an official MCP registry — plus vendor tooling from hyperscalers and startups have improved defenses. Still, authentication remains optional and gaps persist, so organizations should apply zero trust and least‑privilege controls, enforce strong secrets management and logging, and consider specialist MCP security solutions before production rollout.

read more →

Thu, November 13, 2025

From Vulnerability Management to Exposure Platform

🛡️ CrowdStrike argues legacy vulnerability management cannot keep pace with AI-accelerated adversaries. Their Falcon Exposure Management platform leverages a single lightweight sensor to deliver continuous, native visibility across endpoints, cloud, and network assets. It pairs adversary-aware risk prioritization with agentic automation and Charlotte Agentic SOAR to reduce manual triage and remediate high-risk exposures quickly. The emphasis is on speeding effective action, cutting tool sprawl, and focusing teams on the small subset of issues that drive most breach risk.

read more →

Thu, November 13, 2025

Smashing Security Ep. 443: Tinder, Buffett Deepfake

🎧 In episode 443 of Smashing Security, host Graham Cluley and guest Ron Eddings examine Tinder’s proposal to scan users’ camera rolls and the emergence of convincing Warren Buffett deepfakes offering investment advice. They discuss the privacy, consent, and fraud implications of platform-level image analysis and the risks posed by synthetic media. The conversation also covers whether agentic AI could replace human co-hosts, the idea of EDR for robots, and practical steps to mitigate these threats. Lighter topics, such as Lily Allen’s new album and the release of Claude Code, round out the episode.

read more →