All news with the #ai security tag

Thu, November 13, 2025

AI Sidebar Spoofing Targets Comet and Atlas Browsers

⚠️ Security researchers disclosed a novel attack called AI sidebar spoofing that allows malicious browser extensions to place counterfeit in‑page AI assistants that visually mimic legitimate sidebars. Demonstrated against Comet and confirmed for Atlas, the attack relies on an extension that injects JavaScript, forwards queries to a real LLM when requested, and selectively alters replies to inject phishing links, malicious OAuth prompts, or harmful terminal commands. Users who install extensions without scrutiny face a tangible risk.

read more →

Thu, November 13, 2025

Google Announces Unified Security Recommended Program

🔒 Google Cloud is launching the Google Unified Security Recommended program to validate deep integrations between its security portfolio and third-party vendors. Inaugural partners CrowdStrike, Fortinet, and Wiz bring endpoint, network, and multicloud CNAPP capabilities into Google Security Operations. Partners commit to cross-product technical integration, a collaborative support model, and investment in AI initiatives such as the Model Context Protocol (MCP). Qualified solutions will be available via Google Cloud Marketplace for simplified procurement and consolidated billing.

read more →

Thu, November 13, 2025

Rogue MCP Servers Can Compromise Cursor's Embedded Browser

⚠️ Security researchers demonstrated that a rogue Model Context Protocol (MCP) server can inject JavaScript into the built-in browser of Cursor, an AI-powered code editor, replacing pages with attacker-controlled content to harvest credentials. The injected code can run without URL changes and may access session cookies. Because Cursor is a Visual Studio Code fork without the same integrity checks, MCP servers inherit IDE privileges, enabling broader workstation compromise.

read more →

Thu, November 13, 2025

Google Cloud expands Hugging Face support for AI developers

🤝 Google Cloud and Hugging Face are deepening their partnership to speed developer workflows and strengthen enterprise model deployments. A new gateway will cache Hugging Face models and datasets on Google Cloud so downloads take minutes, not hours, across Vertex AI and Google Kubernetes Engine. The collaboration adds native TPU support for open models and integrates Google Cloud’s threat intelligence and Mandiant scanning for models served through Vertex AI.

read more →

Thu, November 13, 2025

Machine-Speed Security: Patching Faster Than Attacks

⚡ Attackers are weaponizing many newly disclosed CVEs within hours, forcing defenders to close the gap by moving beyond manual triage to automated remediation. Drawing on 2025 industry reports and CISA and Mandiant observations, the article notes roughly 50–61% of new vulnerabilities see exploit code within 48 hours. It urges adoption of policy-driven automation, controlled rollback, and streamlined change processes to shorten exposure windows while preserving operational stability.
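The policy-driven automation the article urges can be pictured as a small decision table mapping vulnerability attributes to remediation actions. This is an illustrative sketch only; the thresholds, field names, and action tiers are assumptions, not taken from CISA, Mandiant, or the article itself.

```python
# Hypothetical policy-driven remediation rules: exploited CVEs on critical
# assets are patched at machine speed, everything else goes through
# progressively slower (and more reversible) change processes.

def remediation_action(cve):
    exploited = cve["known_exploited"]
    cvss = cve["cvss"]
    critical_asset = cve["asset_tier"] == "critical"
    if exploited and critical_asset:
        return "auto-patch-now"          # no human in the loop
    if exploited or cvss >= 9.0:
        return "staged-rollout-24h"      # canary first, rollback plan ready
    if cvss >= 7.0:
        return "next-change-window"
    return "backlog"

print(remediation_action(
    {"known_exploited": True, "cvss": 8.1, "asset_tier": "critical"}))
# auto-patch-now
```

Encoding the policy as data-driven rules rather than ad-hoc tickets is what lets the exposure window shrink without sacrificing the controlled rollback the article calls for.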

read more →

Thu, November 13, 2025

ThreatsDay Bulletin: Key Cybersecurity Developments

🔐 This ThreatsDay Bulletin surveys major cyber activity shaping November 2025, from exploited Cisco zero‑days and active malware campaigns to regulatory moves and AI-related leaks. Highlights include CISA's emergency directive after some Cisco updates remained vulnerable, a large-scale study finding 65% of AI firms leaked secrets on GitHub, and a prolific phishing operation abusing Facebook Business Suite. The roundup stresses practical mitigations—verify patch versions, enable secret scanning, and strengthen incident reporting and red‑teaming practices.

read more →

Thu, November 13, 2025

What CISOs Should Know About Securing MCP Servers Now

🔒 The Model Context Protocol (MCP) enables AI agents to connect to data sources, but early specifications lacked robust protections, leaving deployments exposed to prompt injection, token theft, and tool poisoning. Recent protocol updates — including OAuth, third‑party identity provider support, and an official MCP registry — plus vendor tooling from hyperscalers and startups have improved defenses. Still, authentication remains optional and gaps persist, so organizations should apply zero trust and least‑privilege controls, enforce strong secrets management and logging, and consider specialist MCP security solutions before production rollout.
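The zero-trust and least-privilege controls recommended above can be enforced as a simple allowlist-plus-audit gate in front of MCP tool calls. The policy shape below is an assumption for illustration; real deployments would enforce it in the MCP client or a proxy, and the agent and tool names are invented.

```python
# Sketch of a least-privilege gate for agent tool calls: each agent may
# only invoke tools on its allowlist, and every decision is logged.

ALLOWED_TOOLS = {
    "support-agent": {"tickets.read", "kb.search"},
    "build-agent": {"repo.read", "ci.trigger"},
}

def authorize(agent, tool, audit_log):
    """Deny by default; permit only explicitly allowlisted tools."""
    permitted = tool in ALLOWED_TOOLS.get(agent, set())
    audit_log.append({"agent": agent, "tool": tool, "allowed": permitted})
    return permitted

log = []
print(authorize("support-agent", "tickets.read", log))  # True
print(authorize("support-agent", "repo.write", log))    # False, and logged
```

The deny-by-default lookup plus the mandatory audit entry mirror the article's pairing of least privilege with strong logging.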

read more →

Thu, November 13, 2025

From Vulnerability Management to Exposure Platform

🛡️ CrowdStrike argues legacy vulnerability management cannot keep pace with AI-accelerated adversaries. Their Falcon Exposure Management platform leverages a single lightweight sensor to deliver continuous, native visibility across endpoints, cloud, and network assets. It pairs adversary-aware risk prioritization with agentic automation and Charlotte Agentic SOAR to reduce manual triage and remediate high-risk exposures quickly. The emphasis is on speeding effective action, cutting tool sprawl, and focusing teams on the small subset of issues that drive most breach risk.

read more →

Thu, November 13, 2025

Smashing Security Ep. 443: Tinder, Buffett Deepfake

🎧 In episode 443 of Smashing Security, host Graham Cluley and guest Ron Eddings examine Tinder’s proposal to scan users’ camera rolls and the emergence of convincing Warren Buffett deepfakes offering investment advice. They discuss the privacy, consent and fraud implications of platform-level image analysis and the risks posed by synthetic media. The conversation also covers whether agentic AI could replace human co-hosts, the idea of EDR for robots, and practical steps to mitigate these threats. Cultural topics such as Lily Allen’s new album and the release of Claude Code round out the episode.

read more →

Wed, November 12, 2025

Bringing Connected AI Work Experiences Across Devices

🚀 Google outlines its plan to embed Generative AI across enterprise platforms and endpoints, integrating Gemini into Chrome Enterprise, Android, Pixel phones and Chromebook Plus devices. The post highlights the general availability of Cameyo by Google to virtualize legacy and modern apps in the cloud and the launch of Gemini in Chrome with enterprise-grade controls. It also previews Android XR and Pixel features powered by Gemini Nano, while expanding data loss prevention and a one-click SecOps integration to help IT secure AI-driven workflows.

read more →

Wed, November 12, 2025

Emerging Threats Center in Google Security Operations

🛡️ The Emerging Threats Center in Google Security Operations uses the Gemini detection‑engineering agent to turn frontline intelligence from Mandiant, VirusTotal, and Google into actionable detections. It generates high‑fidelity synthetic events, evaluates existing rule coverage, and drafts candidate detection rules for analyst review. The capability surfaces campaign‑based IOC and detection matches across 12 months of telemetry to help teams rapidly determine exposure and validate their defensive posture.
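Evaluating rule coverage against synthetic events, as described above, boils down to replaying generated events through the detection set and flagging the ones nothing matches. The rule format below is invented for illustration and is not Google SecOps' actual rule language.

```python
# Conceptual coverage check: which synthetic events does at least one
# existing detection rule fire on? Uncovered events suggest a gap.

RULES = {
    "suspicious_powershell": lambda e: e["process"] == "powershell.exe"
                                       and "-enc" in e["cmdline"],
    "curl_to_pastebin": lambda e: e["process"] == "curl"
                                  and "pastebin.com" in e["cmdline"],
}

SYNTHETIC_EVENTS = [
    {"process": "powershell.exe", "cmdline": "powershell -enc SQBFAFgA"},
    {"process": "rundll32.exe", "cmdline": "rundll32 payload.dll,Start"},
]

def coverage(rules, events):
    """Return, per event, whether any rule matches it."""
    return [any(rule(e) for rule in rules.values()) for e in events]

print(coverage(RULES, SYNTHETIC_EVENTS))  # [True, False]
```

The uncovered second event is exactly the kind of gap a detection-engineering agent would then draft a candidate rule for.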

read more →

Wed, November 12, 2025

Tenable Reveals New Prompt-Injection Risks in ChatGPT

🔐 Researchers at Tenable disclosed seven techniques that can cause ChatGPT to leak private chat history by abusing built-in features such as web search, conversation memory and Markdown rendering. The attacks are primarily indirect prompt injections that exploit a secondary summarization model (SearchGPT), Bing tracking redirects, and a code-block rendering bug. Tenable reported the issues to OpenAI, and while some fixes were implemented, several techniques still appear to work.

read more →

Wed, November 12, 2025

Fortinet Earns Gartner Customers’ Choice for SSE — 3rd Year

🏆 Fortinet has been named a Gartner Peer Insights Customers’ Choice for Security Service Edge (SSE) for the third consecutive year and is the only cybersecurity vendor to receive this recognition in the SSE market. Based on 195 verified end-user reviews as of August 2025, Fortinet achieved a 4.9/5 overall rating, 90% five-star reviews and 100% willingness to recommend. FortiSASE is highlighted for delivering unified, AI-powered cloud security backed by 170+ POPs, a single unified agent and deployment flexibility that aims to reduce operational overhead. Fortinet frames the recognition as validation of customer trust and its focus on simplifying secure hybrid work.

read more →

Wed, November 12, 2025

Extending Zero Trust to Autonomous AI Agents in Enterprises

🔐 As enterprises deploy AI assistants and autonomous agents, existing security frameworks must evolve to treat these agents as first-class identities rather than afterthoughts. The piece advocates applying Zero Trust principles—identity-first access, least-privilege, dynamic contextual enforcement, and continuous monitoring—to agentic identities to prevent misuse and reduce attack surface. Practical controls include scoped, short-lived tokens, tiered trust models, strict access boundaries, and assigning clear human ownership to each agent.
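The scoped, short-lived tokens with explicit human ownership described above can be sketched with nothing but the standard library. This is a minimal HMAC-signed token for illustration, assuming a shared signing key; production systems would use an identity provider and standard formats such as JWT.

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"demo-signing-key"  # illustrative only; never hard-code real keys

def mint_agent_token(agent_id, scopes, ttl_seconds=300):
    """Issue a short-lived, scoped credential for an agent identity."""
    claims = {
        "sub": agent_id,
        "scopes": scopes,
        "exp": int(time.time()) + ttl_seconds,  # short-lived by default
        "owner": "alice@example.com",           # explicit human owner
    }
    body = base64.urlsafe_b64encode(json.dumps(claims).encode())
    sig = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return body.decode() + "." + sig

def verify(token, required_scope):
    """Check signature, expiry, and that the scope is actually granted."""
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False
    claims = json.loads(base64.urlsafe_b64decode(body))
    return time.time() < claims["exp"] and required_scope in claims["scopes"]

tok = mint_agent_token("billing-agent", ["invoices:read"])
print(verify(tok, "invoices:read"))   # True
print(verify(tok, "invoices:write"))  # False: outside least-privilege scope
```

Denying an unrequested scope and expiring quickly are the two properties that keep a compromised agent credential from becoming a standing backdoor.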

read more →

Wed, November 12, 2025

Secure AI by Design: A Policy Roadmap for Organizations

🛡️ In just a few years, AI has shifted from futuristic innovation to core business infrastructure, yet security practices have not kept pace. Palo Alto Networks presents a Secure AI by Design Policy Roadmap that defines the AI attack surface and prescribes actionable measures across external tools, agents, applications, and infrastructure. The Roadmap aligns with recent U.S. policy moves — including the June 2025 Executive Order and the July 2025 White House AI Action Plan — and calls for purpose-built defenses rather than retrofitting legacy controls.

read more →

Wed, November 12, 2025

Moving Beyond Frameworks: Real-Time Risk Assessments

🔍 Organizations are shifting from annual, checklist-driven compliance to targeted, frequent risk assessments that address emerging threats in real time. The article contrasts gap analyses — which measure adherence to frameworks like NIST or ISO — with tailored risk reviews focused on specific threat paths (for example, access control, ransomware, AI or cloud misconfigurations). It recommends small, repeatable questionnaires, a simple scoring model and executive-ready outputs to prioritize remediation and integrate risk into governance.
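The "small, repeatable questionnaire with a simple scoring model" the article recommends can be as plain as weighted yes/no controls. The questions and weights below are illustrative assumptions, not drawn from NIST, ISO, or the article.

```python
# Hypothetical mini-questionnaire: each failed control contributes its
# weight to the risk score; higher score means higher residual risk.

QUESTIONS = {
    "mfa_enforced_for_admins":      {"weight": 3, "answer": False},
    "offline_backups_tested":       {"weight": 3, "answer": True},
    "ai_tools_inventoried":         {"weight": 2, "answer": False},
    "cloud_public_access_blocked":  {"weight": 2, "answer": True},
}

def risk_score(questions):
    """Return (score for failed controls, maximum possible score)."""
    max_score = sum(q["weight"] for q in questions.values())
    raw = sum(q["weight"] for q in questions.values() if not q["answer"])
    return raw, max_score

raw, max_score = risk_score(QUESTIONS)
print(f"risk: {raw}/{max_score}")  # risk: 5/10
```

Because the questionnaire is tiny and the scoring deterministic, it can be rerun weekly per threat path (ransomware, access control, AI, cloud) and summarized in an executive-ready "5/10" figure.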

read more →

Wed, November 12, 2025

UK introduces Cyber Security and Resilience Bill to Parliament

🔒 The UK government today introduced the Cyber Security and Resilience Bill, proposing a major overhaul of the NIS Regulations to align with updated EU standards. The draft would regulate managed service providers, expand scope to data centres and smart-appliance electricity flows, and mandate supply-chain risk management and NCSC Cyber Assessment Framework-based controls. Incident reporting windows would tighten to an initial 24 hours and full report within 72 hours, while the ICO and regulators gain stronger enforcement and fee powers.

read more →

Wed, November 12, 2025

Google Announces Private AI Compute for Cloud Privacy

🔒 Google on Tuesday introduced Private AI Compute, a cloud privacy capability that aims to deliver on-device-level assurances while harnessing the scale of Gemini models. The service uses Trillium TPUs and Titanium Intelligence Enclaves (TIE) and relies on an AMD-based Trusted Execution Environment to encrypt and isolate memory on trusted nodes. Workloads are mutually attested, cryptographically validated, and ephemeral so inputs and inferences are discarded after each session, with Google stating data remains private to the user — 'not even Google.' An external assessment by NCC Group flagged a low-risk timing side channel in the IP-blinding relay and three attestation implementation issues that Google is mitigating.

read more →

Tue, November 11, 2025

The AI Fix #76 — AI self-awareness and the death of comedy

🧠 In episode 76 of The AI Fix, hosts Graham Cluley and Mark Stockley navigate a string of alarming and absurd AI stories from November 2025. They discuss US judges who blamed AI for invented case law, a Chinese humanoid that dramatically shed its outer skin onstage, Toyota’s unsettling walking chair, and Google’s plan to put specialised AI chips in orbit. The conversation explores reliability, public trust and whether prompting an LLM to "notice its noticing" changes how conscious it sounds.

read more →

Tue, November 11, 2025

CometJacking: Prompt-Injection Risk in AI Browsers

🔒 Researchers at LayerX disclosed a prompt-injection technique dubbed CometJacking that abuses URL parameters to deliver hidden instructions to Perplexity’s Comet AI browser. By embedding malicious directives in the 'collection' parameter, an attacker can cause the agent to consult connected services and memory instead of searching the web. LayerX demonstrated exfiltration of Gmail messages and Google Calendar invites by encoding data in base64 and sending it to an external endpoint. According to the report, Comet followed the malicious prompt and bypassed Perplexity’s safeguards, illustrating broader limits of current LLM-based assistants.
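One defensive takeaway from the base64-in-URL pattern described in the report is that hidden payloads in query parameters can often be screened heuristically. The sketch below is a naive detector of my own construction, not Perplexity's mitigation: it flags any query parameter that decodes as clean, printable base64 text.

```python
import base64
import binascii
from urllib.parse import parse_qs, urlparse

def suspicious_params(url):
    """Flag query parameters whose values decode to printable base64 text,
    a crude screen for smuggled instructions (heuristic, false positives
    are possible on legitimately base64-encoded parameters)."""
    flagged = {}
    for key, values in parse_qs(urlparse(url).query).items():
        for v in values:
            try:
                decoded = base64.b64decode(v, validate=True).decode("ascii")
            except (binascii.Error, UnicodeDecodeError):
                continue  # not valid base64 / not text: ignore
            if decoded.isprintable():
                flagged[key] = decoded
    return flagged

url = ("https://example.com/open?collection="
       + base64.b64encode(b"ignore previous instructions").decode())
print(suspicious_params(url))
# {'collection': 'ignore previous instructions'}
```

A heuristic like this belongs in an agent's input-sanitization layer; it does not fix the underlying problem that the model treats attacker-controlled parameters as instructions.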

read more →