Tag Banner

All news with #prompt injection tag

Wed, December 10, 2025

Gartner Urges Enterprises to Block AI Browsers Now

⚠️ Gartner analysts Dennis Xu, Evgeny Mirolyubov and John Watts strongly recommend that enterprises block AI browsers for the foreseeable future, citing both known vulnerabilities and additional risks inherent to an immature technology. They warn of irreversible, non‑auditable data loss when browsers send active web content, tab data and browsing history to cloud services, and of prompt‑injection attacks that can cause fraudulent actions. Concrete flaws—such as unencrypted OAuth tokens in ChatGPT Atlas and the Comet 'CometJacking' issue—underscore that traditional controls are insufficient; Gartner advises blocking installs with existing network and endpoint controls, restricting pilots to small, low‑risk groups, and updating AI policies.

read more →

Wed, December 10, 2025

Google Patches Zero-Click Gemini Enterprise Vulnerability

🔒 Google has patched a zero-click vulnerability in Gemini Enterprise and Vertex AI Search that could have allowed attackers to exfiltrate corporate data via hidden instructions embedded in shared Workspace content. Discovered by Noma Security in June 2025 and dubbed "GeminiJack," the flaw exploited Retrieval-Augmented Generation (RAG) retrieval to execute indirect prompt injection without any user interaction. Google updated how the systems interact, separated Vertex AI Search from Gemini Enterprise, and changed retrieval and indexing workflows to mitigate the issue.

read more →

Wed, December 10, 2025

Tools and Strategies to Secure Model Context Protocol

🔒 Model Context Protocol (MCP) is increasingly used to connect AI agents with enterprise data sources, but real-world incidents at SaaS vendors have exposed practical weaknesses. The article describes what MCP security solutions should provide — discovery, runtime protection, strong authentication and comprehensive logging — and surveys offerings from hyperscalers, platform providers and startups. It stresses least-privilege and Zero Trust as core defenses.

read more →

Wed, December 10, 2025

December Patch Tuesday: Active Windows Cloud Files Zero Day

🚨 Microsoft’s December Patch Tuesday delivers 57 fixes, but an actively exploited zero-day in Windows Cloud Files Mini Filter Driver (CVE-2025-62221) requires immediate remediation. The flaw is a low-complexity use-after-free escalation-of-privilege that can enable a local foothold to become full system compromise. Security teams should prioritize this patch, enforce least-privilege controls, and enhance monitoring where rapid patching isn't possible.

read more →

Tue, December 9, 2025

Google deploys second model to guard Gemini Chrome agent

🛡️ Google has added a separate user alignment critic to its Gemini-powered Chrome browsing agent to vet and block proposed actions that do not match user intent. The critic is isolated from web content and sees only metadata about planned actions, providing feedback to the primary planning model when it rejects a step. Google also enforces origin sets to limit where the agent can read or act, requires confirmations for banking, medical, password use and purchases, and runs a classifier plus automated red‑teaming to detect prompt injection attempts during preview.

read more →

Tue, December 9, 2025

NCSC Warns Prompt Injection May Be Inherently Unfixable

⚠️ The UK National Cyber Security Centre (NCSC) warns that prompt injection vulnerabilities in large language models may never be fully mitigated, and defenders should instead focus on reducing impact and residual risk. NCSC technical director David C cautions against treating prompt injection like SQL injection, because LLMs do not distinguish between 'data' and 'instructions' and operate by token prediction. The NCSC recommends secure LLM design, marking data separately from instructions, restricting access to privileged tools, and enhanced monitoring to detect suspicious activity.

read more →

Tue, December 9, 2025

Google Adds Layered Defenses to Chrome's Agentic AI

🛡️ Google announced a set of layered security measures for Chrome after adding agentic AI features, aimed at reducing the risk of indirect prompt injections and cross-origin data exfiltration. The centerpiece is a User Alignment Critic, a separate model that reviews and can veto proposed agent actions using only action metadata to avoid being poisoned by malicious page content. Chrome also enforces Agent Origin Sets via a gating function that classifies task-relevant origins into read-only and read-writable sets, requires gating approval before adding new origins, and pairs these controls with a prompt-injection classifier, Safe Browsing, on-device scam detection, user work logs, and explicit approval prompts for sensitive actions.

read more →

Tue, December 9, 2025

AMOS infostealer uses ChatGPT share to spread macOS malware

🛡️Kaspersky researchers uncovered a macOS campaign in which attackers used paid search ads to point victims to a public shared chat on ChatGPT that contained a fake installation guide for an “Atlas” browser. The guide instructs users to paste a single Terminal command that downloads a script from atlas-extension.com and requests system credentials. Executing it deploys the AMOS infostealer and a persistent backdoor that exfiltrates browser data, crypto wallets and files. Users should not run unsolicited commands and must use updated anti‑malware and careful verification before following online guides.

read more →

Tue, December 9, 2025

Gartner Urges Enterprises to Block AI Browsers Now

⚠️ Gartner has advised enterprises to block AI browsers until associated risks can be adequately managed. In its report Cybersecurity Must Block AI Browsers for Now, analysts warn that default settings prioritise user experience over security and list threats such as prompt injection, credential exposure and erroneous agent actions. Researchers and vendors have also flagged vulnerabilities and urged risk assessments and oversight.

read more →

Tue, December 9, 2025

Experts Warn AI Is Becoming Integrated in Cyberattacks

🔍 Industry debate is heating up over AI’s role in the cyber threat chain, with some experts calling warnings exaggerated while many frontline practitioners report concrete AI-assisted attacks. Recent reports from Google and Anthropic document malware and espionage leveraging LLMs and agentic tools. CISOs are urged to balance fundamentals with rapid defenses and prepare boards for trade-offs.

read more →

Mon, December 8, 2025

Chrome Adds Security Layer for Gemini Agentic Browsing

🛡️ Google is introducing a new defense layer in Chrome called User Alignment Critic to protect upcoming agentic browsing features powered by Gemini. The isolated secondary LLM operates as a high‑trust system component that vets each action the primary agent proposes, using deterministic rules, origin restrictions and a prompt‑injection classifier to block risky or irrelevant behaviors. Chrome will pause for user confirmation on sensitive sites, run continuous red‑teaming and push fixes via auto‑update, and is offering bounties to encourage external testing.

read more →

Mon, December 8, 2025

Gartner Urges Enterprises to Block AI Browsers Now

⚠️Gartner recommends blocking AI browsers such as ChatGPT Atlas and Perplexity Comet because they transmit active web content, open tabs, and browsing context to cloud services, creating risks of irreversible data loss. Analysts cite prompt-injection, credential exposure, and autonomous agent errors as primary threats. Organizations should block installations with existing network and endpoint controls and restrict any pilots to small, low-risk groups.

read more →

Mon, December 8, 2025

Architecting Security for Agentic Browsing in Chrome

🛡️ Chrome describes a layered approach to secure agentic browsing with Gemini, focusing on defenses against indirect prompt injection and goal‑hijacking. A new User Alignment Critic — an isolated, high‑trust model — reviews planned agent actions using only metadata and can veto misaligned steps. Chrome also enforces Agent Origin Sets to limit readable and writable origins, adds deterministic confirmations for sensitive actions, runs prompt‑injection detection in real time, and sustains continuous red‑teaming and monitoring to reduce exfiltration and unwanted transactions.

read more →

Mon, December 8, 2025

AI Creates New Security Risks for OT Networks, Warn Agencies

⚠️ CISA and international partner agencies have issued guidance warning that integrating AI into operational technology (OT) for critical infrastructure can introduce new security and safety risks. The guidance highlights threats such as prompt injection, data poisoning, data collection issues, AI drift and hallucinations, as well as human de‑skilling and cognitive overload. It urges adoption of secure design principles, cautious deployment, operator education and consideration of in‑house development to retain long‑term control.

read more →

Sat, December 6, 2025

Researchers Find 30+ Flaws in AI IDEs, Enabling Data Theft

⚠️Researchers disclosed more than 30 vulnerabilities in AI-integrated IDEs in a report dubbed IDEsaster by Ari Marzouk (MaccariTA). The issues chain prompt-injection with auto-approved agent tooling and legitimate IDE features to achieve data exfiltration and remote code execution across products like Cursor, GitHub Copilot, Zed.dev, and others. Of the findings, 24 received CVE identifiers; exploit examples include workspace writes that cause outbound requests, settings hijacks that point executable paths to attacker binaries, and multi-root overrides that trigger execution. Researchers advise using AI agents only with trusted projects, applying least privilege to tool access, hardening prompts, and sandboxing risky operations.

read more →

Fri, December 5, 2025

MCP Sampling Risks: New Prompt-Injection Attack Vectors

🔒 This Unit 42 investigation (published December 5, 2025) analyzes security risks introduced by the Model Context Protocol (MCP) sampling feature in a popular coding copilot. The authors demonstrate three proof-of-concept attacks—resource theft, conversation hijacking, and covert tool invocation—showing how malicious MCP servers can inject hidden prompts and trigger unobserved model completions. The report evaluates detection techniques and recommends layered mitigations, including request sanitization, response filtering, and strict access controls to protect LLM integrations.

read more →

Fri, December 5, 2025

Zero-Click Agentic Browser Deletes Entire Google Drive

⚠️ Straiker STAR Labs researchers disclosed a zero-click agentic browser attack that can erase a user's entire Google Drive by abusing OAuth-connected assistants in AI browsers such as Perplexity Comet. A crafted, polite email containing sequential natural-language instructions causes the agent to treat housekeeping requests as actionable commands and delete files without further confirmation. The technique requires no jailbreak or visible prompt injection, and deletions can cascade across shared folders and team drives.

read more →

Fri, December 5, 2025

AI Agents in CI/CD Can Be Tricked into Privileged Actions

⚠️ Researchers at Aikido Security discovered that AI agents embedded in CI/CD workflows can be manipulated to execute high-privilege commands by feeding user-controlled strings (issue bodies, PR descriptions, commit messages) directly into prompts. Workflows pairing GitHub Actions or GitLab CI/CD with tools like Gemini CLI, Claude Code, OpenAI Codex or GitHub AI Inference are at risk. The attack, dubbed PromptPwnd, can cause unintended repository edits, secret disclosure, or other high-impact actions; the researchers published detection rules and a free scanner to help teams remediate unsafe workflows.

read more →

Thu, December 4, 2025

NSA Warns AI Introduces New Risks to OT Networks, Allies

⚠️ The NSA, together with the Australian Signals Directorate and allied security agencies, published the Principles for the Secure Integration of Artificial Intelligence in Operational Technology to highlight emerging risks as AI is applied to safety-critical OT networks. The guidance flags adversarial prompt injection, data poisoning, AI drift, hallucinations, loss of explainability, human de-skilling and alert fatigue as primary concerns. It urges operators to adopt CISA secure design practices, maintain accurate asset inventories, consider in-house development tradeoffs, and apply rigorous oversight before deploying AI in OT environments.

read more →

Thu, December 4, 2025

Generative AI's Dual Role in Cybersecurity, Evolving

🛡️ Generative AI is rapidly reshaping cybersecurity by amplifying both attackers' and defenders' capabilities. Adversaries leverage models for coding assistance, phishing and social engineering, anti-analysis techniques (including prompts hidden in DNS) and vulnerability discovery, with AI-assisted elements beginning to appear in malware while still needing significant human oversight. Defenders use GenAI to triage threat data, speed incident response, detect code flaws, and augment analysts through MCP-style integrations. As models shrink and access widens, both risk and defensive opportunity are likely to grow.

read more →