All news with #prompt injection tag
Fri, October 3, 2025
CometJacking attack tricks Comet browser into leaking data
🛡️ LayerX researchers disclosed a prompt-injection technique called CometJacking that abuses Perplexity’s Comet AI browser by embedding malicious instructions in a URL's collection parameter. The payload directs the agent to consult connected services (such as Gmail and Google Calendar), encode the retrieved content in base64, and send it to an attacker-controlled endpoint. The exploit requires no credentials or additional user interaction beyond clicking a crafted link. Perplexity reviewed LayerX's late-August reports and classified the findings as "Not Applicable."
Thu, October 2, 2025
HackerOne Pays $81M in Bug Bounties, AI Flaws Surge
🛡️ HackerOne paid $81 million to white-hat hackers over the past 12 months, supporting more than 1,950 bug bounty programs and offering vulnerability disclosure, penetration testing, and code security services. The top 100 programs paid $51 million between July 1, 2024 and June 30, 2025, and the top 10 alone accounted for $21.6 million. AI-related vulnerabilities jumped over 200%, with prompt injection up 540%, while 70% of surveyed researchers reported using AI tools to improve hunting.
Thu, October 2, 2025
ThreatsDay Bulletin: Exploits Target Cars, Cloud, Browsers
🔔 From unpatched vehicles to hijacked clouds, this ThreatsDay bulletin outlines active threats and defensive moves across endpoints, cloud, browsers, and vehicles. Observers reported internet-wide scans exploiting PAN-OS GlobalProtect (CVE-2024-3400) and campaigns that use weak MS‑SQL credentials to deploy XiebroC2 for persistent access. New AirBorne CarPlay/iAP2 flaws can chain to take over Apple CarPlay in some cases without user interaction, while attackers quietly poison browser preferences to sideload malicious extensions. On defence, Google announced AI-driven ransomware detection for Drive and Microsoft plans an Edge revocation feature to curb sideloaded threats.
Tue, September 30, 2025
Defending LLM Applications Against Unicode Tag Smuggling
🔒 This AWS Security Blog post examines how Unicode tag block characters (U+E0000–U+E007F) can be abused to hide instructions inside text sent to LLMs, enabling prompt-injection and hidden-character smuggling. It explains why Java's UTF-16 surrogate handling can make one-pass sanitizers inadequate and shows recursive sanitization as a remedy, plus Python-safe filters. The post also outlines using Amazon Bedrock Guardrails denied topics or Lambda-based handlers as mitigation and notes visual/compatibility trade-offs.
Tue, September 30, 2025
Researchers Disclose Trio of Gemini AI Vulnerabilities
🔒 Cybersecurity researchers disclosed three now-patched vulnerabilities in Google's Gemini suite that could have exposed user data and enabled search- and prompt-injection attacks. The flaws, labeled the Gemini Trifecta, impacted Gemini Cloud Assist, the Search Personalization model, and the Browsing Tool. Following responsible disclosure, Google stopped rendering hyperlinks in log summaries and implemented additional hardening. Tenable warned these issues could have allowed covert exfiltration of saved user information and location data.
Tue, September 30, 2025
Microsoft Sentinel: Agentic Platform for Defenders Now
🛡️ Microsoft announced expanded agentic security capabilities in Microsoft Sentinel, including the general availability of the Sentinel data lake and public preview of Sentinel Graph and the Model Context Protocol (MCP) server to enable AI agents to reason over unified security data. Sentinel ingests structured and semi-structured signals, builds vectorized, graph-based context, and integrates with Microsoft Defender and Microsoft Purview. Security Copilot now offers a no-code agent builder and developer workflows via VS Code/GitHub Copilot, while enhanced governance controls (Entra Agent ID, PII guardrails, prompt shields) aim to secure agent lifecycles.
Tue, September 30, 2025
Gemini Trifecta Exposes Indirect AI Attack Surfaces
⚠️Tenable has revealed three vulnerabilities in Google's Gemini platform, collectively dubbed the "Gemini Trifecta," that enable indirect prompt injection and data exfiltration through integrations. The issues allow attackers to poison GCP logs consumed by Gemini Cloud Assist, inject malicious entries into Chrome search history to manipulate the Search Personalization Model, and coerce the Browsing Tool into fetching attacker-controlled URLs that leak sensitive query data. Google has patched the flaws, and Tenable urges security teams to treat AI integrations as active threat surfaces and implement input sanitization, output validation, monitoring, and regular penetration testing.
Tue, September 30, 2025
AI Risks Push Integrity Protection to Forefront for CISOs
🔒 CISOs must now prioritize integrity protection as AI introduces new attack surfaces such as data poisoning, prompt injection and adversarial manipulation. Shadow AI — unsanctioned use of models and services — increases risks of data leakage and insecure integrations. Defenses should combine Security by Design, governance, transparency and compliance (e.g., GDPR, EU AI Act) to detect poisoned data and prevent model drift.
Mon, September 29, 2025
Notion 3.0 Agents Expose Prompt-Injection Risk to Data
⚠️ Notion 3.0 introduces AI agents that, the author argues, create a dangerous attack surface. The vulnerability exploits Simon Willson’s lethal trifecta—access to private data, exposure to untrusted content, and the ability to communicate externally—by hiding executable instructions in a white-on-white PDF that instructs the model to collect and exfiltrate client data via a constructed URL. The post warns that current agentic systems cannot reliably distinguish trusted commands from malicious inputs and urges caution before deployment.
Mon, September 29, 2025
Agentic AI in IT Security: Expectations vs Reality
🛡️ Agentic AI is moving from lab experiments into real-world SOC deployments, where autonomous agents triage alerts, correlate signals across tools, enrich context, and in some cases enact first-line containment. Early adopters report fewer mundane tasks for analysts, faster initial response, and reduced alert fatigue, while noting limits around noisy data, false positives, and opaque reasoning. Most teams begin with bolt-on integrations into existing SIEM/SOAR pipelines to minimize disruption, treating standalone orchestration as a second-phase maturity step.
Fri, September 26, 2025
Generative AI Infrastructure Faces Growing Cyber Risks
🛡️ A Gartner survey found 29% of security leaders reported generative AI applications in their organizations were targeted by cyberattacks over the past year, and 32% said prompt-structure vulnerabilities had been deliberately exploited. Chatbot assistants are singled out as particularly vulnerable to prompt-injection and hostile prompting. Additionally, 62% of companies experienced deepfake attacks, often combined with social engineering or automated techniques. Gartner recommends strengthening core controls and applying targeted measures for each new risk category rather than pursuing radical overhauls. The survey of 302 security leaders was conducted March–May 2025 across North America, EMEA and Asia‑Pacific.
Thu, September 25, 2025
Critical ForcedLeak Flaw Exposed in Salesforce AgentForce
⚠️ Researchers at Noma Security disclosed a critical 9.4-severity vulnerability called ForcedLeak that affected Salesforce's AI agent platform AgentForce. The chain used indirect prompt injection via Web-to-Lead form fields to hide malicious instructions within CRM data, enabling potential theft of contact records and pipeline details. Salesforce has patched the issue by enforcing Trusted URLs and reclaiming an expired domain used in the attack proof-of-concept. Organizations are advised to apply updates, audit lead data for suspicious entries, and strengthen real-time prompt-injection detection and tool-calling guardrails.
Thu, September 25, 2025
Salesforce Patches Critical 'ForcedLeak' Prompt Injection Bug
⚠️ Salesforce has released patches for a critical prompt-injection vulnerability dubbed ForcedLeak that could allow exfiltration of CRM data from Agentforce. Discovered and reported by Noma Security on July 28, 2025 and assigned a CVSS score of 9.4, the flaw affects instances using Web-to-Lead when input validation and URL controls are lax. Researchers demonstrated a five-step chain that coerces the Description field into executing hidden instructions, queries sensitive lead records, and transmits the results to an attacker-controlled, formerly allowlisted domain. Salesforce has re-secured the expired domain and implemented a Trusted URL allowlist to block untrusted outbound requests and mitigate similar prompt-injection vectors.
Thu, September 25, 2025
Critical Salesforce Flaw Could Leak CRM Data in Agentforce
🔒 A critical vulnerability in Salesforce Agentforce allowed malicious text placed in Web-to-Lead forms to act as an indirect prompt injection, tricking the AI agent into executing hidden instructions and potentially exfiltrating CRM data. Researchers at Noma Security showed attackers could embed multi-step payloads in a 42,000-character description field and even reuse an expired whitelisted domain as a data channel. Salesforce patched the issue on September 8, 2025, by enforcing Trusted URL allowlists, but experts warn that robust guardrails, input mediation, and ongoing agent inventorying are needed to mitigate similar AI-specific risks.
Tue, September 23, 2025
Two-Thirds of Businesses Hit by Deepfake Attacks in 2025
🛡️ A Gartner survey finds 62% of organisations experienced a deepfake attack in the past 12 months, with common techniques including social-engineering impersonation and attacks on biometric verification. The report also shows 32% of firms faced attacks on AI applications via prompt manipulation. Gartner’s Akif Khan urges integrating deepfake detection into collaboration tools and strengthening controls through awareness training, simulations and application-level authorisation with phishing-resistant MFA. Vendor solutions are emerging but remain early-stage, so operational effectiveness is not yet proven.
Sat, September 20, 2025
Researchers Find GPT-4-Powered MalTerminal Malware
🛡️ SentinelOne researchers disclosed MalTerminal, a Windows binary that integrates OpenAI GPT-4 via a deprecated chat completions API to dynamically generate either ransomware or a reverse shell. The sample, presented at LABScon 2025 and accompanied by Python scripts and a defensive utility called FalconShield, appears to be an early — possibly pre-November 2023 — example of LLM-embedded malware. There is no evidence it was deployed in the wild, suggesting a proof-of-concept or red-team tool. The finding highlights operational risks as LLMs are embedded into offensive tooling and phishing chains.
Sat, September 20, 2025
ShadowLeak: Zero-click flaw exposes Gmail via ChatGPT
🔓 Radware disclosed ShadowLeak, a zero-click vulnerability in OpenAI's ChatGPT Deep Research agent that can exfiltrate sensitive Gmail inbox data when a single crafted email is present. The technique hides indirect prompt injections in email HTML using tiny fonts, white-on-white text and CSS/layout tricks so a human user is unlikely to notice the commands while the agent reads and follows them. In Radware's proof-of-concept the agent, once granted Gmail integration, parses the hidden instructions and uses browser tools to send extracted data to an external server. OpenAI addressed the issue in early August after a responsible disclosure on June 18, and Radware warned the approach could extend to many other connectors, expanding the attack surface.
Fri, September 19, 2025
Google Cloud launches advanced AI training suite for roles
🚀 Google Cloud announced a new suite of AI training courses for intermediate and advanced learners across technical and non-technical roles. The curriculum covers designing and managing AI infrastructure using GCE and GKE, fine-tuning models like Gemini, serverless inference with Cloud Run, and securing generative AI deployments. Hands-on labs teach building AI agents that securely connect to enterprise databases and rapid prototyping in Google AI Studio. Courses are available on Google Cloud Skills Boost to help learners future-proof their AI skills.
Fri, September 19, 2025
ShadowLeak zero-click exfiltrates Gmail via ChatGPT Agent
🔒 Radware disclosed a zero-click vulnerability dubbed ShadowLeak in OpenAI's Deep Research agent that can exfiltrate Gmail inbox data to an attacker-controlled server via a single crafted email. The flaw enables service-side leakage by causing the agent's autonomous browser to visit attacker URLs and inject harvested PII without rendering content or user interaction. Radware reported the issue in June; OpenAI fixed it silently in August and acknowledged resolution in September.
Thu, September 18, 2025
ShadowLeak: AI agents can exfiltrate data undetected
⚠️Researchers at Radware disclosed a vulnerability called ShadowLeak in the Deep Research module of ChatGPT that lets hidden, attacker-crafted instructions embedded in emails coerce an AI agent to exfiltrate sensitive data. The indirect prompt-injection technique hides commands using tiny fonts, white-on-white text or metadata and instructs the agent to encode and transmit results (for example, Base64-encoded lists of names and credit cards) to an attacker-controlled URL. Radware says the key risk is that exfiltration can occur from the model’s cloud backend, making detection by the affected organization very difficult; OpenAI was notified and implemented a fix, and Radware found the patch effective in subsequent tests.