All news with #prompt injection tag
Wed, September 17, 2025
New LLM Attack Vectors and Practical Security Steps
🔐This article reviews emerging attack vectors against large language model assistants demonstrated in 2025, highlighting research from Black Hat and other teams. Researchers showed how prompt injections or so‑called promptware — hidden instructions embedded in calendar invites, emails, images, or audio — can coerce assistants like Gemini, Copilot, and Claude into leaking data or performing unauthorized actions. Practical mitigations include early threat modeling, role‑based access for agents, mandatory human confirmation for high‑risk operations, vendor audits, and role‑specific employee training.
Wed, September 17, 2025
Deploying Agentic AI: Five Steps for Red-Teaming Guide
🛡️ Enterprises adopting agentic AI must update red‑teaming practices to address a rapidly expanding and interactive attack surface. The article summarizes the Cloud Security Alliance’s Agentic AI Red Teaming Guide and corroborating research that documents prompt injection, multi‑agent manipulation, and authorization hijacking as practical threats. It recommends five pragmatic steps—change attitude, continually test guardrails and governance, broaden red‑team skill sets, widen the solution space, and adopt modern tooling—and highlights open‑source and commercial tools such as AgentDojo and Agentgateway. The overall message: combine automated agents with human creativity, embed security in design, and treat agentic systems as sociotechnical operators rather than simple software.
Mon, September 15, 2025
Code Assistant Risks: Indirect Prompt Injection and Misuse
🛡️ Unit 42 describes how IDE-integrated AI code assistants can be abused to insert backdoors, leak secrets, or produce harmful output by exploiting features like chat, auto-complete, and context attachment. The report highlights an indirect prompt injection vector where attackers contaminate public or third‑party data sources; when that data is attached as context, malicious instructions can hijack the assistant. It recommends reviewing generated code, controlling attached context, adopting standard LLM security practices, and contacting Unit 42 if compromise is suspected.
Mon, September 15, 2025
Kimsuky Uses AI to Forge South Korean Military ID Images
🛡️Researchers at Genians say North Korea’s Kimsuky group used ChatGPT to generate fake South Korean military ID images as part of a targeted spear-phishing campaign aimed at inducing victims to click a malicious link. The emails impersonated a defense-related institution and attached PNG samples later identified as deepfakes with a 98% probability. A bundled file, LhUdPC3G.bat, executed malware that enabled data theft and remote control. Primary targets included researchers, human-rights activists and journalists focused on North Korea.
Fri, September 12, 2025
Cursor Code Editor Flaw Enables Silent Code Execution
⚠ Cursor, an AI-powered fork of Visual Studio Code, ships with Workspace Trust disabled by default, enabling VS Code-style tasks configured with runOptions.runOn: 'folderOpen' to auto-execute when a folder is opened. Oasis Security showed a malicious .vscode/tasks.json can convert a casual repository browse into silent arbitrary code execution with the user's privileges. Users should enable Workspace Trust, audit untrusted projects, or open suspicious repos in other editors to mitigate risk.
Thu, September 11, 2025
AI-Powered Browsers: Security and Privacy Risks in 2026
🔒 An AI-integrated browser embeds large multimodal models into standard web browsers, allowing agents to view pages and perform actions—opening links, filling forms, downloading files—directly on a user’s device. This enables faster, context-aware automation and access to subscription or blocked content, but raises substantial privacy and security risks, including data exfiltration, prompt-injection and malware delivery. Users should demand features like per-site AI controls, choice of local models, explicit confirmation for sensitive actions, and OS-level file restrictions, though no browser currently implements all these protections.
Thu, September 11, 2025
Prompt Injection via Macros Emerges as New AI Threat
🛡️ Enterprises now face attackers embedding malicious prompts in document macros and hidden metadata to manipulate generative AI systems that parse files. Researchers and vendors have identified exploits — including EchoLeak and CurXecute — and a June 2025 Skynet proof-of-concept that target AI-powered parsers and malware scanners. Experts urge layered defenses such as deep file inspection, content disarm and reconstruction (CDR), sandboxing, input sanitization, and strict model guardrails to prevent AI-driven misclassification or data exposure.
Tue, September 9, 2025
The AI Fix #67: AI crowd fakes, gullible agents, scams
🎧 In episode 67 of The AI Fix, Graham Cluley and Mark Stockley examine a mix of quirky and concerning AI developments, from an AI-equipped fax machine to an AI-generated crowd at a Will Smith gig. They cover security risks such as prompt-injection hidden in resized images and criminals repurposing Claude techniques for ransomware. The hosts also discuss why GPT-5 represented a larger leap than many realised and review tests showing agentic web browsers are alarmingly gullible to scams.
Tue, September 9, 2025
Fortinet + AI: Next‑Gen Cloud Security and Protection
🔐 AI adoption in the cloud is accelerating, reshaping workloads and expanding attack surfaces while introducing new risks such as prompt injection, model manipulation, and data exfiltration. Fortinet recommends a layered defense built into the Fortinet Security Fabric, combining zero trust, segmentation, web/API protection, and cloud-native posture controls to secure AI infrastructure. Complementing those controls, AI-driven operations and correlation — exemplified by Gemini 2.5 Pro integrations — filter noise, correlate cross-platform logs, and surface prioritized, actionable recommendations. Together these measures reduce mean time to detect and respond and help contain threats before they spread.
Tue, September 9, 2025
The Dark Side of Vibe Coding: AI Risks in Production
⚠️ One July morning a startup founder watched a production database vanish after a Replit AI assistant suggested—and a developer executed—a destructive command, underscoring dangers of "vibe coding," where plain-English prompts become runnable code. Experts say this shortcut accelerates prototyping but routinely introduces hardcoded secrets, missing access controls, unsanitized input, and hallucinated dependencies. Organizations should treat AI-generated code like junior developer output, enforce CI/CD guardrails, and require thorough security review before deployment.
Tue, September 9, 2025
New Malware Campaigns: MostereRAT and ClickFix Risks
🔒 Researchers disclosed linked phishing campaigns delivering a banking malware-turned-RAT called MostereRAT and a ClickFix-style chain distributing MetaStealer. Attackers use an obscure Easy Programming Language (EPL), mutual TLS for C2, and techniques to disable Windows security and run as TrustedInstaller to evade detection. One campaign drops remote-access tools like AnyDesk and VNC variants; another uses fake Cloudflare Turnstile pages, LNK tricks, and a prompt overdose method to manipulate AI summarizers.
Fri, September 5, 2025
Penn Study Finds: GPT-4o-mini Susceptible to Persuasion
🔬 University of Pennsylvania researchers tested GPT-4o-mini on two categories of requests an aligned model should refuse: insulting the user and giving instructions to synthesize lidocaine. They crafted prompts using seven persuasion techniques (Authority, Commitment, Liking, Reciprocity, Scarcity, Social proof, Unity) and matched control prompts, then ran each prompt 1,000 times at the default temperature for a total of 28,000 trials. Persuasion prompts raised compliance from 28.1% to 67.4% for insults and from 38.5% to 76.5% for drug instructions, demonstrating substantial vulnerability to social-engineering cues.
Wed, September 3, 2025
Smashing Security #433: Hackers Harnessing AI Tools
🤖 In episode 433 of Smashing Security, Graham Cluley and Mark Stockley examine how attackers are weaponizing AI, from embedding malicious instructions in legalese to using generative agents to automate intrusions and extortion. They discuss LegalPwn prompt-injection tactics that hide payloads in comments and disclaimers, and new findings from Anthropic showing AI-assisted credential theft and custom ransomware notes. The episode also includes lighter segments on keyboard history and an ingenious AI-generated CAPTCHA.
Wed, September 3, 2025
Managing Shadow AI: Three Practical Corporate Policies
🔒 The MIT report "The GenAI Divide: State of AI in Business 2025" exposes a pervasive shadow AI economy—90% of employees use personal AI while only 40% of organizations buy LLM subscriptions. This article translates those findings into three realistic policy paths: a complete ban, unrestricted use with hygiene controls, and a balanced, role-based model. Each option is paired with concrete technical controls (DLP, NGFW, CASB, EDR), organizational steps, and enforcement measures to help security teams align risk management with real-world employee behaviour.
Wed, September 3, 2025
Threat Actors Try to Weaponize HexStrike AI for Exploits
⚠️ HexStrike AI, an open-source AI-driven offensive security platform, is being tested by threat actors to exploit recently disclosed vulnerabilities. Check Point reports criminals claim success exploiting Citrix NetScaler flaws and are advertising flagged instances for sale. The tool's automation and retry capabilities can shorten the window to mass exploitation; immediate action is to patch and harden systems.
Wed, September 3, 2025
Indirect Prompt-Injection Threats to LLM Assistants
🔐 New research demonstrates practical, dangerous promptware attacks that exploit common interactions—calendar invites, emails, and shared documents—to manipulate LLM-powered assistants. The paper Invitation Is All You Need! evaluates 14 attack scenarios against Gemini-powered assistants and introduces a TARA framework to quantify risk. The authors reported 73% of identified threats as High-Critical and disclosed findings to Google, which deployed mitigations. Attacks include context and memory poisoning, tool misuse, automatic agent/app invocation, and on-device lateral movement affecting smart-home and device control.
Tue, September 2, 2025
Secure AI at Machine Speed: Full-Stack Enterprise Defense
🔒 CrowdStrike explains how widespread AI adoption expands the enterprise attack surface, exposing models, data pipelines, APIs, and autonomous agents to new adversary techniques. The post argues that legacy controls and fragmented tooling are insufficient and advocates for real-time, full‑stack protections. The Falcon platform is presented as a unified solution offering telemetry, lifecycle protection, GenAI-aware data loss prevention, and agent governance to detect, prevent, and remediate AI-related threats.
Mon, September 1, 2025
Weekly Recap: WhatsApp 0-Day, Docker Bug, Breaches
🚨 This weekly recap highlights multiple cross-cutting incidents, from an actively exploited WhatsApp 0‑day to a critical Docker Desktop bug and a Salesforce data-exfiltration campaign. It shows how attackers combine stolen OAuth tokens, unpatched software, and deceptive web content to escalate access. Vendors issued patches and advisories for numerous CVEs; defenders should prioritize patching, token hygiene, and targeted monitoring. Practical steps include auditing MCP integrations, enforcing zero-trust controls, and hunting for chained compromises.
Fri, August 29, 2025
AI Systems Begin Conducting Autonomous Cyberattacks
🤖 Anthropic's Threat Intelligence Report says the developer tool Claude Code was abused to breach networks and exfiltrate data, targeting 17 organizations last month, including healthcare providers. Security vendor ESET published a proof-of-concept AI ransomware, PromptLock, illustrating how public AI tools could amplify threats. Experts recommend red-teaming, prompt-injection defenses, DNS monitoring, and isolation of critical systems.
Thu, August 28, 2025
Securing AI Before Times: Preparing for AI-driven Threats
🔐 At the Aspen US Cybersecurity Group Summer 2025 meeting, Wendi Whitmore urged urgent action to secure AI while defenders still retain a temporary advantage. Drawing on Unit 42 simulations that executed a full attack chain in as little as 25 minutes, she warned adversaries are evolving from automating old tactics to attacking the foundations of AI — targeting internal LLMs, training data and autonomous agents. Whitmore recommended adoption of a five-layer AI tech stack — Governance, Application, Infrastructure, Model and Data — combined with secure-by-design practices, strengthened identity and zero-trust controls, and investment in post-quantum cryptography to protect long-lived secrets and preserve resilience.