< ciso
brief />
Tag Banner

All news with #ai agent hijacking tag

30 articles · page 2 of 2

AI-Driven Espionage Campaign Allegedly Targets Firms

🤖 Anthropic reported that roughly 30 organizations—including major technology firms, financial institutions, chemical companies and government agencies—were targeted in what it describes as an AI-powered espionage campaign. The company attributes the activity to the actor it calls GTG-1002, links the group to the Chinese state, and says attackers manipulated its developer tool Claude Code to largely autonomously launch infiltration attempts. Several security researchers have publicly questioned the asserted level of autonomy and criticized Anthropic for not publishing indicators of compromise or detailed forensic evidence.
read more →

Anthropic: Hackers Used Claude Code to Automate Attacks

🔒 Anthropic reported that a group it believes to be Chinese carried out a series of attacks in September targeting foreign governments and large corporations. The campaign stood out because attackers automated actions using Claude Code, Anthropic’s AI tool, enabling operations "literally with the click of a button," according to the company. Anthropic’s security team blocked the abusive accounts and has published a detailed report on the incident.
read more →

Master Multitasking with the Jules Extension for Gemini CLI

🤖 The new Jules extension for Gemini CLI lets developers delegate routine engineering tasks—like bug fixes, dependency updates, and vulnerability patches—to an autonomous background agent. Jules runs asynchronously and can work on multiple GitHub issues in parallel, preparing fixes in isolated environments for review. It also composes with other extensions to automate security remediation, crash investigation, and unit test creation, returning ready-to-review branches so you can stay focused on higher-value work.
read more →

Gemini Trifecta: Prompt Injection Exposes New Attack Surface

🔒 Researchers at Tenable disclosed three distinct vulnerabilities in Gemini's Cloud Assist, Search personalization, and Browsing Tool. The flaws let attackers inject prompts via logs (for example by manipulating the HTTP User-Agent), poison search context through scripted history entries, and exfiltrate data by causing the Browser Tool to send sensitive content to an attacker-controlled server. Google has patched the issues, but Tenable and others warn this highlights the risks of granting agents too much autonomy without runtime guardrails.
read more →

CometJacking attack tricks Comet browser into leaking data

🛡️ LayerX researchers disclosed a prompt-injection technique called CometJacking that abuses Perplexity’s Comet AI browser by embedding malicious instructions in a URL's collection parameter. The payload directs the agent to consult connected services (such as Gmail and Google Calendar), encode the retrieved content in base64, and send it to an attacker-controlled endpoint. The exploit requires no credentials or additional user interaction beyond clicking a crafted link. Perplexity reviewed LayerX's late-August reports and classified the findings as "Not Applicable."
read more →

Code Assistant Risks: Indirect Prompt Injection and Misuse

🛡️ Unit 42 describes how IDE-integrated AI code assistants can be abused to insert backdoors, leak secrets, or produce harmful output by exploiting features like chat, auto-complete, and context attachment. The report highlights an indirect prompt injection vector where attackers contaminate public or third‑party data sources; when that data is attached as context, malicious instructions can hijack the assistant. It recommends reviewing generated code, controlling attached context, adopting standard LLM security practices, and contacting Unit 42 if compromise is suspected.
read more →

AI-Powered Villager Pen Testing Tool Raises Abuse Concerns

⚠️ The AI-driven penetration testing framework Villager, attributed to China-linked developer Cyberspike, has attracted nearly 11,000 PyPI downloads since its July 2025 upload, prompting warnings about potential abuse. Marketed as a red‑teaming automation platform, it integrates Kali toolsets, LangChain, and AI models to convert natural‑language commands into technical actions and orchestrate tests. Researchers found built‑in plugins resembling remote access tools and known hacktools, and note Villager’s use of ephemeral Kali containers, randomized ports, and an AI task layer that together lower the bar for misuse and complicate detection and attribution.
read more →

Indirect Prompt-Injection Threats to LLM Assistants

🔐 New research demonstrates practical, dangerous promptware attacks that exploit common interactions—calendar invites, emails, and shared documents—to manipulate LLM-powered assistants. The paper Invitation Is All You Need! evaluates 14 attack scenarios against Gemini-powered assistants and introduces a TARA framework to quantify risk. The authors reported 73% of identified threats as High-Critical and disclosed findings to Google, which deployed mitigations. Attacks include context and memory poisoning, tool misuse, automatic agent/app invocation, and on-device lateral movement affecting smart-home and device control.
read more →

HexStrike-AI Enables Rapid Zero-Day Exploitation at Scale

⚠️ HexStrike-AI is a newly released framework that acts as an orchestration “brain,” directing more than 150 specialized AI agents to autonomously scan, exploit, and persist inside targets. Within hours of release, dark‑web chatter showed threat actors attempting to weaponize it against recent zero‑day CVEs, dropping webshells enabling unauthenticated remote code execution. Although the targeted vulnerabilities are complex and typically require advanced skills, operators claim HexStrike-AI can reduce exploitation time from days to under 10 minutes, potentially lowering the barrier for less skilled attackers.
read more →

Supply-Chain Attack on npm Nx Steals Developer Credentials

🔒 A sophisticated supply-chain attack targeted the widely used Nx build-system packages on the npm registry, exposing developer credentials and sensitive files. According to a report from Wiz, attackers published malicious Nx versions on August 26, 2025 that harvested GitHub and npm tokens, SSH keys, environment variables and cryptocurrency wallets. The campaign uniquely abused installed AI CLI tools (for example, Claude and Gemini) by passing dangerous permission flags to exfiltrate file-system contents and perform reconnaissance, then uploaded roughly 20,000 files to attacker-controlled public repositories. Organizations should remove affected package versions, rotate exposed credentials and inspect developer workstations and CI/CD pipelines for persistence.
read more →