All news with the "#data exfil via tools" tag
Tue, November 11, 2025
Malicious npm Package Typosquats GitHub Actions Artifact
🔍 Cybersecurity researchers uncovered a malicious npm package, @acitons/artifact, that typosquats the legitimate @actions/artifact package to target GitHub-owned repositories. Veracode says versions 4.0.12–4.0.17 included a post-install hook that downloaded and executed a payload intended to exfiltrate build tokens and then publish artifacts while impersonating GitHub. The actor (npm user blakesdev) removed the offending versions, and the last public npm release remains 4.0.10. Recommended actions include removing the malicious versions, auditing dependencies for typosquats, rotating exposed tokens, and hardening CI/CD supply-chain protections.
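The typosquat at the heart of this campaign differs from the legitimate name by a single transposition. A minimal audit sketch, assuming a hand-maintained allowlist of known-good packages (the `KNOWN_PACKAGES` list and the 0.85 threshold here are illustrative, not from the report):

```python
from difflib import SequenceMatcher

# Hypothetical allowlist of legitimate scoped packages; extend for your stack.
KNOWN_PACKAGES = ["@actions/artifact", "@actions/core", "@actions/github"]

def find_typosquat_candidates(dependencies, known=KNOWN_PACKAGES, threshold=0.85):
    """Flag dependency names suspiciously similar to, but not identical to,
    well-known package names."""
    suspects = []
    for dep in dependencies:
        for legit in known:
            ratio = SequenceMatcher(None, dep, legit).ratio()
            if dep != legit and ratio >= threshold:
                suspects.append((dep, legit, round(ratio, 2)))
    return suspects
```

Running this over a lockfile's dependency names would surface @acitons/artifact as a near-match for @actions/artifact while ignoring unrelated names.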
Tue, November 11, 2025
APT37 Abuses Google Find Hub to Remotely Wipe Android
🔍 North Korean-linked operators abuse Google Find Hub to locate targets' Android devices and issue remote factory resets after compromising Google accounts. The attacks focus on South Koreans and begin with social engineering over KakaoTalk, using signed MSI lures that deploy AutoIT loaders and RATs such as Remcos, Quasar, and RftRAT. Wiping devices severs mobile KakaoTalk alerts so attackers can hijack PC sessions to spread malware. Recommended defenses include enabling multi-factor authentication, keeping recovery access ready, and verifying unexpected files or messages before opening.
Mon, November 10, 2025
Researchers Trick ChatGPT into Self Prompt Injection
🔒 Researchers at Tenable identified seven techniques that can coerce ChatGPT into disclosing private chat history by abusing built-in features like web browsing and long-term Memories. They show how OpenAI’s browsing pipeline routes pages through a weaker intermediary model, SearchGPT, which can be prompt-injected and then used to seed malicious instructions back into ChatGPT. Proof-of-concepts include exfiltration via Bing-tracked URLs, Markdown image loading, and a rendering quirk, and Tenable says some issues remain despite reported fixes.
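One of the exfiltration channels described, Markdown image loading, works because a rendered image URL can carry chat data in its query string. A minimal detection sketch, assuming model output is available as Markdown text (the pattern and the query-string heuristic are illustrative, not Tenable's tooling):

```python
import re
from urllib.parse import urlparse

# Markdown image links whose URLs carry query strings are a classic channel
# for smuggling conversation data out when the client auto-renders them.
IMG_PATTERN = re.compile(r"!\[[^\]]*\]\((https?://[^)\s]+)\)")

def flag_exfil_images(markdown_text):
    """Return image URLs in model output that carry query-string payloads."""
    flagged = []
    for url in IMG_PATTERN.findall(markdown_text):
        parsed = urlparse(url)
        if parsed.query:  # data appended as ?d=... is the red flag
            flagged.append(url)
    return flagged
```

A guardrail could strip or proxy such URLs before rendering rather than merely logging them.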
Mon, November 10, 2025
ClickFix Phishing Campaign Targets Hotels, Delivers PureRAT
🔒 Sekoia warns of a large-scale phishing campaign targeting hotel staff that uses ClickFix-style pages to harvest credentials and deliver PureRAT. Attackers impersonate Booking.com in spear-phishing emails, redirect victims through a scripted chain to a fake reCAPTCHA page, and coerce them into running a PowerShell command that downloads a ZIP containing a DLL side-loaded backdoor. The modular RAT supports remote access, keylogging, webcam capture, and data exfiltration, and persists via a Run registry key.
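ClickFix lures hinge on the victim pasting a PowerShell download cradle. A rough triage sketch for pasted commands, assuming you can intercept clipboard or command-line telemetry (the indicator list and two-hit rule are illustrative heuristics, not Sekoia's detections):

```python
import re

# Telltale fragments of "paste this to fix the CAPTCHA" download cradles.
CRADLE_PATTERNS = [
    r"(?i)\biex\b",               # Invoke-Expression alias
    r"(?i)invoke-expression",
    r"(?i)downloadstring",
    r"(?i)\biwr\b|invoke-webrequest",
    r"(?i)frombase64string",
]

def looks_like_clickfix(command: str) -> bool:
    """Require two independent indicators to cut false positives."""
    hits = sum(bool(re.search(p, command)) for p in CRADLE_PATTERNS)
    return hits >= 2
```

In practice this belongs in EDR telemetry on powershell.exe command lines rather than a standalone script.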
Fri, November 7, 2025
LANDFALL: Commercial Android Spyware Exploits DNG Files
🔍 Unit 42 disclosed LANDFALL, a previously unknown commercial-grade Android spyware family that abused a Samsung DNG parsing zero-day (CVE-2025-21042) to run native payloads embedded in malformed DNG files. The campaign targeted Samsung Galaxy models and enabled microphone and call recording, location tracking, and exfiltration of photos, contacts and databases via native loaders and SELinux manipulation. Apply vendor firmware updates and contact Unit 42 for incident response.
Fri, November 7, 2025
Malicious VS Code Extension and Trojanized npm Packages
⚠️ Researchers flagged a malicious Visual Studio Code extension named susvsex that auto-zips, uploads and encrypts files on first launch and uses GitHub as a command-and-control channel. Uploaded on November 5, 2025 and removed from Microsoft's VS Code Marketplace the next day, the package embeds GitHub access tokens and writes execution results back to a repository. Separately, Datadog disclosed 17 trojanized npm packages that deploy the Vidar infostealer via postinstall scripts.
Thu, November 6, 2025
Susvsex Ransomware Test Published on VS Code Marketplace
🔒 A malicious VS Code extension named susvsex, published by 'suspublisher18', was listed on Microsoft's official marketplace and included basic ransomware features such as AES-256-CBC encryption and exfiltration to a hardcoded C2. Secure Annex researcher John Tuckner identified AI-generated artifacts in the code and reported it, but Microsoft had not removed the extension at the time of the report. The extension also polled a private GitHub repo for commands using a hardcoded PAT.
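The hardcoded PAT is the most mechanically detectable part of this extension. A minimal scanning sketch, assuming access to the extension's unpacked source (the token formats are GitHub's documented prefixes; everything else is illustrative):

```python
import re

# Classic GitHub PATs start with ghp_ (36 chars after the prefix);
# fine-grained tokens start with github_pat_.
TOKEN_RE = re.compile(r"\b(ghp_[A-Za-z0-9]{36}|github_pat_[A-Za-z0-9_]{22,})\b")

def find_hardcoded_tokens(source: str):
    """Return any GitHub token-shaped strings embedded in source text."""
    return TOKEN_RE.findall(source)
```

Marketplace publishers could run this over every submitted package; the same regex also catches tokens committed to repos, as in the susvsex C2 scheme.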
Thu, November 6, 2025
AI-Powered Mach-O Analysis Reveals Undetected macOS Threats
🔎 VirusTotal ran VT Code Insight, an AI-based Mach-O analysis pipeline, against nearly 10,000 first-seen Apple binaries in a 24-hour stress test. By pruning binaries with Binary Ninja HLIL into a distilled representation that fits a large LLM context (Gemini), the system produces single-call, analyst-style summaries from raw files with no metadata. Code Insight flagged 164 samples as malicious versus 67 by traditional AV, surfacing zero-detection macOS and iOS threats while also reducing false positives.
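The core engineering trick is distilling a decompiled binary down to something that fits one LLM context window. A toy sketch of that budgeting step, assuming per-function summaries are already extracted (the ranking rule and the ~4-chars-per-token estimate are invented for illustration; VirusTotal's actual pruning is not public in this detail):

```python
def distill_for_llm(functions, budget_tokens=8000):
    """functions: list of (name, summary_text) pairs from a decompiler pass.
    Greedily keep the largest summaries that fit the token budget."""
    # Prioritize longer summaries on the assumption they carry more logic.
    ranked = sorted(functions, key=lambda f: len(f[1]), reverse=True)
    kept, used = [], 0
    for name, summary in ranked:
        cost = len(summary) // 4 + 8  # crude ~4 chars/token estimate
        if used + cost > budget_tokens:
            continue
        kept.append(f"## {name}\n{summary}")
        used += cost
    return "\n\n".join(kept)
```

The distilled text is then what gets sent to the model in a single call, avoiding chunked multi-turn analysis.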
Thu, November 6, 2025
Hackers Blackmail Massage Parlour Clients in Korea
🔒 South Korean police uncovered a criminal network that used a malicious app to steal customer data from massage parlours and extort clients. The group tricked nine business owners into installing software that exfiltrated names, phone numbers, call logs and text messages, then sent threatening messages claiming to have video footage. About 36 victims paid between 1.5M and 47M KRW, with attempted extortion near 200M KRW. Authorities traced activity to January 2022 across Seoul, Gyeonggi and Daegu and made arrests in August 2023.
Thu, November 6, 2025
Multi-Turn Adversarial Attacks Expose LLM Weaknesses
🔍 Cisco AI Defense's report shows open-weight large language models remain vulnerable to adaptive, multi-turn adversarial attacks even when single-turn defenses appear effective. Using over 1,000 prompts per model and analyzing 499 simulated conversations of 5–10 exchanges, researchers found iterative strategies such as Crescendo, Role-Play and Refusal Reframe drove failure rates above 90% in many cases. The study warns that traditional safety filters are insufficient and recommends strict system prompts, model-agnostic runtime guardrails and continuous red-teaming to mitigate risk.
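The finding is that guardrails which hold for one turn erode across an escalating conversation. A harness for probing that can be sketched as below; the target model here is a stub, and the refusal check and escalation structure are illustrative, not from the Cisco report:

```python
# Crude refusal markers; a real harness would use a classifier instead.
REFUSALS = ("i can't", "i cannot", "i won't")

def run_multi_turn_probe(model, escalating_prompts):
    """Feed progressively bolder prompts, keeping full conversation state.
    Returns the turn index at which the model stopped refusing, or -1."""
    history = []
    for turn, prompt in enumerate(escalating_prompts):
        history.append(("user", prompt))
        reply = model(history)
        history.append(("assistant", reply))
        if not any(r in reply.lower() for r in REFUSALS):
            return turn  # guardrail gave way on this turn
    return -1

# Toy stand-in for a chat API: refuses the first two turns, then complies.
def stub_model(history):
    user_turns = sum(1 for role, _ in history if role == "user")
    return "I can't help with that." if user_turns <= 2 else "Sure, here is how..."
```

Against a real API the same loop would implement a Crescendo-style run: each turn carries the accumulated history, which is exactly what single-turn evaluations miss.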
Thu, November 6, 2025
Google Warns: AI-Enabled Malware Actively Deployed
⚠️ Google’s Threat Intelligence Group has identified a new class of AI-enabled malware that leverages large language models at runtime to generate and obfuscate malicious code. Notable families include PromptFlux, which uses the Gemini API to rewrite its VBScript dropper for persistence and lateral spread, and PromptSteal, a Python data miner that queries Qwen2.5-Coder-32B-Instruct to create on-demand Windows commands. GTIG observed PromptSteal used by APT28 in Ukraine, while other examples such as PromptLock, FruitShell and QuietVault demonstrate varied AI-driven capabilities. Google warns this "just-in-time AI" approach could accelerate malware sophistication and democratize cybercrime.
Thu, November 6, 2025
Google: LLMs Employed Operationally in Malware Attacks
🤖 Google’s Threat Intelligence Group (GTIG) reports attackers are using “just‑in‑time” AI—LLMs queried during execution—to generate and obfuscate malicious code. Researchers identified two families, PROMPTSTEAL and PROMPTFLUX, which query Hugging Face and Gemini APIs to craft commands, rewrite source code, and evade detection. GTIG also documents social‑engineering prompts that trick models into revealing red‑teaming or exploit details, and warns the underground market for AI‑enabled crime is maturing. Google says it has disabled related accounts and applied protections.
Wed, November 5, 2025
Russian APT Uses Hyper‑V VMs for Stealth and Persistence
🛡️ Bitdefender researchers describe how the Russia-aligned APT group Curly COMrades enabled Windows Hyper-V to deploy a minimal Alpine Linux VM on compromised Windows 10 hosts, creating a hidden execution environment. The compact VM (≈120MB disk, 256MB RAM) hosted two libcurl-based implants, CurlyShell (reverse shell) and CurlCat (HTTP-to-SSH proxy), enabling C2 and tunneling that evaded many host EDRs. Attackers used DISM and PowerShell to enable and run the VM under the deceptive name "WSL," and also employed PowerShell and Group Policy for credential operations and Kerberos ticket injection. Bitdefender warns that VM isolation can bypass EDR and recommends layered defenses including host network inspection and proactive hardening.
Wed, November 5, 2025
Google: PROMPTFLUX malware uses Gemini to self-write
🤖 Google researchers disclosed a VBScript threat named PROMPTFLUX that queries Gemini via a hard-coded API key to request obfuscated VBScript designed to evade static detection. A 'Thinking Robot' component logs AI responses to %TEMP% and writes updated scripts to the Windows Startup folder to maintain persistence. Samples include propagation attempts to removable drives and mapped network shares, and variants that rewrite their source on an hourly cadence. Google assesses the malware as experimental and currently lacking known exploit capabilities.
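A script that queries an LLM at runtime must embed both an API endpoint and a credential, which gives defenders a static signature. A minimal sketch for triaging script files, assuming you scan text before execution (the endpoint list and key heuristic are illustrative, not Google's detections):

```python
import re

# Known hosted-LLM API endpoints a script would have to contact.
LLM_ENDPOINTS = re.compile(
    r"(?i)(generativelanguage\.googleapis\.com"
    r"|api-inference\.huggingface\.co"
    r"|api\.openai\.com)"
)
# Assignment of an API key or auth header alongside the endpoint.
API_KEY_HINT = re.compile(r"(?i)(api[_-]?key|authorization)\s*[:=]")

def flags_runtime_llm_use(script_text: str) -> bool:
    """True when a script both names an LLM endpoint and wires up a key."""
    return bool(LLM_ENDPOINTS.search(script_text)) and bool(API_KEY_HINT.search(script_text))
```

Benign automation also matches this pattern, so it is a triage signal for scripts found in Startup folders or mail attachments, not a verdict.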
Wed, November 5, 2025
Google: New AI-Powered Malware Families Deployed
⚠️ Google's Threat Intelligence Group reports a surge in malware that integrates large language models to enable dynamic, mid-execution changes—what Google calls "just-in-time" self-modification. Notable examples include the experimental PromptFlux VBScript dropper and the PromptSteal data miner, plus operational threats like FruitShell and QuietVault. Google disabled abused Gemini accounts, removed assets, and is hardening model safeguards while collaborating with law enforcement.
Wed, November 5, 2025
GTIG report: Adversaries adopt AI for advanced attacks
⚠️ The Google Threat Intelligence Group (GTIG) reports that adversaries are evolving beyond simple productivity uses of AI toward operational misuse. Observed behaviors include state-sponsored actors from North Korea, Iran and the People's Republic of China using AI for reconnaissance, automated phishing lure creation and data exfiltration. The report documents AI-powered malware that can generate and modify malicious scripts in real time and attackers exploiting deceptive prompts to bypass model guardrails. Google says it has disabled assets linked to abuse and applied intelligence to improve classifiers and harden models against misuse.
Wed, November 5, 2025
GTIG Report: AI-Enabled Threats Transform Cybersecurity
🔒 The Google Threat Intelligence Group (GTIG) released a report documenting a clear shift: adversaries are moving beyond benign productivity uses of AI and are experimenting with AI-enabled operations. GTIG observed state-sponsored actors from North Korea, Iran and the People's Republic of China using AI for reconnaissance, tailored phishing lure creation and data exfiltration. Threats described include AI-powered, self-modifying malware, prompt-engineering to bypass safety guardrails, and underground markets selling advanced AI attack capabilities. Google says it has disrupted malicious assets and applied that intelligence to strengthen classifiers and its AI models.
Wed, November 5, 2025
Cloud CISO: Threat Actors' Growing Use of AI Tools
⚠️ Google's Threat Intelligence team reports a shift from experimentation to operational use of AI by threat actors, including AI-enabled malware and prompt-based command generation. GTIG highlighted PROMPTSTEAL, linked to APT28 (FROZENLAKE), which queries a Hugging Face LLM to generate scripts for reconnaissance, document collection, and exfiltration, while adopting greater obfuscation and altered C2 methods. Google disabled related assets, strengthened model classifiers and safeguards with DeepMind, and urges defenders to update threat models, monitor anomalous scripting and C2, and incorporate threat intelligence into model- and classifier-level protections.
Wed, November 5, 2025
SmudgedSerpent Targets U.S. Policy Experts Amid Tensions
🔍 Proofpoint attributes a previously unseen cluster, UNK_SmudgedSerpent, to targeted attacks on U.S. academics and foreign‑policy experts between June and August 2025. The adversary used tailored political lures and credential‑harvesting landing pages, at times distributing an MSI that deployed legitimate RMM software such as PDQ Connect. Tactics resemble Iranian-linked groups and included impersonation of think‑tank figures to increase credibility.
Wed, November 5, 2025
Phishing and RMM Tools Enable Growing Cargo Thefts
🚚 Proofpoint warns of a spear‑phishing campaign targeting North American freight firms that installs remote monitoring and access tools to enable cargo theft. Actors compromise broker load boards, insert themselves into carrier email threads, or pose as brokers to deliver signed installers that harvest credentials and establish persistent access. The attackers have deployed a range of RMM/RAS solutions (for example ScreenConnect, SimpleHelp, PDQ Connect, Fleetdeck, N‑able, and LogMeIn Resolve) and use them to bid on or reroute high‑value loads. Proofpoint urges blocking unauthorized RMM tools, enforcing endpoint and network detection plus MFA, disallowing external executables, and expanding phishing awareness training.
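"Blocking unauthorized RMMs" in practice means comparing observed processes against both an indicator list and an approved-tool list. A minimal sketch, assuming process-name telemetry is available; the indicator strings are derived from the tool names Proofpoint lists, but the actual binary names in your environment will differ:

```python
# Substrings hinting at RMM tools from the campaign; illustrative, not exact
# binary names. APPROVED_RMM models the one tool IT actually sanctions.
RMM_INDICATORS = {
    "screenconnect", "simplehelp", "pdq", "fleetdeck",
    "n-able", "logmein", "resolve",
}
APPROVED_RMM = {"screenconnect"}

def unauthorized_rmm(process_names):
    """Return lowercase process names matching an RMM indicator but not
    covered by the approved-tool allowlist."""
    found = {p for p in (n.lower() for n in process_names)
             for tag in RMM_INDICATORS if tag in p}
    return sorted(p for p in found if not any(a in p for a in APPROVED_RMM))
```

The same allowlist logic applies at the network layer: alert on RMM cloud domains that do not belong to the sanctioned product.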