All news with the #ai security tag
Thu, November 6, 2025
Susvsex Ransomware Test Published on VS Code Marketplace
🔒 A malicious VS Code extension named susvsex, published by 'suspublisher18', was listed on Microsoft's official marketplace and included basic ransomware features such as AES-256-CBC encryption and exfiltration to a hardcoded C2 server. Secure Annex researcher John Tuckner identified AI-generated artifacts in the code and reported the extension, but Microsoft did not remove it. The extension also polled a private GitHub repository for commands using a hardcoded personal access token (PAT).
Thu, November 6, 2025
AI-Powered Mach-O Analysis Reveals Undetected macOS Threats
🔎 VirusTotal ran VT Code Insight, an AI-based Mach-O analysis pipeline, against nearly 10,000 first-seen Apple binaries in a 24-hour stress test. By pruning each binary's Binary Ninja HLIL decompilation into a distilled representation that fits a large LLM context window (Gemini), the system produces analyst-style summaries from raw files, with no metadata, in a single model call. Code Insight flagged 164 samples as malicious versus 67 by traditional AV, surfacing zero-detection macOS and iOS threats while also reducing false positives.
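The core pattern, distilling decompiler output until a whole binary fits one model call, can be sketched as follows. This is a minimal illustration, not VirusTotal's implementation: the pruning heuristic, character budget, prompt, and model name are all assumptions, while the Binary Ninja Python API and google-genai client calls are real.

```python
# Illustrative HLIL-distillation pipeline (assumptions noted above).
import binaryninja           # Binary Ninja's scripting API (licensed product)
from google import genai     # pip install google-genai

def distill_hlil(path: str, budget: int = 400_000) -> str:
    """Decompile to HLIL and keep the largest functions within a character budget."""
    bv = binaryninja.load(path)
    bv.update_analysis_and_wait()
    pieces, used = [], 0
    # Largest functions first: a crude stand-in for a smarter pruning heuristic.
    for func in sorted(bv.functions, key=lambda f: f.total_bytes, reverse=True):
        hlil = func.hlil_if_available
        if hlil is None:
            continue
        text = "\n".join(str(instr) for instr in hlil.instructions)
        if used + len(text) > budget:
            break
        pieces.append(f"// {func.name}\n{text}")
        used += len(text)
    return "\n\n".join(pieces)

def summarize(path: str) -> str:
    """Single LLM call over the distilled representation, no metadata attached."""
    client = genai.Client()  # reads GEMINI_API_KEY from the environment
    prompt = ("Act as a malware analyst. Summarize the behavior of this "
              "Mach-O binary from its decompiled HLIL:\n\n" + distill_hlil(path))
    resp = client.models.generate_content(
        model="gemini-2.5-flash",  # placeholder model id
        contents=prompt,
    )
    return resp.text
```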
Thu, November 6, 2025
Remember, Remember: AI Agents, Threat Intel, and Phishing
🔔 This edition of the Threat Source newsletter opens with Bonfire Night and the 1605 Gunpowder Plot as a narrative hook, tracing how Guy Fawkes' image became a symbol of protest and hacktivism. It spotlights Cisco Talos research, including a new Incident Response report and a notable internal phishing case in which attackers abused inbox rules on compromised O365 accounts to hide malicious activity. The newsletter also features a Tool Talk demonstrating a proof-of-concept that equips autonomous AI agents with real-time threat intelligence via LangChain, OpenAI, and the Cisco Umbrella API to improve domain trust decisions.
Thu, November 6, 2025
IDC: Major Shift in Cloud Security Investment Trends
🔍 IDC’s latest research finds organizations averaged nine cloud security incidents in 2024, with 89% reporting year-over-year increases. The study identifies CNAPP as a top-three investment for 2025, rising CISO ownership of cloud security, and persistent tool sprawl that increases cost and risk. It also documents practical uses of generative AI for detection and response and a move toward integrated, autonomous SecOps platforms. Microsoft positions its integrated CNAPP and AI-driven threat intelligence as a way to unify protection across the application lifecycle.
Thu, November 6, 2025
Multi-Turn Adversarial Attacks Expose LLM Weaknesses
🔍 Cisco AI Defense's report shows that open-weight large language models remain vulnerable to adaptive, multi-turn adversarial attacks even when single-turn defenses appear effective. Using over 1,000 prompts per model and analyzing 499 simulated conversations of 5–10 exchanges, researchers found that iterative strategies such as Crescendo, Role-Play, and Refusal Reframe drove failure rates above 90% in many cases. The study warns that traditional safety filters are insufficient and recommends strict system prompts, model-agnostic runtime guardrails, and continuous red-teaming to mitigate risk.
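A model-agnostic runtime guardrail, in its simplest form, scores each new user turn against the accumulated conversation rather than in isolation, so a slow escalation becomes visible even when every individual turn looks benign. A minimal sketch, in which `classify` is a toy stand-in for whatever moderation model you actually run and the threshold is arbitrary:

```python
# Conversation-level guardrail sketch: score the whole dialogue, not the turn.
from dataclasses import dataclass, field

FLAGGED = ("ignore previous instructions", "stay in character")

def classify(text: str) -> float:
    """Toy risk scorer in [0, 1]; replace with a real moderation model."""
    t = text.lower()
    return min(1.0, 0.5 * sum(phrase in t for phrase in FLAGGED))

@dataclass
class GuardedConversation:
    threshold: float = 0.8   # arbitrary cutoff for illustration
    turns: list[str] = field(default_factory=list)

    def admit(self, user_turn: str) -> bool:
        # Multi-turn attacks distribute intent across benign-looking steps,
        # so the guardrail evaluates the conversation as a whole.
        candidate = "\n".join(self.turns + [user_turn])
        if classify(candidate) >= self.threshold:
            return False          # refuse; history stays unchanged
        self.turns.append(user_turn)
        return True
```

Because the check sits outside the model, the same wrapper applies to open-weight and hosted models alike, which is what "model-agnostic" buys you.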
Thu, November 6, 2025
November 2025 Fraud and Scams Advisory — Key Trends
🔔 Google’s Trust & Safety team published a November 2025 advisory describing rising online scam trends, attacker tactics, and recommended defenses. Analysts highlight key categories — online job scams, negative review extortion, AI product impersonation, malicious VPNs, fraud recovery scams, and seasonal holiday lures — and note increased misuse of AI to scale fraud. The advisory outlines impacts including financial theft, identity fraud, and device or network compromise, and recommends protections such as 2‑Step Verification, Gmail phishing defenses, Google Play Protect, and Safe Browsing Enhanced Protection.
Thu, November 6, 2025
Seeing Threats First: AI and Human Cyber Defense Insights
🔍 Check Point Research and External Risk Management experts explain how combining AI-driven analytics with seasoned human threat hunters enables organizations to detect and anticipate attacks before they strike. The AMA webinar, featuring leaders like Sergey Shykevich and Pedro Drimel Neto, detailed telemetry fusion, rapid malware analysis, and automated triage to act at machine speed. Speakers stressed continuous intelligence, cross-team collaboration, and proactive hunting to shorten dwell time. The approach blends scalable automation with human context to prevent large-scale incidents.
Thu, November 6, 2025
ThreatsDay Bulletin: Cybercrime Trends and Major Incidents
🛡️ This bulletin catalogues a broad set of 2025 incidents showing cybercrime’s increasing real-world impacts. Microsoft patched three Windows GDI flaws (CVE-2025-30388, CVE-2025-53766, CVE-2025-47984) rooted in gdiplus.dll and gdi32full.dll, while Check Point warned partial fixes can leave data leaks lingering. Threat actors expanded toolsets and infrastructure — from RondoDox’s new exploits and TruffleNet’s AWS abuse to FIN7’s SSH backdoor and sophisticated phishing campaigns — and law enforcement action ranged from large fraud takedowns to prison sentences and cross-border crackdowns.
Thu, November 6, 2025
Equipping Autonomous AI Agents with Cyber Hygiene Practices
🔐 This post demonstrates a proof-of-concept for teaching autonomous agents internet safety by integrating real-time threat intelligence. Using LangChain with OpenAI and the Cisco Umbrella API, the example shows how an agent can extract domains from its input and query their dispositions to decide whether to connect. The agent returns clear disposition reports and abstains when no domains are present. The approach emphasizes informed decision-making over hard blocking.
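The pattern reduces to a disposition-lookup tool the agent can call before connecting; a sketch follows. The Umbrella Investigate categorization endpoint and its -1/0/1 status codes follow Umbrella's public docs, but the token handling, domain regex, and agent wiring are illustrative assumptions, not the post's actual code.

```python
# Sketch: a LangChain tool mapping domains to Umbrella dispositions.
import os
import re
import requests
from langchain_core.tools import tool

STATUS = {-1: "malicious", 0: "uncategorized", 1: "benign"}

@tool
def domain_disposition(domain: str) -> str:
    """Return the security disposition of a domain via Umbrella Investigate."""
    resp = requests.get(
        f"https://investigate.api.umbrella.com/domains/categorization/{domain}",
        headers={"Authorization": f"Bearer {os.environ['UMBRELLA_TOKEN']}"},  # assumed env var
        timeout=10,
    )
    resp.raise_for_status()
    status = resp.json()[domain]["status"]
    return f"{domain}: {STATUS.get(status, 'unknown')}"

def extract_domains(text: str) -> list[str]:
    """Naive domain extraction; the PoC abstains when this comes back empty."""
    return re.findall(r"\b[a-z0-9-]+(?:\.[a-z0-9-]+)+\b", text.lower())
```

In the PoC, a tool like this is handed to a LangChain agent backed by an OpenAI model, which then reports each domain's disposition and declines to act when `extract_domains` finds nothing to check.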
Thu, November 6, 2025
AI-Powered Malware Emerges: Google Details New Threats
🛡️ Google Threat Intelligence Group (GTIG) reports that cybercriminals are actively integrating large language models into malware campaigns, moving beyond mere tooling to generate, obfuscate, and adapt malicious code. GTIG documents new families — including PROMPTSTEAL, PROMPTFLUX, FRUITSHELL, and PROMPTLOCK — that query commercial APIs to produce or rewrite payloads and evade detection. Researchers also note attackers use social‑engineering prompts to trick LLMs into revealing sensitive guidance and that underground marketplaces increasingly offer AI-enabled “malware-as-a-service,” lowering the bar for less skilled threat actors.
Thu, November 6, 2025
Google Warns: AI-Enabled Malware Actively Deployed
⚠️ Google’s Threat Intelligence Group has identified a new class of AI-enabled malware that leverages large language models at runtime to generate and obfuscate malicious code. Notable families include PromptFlux, which uses the Gemini API to rewrite its VBScript dropper for persistence and lateral spread, and PromptSteal, a Python data miner that queries Qwen2.5-Coder-32B-Instruct to create on-demand Windows commands. GTIG observed PromptSteal used by APT28 in Ukraine, while other examples such as PromptLock, FruitShell and QuietVault demonstrate varied AI-driven capabilities. Google warns this "just-in-time AI" approach could accelerate malware sophistication and democratize cybercrime.
Thu, November 6, 2025
Digital Health Needs Security at Its Core to Scale AI
🔒 The article argues that AI-driven digital health initiatives proved essential during COVID-19 but simultaneously exposed critical cybersecurity gaps that threaten pandemic preparedness. It warns that expansive data ecosystems, IoT devices and cloud pipelines multiply attack surfaces and that subtle AI-specific threats — including data poisoning, model inversion and adversarial inputs — can undermine public-health decisions. The author urges security by design, including zero-trust architectures, data provenance, encryption, model governance and cross-disciplinary drills so AI can deliver trustworthy, resilient public health systems.
Thu, November 6, 2025
Google: Cyber-Physical Attacks to Rise in Europe 2026
🚨 Google Cloud Security's Cybersecurity Forecast 2026 warns of a rise in cyber-physical attacks across EMEA targeting energy grids, transport and digital infrastructure. The report highlights increased state-sponsored espionage from Russia and China and anticipates these operations may form hybrid warfare combined with information operations to erode public trust. It also flags supply-chain compromises of managed service providers and software dependencies, and notes that cybercrime — including ransomware aimed at ERP systems — will remain a major disruptive threat to ICS/OT. Analysts further expect adversaries to increasingly leverage AI and multimodal deepfakes.
Thu, November 6, 2025
Google: LLMs Employed Operationally in Malware Attacks
🤖 Google’s Threat Intelligence Group (GTIG) reports attackers are using “just‑in‑time” AI—LLMs queried during execution—to generate and obfuscate malicious code. Researchers identified two families, PROMPTSTEAL and PROMPTFLUX, which query the Hugging Face and Gemini APIs respectively to craft commands, rewrite source code, and evade detection. GTIG also documents social‑engineering prompts that trick models into revealing red‑teaming or exploit details, and warns that the underground market for AI‑enabled crime is maturing. Google says it has disabled related accounts and applied protections.
Thu, November 6, 2025
Forrester's 2026 Predictions: CIOs and CISOs on Alert
🔍 Forrester warns that 2026 will demand precision, resilience and strategic foresight from CIOs and CISOs as volatility persists and the AI hype phase gives way to a results-driven era. Leaders will face rising pressure to deliver measurable, secure outcomes from AI initiatives while managing vendor promises, postponements and tighter financial scrutiny. Neocloud growth, talent bottlenecks and accelerating quantum risk will further complicate planning and force cross-functional governance.
Wed, November 5, 2025
Vertex AI Agent Builder: Build, Scale, Govern Agents
🚀 Vertex AI Agent Builder is Google Cloud's integrated platform to build, scale, and govern production AI agents. The update expands the Agent Development Kit (ADK) and Agent Engine with configurable context layers to reduce token usage, an adaptable plugins framework, and new language SDK support including Go. Production features include observability, evaluation tools, simplified deployment via the ADK CLI, and strengthened governance with native agent identities and Model Armor protections.
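For orientation, the expanded ADK keeps the quickstart shape shown below: plain Python functions become tools, and an `Agent` wires them to a model. The tool, instruction, and model name here are placeholders rather than anything from the announcement, and the new Go SDK and context-layer configuration are not shown.

```python
# Minimal ADK agent in the documented quickstart style (placeholder tool/model).
from google.adk.agents import Agent  # pip install google-adk

def lookup_ticket(ticket_id: str) -> dict:
    """Hypothetical tool: return the status of a support ticket."""
    return {"ticket_id": ticket_id, "status": "open"}

root_agent = Agent(
    name="support_agent",
    model="gemini-2.5-flash",  # placeholder model id
    instruction="Answer ticket-status questions using the lookup tool.",
    tools=[lookup_ticket],     # ADK wraps plain functions as callable tools
)
```

From here the ADK CLI mentioned above handles local runs and deployment, with Agent Engine providing the managed runtime.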
Wed, November 5, 2025
Google: PROMPTFLUX malware uses Gemini to self-write
🤖 Google researchers disclosed a VBScript threat named PROMPTFLUX that queries Gemini via a hard-coded API key to request obfuscated VBScript designed to evade static detection. A 'Thinking Robot' component logs AI responses to %TEMP% and writes updated scripts to the Windows Startup folder to maintain persistence. Samples include propagation attempts to removable drives and mapped network shares, and variants that rewrite their source on an hourly cadence. Google assesses the malware as experimental and currently lacking known exploit capabilities.
Wed, November 5, 2025
Google: New AI-Powered Malware Families Deployed
⚠️ Google's Threat Intelligence Group reports a surge in malware that integrates large language models to enable dynamic, mid-execution changes—what Google calls "just-in-time" self-modification. Notable examples include the experimental PromptFlux VBScript dropper and the PromptSteal data miner, plus operational threats like FruitShell and QuietVault. Google disabled abused Gemini accounts, removed assets, and is hardening model safeguards while collaborating with law enforcement.
Wed, November 5, 2025
GTIG Report: AI-Enabled Threats Transform Cybersecurity
🔒 The Google Threat Intelligence Group (GTIG) released a report documenting a clear shift: adversaries are moving beyond benign productivity uses of AI and are experimenting with AI-enabled operations. GTIG observed state-sponsored actors from North Korea, Iran and the People's Republic of China using AI for reconnaissance, tailored phishing lure creation and data exfiltration. Threats described include AI-powered, self-modifying malware, prompt-engineering to bypass safety guardrails, and underground markets selling advanced AI attack capabilities. Google says it has disrupted malicious assets and applied that intelligence to strengthen classifiers and its AI models.
Wed, November 5, 2025
GTIG report: Adversaries adopt AI for advanced attacks
⚠️ The Google Threat Intelligence Group (GTIG) reports that adversaries are evolving beyond simple productivity uses of AI toward operational misuse. Observed behaviors include state-sponsored actors from North Korea, Iran and the People's Republic of China using AI for reconnaissance, automated phishing lure creation and data exfiltration. The report documents AI-powered malware that can generate and modify malicious scripts in real time and attackers exploiting deceptive prompts to bypass model guardrails. Google says it has disabled assets linked to abuse and applied intelligence to improve classifiers and harden models against misuse.