Category Banner

All news in category "AI and Security Pulse"

Sat, November 8, 2025

Microsoft Reveals Whisper Leak: Streaming LLM Side-Channel

🔒 Microsoft has disclosed a novel side-channel called Whisper Leak that can let a passive observer infer the topic of conversations with streaming language models by analyzing encrypted packet sizes and timings. Researchers at Microsoft (Bar Or, McDonald and the Defender team) demonstrate classifiers that distinguish targeted topics from background traffic with high accuracy across vendors including OpenAI, Mistral and xAI. Providers have deployed mitigations such as random-length response padding; Microsoft recommends avoiding sensitive topics on untrusted networks, using VPNs, or preferring non-streaming models and providers that implemented fixes.

read more →

Fri, November 7, 2025

Whisper Leak: Side-Channel Attack on Remote LLM Services

🔍 Microsoft researchers disclosed "Whisper Leak", a new side-channel that can infer conversation topics from encrypted, streamed language model responses by analyzing packet sizes and timings. The study demonstrates high classifier accuracy on a proof-of-concept sensitive topic and shows risk increases with more training data or repeated interactions. Industry partners including OpenAI, Mistral, Microsoft Azure, and xAI implemented streaming obfuscation mitigations that Microsoft validated as substantially reducing practical risk.

read more →

Fri, November 7, 2025

Leak: Google Gemini 3 Pro and Nano Banana 2 Launch Plans

🤖 Google appears set to release two new models: Gemini 3 Pro, optimized for coding and general use, and Nano Banana 2 (codenamed GEMPIX2), focused on realistic image generation. Gemini 3 Pro was listed on Vertex AI as "gemini-3-pro-preview-11-2025" and is expected to begin rolling out in November with a reported 1 million token context window. Nano Banana 2 was also spotted on the Gemini site and could ship as early as December 2025.

read more →

Fri, November 7, 2025

Defending Digital Identity from Computer-Using Agents (CUAs)

🔐 Computer-using agents (CUAs) — AI systems that perceive screens and act like humans — are poised to scale phishing and credential-stuffing attacks by automating UI interactions, adapting to layout changes, and bypassing anti-bot defenses. Organizations should move beyond passwords and shared-secret MFA to device-bound, cryptographic authentication such as FIDO2 passkeys and PKI-based certificates to reduce large-scale compromise. SaaS vendors must integrate with identity platforms that support phishing-resistant credentials to strengthen overall security.

read more →

Fri, November 7, 2025

AI-Generated Receipts Spur New Detection Arms Race

🔍 AI can now produce highly convincing receipts that reproduce paper texture, detailed itemization, and forged signatures, making manual review unreliable. Expense platforms and employers are deploying AI-driven detectors that analyze image metadata and transactional patterns to flag likely fakes. Simple countermeasures—users photographing or screenshotting generated images to remove provenance data—undermine those checks, so vendors also examine contextual signals like repeated server names, timing anomalies, and broader travel details, fueling an ongoing security arms race.

read more →

Thu, November 6, 2025

CIO’s First Principles: A Reference Guide to Securing AI

🔐 Enterprises must redesign security as AI moves from experimentation to production, and CIOs need a prevention-first, unified approach. This guide reframes Confidentiality, Integrity and Availability for AI, stressing rigorous access controls, end-to-end data lineage, adversarial testing and a defensible supply chain to prevent poisoning, prompt injection and model hijacking. Palo Alto Networks advocates embedding security across MLOps, real-time visibility of models and agents, and executive accountability to eliminate shadow AI and ensure resilient, auditable AI deployments.

read more →

Thu, November 6, 2025

AI-Powered Mach-O Analysis Reveals Undetected macOS Threats

🔎VirusTotal ran VT Code Insight, an AI-based Mach-O analysis pipeline against nearly 10,000 first-seen Apple binaries in a 24-hour stress test. By pruning binaries with Binary Ninja HLIL into a distilled representation that fits a large LLM context (Gemini), the system produces single-call, analyst-style summaries from raw files with no metadata. Code Insight flagged 164 samples as malicious versus 67 by traditional AV, surfacing zero-detection macOS and iOS threats while also reducing false positives.

read more →

Thu, November 6, 2025

Multi-Turn Adversarial Attacks Expose LLM Weaknesses

🔍 Cisco AI Defense's report shows open-weight large language models remain vulnerable to adaptive, multi-turn adversarial attacks even when single-turn defenses appear effective. Using over 1,000 prompts per model and analyzing 499 simulated conversations of 5–10 exchanges, researchers found iterative strategies such as Crescendo, Role-Play and Refusal Reframe drove failure rates above 90% in many cases. The study warns that traditional safety filters are insufficient and recommends strict system prompts, model-agnostic runtime guardrails and continuous red-teaming to mitigate risk.

read more →

Thu, November 6, 2025

Seeing Threats First: AI and Human Cyber Defense Insights

🔍 Check Point Research and External Risk Management experts explain how combining AI-driven analytics with seasoned human threat hunters enables organizations to detect and anticipate attacks before they strike. The AMA webinar, featuring leaders like Sergey Shykevich and Pedro Drimel Neto, detailed telemetry fusion, rapid malware analysis, and automated triage to act at machine speed. Speakers stressed continuous intelligence, cross-team collaboration, and proactive hunting to shorten dwell time. The approach blends scalable automation with human context to prevent large-scale incidents.

read more →

Thu, November 6, 2025

Equipping Autonomous AI Agents with Cyber Hygiene Practices

🔐 This post demonstrates a proof-of-concept for teaching autonomous agents internet safety by integrating real-time threat intelligence. Using LangChain with OpenAI and the Cisco Umbrella API, the example shows how an agent can extract domains and query dispositions to decide whether to connect. The agent returns clear disposition reports and abstains when no domains are present. The approach emphasizes decision-making over hardblocking.

read more →

Thu, November 6, 2025

AI-Powered Malware Emerges: Google Details New Threats

🛡️ Google Threat Intelligence Group (GTIG) reports that cybercriminals are actively integrating large language models into malware campaigns, moving beyond mere tooling to generate, obfuscate, and adapt malicious code. GTIG documents new families — including PROMPTSTEAL, PROMPTFLUX, FRUITSHELL, and PROMPTLOCK — that query commercial APIs to produce or rewrite payloads and evade detection. Researchers also note attackers use social‑engineering prompts to trick LLMs into revealing sensitive guidance and that underground marketplaces increasingly offer AI-enabled “malware-as-a-service,” lowering the bar for less skilled threat actors.

read more →

Thu, November 6, 2025

Google Warns: AI-Enabled Malware Actively Deployed

⚠️ Google’s Threat Intelligence Group has identified a new class of AI-enabled malware that leverages large language models at runtime to generate and obfuscate malicious code. Notable families include PromptFlux, which uses the Gemini API to rewrite its VBScript dropper for persistence and lateral spread, and PromptSteal, a Python data miner that queries Qwen2.5-Coder-32B-Instruct to create on-demand Windows commands. GTIG observed PromptSteal used by APT28 in Ukraine, while other examples such as PromptLock, FruitShell and QuietVault demonstrate varied AI-driven capabilities. Google warns this "just-in-time AI" approach could accelerate malware sophistication and democratize cybercrime.

read more →

Thu, November 6, 2025

Digital Health Needs Security at Its Core to Scale AI

🔒 The article argues that AI-driven digital health initiatives proved essential during COVID-19 but simultaneously exposed critical cybersecurity gaps that threaten pandemic preparedness. It warns that expansive data ecosystems, IoT devices and cloud pipelines multiply attack surfaces and that subtle AI-specific threats — including data poisoning, model inversion and adversarial inputs — can undermine public-health decisions. The author urges security by design, including zero-trust architectures, data provenance, encryption, model governance and cross-disciplinary drills so AI can deliver trustworthy, resilient public health systems.

read more →

Thu, November 6, 2025

Google: LLMs Employed Operationally in Malware Attacks

🤖 Google’s Threat Intelligence Group (GTIG) reports attackers are using “just‑in‑time” AI—LLMs queried during execution—to generate and obfuscate malicious code. Researchers identified two families, PROMPTSTEAL and PROMPTFLUX, which query Hugging Face and Gemini APIs to craft commands, rewrite source code, and evade detection. GTIG also documents social‑engineering prompts that trick models into revealing red‑teaming or exploit details, and warns the underground market for AI‑enabled crime is maturing. Google says it has disabled related accounts and applied protections.

read more →

Wed, November 5, 2025

Google: PROMPTFLUX malware uses Gemini to self-write

🤖 Google researchers disclosed a VBScript threat named PROMPTFLUX that queries Gemini via a hard-coded API key to request obfuscated VBScript designed to evade static detection. A 'Thinking Robot' component logs AI responses to %TEMP% and writes updated scripts to the Windows Startup folder to maintain persistence. Samples include propagation attempts to removable drives and mapped network shares, and variants that rewrite their source on an hourly cadence. Google assesses the malware as experimental and currently lacking known exploit capabilities.

read more →

Wed, November 5, 2025

Google: New AI-Powered Malware Families Deployed

⚠️Google's Threat Intelligence Group reports a surge in malware that integrates large language models to enable dynamic, mid-execution changes—what Google calls "just-in-time" self-modification. Notable examples include the experimental PromptFlux VBScript dropper and the PromptSteal data miner, plus operational threats like FruitShell and QuietVault. Google disabled abused Gemini accounts, removed assets, and is hardening model safeguards while collaborating with law enforcement.

read more →

Wed, November 5, 2025

GTIG Report: AI-Enabled Threats Transform Cybersecurity

🔒 The Google Threat Intelligence Group (GTIG) released a report documenting a clear shift: adversaries are moving beyond benign productivity uses of AI and are experimenting with AI-enabled operations. GTIG observed state-sponsored actors from North Korea, Iran and the People's Republic of China using AI for reconnaissance, tailored phishing lure creation and data exfiltration. Threats described include AI-powered, self-modifying malware, prompt-engineering to bypass safety guardrails, and underground markets selling advanced AI attack capabilities. Google says it has disrupted malicious assets and applied that intelligence to strengthen classifiers and its AI models.

read more →

Wed, November 5, 2025

Lack of AI Training Becoming a Major Security Risk

⚠️ A majority of German employees already use AI at work, with 62% reporting daily use of generative tools such as ChatGPT. Adoption has been largely grassroots—31% began using AI independently and nearly half learned via videos or informal study. Although 85% deem training on AI and data protection essential, 25% report no security training and 47% received only informal guidance, leaving clear operational and data risks.

read more →

Wed, November 5, 2025

Researchers Find ChatGPT Vulnerabilities in GPT-4o/5

🛡️ Cybersecurity researchers disclosed seven vulnerabilities in OpenAI's GPT-4o and GPT-5 models that enable indirect prompt injection attacks to exfiltrate user data from chat histories and stored memories. Tenable researchers Moshe Bernstein and Liv Matan describe zero-click search exploits, one-click query execution, conversation and memory poisoning, a markdown rendering bug, and a safety bypass using allow-listed Bing links. OpenAI has mitigated some issues, but experts warn that connecting LLMs to external tools broadens the attack surface and that robust safeguards and URL-sanitization remain essential.

read more →

Wed, November 5, 2025

Cloud CISO: Threat Actors' Growing Use of AI Tools

⚠️Google's Threat Intelligence team reports a shift from experimentation to operational use of AI by threat actors, including AI-enabled malware and prompt-based command generation. GTIG highlighted PROMPTSTEAL, linked to APT28 (FROZENLAKE), which queries a Hugging Face LLM to generate scripts for reconnaissance, document collection, and exfiltration, while adopting greater obfuscation and altered C2 methods. Google disabled related assets, strengthened model classifiers and safeguards with DeepMind, and urges defenders to update threat models, monitor anomalous scripting and C2, and incorporate threat intelligence into model- and classifier-level protections.

read more →