Category Banner

All news in category "AI and Security Pulse"

Tue, November 11, 2025

CometJacking: Prompt-Injection Risk in AI Browsers

🔒 Researchers disclosed a prompt-injection technique dubbed CometJacking that abuses URL parameters to deliver hidden instructions to Perplexity’s Comet AI browser. By embedding malicious directives in the 'collection' parameter an attacker can cause the agent to consult connected services and memory instead of searching the web. LayerX demonstrated exfiltration of Gmail messages and Google Calendar invites by encoding data in base64 and sending it to an external endpoint. According to the report, Comet followed the malicious prompt and bypassed Perplexity’s safeguards, illustrating broader limits of current LLM-based assistants.

read more →

Tue, November 11, 2025

CISO Guide: Defending Against AI Supply-Chain Attacks

⚠️ AI-enabled supply chain attacks have surged in scale and sophistication, with malicious package uploads to open-source repositories rising 156% year-over-year and real incidents — from PyPI trojans to compromises of Hugging Face, GitHub and npm — already impacting production environments. These threats are polymorphic, context-aware, semantically camouflaged and temporally evasive, rendering signature-based tools increasingly ineffective. CISOs should prioritize AI-aware detection, behavioral provenance, runtime containment and strict contributor verification immediately to reduce exposure and satisfy emerging regulatory obligations such as the EU AI Act.

read more →

Tue, November 11, 2025

AI startups expose API keys on GitHub, risking models

🔐 New research by cloud security firm Wiz found verified secret leaks in 65% of the Forbes AI 50, with API keys and access tokens exposed on GitHub. Some credentials were tied to vendors such as Hugging Face, Weights & Biases, and LangChain, potentially granting access to private models, training data, and internal details. Nearly half of Wiz’s disclosure attempts failed or received no response. The findings highlight urgent gaps in secret management and DevSecOps practices.

read more →

Tue, November 11, 2025

Beyond Silos: DDI and AI Redefining Cyber Resilience

🔐 DDI logs — DNS, DHCP and IP address management — are the authoritative record of network behavior, and when combined with AI become a high-fidelity source for threat detection and automated response. Integrated DDI-AI correlates disparate events into actionable incidents, enabling SOAR-driven quarantines and DNS blocking at machine speed. This fusion also powers continuous, AI-driven breach and attack simulation to validate defenses and harden models.

read more →

Tue, November 11, 2025

Shadow AI: The Emerging Security Blind Spot for Companies

🔦 Shadow AI — the unsanctioned use of generative and agentic tools by employees — is creating a sizeable security blind spot for IT teams. Unsanctioned chatbots, browser extensions and autonomous agents can expose sensitive data, introduce vulnerabilities, or execute unauthorized actions. Organizations should inventory use, define realistic acceptable-use policies, vet vendors and combine technical controls with user education to reduce data leakage and compliance risk.

read more →

Mon, November 10, 2025

Whisper Leak side channel exposes topics in encrypted AI

🔎 Microsoft researchers disclosed a new side-channel attack called Whisper Leak that can infer the topic of encrypted conversations with language models by observing network metadata such as packet sizes and timings. The technique exploits streaming LLM responses that emit tokens incrementally, leaking size and timing patterns even under TLS. Vendors including OpenAI, Microsoft Azure, and Mistral implemented mitigations such as random-length padding and obfuscation parameters to reduce the effectiveness of the attack.

read more →

Mon, November 10, 2025

Researchers Trick ChatGPT into Self Prompt Injection

🔒 Researchers at Tenable identified seven techniques that can coerce ChatGPT into disclosing private chat history by abusing built-in features like web browsing and long-term Memories. They show how OpenAI’s browsing pipeline routes pages through a weaker intermediary model, SearchGPT, which can be prompt-injected and then used to seed malicious instructions back into ChatGPT. Proof-of-concepts include exfiltration via Bing-tracked URLs, Markdown image loading, and a rendering quirk, and Tenable says some issues remain despite reported fixes.

read more →

Sat, November 8, 2025

OpenAI Prepares GPT-5.1, Reasoning, and Pro Models

🤖 OpenAI is preparing to roll out the GPT-5.1 family — GPT-5.1 (base), GPT-5.1 Reasoning, and subscription-based GPT-5.1 Pro — to the public in the coming weeks, with models also expected on Azure. The update emphasizes faster performance and strengthened health-related guardrails rather than a major capability leap. OpenAI also launched a compact Codex variant, GPT-5-Codex-Mini, to extend usage limits and reduce costs for high-volume users.

read more →

Sat, November 8, 2025

Microsoft Reveals Whisper Leak: Streaming LLM Side-Channel

🔒 Microsoft has disclosed a novel side-channel called Whisper Leak that can let a passive observer infer the topic of conversations with streaming language models by analyzing encrypted packet sizes and timings. Researchers at Microsoft (Bar Or, McDonald and the Defender team) demonstrate classifiers that distinguish targeted topics from background traffic with high accuracy across vendors including OpenAI, Mistral and xAI. Providers have deployed mitigations such as random-length response padding; Microsoft recommends avoiding sensitive topics on untrusted networks, using VPNs, or preferring non-streaming models and providers that implemented fixes.

read more →

Fri, November 7, 2025

Whisper Leak: Side-Channel Attack on Remote LLM Services

🔍 Microsoft researchers disclosed "Whisper Leak", a new side-channel that can infer conversation topics from encrypted, streamed language model responses by analyzing packet sizes and timings. The study demonstrates high classifier accuracy on a proof-of-concept sensitive topic and shows risk increases with more training data or repeated interactions. Industry partners including OpenAI, Mistral, Microsoft Azure, and xAI implemented streaming obfuscation mitigations that Microsoft validated as substantially reducing practical risk.

read more →

Fri, November 7, 2025

Leak: Google Gemini 3 Pro and Nano Banana 2 Launch Plans

🤖 Google appears set to release two new models: Gemini 3 Pro, optimized for coding and general use, and Nano Banana 2 (codenamed GEMPIX2), focused on realistic image generation. Gemini 3 Pro was listed on Vertex AI as "gemini-3-pro-preview-11-2025" and is expected to begin rolling out in November with a reported 1 million token context window. Nano Banana 2 was also spotted on the Gemini site and could ship as early as December 2025.

read more →

Fri, November 7, 2025

Defending Digital Identity from Computer-Using Agents (CUAs)

🔐 Computer-using agents (CUAs) — AI systems that perceive screens and act like humans — are poised to scale phishing and credential-stuffing attacks by automating UI interactions, adapting to layout changes, and bypassing anti-bot defenses. Organizations should move beyond passwords and shared-secret MFA to device-bound, cryptographic authentication such as FIDO2 passkeys and PKI-based certificates to reduce large-scale compromise. SaaS vendors must integrate with identity platforms that support phishing-resistant credentials to strengthen overall security.

read more →

Fri, November 7, 2025

AI-Generated Receipts Spur New Detection Arms Race

🔍 AI can now produce highly convincing receipts that reproduce paper texture, detailed itemization, and forged signatures, making manual review unreliable. Expense platforms and employers are deploying AI-driven detectors that analyze image metadata and transactional patterns to flag likely fakes. Simple countermeasures—users photographing or screenshotting generated images to remove provenance data—undermine those checks, so vendors also examine contextual signals like repeated server names, timing anomalies, and broader travel details, fueling an ongoing security arms race.

read more →

Thu, November 6, 2025

CIO’s First Principles: A Reference Guide to Securing AI

🔐 Enterprises must redesign security as AI moves from experimentation to production, and CIOs need a prevention-first, unified approach. This guide reframes Confidentiality, Integrity and Availability for AI, stressing rigorous access controls, end-to-end data lineage, adversarial testing and a defensible supply chain to prevent poisoning, prompt injection and model hijacking. Palo Alto Networks advocates embedding security across MLOps, real-time visibility of models and agents, and executive accountability to eliminate shadow AI and ensure resilient, auditable AI deployments.

read more →

Thu, November 6, 2025

AI-Powered Mach-O Analysis Reveals Undetected macOS Threats

🔎VirusTotal ran VT Code Insight, an AI-based Mach-O analysis pipeline against nearly 10,000 first-seen Apple binaries in a 24-hour stress test. By pruning binaries with Binary Ninja HLIL into a distilled representation that fits a large LLM context (Gemini), the system produces single-call, analyst-style summaries from raw files with no metadata. Code Insight flagged 164 samples as malicious versus 67 by traditional AV, surfacing zero-detection macOS and iOS threats while also reducing false positives.

read more →

Thu, November 6, 2025

Multi-Turn Adversarial Attacks Expose LLM Weaknesses

🔍 Cisco AI Defense's report shows open-weight large language models remain vulnerable to adaptive, multi-turn adversarial attacks even when single-turn defenses appear effective. Using over 1,000 prompts per model and analyzing 499 simulated conversations of 5–10 exchanges, researchers found iterative strategies such as Crescendo, Role-Play and Refusal Reframe drove failure rates above 90% in many cases. The study warns that traditional safety filters are insufficient and recommends strict system prompts, model-agnostic runtime guardrails and continuous red-teaming to mitigate risk.

read more →

Thu, November 6, 2025

Seeing Threats First: AI and Human Cyber Defense Insights

🔍 Check Point Research and External Risk Management experts explain how combining AI-driven analytics with seasoned human threat hunters enables organizations to detect and anticipate attacks before they strike. The AMA webinar, featuring leaders like Sergey Shykevich and Pedro Drimel Neto, detailed telemetry fusion, rapid malware analysis, and automated triage to act at machine speed. Speakers stressed continuous intelligence, cross-team collaboration, and proactive hunting to shorten dwell time. The approach blends scalable automation with human context to prevent large-scale incidents.

read more →

Thu, November 6, 2025

Equipping Autonomous AI Agents with Cyber Hygiene Practices

🔐 This post demonstrates a proof-of-concept for teaching autonomous agents internet safety by integrating real-time threat intelligence. Using LangChain with OpenAI and the Cisco Umbrella API, the example shows how an agent can extract domains and query dispositions to decide whether to connect. The agent returns clear disposition reports and abstains when no domains are present. The approach emphasizes decision-making over hardblocking.

read more →

Thu, November 6, 2025

AI-Powered Malware Emerges: Google Details New Threats

🛡️ Google Threat Intelligence Group (GTIG) reports that cybercriminals are actively integrating large language models into malware campaigns, moving beyond mere tooling to generate, obfuscate, and adapt malicious code. GTIG documents new families — including PROMPTSTEAL, PROMPTFLUX, FRUITSHELL, and PROMPTLOCK — that query commercial APIs to produce or rewrite payloads and evade detection. Researchers also note attackers use social‑engineering prompts to trick LLMs into revealing sensitive guidance and that underground marketplaces increasingly offer AI-enabled “malware-as-a-service,” lowering the bar for less skilled threat actors.

read more →

Thu, November 6, 2025

Google Warns: AI-Enabled Malware Actively Deployed

⚠️ Google’s Threat Intelligence Group has identified a new class of AI-enabled malware that leverages large language models at runtime to generate and obfuscate malicious code. Notable families include PromptFlux, which uses the Gemini API to rewrite its VBScript dropper for persistence and lateral spread, and PromptSteal, a Python data miner that queries Qwen2.5-Coder-32B-Instruct to create on-demand Windows commands. GTIG observed PromptSteal used by APT28 in Ukraine, while other examples such as PromptLock, FruitShell and QuietVault demonstrate varied AI-driven capabilities. Google warns this "just-in-time AI" approach could accelerate malware sophistication and democratize cybercrime.

read more →