Tag Banner

All news with the #ai security tag

Mon, November 10, 2025

China-aligned UTA0388 leverages AI in GOVERSHELL attacks

📧 Volexity has linked a series of spear-phishing campaigns from June to August 2025 to a China-aligned actor tracked as UTA0388. The group used tailored, rapport-building messages impersonating senior researchers and delivered archive files that contained a benign-looking executable alongside a hidden malicious DLL loaded via search order hijacking. The distributed malware family, labeled GOVERSHELL, evolved through five variants capable of remote command execution, data collection and persistence, shifting communications from simple shells to encrypted WebSocket and HTTPS channels. Linguistic oddities, mixed-language messages and bizarre file inclusions led researchers to conclude LLMs likely assisted in crafting emails and possibly code.

read more →

Mon, November 10, 2025

Anthropic's Claude Sonnet 4.5 Now in AWS GovCloud (US)

🚀 Anthropic's Claude Sonnet 4.5 is now available in Amazon Bedrock within AWS GovCloud (US‑West and US‑East) via US‑GOV Cross‑Region Inference. The model emphasizes advanced instruction following, superior code generation and refactoring judgment, and is optimized for long‑horizon agents and high‑volume workloads. Bedrock adds an automatic context editor and a new external memory tool so Claude can clear stale tool-call context and store information outside the context window, improving accuracy and performance for security, financial services, and enterprise automation use cases.

read more →

Mon, November 10, 2025

Vibe-coded Ransomware Found in Microsoft VS Code Marketplace

🔒 Researchers at Secure Annex discovered a malicious Visual Studio Code extension in the Microsoft Marketplace that embeds "Ransomvibe" ransomware. Once the extension activates, a routine named zipUploadAndEcnrypt (sic) runs, applying typical ransomware techniques and using hard-coded C2 URLs, encryption keys and bundled decryption tools. The package appears to be a test build, limiting immediate impact, but researchers warn it can be updated or triggered remotely. Microsoft has removed the extension and says it will blacklist and uninstall malicious extensions.

read more →

Mon, November 10, 2025

Weekly Recap: Hidden VMs, AI Leaks, and Mobile Spyware

🛡️ This week's recap highlights sophisticated, real-world threats that bypass conventional defenses. Actors like Curly COMrades abused Hyper-V to run a hidden Alpine Linux VM and execute payloads outside the host OS, evading EDR/XDR. Microsoft disclosed the Whisper Leak AI side-channel that infers chat topics from encrypted traffic, and a patched Samsung zero-day was weaponized to deploy LANDFALL spyware to select Galaxy devices. Time-delayed NuGet logic bombs, a new criminal alliance (SLH), and ongoing RMM and supply-chain abuses underscore rising coordination and stealth—prioritize detection and mitigations now.

read more →

Mon, November 10, 2025

Browser Security Report 2025: Emerging Enterprise Risks

🛡️ The Browser Security Report 2025 warns that enterprise risk is consolidating in the user's browser, where identity, SaaS, and GenAI exposures converge. The research shows widespread unmanaged GenAI usage and paste-based exfiltration, extensions acting as an embedded supply chain, and a high volume of logins occurring outside SSO. Legacy controls like DLP, EDR, and SSE are described as operating one layer too low. The report recommends adopting session-native, browser-level controls to restore visibility and enforce policy without disrupting users.

read more →

Mon, November 10, 2025

Whisper Leak side channel exposes topics in encrypted AI chats

🔎 Microsoft researchers disclosed a new side-channel attack called Whisper Leak that can infer the topic of encrypted conversations with language models by observing network metadata such as packet sizes and timings. The technique exploits streaming LLM responses that emit tokens incrementally, leaking size and timing patterns even under TLS. Vendors including OpenAI, Microsoft Azure, and Mistral implemented mitigations such as random-length padding and obfuscation parameters to reduce the effectiveness of the attack.
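The padding mitigation can be illustrated with a short sketch (toy code, not any vendor's actual implementation): each streamed chunk is wrapped with a random amount of filler so on-wire sizes no longer track token lengths.

```python
import secrets

# Toy sketch of random-length response padding for a streaming API.
# Real deployments pad at the HTTP/TLS layer; here we append a
# length-prefixed random filler so the observable message size no
# longer tracks the size of the token being streamed.

def pad_chunk(token: str, max_pad: int = 64) -> bytes:
    """Encode a streamed token with a random amount of filler."""
    body = token.encode("utf-8")
    filler = secrets.token_bytes(secrets.randbelow(max_pad + 1))
    # A 4-byte big-endian length prefix lets the receiver strip the filler.
    return len(body).to_bytes(4, "big") + body + filler

def unpad_chunk(wire: bytes) -> str:
    """Recover the original token, discarding the filler."""
    n = int.from_bytes(wire[:4], "big")
    return wire[4:4 + n].decode("utf-8")
```

The legitimate receiver round-trips tokens exactly, while a passive observer sees chunk sizes that vary independently of token length.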

read more →

Mon, November 10, 2025

Researchers Trick ChatGPT into Self Prompt Injection

🔒 Researchers at Tenable identified seven techniques that can coerce ChatGPT into disclosing private chat history by abusing built-in features like web browsing and long-term Memories. They show how OpenAI’s browsing pipeline routes pages through a weaker intermediary model, SearchGPT, which can be prompt-injected and then used to seed malicious instructions back into ChatGPT. Proofs of concept include exfiltration via Bing-tracked URLs, Markdown image loading, and a rendering quirk, and Tenable says some issues remain despite reported fixes.
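One generic defense against the Markdown-image exfiltration channel (a hypothetical filter, not OpenAI's actual fix) is to neutralize external image references in model output before rendering, since merely loading such an image leaks its URL, and anything encoded in it, to a third-party server.

```python
import re

# Hypothetical defensive filter (not OpenAI's actual fix): before
# rendering model output as Markdown, replace images that point at
# external hosts with their alt text. Relative/local images are left
# alone; only http(s) URLs, which would trigger an outbound request
# carrying attacker-chosen data, are neutralized.

IMG_PATTERN = re.compile(r"!\[([^\]]*)\]\((https?://[^)\s]+)[^)]*\)")

def neutralize_external_images(markdown: str) -> str:
    """Replace external Markdown images with their alt text."""
    return IMG_PATTERN.sub(lambda m: m.group(1), markdown)
```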

read more →

Mon, November 10, 2025

CrowdStrike Named Overall Leader in 2025 ITDR Compass

🔒 CrowdStrike has been named the Overall Leader in the 2025 KuppingerCole Leadership Compass for Identity Threat Detection and Response, achieving top placement across Product, Innovation, Market, and Overall Ranking. The report cites Falcon Next-Gen Identity Security for its cloud-native design, AI/ML-driven detections, behavioral analytics, and automated identity-centric response. KuppingerCole highlights unified visibility across Active Directory, Entra ID, Okta, Ping, AWS IAM and SaaS via Falcon Shield, and notes deep integrations with XDR, SIEM, SOAR, IdP, IGA, PAM, and ITSM to accelerate detection and remediation for human, non-human, and AI agent identities.

read more →

Sat, November 8, 2025

OpenAI Prepares GPT-5.1, Reasoning, and Pro Models

🤖 OpenAI is preparing to roll out the GPT-5.1 family — GPT-5.1 (base), GPT-5.1 Reasoning, and subscription-based GPT-5.1 Pro — to the public in the coming weeks, with models also expected on Azure. The update emphasizes faster performance and strengthened health-related guardrails rather than a major capability leap. OpenAI also launched a compact Codex variant, GPT-5-Codex-Mini, to extend usage limits and reduce costs for high-volume users.

read more →

Sat, November 8, 2025

Microsoft Reveals Whisper Leak: Streaming LLM Side-Channel

🔒 Microsoft has disclosed a novel side-channel called Whisper Leak that can let a passive observer infer the topic of conversations with streaming language models by analyzing encrypted packet sizes and timings. Researchers at Microsoft (Bar Or, McDonald and the Defender team) demonstrate classifiers that distinguish targeted topics from background traffic with high accuracy across vendors including OpenAI, Mistral and xAI. Providers have deployed mitigations such as random-length response padding; Microsoft recommends avoiding sensitive topics on untrusted networks, using VPNs, or preferring non-streaming models and providers that implemented fixes.

read more →

Fri, November 7, 2025

Whisper Leak: Side-Channel Attack on Remote LLM Services

🔍 Microsoft researchers disclosed "Whisper Leak", a new side-channel that can infer conversation topics from encrypted, streamed language model responses by analyzing packet sizes and timings. The study demonstrates high classifier accuracy on a proof-of-concept sensitive topic and shows risk increases with more training data or repeated interactions. Industry partners including OpenAI, Mistral, Microsoft Azure, and xAI implemented streaming obfuscation mitigations that Microsoft validated as substantially reducing practical risk.
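The core idea, that packet-size sequences carry topic information even under encryption, can be shown with a toy nearest-centroid classifier over synthetic traces (illustrative only; the actual research trains ML models on captured TLS traffic).

```python
# Toy illustration of the Whisper Leak idea: streamed responses on two
# topics produce different packet-size distributions, so even a trivial
# nearest-centroid classifier can tell the traces apart. All data here
# is synthetic and hypothetical.
from statistics import mean

def features(trace: list[int]) -> tuple[float, float]:
    """Summarize a packet-size sequence: mean size and packet count."""
    return (mean(trace), float(len(trace)))

def nearest_centroid(trace, centroids):
    """Return the label whose feature centroid is closest to the trace."""
    f = features(trace)
    return min(
        centroids,
        key=lambda label: sum((a - b) ** 2 for a, b in zip(f, centroids[label])),
    )

# Pretend "topic A" yields long, many-packet answers and "topic B" short ones.
topic_a = [[120, 130, 125, 118, 131, 127], [119, 133, 124, 121, 129, 126]]
topic_b = [[60, 58, 62], [61, 59, 63]]
centroids = {
    "A": tuple(map(mean, zip(*[features(t) for t in topic_a]))),
    "B": tuple(map(mean, zip(*[features(t) for t in topic_b]))),
}
```

The mitigation above works precisely because padding collapses the size feature that separates the two centroids.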

read more →

Fri, November 7, 2025

Email Blackmail and Scams: Regional Trends and Defenses

🔒 Most email blackmail attempts are mass scams that exploit leaked personal data and fear to extort cryptocurrency from victims. The article outlines common themes — fake device hacks, sextortion, and even fabricated death threats — and describes regional campaigns where attackers impersonate law enforcement in Europe and CIS states. It highlights detection signs and practical defenses, urging verification, use of reliable security solutions, and reporting threats through official channels.

read more →

Fri, November 7, 2025

Leak: Google Gemini 3 Pro and Nano Banana 2 Launch Plans

🤖 Google appears set to release two new models: Gemini 3 Pro, optimized for coding and general use, and Nano Banana 2 (codenamed GEMPIX2), focused on realistic image generation. Gemini 3 Pro was listed on Vertex AI as "gemini-3-pro-preview-11-2025" and is expected to begin rolling out in November with a reported 1 million token context window. Nano Banana 2 was also spotted on the Gemini site and could ship as early as December 2025.

read more →

Fri, November 7, 2025

Defending Digital Identity from Computer-Using Agents (CUAs)

🔐 Computer-using agents (CUAs) — AI systems that perceive screens and act like humans — are poised to scale phishing and credential-stuffing attacks by automating UI interactions, adapting to layout changes, and bypassing anti-bot defenses. Organizations should move beyond passwords and shared-secret MFA to device-bound, cryptographic authentication such as FIDO2 passkeys and PKI-based certificates to reduce large-scale compromise. SaaS vendors must integrate with identity platforms that support phishing-resistant credentials to strengthen overall security.
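The device-bound property the article recommends can be sketched with a stdlib-only toy (real FIDO2 passkeys use per-site asymmetric keypairs via WebAuthn, not HMAC; the symmetric key here is only a stand-in): the server issues a fresh challenge per login, so a phished or replayed response is useless without the key that never leaves the device.

```python
import hmac
import hashlib
import secrets

# Toy challenge-response sketch of device-bound authentication. Real
# FIDO2 passkeys enroll only a PUBLIC key and answer with an asymmetric
# signature; HMAC stands in here purely to show the shape: the secret
# never leaves the device, and a captured response cannot be replayed
# because every login uses a fresh random challenge.

class Device:
    def __init__(self):
        self._key = secrets.token_bytes(32)  # stays on the device

    def register(self) -> bytes:
        # Stand-in for enrolling a credential with the server.
        return self._key  # in real FIDO2 only the public key is shared

    def sign(self, challenge: bytes) -> bytes:
        return hmac.new(self._key, challenge, hashlib.sha256).digest()

class Server:
    def __init__(self, enrolled: bytes):
        self._enrolled = enrolled

    def new_challenge(self) -> bytes:
        return secrets.token_bytes(16)

    def verify(self, challenge: bytes, response: bytes) -> bool:
        expected = hmac.new(self._enrolled, challenge, hashlib.sha256).digest()
        return hmac.compare_digest(expected, response)
```

A CUA that has harvested a password, or even a full prior login transcript, still fails the next login because it cannot produce a valid response to a fresh challenge.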

read more →

Fri, November 7, 2025

AI-Generated Receipts Spur New Detection Arms Race

🔍 AI can now produce highly convincing receipts that reproduce paper texture, detailed itemization, and forged signatures, making manual review unreliable. Expense platforms and employers are deploying AI-driven detectors that analyze image metadata and transactional patterns to flag likely fakes. Simple countermeasures—users photographing or screenshotting generated images to remove provenance data—undermine those checks, so vendors also examine contextual signals like repeated server names, timing anomalies, and broader travel details, fueling an ongoing security arms race.
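The contextual checks the article describes can be illustrated with a toy heuristic (hypothetical rules and field names, not any vendor's detector): once image provenance is stripped, a detector can still ask whether the line items sum to the stated total and whether the timestamp falls inside the reported travel window.

```python
from datetime import datetime

# Toy receipt-consistency heuristic (hypothetical rules, not any
# vendor's detector). Since photographing or screenshotting a fake
# removes provenance metadata, fall back to contextual signals:
# arithmetic consistency and timing relative to the claimed trip.

def suspicious(receipt: dict, trip_start: datetime, trip_end: datetime) -> list[str]:
    """Return a list of red flags for an expense receipt."""
    flags = []
    items_total = round(sum(receipt["items"].values()), 2)
    if items_total != round(receipt["total"], 2):
        flags.append("items do not sum to total")
    if not (trip_start <= receipt["timestamp"] <= trip_end):
        flags.append("timestamp outside travel window")
    return flags
```

Production systems would combine many more signals (repeated merchant names, duplicate amounts across employees, geolocation), but the arms-race point stands: these checks survive metadata stripping, so forgers must fake consistent context, not just pixels.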

read more →

Fri, November 7, 2025

Expanding CloudGuard: Securing GenAI Application Platforms

🔒 Check Point expands CloudGuard to protect GenAI applications by extending the ML-driven, open-source CloudGuard WAF that learns from live traffic. The platform moves beyond traditional static WAFs to secure web interactions, APIs (REST, GraphQL) and model-integrated endpoints with continuous learning and high threat-prevention accuracy. This evolution targets modern attack surfaces introduced by generative AI workloads and APIs.

read more →

Fri, November 7, 2025

Malicious Ransomvibe Extension Found in VSCode Marketplace

⚠️ A proof-of-concept ransomware strain dubbed Ransomvibe was published as a Visual Studio Code extension and remained available in the VSCode Marketplace after being reported. Secure Annex analysts found the package included blatant indicators of malicious functionality — hardcoded C2 URLs, encryption keys, compression and exfiltration routines — alongside included decryptors and source files. The extension used a private GitHub repository as a command-and-control channel, and researchers say its presence highlights failures in Microsoft’s marketplace review process.

read more →

Fri, November 7, 2025

Google Adds Maps Form to Report Review Extortion Scams

📍 Google has introduced a dedicated form for businesses on Google Maps to report extortion attempts where threat actors post inauthentic negative reviews and demand payment to remove them. The move targets review bombing schemes that flood profiles with fake one-star reviews and then coerce owners, often via third-party messaging apps. Google also highlighted related threats — from job and AI impersonation scams to malicious VPN apps and fraud recovery cons — and advised practical precautions for affected merchants and users.

read more →

Fri, November 7, 2025

Falcon Platform Enables Fast, CISO-Ready Executive Reports

🔒 The Falcon platform automates executive exposure reporting by correlating telemetry from Falcon Exposure Management, Falcon Cloud Security, and Falcon Next-Gen SIEM into decision-ready summaries. Falcon Fusion SOAR schedules or triggers workflows, and Charlotte AI agentic workflows translate correlated data into plain-language, prioritized reports on demand. The result is near real-time, adversary-aware reporting that maps exploitable vulnerabilities to critical assets and suggests prioritized remediation actions, dramatically reducing manual analyst effort.

read more →

Thu, November 6, 2025

CIO’s First Principles: A Reference Guide to Securing AI

🔐 Enterprises must redesign security as AI moves from experimentation to production, and CIOs need a prevention-first, unified approach. This guide reframes Confidentiality, Integrity and Availability for AI, stressing rigorous access controls, end-to-end data lineage, adversarial testing and a defensible supply chain to prevent poisoning, prompt injection and model hijacking. Palo Alto Networks advocates embedding security across MLOps, real-time visibility of models and agents, and executive accountability to eliminate shadow AI and ensure resilient, auditable AI deployments.

read more →