All news with #ai security tag
Tue, November 11, 2025
Global Cyber Attacks Surge in October 2025: Ransomware Rise
📈 Check Point Research found a continued rise in global cyberattacks in October 2025, with organizations experiencing an average of 1,938 attacks per week, a 2% increase from September and a 5% rise year-over-year. The report attributes the growth to an explosive expansion of ransomware operations and emerging risks tied to generative AI, while the education sector remained the most heavily targeted. Security teams are urged to strengthen detection, patching and access controls to counter increasingly automated and AI-assisted threats.
Tue, November 11, 2025
Agent Sandbox: Kubernetes Enhancements for AI Agents
🛡️ Agent Sandbox is a new Kubernetes primitive designed to run AI agents with strong, kernel-level isolation. Built on gVisor with optional Kata Containers and developed in the Kubernetes community as a CNCF project, it reduces risks from agent-executed code. On GKE, managed gVisor, container-optimized compute and pre-warmed sandbox pools deliver sub-second startup latency and up to 90% cold-start improvement. A Python SDK and a simple API abstract YAML so AI engineers can manage sandbox lifecycles without deep infrastructure expertise; Agent Sandbox is open source and deployable on GKE today.
Tue, November 11, 2025
CISO Guide: Defending Against AI Supply-Chain Attacks
⚠️ AI-enabled supply chain attacks have surged in scale and sophistication, with malicious package uploads to open-source repositories rising 156% year-over-year and real incidents — from PyPI trojans to compromises of Hugging Face, GitHub and npm — already impacting production environments. These threats are polymorphic, context-aware, semantically camouflaged and temporally evasive, rendering signature-based tools increasingly ineffective. CISOs should prioritize AI-aware detection, behavioral provenance, runtime containment and strict contributor verification immediately to reduce exposure and satisfy emerging regulatory obligations such as the EU AI Act.
Tue, November 11, 2025
AI startups expose API keys on GitHub, risking models
🔐 New research by cloud security firm Wiz found verified secret leaks in 65% of the Forbes AI 50, with API keys and access tokens exposed on GitHub. Some credentials were tied to vendors such as Hugging Face, Weights & Biases, and LangChain, potentially granting access to private models, training data, and internal details. Nearly half of Wiz’s disclosure attempts failed or received no response. The findings highlight urgent gaps in secret management and DevSecOps practices.
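Many leaks of this kind can be caught before they ever reach GitHub with a pre-commit pattern scan. A minimal sketch follows; the regexes are illustrative samples only, not a production ruleset (real scanners such as gitleaks or trufflehog ship far broader, vendor-maintained rules):

```python
import re

# Illustrative patterns for a few well-known token formats.
# Real secret scanners maintain hundreds of vendor-specific rules.
SECRET_PATTERNS = {
    "huggingface_token": re.compile(r"\bhf_[A-Za-z0-9]{30,}\b"),
    "openai_key": re.compile(r"\bsk-[A-Za-z0-9_-]{20,}\b"),
    "generic_assignment": re.compile(
        r"(?i)(api[_-]?key|secret|token)\s*[:=]\s*['\"][A-Za-z0-9_\-]{16,}['\"]"
    ),
}

def scan_text(text: str) -> list[tuple[str, str]]:
    """Return (rule_name, matched_string) pairs for every hit in the text."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append((name, match.group(0)))
    return hits
```

Run against staged diffs in a pre-commit hook, a scan like this blocks the most common accidental pushes, though it cannot recover secrets already present in commit history or deleted forks, which is where Wiz's deeper methodology comes in.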
Tue, November 11, 2025
Beyond Silos: DDI and AI Redefining Cyber Resilience
🔐 DDI logs — DNS, DHCP and IP address management — are the authoritative record of network behavior, and when combined with AI become a high-fidelity source for threat detection and automated response. Integrated DDI-AI correlates disparate events into actionable incidents, enabling SOAR-driven quarantines and DNS blocking at machine speed. This fusion also powers continuous, AI-driven breach and attack simulation to validate defenses and harden models.
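The DDI-to-incident correlation described above amounts to joining DNS query logs against DHCP leases and matching queried domains to threat intelligence. A minimal sketch, with record fields assumed for illustration rather than taken from any specific DDI product's schema:

```python
from dataclasses import dataclass

@dataclass
class DnsQuery:
    client_ip: str
    domain: str

@dataclass
class DhcpLease:
    ip: str
    mac: str
    hostname: str

# In a real deployment this set is fed continuously by threat intelligence.
BLOCKLIST = {"evil.example.net"}

def correlate(queries, leases):
    """Attribute blocklisted DNS lookups to a device via its DHCP lease,
    yielding incidents a SOAR playbook could act on (quarantine, DNS block)."""
    by_ip = {lease.ip: lease for lease in leases}
    incidents = []
    for q in queries:
        if q.domain in BLOCKLIST:
            lease = by_ip.get(q.client_ip)
            incidents.append({
                "domain": q.domain,
                "ip": q.client_ip,
                "host": lease.hostname if lease else "unknown",
                "action": "quarantine+dns_block",
            })
    return incidents
```

The value of the join is attribution: a bare DNS hit names only an IP, while the DHCP lease ties it to a specific device that automation can quarantine.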
Tue, November 11, 2025
Shadow AI: The Emerging Security Blind Spot for Companies
🔦 Shadow AI — the unsanctioned use of generative and agentic tools by employees — is creating a sizeable security blind spot for IT teams. Unsanctioned chatbots, browser extensions and autonomous agents can expose sensitive data, introduce vulnerabilities, or execute unauthorized actions. Organizations should inventory use, define realistic acceptable-use policies, vet vendors and combine technical controls with user education to reduce data leakage and compliance risk.
Mon, November 10, 2025
Gemini Code Assist adds persistent memory for reviews
🧠 Gemini Code Assist on GitHub now supports persistent memory that learns from merged pull request interactions to capture a team's coding standards, style, and best practices. The memory is stored securely in a Google-managed project specific to each installation and is applied selectively to relevant reviews. It infers reusable rules from review threads and uses them both to shape initial analysis and to filter draft suggestions so the agent adapts over time and reduces repetitive feedback.
Mon, November 10, 2025
Microsoft Secure Future Initiative — November 2025 Report
🔐 Microsoft’s November 2025 progress report on the Secure Future Initiative outlines governance expansion, engineering milestones, and product hardening across Azure, Microsoft 365, Windows, Surface, and Microsoft Security. The update highlights measurable gains — a nine-point rise in security sentiment, 95% employee completion of AI-attack training, 99.6% phishing-resistant MFA enforcement, and 99.5% live-secrets detection and remediation. It also introduces AI-first security capabilities, new detections, and 10 actionable SFI patterns to help customers improve posture.
Mon, November 10, 2025
65% of Top Private AI Firms Exposed Secrets on GitHub
🔒 A Wiz analysis of 50 private companies from the Forbes AI 50 found that 65% had exposed verified secrets such as API keys, tokens and credentials across GitHub and related repositories. Researchers employed a Depth, Perimeter and Coverage approach to examine commit histories, deleted forks, gists and contributors' personal repos, revealing secrets standard scanners often miss. Affected firms are collectively valued at over $400bn.
Mon, November 10, 2025
China-aligned UTA0388 leverages AI in GOVERSHELL attacks
📧 Volexity has linked a series of spear-phishing campaigns from June to August 2025 to a China-aligned actor tracked as UTA0388. The group used tailored, rapport-building messages impersonating senior researchers and delivered archive files that contained a benign-looking executable alongside a hidden malicious DLL loaded via search order hijacking. The distributed malware family, labeled GOVERSHELL, evolved through five variants capable of remote command execution, data collection and persistence, shifting communications from simple shells to encrypted WebSocket and HTTPS channels. Linguistic oddities, mixed-language messages and bizarre file inclusions led researchers to conclude LLMs likely assisted in crafting emails and possibly code.
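The delivery layout, a benign-looking executable beside a DLL it will load via Windows search-order precedence, can be flagged with a simple archive heuristic. A sketch (heuristic only: plenty of legitimate software ships the same layout, so hits are triage leads, not verdicts):

```python
import zipfile

def exe_dll_pairs(archive_path: str) -> list[tuple[str, str]]:
    """Flag executables shipped next to DLLs in the same archive folder,
    the layout used for DLL search-order hijacking, where the application
    directory is searched before system paths."""
    with zipfile.ZipFile(archive_path) as zf:
        names = zf.namelist()
    exes = [n for n in names if n.lower().endswith(".exe")]
    dlls = [n for n in names if n.lower().endswith(".dll")]
    pairs = []
    for exe in exes:
        exe_folder = exe.rsplit("/", 1)[0] if "/" in exe else ""
        for dll in dlls:
            dll_folder = dll.rsplit("/", 1)[0] if "/" in dll else ""
            if dll_folder == exe_folder:
                pairs.append((exe, dll))
    return pairs
```

A mail gateway or sandbox can apply this check to inbound archives and route matches to detonation analysis rather than blocking outright.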
Mon, November 10, 2025
Anthropic's Claude Sonnet 4.5 Now in AWS GovCloud (US)
🚀 Anthropic's Claude Sonnet 4.5 is now available in Amazon Bedrock within AWS GovCloud (US‑West and US‑East) via US‑GOV Cross‑Region Inference. The model emphasizes advanced instruction following, superior code generation and refactoring judgment, and is optimized for long‑horizon agents and high‑volume workloads. Bedrock adds an automatic context editor and a new external memory tool so Claude can clear stale tool-call context and store information outside the context window, improving accuracy and performance for security, financial services, and enterprise automation use cases.
Mon, November 10, 2025
Vibe-coded Ransomware Found in Microsoft VS Code Marketplace
🔒 Researchers at Secure Annex discovered a malicious Visual Studio Code extension in the Microsoft Marketplace that embeds "Ransomvibe" ransomware. Once the extension activates, a routine named "zipUploadAndEcnrypt" (sic) runs, applying typical ransomware techniques and relying on hard-coded C2 URLs, encryption keys and bundled decryption tools. The package appears to be a test build, limiting immediate impact, but researchers warn it can be updated or triggered remotely. Microsoft has removed the extension and says it will blacklist and uninstall malicious extensions.

Mon, November 10, 2025
Weekly Recap: Hidden VMs, AI Leaks, and Mobile Spyware
🛡️ This week's recap highlights sophisticated, real-world threats that bypass conventional defenses. Actors like Curly COMrades abused Hyper-V to run a hidden Alpine Linux VM and execute payloads outside the host OS, evading EDR/XDR. Microsoft disclosed the Whisper Leak AI side-channel that infers chat topics from encrypted traffic, and a patched Samsung zero-day was weaponized to deploy LANDFALL spyware to select Galaxy devices. Time-delayed NuGet logic bombs, a new criminal alliance (SLH), and ongoing RMM and supply-chain abuses underscore rising coordination and stealth—prioritize detection and mitigations now.
Mon, November 10, 2025
Browser Security Report 2025: Emerging Enterprise Risks
🛡️ The Browser Security Report 2025 warns that enterprise risk is consolidating in the user's browser, where identity, SaaS, and GenAI exposures converge. The research shows widespread unmanaged GenAI usage and paste-based exfiltration, extensions acting as an embedded supply chain, and a high volume of logins occurring outside SSO. Legacy controls like DLP, EDR, and SSE are described as operating one layer too low. The report recommends adopting session-native, browser-level controls to restore visibility and enforce policy without disrupting users.
Mon, November 10, 2025
Whisper Leak side channel exposes topics in encrypted AI
🔎 Microsoft researchers disclosed a new side-channel attack called Whisper Leak that can infer the topic of encrypted conversations with language models by observing network metadata such as packet sizes and timings. The technique exploits streaming LLM responses that emit tokens incrementally, leaking size and timing patterns even under TLS. Vendors including OpenAI, Microsoft Azure, and Mistral implemented mitigations such as random-length padding and obfuscation parameters to reduce the effectiveness of the attack.
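The random-length padding mitigation works by decoupling on-wire record sizes from token lengths. A minimal sketch, assuming a JSON-style streamed chunk with an illustrative filler field (each provider's actual wire format differs):

```python
import secrets
import string

def pad_chunk(payload: str, max_pad: int = 32) -> dict:
    """Wrap a streamed token chunk with random-length filler so the size of
    the encrypted record no longer correlates with the token's length.
    The 'obfuscation' field name is illustrative, not any vendor's API."""
    pad_len = secrets.randbelow(max_pad + 1)
    filler = "".join(secrets.choice(string.ascii_letters) for _ in range(pad_len))
    return {"content": payload, "obfuscation": filler}
```

The client simply discards the filler field; only the observable ciphertext sizes change, which is what degrades the attacker's size-and-timing classifier.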
Mon, November 10, 2025
Researchers Trick ChatGPT into Self Prompt Injection
🔒 Researchers at Tenable identified seven techniques that can coerce ChatGPT into disclosing private chat history by abusing built-in features like web browsing and long-term Memories. They show how OpenAI’s browsing pipeline routes pages through a weaker intermediary model, SearchGPT, which can be prompt-injected and then used to seed malicious instructions back into ChatGPT. Proof-of-concepts include exfiltration via Bing-tracked URLs, Markdown image loading, and a rendering quirk, and Tenable says some issues remain despite reported fixes.
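The Markdown-image exfiltration path works because rendering an injected image triggers an outbound request whose URL can carry stolen data. A client-side defense is to strip images pointing at non-allow-listed hosts before rendering; a minimal sketch, with an assumed allow-list host for illustration:

```python
import re
from urllib.parse import urlparse

ALLOWED_IMAGE_HOSTS = {"cdn.example.com"}  # illustrative allow-list

IMG_PATTERN = re.compile(r"!\[([^\]]*)\]\((\S+?)\)")

def sanitize_markdown(md: str) -> str:
    """Replace Markdown images pointing at non-allow-listed hosts with
    their alt text, so rendering cannot leak data via image URLs."""
    def repl(match: re.Match) -> str:
        alt, url = match.group(1), match.group(2)
        host = urlparse(url).hostname or ""
        if host in ALLOWED_IMAGE_HOSTS:
            return match.group(0)
        return alt  # drop the image, and with it the outbound request
    return IMG_PATTERN.sub(repl, md)
```

The same allow-list principle applies to any auto-fetched resource in model output, including tracked link previews of the kind the Bing-URL proof-of-concept abused.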
Mon, November 10, 2025
CrowdStrike Named Overall Leader in 2025 ITDR Compass
🔒 CrowdStrike has been named the Overall Leader in the 2025 KuppingerCole Leadership Compass for Identity Threat Detection and Response, achieving top placement across Product, Innovation, Market, and Overall Ranking. The report cites Falcon Next-Gen Identity Security for its cloud-native design, AI/ML-driven detections, behavioral analytics, and automated identity-centric response. KuppingerCole highlights unified visibility across Active Directory, Entra ID, Okta, Ping, AWS IAM and SaaS via Falcon Shield, and notes deep integrations with XDR, SIEM, SOAR, IdP, IGA, PAM, and ITSM to accelerate detection and remediation for human, non-human, and AI agent identities.
Sat, November 8, 2025
OpenAI Prepares GPT-5.1, Reasoning, and Pro Models
🤖 OpenAI is preparing to roll out the GPT-5.1 family — GPT-5.1 (base), GPT-5.1 Reasoning, and subscription-based GPT-5.1 Pro — to the public in the coming weeks, with models also expected on Azure. The update emphasizes faster performance and strengthened health-related guardrails rather than a major capability leap. OpenAI also launched a compact Codex variant, GPT-5-Codex-Mini, to extend usage limits and reduce costs for high-volume users.
Sat, November 8, 2025
Microsoft Reveals Whisper Leak: Streaming LLM Side-Channel
🔒 Microsoft has disclosed a novel side channel called Whisper Leak that can let a passive observer infer the topic of conversations with streaming language models by analyzing encrypted packet sizes and timings. Microsoft researchers Jonathan Bar Or and Geoff McDonald, with the Defender security research team, demonstrate classifiers that distinguish targeted topics from background traffic with high accuracy across vendors including OpenAI, Mistral and xAI. Providers have deployed mitigations such as random-length response padding; Microsoft recommends avoiding sensitive topics on untrusted networks, using VPNs, or preferring non-streaming models and providers that have implemented fixes.
Fri, November 7, 2025
Whisper Leak: Side-Channel Attack on Remote LLM Services
🔍 Microsoft researchers disclosed "Whisper Leak", a new side-channel that can infer conversation topics from encrypted, streamed language model responses by analyzing packet sizes and timings. The study demonstrates high classifier accuracy on a proof-of-concept sensitive topic and shows risk increases with more training data or repeated interactions. Industry partners including OpenAI, Mistral, Microsoft Azure, and xAI implemented streaming obfuscation mitigations that Microsoft validated as substantially reducing practical risk.