All news with #ai security tag
Wed, November 12, 2025
Bringing Connected AI Work Experiences Across Devices
🚀 Google outlines its plan to embed Generative AI across enterprise platforms and endpoints, integrating Gemini into Chrome Enterprise, Android, Pixel phones and Chromebook Plus devices. The post highlights the general availability of Cameyo by Google to virtualize legacy and modern apps in the cloud and the launch of Gemini in Chrome with enterprise-grade controls. It also previews Android XR and Pixel features powered by Gemini Nano, while expanding data loss prevention and a one-click SecOps integration to help IT secure AI-driven workflows.
Wed, November 12, 2025
Emerging Threats Center in Google Security Operations
🛡️ The Emerging Threats Center in Google Security Operations uses the Gemini detection‑engineering agent to turn frontline intelligence from Mandiant, VirusTotal, and Google into actionable detections. It generates high‑fidelity synthetic events, evaluates existing rule coverage, and drafts candidate detection rules for analyst review. The capability surfaces campaign‑based IOC and detection matches across 12 months of telemetry to help teams rapidly determine exposure and validate their defensive posture.
Wed, November 12, 2025
Tenable Reveals New Prompt-Injection Risks in ChatGPT
🔐 Researchers at Tenable disclosed seven techniques that can cause ChatGPT to leak private chat history by abusing built-in features such as web search, conversation memory and Markdown rendering. The attacks are primarily indirect prompt injections that exploit a secondary summarization model (SearchGPT), Bing tracking redirects, and a code-block rendering bug. Tenable reported the issues to OpenAI, and while some fixes were implemented several techniques still appear to work.
Wed, November 12, 2025
Fortinet Earns Gartner Customers’ Choice for SSE — 3rd Year
🏆 Fortinet has been named a Gartner Peer Insights Customers’ Choice for Security Service Edge (SSE) for the third consecutive year and is the only cybersecurity vendor to receive this recognition in the SSE market. Based on 195 verified end-user reviews as of August 2025, Fortinet achieved a 4.9/5 overall rating, 90% five-star reviews and 100% willingness to recommend. FortiSASE is highlighted for delivering unified, AI-powered cloud security backed by 170+ POPs, a single unified agent and deployment flexibility that aims to reduce operational overhead. Fortinet frames the recognition as validation of customer trust and its focus on simplifying secure hybrid work.
Wed, November 12, 2025
Extending Zero Trust to Autonomous AI Agents in Enterprises
🔐 As enterprises deploy AI assistants and autonomous agents, existing security frameworks must evolve to treat these agents as first-class identities rather than afterthoughts. The piece advocates applying Zero Trust principles—identity-first access, least-privilege, dynamic contextual enforcement, and continuous monitoring—to agentic identities to prevent misuse and reduce attack surface. Practical controls include scoped, short-lived tokens, tiered trust models, strict access boundaries, and assigning clear human ownership to each agent.
Wed, November 12, 2025
Secure AI by Design: A Policy Roadmap for Organizations
🛡️ In just a few years, AI has shifted from futuristic innovation to core business infrastructure, yet security practices have not kept pace. Palo Alto Networks presents a Secure AI by Design Policy Roadmap that defines the AI attack surface and prescribes actionable measures across external tools, agents, applications, and infrastructure. The Roadmap aligns with recent U.S. policy moves — including the June 2025 Executive Order and the July 2025 White House AI Action Plan — and calls for purpose-built defenses rather than retrofitting legacy controls.
Wed, November 12, 2025
Moving Beyond Frameworks: Real-Time Risk Assessments
🔍 Organizations are shifting from annual, checklist-driven compliance to targeted, frequent risk assessments that address emerging threats in real time. The article contrasts gap analyses — which measure adherence to frameworks like NIST or ISO — with tailored risk reviews focused on specific threat paths (for example, access control, ransomware, AI or cloud misconfigurations). It recommends small, repeatable questionnaires, a simple scoring model and executive-ready outputs to prioritize remediation and integrate risk into governance.
Wed, November 12, 2025
UK introduces Cyber Security and Resilience Bill to Parliament
🔒 The UK government today introduced the Cyber Security and Resilience Bill, proposing a major overhaul of the NIS Regulations to align with updated EU standards. The draft would regulate managed service providers, expand scope to data centres and smart-appliance electricity flows, and mandate supply-chain risk management and NCSC Cyber Assessment Framework-based controls. Incident reporting windows would tighten to an initial 24 hours and full report within 72 hours, while the ICO and regulators gain stronger enforcement and fee powers.
Wed, November 12, 2025
Google Announces Private AI Compute for Cloud Privacy
🔒 Google on Tuesday introduced Private AI Compute, a cloud privacy capability that aims to deliver on-device-level assurances while harnessing the scale of Gemini models. The service uses Trillium TPUs and Titanium Intelligence Enclaves (TIE) and relies on an AMD-based Trusted Execution Environment to encrypt and isolate memory on trusted nodes. Workloads are mutually attested, cryptographically validated, and ephemeral so inputs and inferences are discarded after each session, with Google stating data remains private to the user — 'not even Google.' An external assessment by NCC Group flagged a low-risk timing side channel in the IP-blinding relay and three attestation implementation issues that Google is mitigating.
Tue, November 11, 2025
The AI Fix #76 — AI self-awareness and the death of comedy
🧠 In episode 76 of The AI Fix, hosts Graham Cluley and Mark Stockley navigate a string of alarming and absurd AI stories from November 2025. They discuss US judges who blamed AI for invented case law, a Chinese humanoid that dramatically shed its outer skin onstage, Toyota’s unsettling walking chair, and Google’s plan to put specialised AI chips in orbit. The conversation explores reliability, public trust and whether prompting an LLM to "notice its noticing" changes how conscious it sounds.
Tue, November 11, 2025
CometJacking: Prompt-Injection Risk in AI Browsers
🔒 Researchers disclosed a prompt-injection technique dubbed CometJacking that abuses URL parameters to deliver hidden instructions to Perplexity’s Comet AI browser. By embedding malicious directives in the 'collection' parameter an attacker can cause the agent to consult connected services and memory instead of searching the web. LayerX demonstrated exfiltration of Gmail messages and Google Calendar invites by encoding data in base64 and sending it to an external endpoint. According to the report, Comet followed the malicious prompt and bypassed Perplexity’s safeguards, illustrating broader limits of current LLM-based assistants.
Tue, November 11, 2025
Global Cyber Attacks Surge in October 2025: Ransomware Rise
📈 Check Point Research found a continued uptick in global cyber assaults in October 2025, with organizations experiencing an average of 1,938 attacks per week. That represents a 2% increase from September and a 5% rise year‑over‑year. The report attributes the growth to an explosive expansion of ransomware operations and emerging risks tied to generative AI, while the education sector remained the most heavily targeted. Security teams are urged to strengthen detection, patching and access controls to counter increasingly automated and AI‑assisted threats.
Tue, November 11, 2025
Agent Sandbox: Kubernetes Enhancements for AI Agents
🛡️ Agent Sandbox is a new Kubernetes primitive designed to run AI agents with strong, kernel-level isolation. Built on gVisor with optional Kata Containers and developed in the Kubernetes community as a CNCF project, it reduces risks from agent-executed code. On GKE, managed gVisor, container-optimized compute and pre-warmed sandbox pools deliver sub-second startup latency and up to 90% cold-start improvement. A Python SDK and a simple API abstract YAML so AI engineers can manage sandbox lifecycles without deep infrastructure expertise; Agent Sandbox is open source and deployable on GKE today.
Tue, November 11, 2025
CISO Guide: Defending Against AI Supply-Chain Attacks
⚠️ AI-enabled supply chain attacks have surged in scale and sophistication, with malicious package uploads to open-source repositories rising 156% year-over-year and real incidents — from PyPI trojans to compromises of Hugging Face, GitHub and npm — already impacting production environments. These threats are polymorphic, context-aware, semantically camouflaged and temporally evasive, rendering signature-based tools increasingly ineffective. CISOs should prioritize AI-aware detection, behavioral provenance, runtime containment and strict contributor verification immediately to reduce exposure and satisfy emerging regulatory obligations such as the EU AI Act.
Tue, November 11, 2025
AI startups expose API keys on GitHub, risking models
🔐 New research by cloud security firm Wiz found verified secret leaks in 65% of the Forbes AI 50, with API keys and access tokens exposed on GitHub. Some credentials were tied to vendors such as Hugging Face, Weights & Biases, and LangChain, potentially granting access to private models, training data, and internal details. Nearly half of Wiz’s disclosure attempts failed or received no response. The findings highlight urgent gaps in secret management and DevSecOps practices.
Tue, November 11, 2025
Beyond Silos: DDI and AI Redefining Cyber Resilience
🔐 DDI logs — DNS, DHCP and IP address management — are the authoritative record of network behavior, and when combined with AI become a high-fidelity source for threat detection and automated response. Integrated DDI-AI correlates disparate events into actionable incidents, enabling SOAR-driven quarantines and DNS blocking at machine speed. This fusion also powers continuous, AI-driven breach and attack simulation to validate defenses and harden models.
Tue, November 11, 2025
Shadow AI: The Emerging Security Blind Spot for Companies
🔦 Shadow AI — the unsanctioned use of generative and agentic tools by employees — is creating a sizeable security blind spot for IT teams. Unsanctioned chatbots, browser extensions and autonomous agents can expose sensitive data, introduce vulnerabilities, or execute unauthorized actions. Organizations should inventory use, define realistic acceptable-use policies, vet vendors and combine technical controls with user education to reduce data leakage and compliance risk.
Mon, November 10, 2025
Gemini Code Assist adds persistent memory for reviews
🧠 Gemini Code Assist on GitHub now supports persistent memory that learns from merged pull request interactions to capture a team's coding standards, style, and best practices. The memory is stored securely in a Google-managed project specific to each installation and is applied selectively to relevant reviews. It infers reusable rules from review threads and uses them both to shape initial analysis and to filter draft suggestions so the agent adapts over time and reduces repetitive feedback.
Mon, November 10, 2025
Microsoft Secure Future Initiative — November 2025 Report
🔐 Microsoft’s November 2025 progress report on the Secure Future Initiative outlines governance expansion, engineering milestones, and product hardening across Azure, Microsoft 365, Windows, Surface, and Microsoft Security. The update highlights measurable gains — a nine-point rise in security sentiment, 95% employee completion of AI-attack training, 99.6% phishing-resistant MFA enforcement, and 99.5% live-secrets detection and remediation. It also introduces AI-first security capabilities, new detections, and 10 actionable SFI patterns to help customers improve posture.
Mon, November 10, 2025
65% of Top Private AI Firms Exposed Secrets on GitHub
🔒 A Wiz analysis of 50 private companies from the Forbes AI 50 found that 65% had exposed verified secrets such as API keys, tokens and credentials across GitHub and related repositories. Researchers employed a Depth, Perimeter and Coverage approach to examine commit histories, deleted forks, gists and contributors' personal repos, revealing secrets standard scanners often miss. Affected firms are collectively valued at over $400bn.