All news with #ai data leakage tag
Tue, August 26, 2025
Securing and Governing Autonomous AI Agents in Business
🔐 Microsoft outlines practical guidance for securing and governing the emerging class of autonomous agents. Igor Sakhnov explains how agents—now moving from experimentation into deployment—introduce risks such as task drift, Cross Prompt Injection Attacks (XPIA), hallucinations, and data exfiltration. Microsoft recommends starting with a unified agent inventory and layered controls across identity, access, data, posture, threat, network, and compliance. It introduces Entra Agent ID and an agent registry concept to enable auditable, just-in-time identities and improved observability.
Tue, August 26, 2025
Cloudflare CASB API Scanning for ChatGPT, Claude, Gemini
🔒 Cloudflare One users can now connect OpenAI's ChatGPT, Anthropic's Claude, and Google's Gemini to Cloudflare's API CASB to scan GenAI tenants for misconfigurations, DLP matches, data exposure, and compliance risks without installing endpoint agents. The API CASB provides out-of-band posture and DLP analysis, while Cloudflare Gateway delivers inline prompt controls and Shadow AI identification. Integrations are available in the dashboard or through your account manager.
Tue, August 26, 2025
Cloudflare Application Confidence Scores for AI Safety
🔒 Cloudflare introduces Application Confidence Scores to help enterprises assess the safety and data protection posture of third-party SaaS and Gen AI applications. Scores, delivered as part of Cloudflare’s AI Security Posture Management, use a transparent, public rubric and automated crawlers combined with human review. Vendors can submit evidence for rescoring, and scores will be applied per account tier to reflect differing controls across plans.
Tue, August 26, 2025
SASE Best Practices for Securing Generative AI Deployments
🔒 Cloudflare outlines practical steps to secure generative AI adoption using its SASE platform, combining SWG, CASB, Access, DLP, MCP controls and AI infrastructure. The post introduces new AI Security Posture Management (AI‑SPM) features — shadow AI reporting, provider confidence scoring, prompt protection, and API CASB integrations — to improve visibility, risk management, and data protection without blocking innovation. These controls are integrated into a single dashboard to simplify enforcement and protect internal and third‑party LLMs.
Mon, August 25, 2025
Unmasking Shadow AI: Visibility and Control with Cloudflare
🛡️ This post outlines the rise of Shadow AI—unsanctioned use of public AI services that can leak sensitive data—and presents how Cloudflare One surfaces and governs that activity. The Shadow IT Report classifies AI apps such as ChatGPT, GitHub Copilot, and Leonardo.ai, showing which users, locations, and bandwidth are involved. Under the hood, Gateway collects HTTP traffic and TimescaleDB with materialized views enables long-range analytics and fast queries. Administrators can proxy traffic, enable TLS inspection, set approval statuses, enforce DLP, block or isolate risky AI, and audit activity with Log Explorer.
Mon, August 25, 2025
AI Prompt Protection: Contextual Control for GenAI Use
🔒 Cloudflare introduces AI prompt protection inside its Data Loss Prevention (DLP) product on Cloudflare One, designed to detect and secure data entered into web-based GenAI tools like Google Gemini, ChatGPT, Claude, and Perplexity. The capability captures both prompts and AI responses, classifies content and intent, and enforces identity-aware guardrails to enable safe, productive AI use without blanket blocking. Encrypted logging with customer-provided keys provides auditable records while preserving confidentiality.
Sun, August 24, 2025
Cloudflare AI Week 2025: Securing AI, Protecting Content
🔒 Cloudflare this week outlines a multi-pronged plan to help organizations build secure, production-grade AI experiences while protecting original content and infrastructure. The company will roll out controls to detect Shadow AI, enforce approved AI toolchains, and harden models against poisoning or misuse. It is expanding Crawl Control for content owners and enhancing the AI Gateway with caching, observability, and framework integrations to reduce risk and operational cost.
Tue, August 19, 2025
GenAI-Enabled Phishing: Risks from AI Web Services
🚨 Unit 42 analyzes how rapid adoption of web-based generative AI is creating new phishing attack surfaces. Attackers are leveraging AI-powered website builders, writing assistants and chatbots to generate convincing phishing pages, clone brands and automate large-scale campaigns. Unit 42 observed real-world credential-stealing pages and misuse of trial accounts lacking guardrails. Customers are advised to use Advanced URL Filtering and Advanced DNS Security and report incidents to Unit 42 Incident Response.
Mon, August 18, 2025
EchoLink: Rise of Zero-Click AI Exploits in M365 Enterprise
⚠️ EchoLink is a newly identified zero-click vulnerability in Microsoft 365 Copilot that enables silent exfiltration of enterprise data without any user interaction. This class of attack bypasses traditional click- or download-based defenses and moves laterally at machine speed, making detection and containment difficult. Organizations relying solely on native tools or fragmented point solutions should urgently reassess their exposure and incident response readiness.
Mon, August 11, 2025
Preventing ML Data Leakage Through Strategic Splitting
🔐 CrowdStrike explains how inadvertent 'leakage' — when dependent or correlated observations are included in training — can inflate machine learning performance and undermine threat detection. The article shows that blocked or grouped data splits and blocked cross-validation produce more realistic performance estimates than random splits. It also highlights trade-offs, such as reduced predictor-space coverage and potential underfitting, and recommends careful partitioning and continuous evaluation to improve cybersecurity ML outcomes.
Tue, July 29, 2025
Defending Against Indirect Prompt Injection in LLMs
🔒 Microsoft outlines a layered defense-in-depth strategy to protect systems using LLMs from indirect prompt injection attacks. The approach pairs preventative controls such as hardened system prompts and Spotlighting (delimiting, datamarking, encoding) to isolate untrusted inputs with detection via Microsoft Prompt Shields, surfaced through Azure AI Content Safety and integrated with Defender for Cloud. Impact mitigation uses deterministic controls — fine-grained permissions, Microsoft Purview sensitivity labels, DLP policies, explicit user consent workflows, and blocking known exfiltration techniques — while ongoing research (TaskTracker, LLMail-Inject, FIDES) advances new design patterns and assurances.