All news with #ai data leakage tag
Wed, September 3, 2025
Managing Shadow AI: Three Practical Corporate Policies
🔒 The MIT report "The GenAI Divide: State of AI in Business 2025" exposes a pervasive shadow AI economy—90% of employees use personal AI while only 40% of organizations buy LLM subscriptions. This article translates those findings into three realistic policy paths: a complete ban, unrestricted use with hygiene controls, and a balanced, role-based model. Each option is paired with concrete technical controls (DLP, NGFW, CASB, EDR), organizational steps, and enforcement measures to help security teams align risk management with real-world employee behaviour.
Wed, September 3, 2025
How the Generative AI Boom Opens Privacy and Cyber Risks
🔒The rapid adoption of generative AI is prompting significant privacy and security concerns as vendors revise terms to use user data for model training. High-profile pushback — exemplified by WeTransfer’s reversal — revealed how unclear terms and live experimentation can expose corporate and personal information. Employees using consumer tools like ChatGPT for work tasks risk leaking secrets, and platforms such as Slack are explicitly reserving rights to leverage customer data. CISOs must balance strategic AI adoption with heightened compliance, governance and operational risk.
Tue, September 2, 2025
Secure AI at Machine Speed: Full-Stack Enterprise Defense
🔒 CrowdStrike explains how widespread AI adoption expands the enterprise attack surface, exposing models, data pipelines, APIs, and autonomous agents to new adversary techniques. The post argues that legacy controls and fragmented tooling are insufficient and advocates for real-time, full‑stack protections. The Falcon platform is presented as a unified solution offering telemetry, lifecycle protection, GenAI-aware data loss prevention, and agent governance to detect, prevent, and remediate AI-related threats.
Fri, August 29, 2025
Cloudflare data: AI bot crawling surges, referrals fall
🤖 Cloudflare's mid‑2025 dataset shows AI training crawlers now account for nearly 80% of AI bot activity, driving a surge in crawling while sending far fewer human referrals. Google referrals to news sites fell sharply in March–April 2025 as AI Overviews and Gemini upgrades reduced click-throughs. OpenAI’s GPTBot and Anthropic’s ClaudeBot increased crawling share while ByteDance’s Bytespider declined. The resulting crawl-to-refer imbalance — tens of thousands of crawls per human click for some platforms — threatens publisher revenue.
Wed, August 27, 2025
Five Essential Rules for Safe AI Adoption in Enterprises
🛡️ AI adoption is accelerating in enterprises, but many deployments lack the visibility, controls, and ongoing safeguards needed to manage risk. The article presents five practical rules: continuous AI discovery, contextual risk assessment, strong data protection, access controls aligned with zero trust, and continuous oversight. Together these measures help CISOs enable innovation while reducing exposure to breaches, data loss, and compliance failures.
Tue, August 26, 2025
Securing and Governing Autonomous AI Agents in Business
🔐 Microsoft outlines practical guidance for securing and governing the emerging class of autonomous agents. Igor Sakhnov explains how agents—now moving from experimentation into deployment—introduce risks such as task drift, Cross Prompt Injection Attacks (XPIA), hallucinations, and data exfiltration. Microsoft recommends starting with a unified agent inventory and layered controls across identity, access, data, posture, threat, network, and compliance. It introduces Entra Agent ID and an agent registry concept to enable auditable, just-in-time identities and improved observability.
Tue, August 26, 2025
Cloudflare CASB API Scanning for ChatGPT, Claude, Gemini
🔒 Cloudflare One users can now connect OpenAI's ChatGPT, Anthropic's Claude, and Google's Gemini to Cloudflare's API CASB to scan GenAI tenants for misconfigurations, DLP matches, data exposure, and compliance risks without installing endpoint agents. The API CASB provides out-of-band posture and DLP analysis, while Cloudflare Gateway delivers inline prompt controls and Shadow AI identification. Integrations are available in the dashboard or through your account manager.
Tue, August 26, 2025
Cloudflare Application Confidence Scores for AI Safety
🔒 Cloudflare introduces Application Confidence Scores to help enterprises assess the safety and data protection posture of third-party SaaS and Gen AI applications. Scores, delivered as part of Cloudflare’s AI Security Posture Management, use a transparent, public rubric and automated crawlers combined with human review. Vendors can submit evidence for rescoring, and scores will be applied per account tier to reflect differing controls across plans.
Tue, August 26, 2025
SASE Best Practices for Securing Generative AI Deployments
🔒 Cloudflare outlines practical steps to secure generative AI adoption using its SASE platform, combining SWG, CASB, Access, DLP, MCP controls and AI infrastructure. The post introduces new AI Security Posture Management (AI‑SPM) features — shadow AI reporting, provider confidence scoring, prompt protection, and API CASB integrations — to improve visibility, risk management, and data protection without blocking innovation. These controls are integrated into a single dashboard to simplify enforcement and protect internal and third‑party LLMs.
Mon, August 25, 2025
Unmasking Shadow AI: Visibility and Control with Cloudflare
🛡️ This post outlines the rise of Shadow AI—unsanctioned use of public AI services that can leak sensitive data—and presents how Cloudflare One surfaces and governs that activity. The Shadow IT Report classifies AI apps such as ChatGPT, GitHub Copilot, and Leonardo.ai, showing which users, locations, and bandwidth are involved. Under the hood, Gateway collects HTTP traffic and TimescaleDB with materialized views enables long-range analytics and fast queries. Administrators can proxy traffic, enable TLS inspection, set approval statuses, enforce DLP, block or isolate risky AI, and audit activity with Log Explorer.
Mon, August 25, 2025
AI Prompt Protection: Contextual Control for GenAI Use
🔒 Cloudflare introduces AI prompt protection inside its Data Loss Prevention (DLP) product on Cloudflare One, designed to detect and secure data entered into web-based GenAI tools like Google Gemini, ChatGPT, Claude, and Perplexity. The capability captures both prompts and AI responses, classifies content and intent, and enforces identity-aware guardrails to enable safe, productive AI use without blanket blocking. Encrypted logging with customer-provided keys provides auditable records while preserving confidentiality.
Sun, August 24, 2025
Cloudflare AI Week 2025: Securing AI, Protecting Content
🔒 Cloudflare this week outlines a multi-pronged plan to help organizations build secure, production-grade AI experiences while protecting original content and infrastructure. The company will roll out controls to detect Shadow AI, enforce approved AI toolchains, and harden models against poisoning or misuse. It is expanding Crawl Control for content owners and enhancing the AI Gateway with caching, observability, and framework integrations to reduce risk and operational cost.
Tue, August 19, 2025
GenAI-Enabled Phishing: Risks from AI Web Services
🚨 Unit 42 analyzes how rapid adoption of web-based generative AI is creating new phishing attack surfaces. Attackers are leveraging AI-powered website builders, writing assistants and chatbots to generate convincing phishing pages, clone brands and automate large-scale campaigns. Unit 42 observed real-world credential-stealing pages and misuse of trial accounts lacking guardrails. Customers are advised to use Advanced URL Filtering and Advanced DNS Security and report incidents to Unit 42 Incident Response.
Mon, August 18, 2025
EchoLink: Rise of Zero-Click AI Exploits in M365 Enterprise
⚠️ EchoLink is a newly identified zero-click vulnerability in Microsoft 365 Copilot that enables silent exfiltration of enterprise data without any user interaction. This class of attack bypasses traditional click- or download-based defenses and moves laterally at machine speed, making detection and containment difficult. Organizations relying solely on native tools or fragmented point solutions should urgently reassess their exposure and incident response readiness.
Mon, August 11, 2025
Preventing ML Data Leakage Through Strategic Splitting
🔐 CrowdStrike explains how inadvertent 'leakage' — when dependent or correlated observations are included in training — can inflate machine learning performance and undermine threat detection. The article shows that blocked or grouped data splits and blocked cross-validation produce more realistic performance estimates than random splits. It also highlights trade-offs, such as reduced predictor-space coverage and potential underfitting, and recommends careful partitioning and continuous evaluation to improve cybersecurity ML outcomes.
Tue, July 29, 2025
Defending Against Indirect Prompt Injection in LLMs
🔒 Microsoft outlines a layered defense-in-depth strategy to protect systems using LLMs from indirect prompt injection attacks. The approach pairs preventative controls such as hardened system prompts and Spotlighting (delimiting, datamarking, encoding) to isolate untrusted inputs with detection via Microsoft Prompt Shields, surfaced through Azure AI Content Safety and integrated with Defender for Cloud. Impact mitigation uses deterministic controls — fine-grained permissions, Microsoft Purview sensitivity labels, DLP policies, explicit user consent workflows, and blocking known exfiltration techniques — while ongoing research (TaskTracker, LLMail-Inject, FIDES) advances new design patterns and assurances.