Tag Banner

All news with #ai data leakage tag

Mon, October 27, 2025

Top 10 Challenges Facing CISOs and Security Teams Today

🔒 Security leaders face a rapidly evolving threat landscape driven by AI, constrained budgets, talent shortages, and a vastly expanded attack surface. Many organizations rushed into AI adoption before security controls matured, and CISOs report growing involvement in AI governance and implementation even while attackers leverage AI to compress time-to-compromise. Data protection, employee susceptibility to sophisticated scams, quantum readiness, and board alignment emerge as immediate priorities that require clearer risk-based decisions and frequent simulation exercises.

read more →

Fri, October 24, 2025

Privacy rankings of popular messaging apps — 2025 Report

🔒 Incogni's Social Media Privacy Ranking 2025, summarized by Kaspersky, evaluates 15 platforms across 18 criteria to compare messaging apps on privacy and data handling. Overall scores place Discord, Telegram and Snapchat near the top, but a subset of practical criteria ranks Telegram first, followed by Snapchat and Discord. The analysis highlights default settings, data collection by mobile apps, handling of government requests, and encryption differences, noting that only WhatsApp provides end-to-end encryption for all chats by default.

read more →

Fri, October 24, 2025

AI 2030: The Coming Era of Autonomous Cybercrime Threats

🔒 Organizations worldwide are rapidly adopting AI across enterprises, delivering efficiency gains while introducing new security risks. Cybersecurity is at a turning point where AI fights AI, and today's phishing and deepfakes are precursors to autonomous, self‑optimizing AI threat actors that can plan, execute, and refine attacks with minimal human oversight. In September 2025, Check Point Research found that 1 in 54 GenAI prompts from enterprise networks posed a high risk of sensitive-data exposure, underscoring the urgent need to harden defenses and govern model use.

read more →

Tue, October 21, 2025

DeepSeek Privacy and Security: What Users Should Know

🔒 DeepSeek collects extensive interaction data — chats, images and videos — plus account details, IP address and device/browser information, and retains it for an unspecified period under a vague “retain as long as needed” policy. The service operates under Chinese jurisdiction, so stored chats may be accessible to local authorities and have been observed on China Mobile servers. Users can disable model training in web and mobile Data settings, export or delete chats (export is web-only), or run the open-source model locally to avoid server-side retention, but local deployment and deletion have trade-offs and require device protections.

read more →

Mon, October 20, 2025

ChatGPT privacy and security: data control guide 2025

🔒 This article examines what ChatGPT collects, how OpenAI processes and stores user data, and the controls available to limit use for model training. It outlines region-specific policies (EEA/UK/Switzerland vs rest of world), the types of data gathered — from account and device details to prompts and uploads — and explains memory, Temporary Chats, connectors and app integrations. Practical steps cover disabling training, deleting memories and chats, managing connectors and Work with Apps, and securing accounts with strong passwords and multi-factor authentication.

read more →

Thu, October 9, 2025

September 2025 Cyber Threats: Ransomware and GenAI Rise

🔍 In September 2025, global cyber-attack volumes eased modestly, with organizations facing an average of 1,900 attacks per organization per week — a 4% decline from August but a 1% increase year-over-year. Beneath this apparent stabilization, ransomware activity jumped sharply (up 46%), while emerging GenAI-related data risks expanded rapidly, changing attacker tactics. The report warns that evolving techniques and heightened data exposure are creating a more complex and consequential threat environment for organizations worldwide.

read more →

Tue, October 7, 2025

Google launches AI bug bounty program; rewards up to $30K

🛡️ Google has launched a new AI Vulnerability Reward Program to incentivize security researchers to find and report flaws in its AI systems. The program targets high-impact vulnerabilities across flagship offerings including Google Search, Gemini Apps, and Google Workspace core apps, and also covers AI Studio, Jules, and other AI integrations. Rewards scale with severity and novelty—up to $30,000 for exceptional reports and up to $20,000 for standard flagship security flaws. Additional bounties include $15,000 for sensitive data exfiltration and smaller awards for phishing enablement, model theft, and access control issues.

read more →

Tue, October 7, 2025

Enterprise AI Now Leading Corporate Data Exfiltration

🔍 A new Enterprise AI and SaaS Data Security Report from LayerX finds that generative AI has rapidly become the largest uncontrolled channel for corporate data loss. Real-world browser telemetry shows 45% employee adoption of GenAI, 67% of sessions via unmanaged accounts, and copy/paste into ChatGPT, Claude, and Copilot as the primary leakage vector. Traditional, file-centric DLP tools largely miss these action-based flows.

read more →

Fri, October 3, 2025

AI and Cybersecurity: Fortinet and NTT DATA Webinar

🔒 In a joint webinar, Fortinet and NTT DATA outlined practical approaches to deploying and securing AI across enterprise environments. Fortinet described its three AI pillars—FortiAI‑Protect, FortiAI‑Assist, and FortiAI‑SecureAI—focused on detection, operational assistance, and protecting AI assets. NTT DATA emphasized governance, runtime protections, and an "agentic factory" to scale pilots into production. The presenters stressed the need for visibility into shadow AI and controls such as DLP and zero‑trust access to prevent data leakage.

read more →

Thu, October 2, 2025

Forrester Predicts Agentic AI Will Trigger 2026 Breach

⚠️ Forrester warns that an agentic AI deployment will trigger a publicly disclosed data breach in 2026, potentially prompting employee dismissals. Senior analyst Paddy Harrington noted that generative AI has already been linked to several breaches and cautioned that autonomous agents can sacrifice accuracy for speed without proper guardrails. He urges adoption of the AEGIS framework to secure intent, identity, data provenance and other controls. Check Point also reported malicious agentic tools accelerating attacker activity.

read more →

Wed, October 1, 2025

Smashing Security 437: ForcedLeak in Salesforce AgentForce

🔐 Researchers uncovered a security flaw in Salesforce’s new AgentForce platform called ForcedLeak, which let attackers smuggle AI-readable instructions through a Web-to-Lead form and exfiltrate data for as little as five dollars. The hosts discuss the broader implications for AI integration, input validation, and the surprising ease of exploiting customer-facing forms. Episode 437 also critiques typical breach communications and covers ITV’s phone‑hacking drama and the Rosetta Stone story, with Graham Cluley joined by Paul Ducklin.

read more →

Tue, September 30, 2025

Gemini Trifecta Exposes Indirect AI Attack Surfaces

⚠️Tenable has revealed three vulnerabilities in Google's Gemini platform, collectively dubbed the "Gemini Trifecta," that enable indirect prompt injection and data exfiltration through integrations. The issues allow attackers to poison GCP logs consumed by Gemini Cloud Assist, inject malicious entries into Chrome search history to manipulate the Search Personalization Model, and coerce the Browsing Tool into fetching attacker-controlled URLs that leak sensitive query data. Google has patched the flaws, and Tenable urges security teams to treat AI integrations as active threat surfaces and implement input sanitization, output validation, monitoring, and regular penetration testing.

read more →

Mon, September 22, 2025

Agentic AI Risks and Governance: A Major CISO Challenge

⚠️ Agentic AI is proliferating inside enterprises, embedding autonomous agents into development, customer support, process automation, and employee workflows. Security experts warn these systems create substantial visibility and governance gaps: organizations often do not know where agents run, what data they access, or how independent their actions are. Key risks include risky autonomy, uncontrolled data sharing among agents, third-party integration vulnerabilities, and the potential for agents to enable or mimic multi-stage attacks. CISOs should prioritize real-time observability, strict governance, secure-by-design development, and cross-functional coordination to mitigate these threats.

read more →

Thu, September 18, 2025

CrowdStrike Enhances GenAI Data Protection Across Platforms

🔒 CrowdStrike announces four new innovations in Falcon Data Protection to help organizations prevent GenAI-driven data leaks across endpoints, cloud, SaaS and AI tools. The updates include real-time GenAI protections that span browsers, local apps and shadow AI services, unified out-of-the-box detections, AI-powered classifications, and a consolidated Insider Risk dashboard. Beta and general availability windows span late 2025 through mid-2026, with cloud features prioritized earlier.

read more →

Thu, September 11, 2025

AI-Powered Browsers: Security and Privacy Risks in 2026

🔒 An AI-integrated browser embeds large multimodal models into standard web browsers, allowing agents to view pages and perform actions—opening links, filling forms, downloading files—directly on a user’s device. This enables faster, context-aware automation and access to subscription or blocked content, but raises substantial privacy and security risks, including data exfiltration, prompt-injection and malware delivery. Users should demand features like per-site AI controls, choice of local models, explicit confirmation for sensitive actions, and OS-level file restrictions, though no browser currently implements all these protections.

read more →

Wed, September 3, 2025

Managing Shadow AI: Three Practical Corporate Policies

🔒 The MIT report "The GenAI Divide: State of AI in Business 2025" exposes a pervasive shadow AI economy—90% of employees use personal AI while only 40% of organizations buy LLM subscriptions. This article translates those findings into three realistic policy paths: a complete ban, unrestricted use with hygiene controls, and a balanced, role-based model. Each option is paired with concrete technical controls (DLP, NGFW, CASB, EDR), organizational steps, and enforcement measures to help security teams align risk management with real-world employee behaviour.

read more →

Wed, September 3, 2025

How the Generative AI Boom Opens Privacy and Cyber Risks

🔒The rapid adoption of generative AI is prompting significant privacy and security concerns as vendors revise terms to use user data for model training. High-profile pushback — exemplified by WeTransfer’s reversal — revealed how unclear terms and live experimentation can expose corporate and personal information. Employees using consumer tools like ChatGPT for work tasks risk leaking secrets, and platforms such as Slack are explicitly reserving rights to leverage customer data. CISOs must balance strategic AI adoption with heightened compliance, governance and operational risk.

read more →

Tue, September 2, 2025

Secure AI at Machine Speed: Full-Stack Enterprise Defense

🔒 CrowdStrike explains how widespread AI adoption expands the enterprise attack surface, exposing models, data pipelines, APIs, and autonomous agents to new adversary techniques. The post argues that legacy controls and fragmented tooling are insufficient and advocates for real-time, full‑stack protections. The Falcon platform is presented as a unified solution offering telemetry, lifecycle protection, GenAI-aware data loss prevention, and agent governance to detect, prevent, and remediate AI-related threats.

read more →

Fri, August 29, 2025

Cloudflare data: AI bot crawling surges, referrals fall

🤖 Cloudflare's mid‑2025 dataset shows AI training crawlers now account for nearly 80% of AI bot activity, driving a surge in crawling while sending far fewer human referrals. Google referrals to news sites fell sharply in March–April 2025 as AI Overviews and Gemini upgrades reduced click-throughs. OpenAI’s GPTBot and Anthropic’s ClaudeBot increased crawling share while ByteDance’s Bytespider declined. The resulting crawl-to-refer imbalance — tens of thousands of crawls per human click for some platforms — threatens publisher revenue.

read more →

Wed, August 27, 2025

Five Essential Rules for Safe AI Adoption in Enterprises

🛡️ AI adoption is accelerating in enterprises, but many deployments lack the visibility, controls, and ongoing safeguards needed to manage risk. The article presents five practical rules: continuous AI discovery, contextual risk assessment, strong data protection, access controls aligned with zero trust, and continuous oversight. Together these measures help CISOs enable innovation while reducing exposure to breaches, data loss, and compliance failures.

read more →