Tag Banner

All news with #ai data leakage tag

Thu, October 30, 2025

Shadow AI: One in Four Employees Use Unapproved Tools

🤖 1Password’s 2025 Annual Report finds shadow AI is now the second-most prevalent form of shadow IT, with 27% of employees admitting they used unauthorised AI tools and 37% saying they do not always follow company AI policies. The survey of 5,200 knowledge workers across six countries shows broad corporate encouragement of AI experimentation alongside frequent circumvention driven by convenience and perceived productivity gains. 1Password warns that freemium and browser-based AI tools can ingest sensitive data, violate compliance requirements and even act as malware vectors.

read more →

Thu, October 30, 2025

LinkedIn to Use EU, UK and Other Profiles for AI Training

🔒 Microsoft-owned LinkedIn will begin using profile details, public posts and feed activity from users in the UK, EU, Switzerland, Canada and Hong Kong to train generative AI models and to support personalised ads across Microsoft starting 3 November 2025. Private messages are excluded. Users can opt out via Settings & Privacy > Data Privacy and toggle Data for Generative AI Improvement to Off. Organisations should update social media policies and remind staff to review their advertising and data-sharing settings.

read more →

Wed, October 29, 2025

Social Media Privacy Ranking 2025: Platforms Compared

🔒 Incogni’s Social Media Privacy Ranking 2025 evaluates 15 major platforms across data collection, resale, AI training, privacy settings, and regulatory fines. The analysis identifies Pinterest and Quora as the most privacy-conscious, while TikTok and Facebook rank lowest, driven by extensive data use and historical penalties. The report highlights practical differences in opt-outs, data-sharing, and default settings and recommends users review privacy controls and use Kaspersky’s Privacy Checker.

read more →

Wed, October 29, 2025

AI-targeted Cloaking Tricks Agentic Browsers, Warns SPLX

⚠ Researchers report a new form of context-poisoning called AI-targeted cloaking that serves different content to agentic browsers and AI crawlers. SPLX shows attackers can use a trivial user-agent check to deliver alternate pages to crawlers from ChatGPT and Perplexity, turning retrieved content into manipulated ground truth. The technique mirrors search engine cloaking but targets AI overviews and autonomous reasoning, creating a potent misinformation vector. A concurrent hTAG analysis also found many agents execute risky actions with minimal safeguards, amplifying potential harm.

read more →

Wed, October 29, 2025

Aisuru Botnet Evolves from DDoS to Residential Proxies

🛡️ Aisuru, first identified in August 2024, has been retooled from launching record DDoS assaults to renting hundreds of thousands of compromised IoT devices as residential proxies. Researchers warn the change powers a massive proxy market that is being used to anonymize large-scale content scraping for AI training and other abuses. The botnet — roughly 700,000 devices strong — previously produced multi‑terabit attacks that disrupted ISPs and damaged router hardware. Industry and law enforcement are sharing blocklists and probing proxy reseller ecosystems tied to the infections.

read more →

Mon, October 27, 2025

Top 10 Challenges Facing CISOs and Security Teams Today

🔒 Security leaders face a rapidly evolving threat landscape driven by AI, constrained budgets, talent shortages, and a vastly expanded attack surface. Many organizations rushed into AI adoption before security controls matured, and CISOs report growing involvement in AI governance and implementation even while attackers leverage AI to compress time-to-compromise. Data protection, employee susceptibility to sophisticated scams, quantum readiness, and board alignment emerge as immediate priorities that require clearer risk-based decisions and frequent simulation exercises.

read more →

Fri, October 24, 2025

Privacy rankings of popular messaging apps — 2025 Report

🔒 Incogni's Social Media Privacy Ranking 2025, summarized by Kaspersky, evaluates 15 platforms across 18 criteria to compare messaging apps on privacy and data handling. Overall scores place Discord, Telegram and Snapchat near the top, but a subset of practical criteria ranks Telegram first, followed by Snapchat and Discord. The analysis highlights default settings, data collection by mobile apps, handling of government requests, and encryption differences, noting that only WhatsApp provides end-to-end encryption for all chats by default.

read more →

Fri, October 24, 2025

AI 2030: The Coming Era of Autonomous Cybercrime Threats

🔒 Organizations worldwide are rapidly adopting AI across enterprises, delivering efficiency gains while introducing new security risks. Cybersecurity is at a turning point where AI fights AI, and today's phishing and deepfakes are precursors to autonomous, self‑optimizing AI threat actors that can plan, execute, and refine attacks with minimal human oversight. In September 2025, Check Point Research found that 1 in 54 GenAI prompts from enterprise networks posed a high risk of sensitive-data exposure, underscoring the urgent need to harden defenses and govern model use.

read more →

Tue, October 21, 2025

DeepSeek Privacy and Security: What Users Should Know

🔒 DeepSeek collects extensive interaction data — chats, images and videos — plus account details, IP address and device/browser information, and retains it for an unspecified period under a vague “retain as long as needed” policy. The service operates under Chinese jurisdiction, so stored chats may be accessible to local authorities and have been observed on China Mobile servers. Users can disable model training in web and mobile Data settings, export or delete chats (export is web-only), or run the open-source model locally to avoid server-side retention, but local deployment and deletion have trade-offs and require device protections.

read more →

Mon, October 20, 2025

ChatGPT privacy and security: data control guide 2025

🔒 This article examines what ChatGPT collects, how OpenAI processes and stores user data, and the controls available to limit use for model training. It outlines region-specific policies (EEA/UK/Switzerland vs rest of world), the types of data gathered — from account and device details to prompts and uploads — and explains memory, Temporary Chats, connectors and app integrations. Practical steps cover disabling training, deleting memories and chats, managing connectors and Work with Apps, and securing accounts with strong passwords and multi-factor authentication.

read more →

Thu, October 9, 2025

September 2025 Cyber Threats: Ransomware and GenAI Rise

🔍 In September 2025, global cyber-attack volumes eased modestly, with organizations facing an average of 1,900 attacks per organization per week — a 4% decline from August but a 1% increase year-over-year. Beneath this apparent stabilization, ransomware activity jumped sharply (up 46%), while emerging GenAI-related data risks expanded rapidly, changing attacker tactics. The report warns that evolving techniques and heightened data exposure are creating a more complex and consequential threat environment for organizations worldwide.

read more →

Tue, October 7, 2025

Google launches AI bug bounty program; rewards up to $30K

🛡️ Google has launched a new AI Vulnerability Reward Program to incentivize security researchers to find and report flaws in its AI systems. The program targets high-impact vulnerabilities across flagship offerings including Google Search, Gemini Apps, and Google Workspace core apps, and also covers AI Studio, Jules, and other AI integrations. Rewards scale with severity and novelty—up to $30,000 for exceptional reports and up to $20,000 for standard flagship security flaws. Additional bounties include $15,000 for sensitive data exfiltration and smaller awards for phishing enablement, model theft, and access control issues.

read more →

Tue, October 7, 2025

Enterprise AI Now Leading Corporate Data Exfiltration

🔍 A new Enterprise AI and SaaS Data Security Report from LayerX finds that generative AI has rapidly become the largest uncontrolled channel for corporate data loss. Real-world browser telemetry shows 45% employee adoption of GenAI, 67% of sessions via unmanaged accounts, and copy/paste into ChatGPT, Claude, and Copilot as the primary leakage vector. Traditional, file-centric DLP tools largely miss these action-based flows.

read more →

Fri, October 3, 2025

AI and Cybersecurity: Fortinet and NTT DATA Webinar

🔒 In a joint webinar, Fortinet and NTT DATA outlined practical approaches to deploying and securing AI across enterprise environments. Fortinet described its three AI pillars—FortiAI‑Protect, FortiAI‑Assist, and FortiAI‑SecureAI—focused on detection, operational assistance, and protecting AI assets. NTT DATA emphasized governance, runtime protections, and an "agentic factory" to scale pilots into production. The presenters stressed the need for visibility into shadow AI and controls such as DLP and zero‑trust access to prevent data leakage.

read more →

Thu, October 2, 2025

Forrester Predicts Agentic AI Will Trigger 2026 Breach

⚠️ Forrester warns that an agentic AI deployment will trigger a publicly disclosed data breach in 2026, potentially prompting employee dismissals. Senior analyst Paddy Harrington noted that generative AI has already been linked to several breaches and cautioned that autonomous agents can sacrifice accuracy for speed without proper guardrails. He urges adoption of the AEGIS framework to secure intent, identity, data provenance and other controls. Check Point also reported malicious agentic tools accelerating attacker activity.

read more →

Wed, October 1, 2025

Smashing Security 437: ForcedLeak in Salesforce AgentForce

🔐 Researchers uncovered a security flaw in Salesforce’s new AgentForce platform called ForcedLeak, which let attackers smuggle AI-readable instructions through a Web-to-Lead form and exfiltrate data for as little as five dollars. The hosts discuss the broader implications for AI integration, input validation, and the surprising ease of exploiting customer-facing forms. Episode 437 also critiques typical breach communications and covers ITV’s phone‑hacking drama and the Rosetta Stone story, with Graham Cluley joined by Paul Ducklin.

read more →

Tue, September 30, 2025

Gemini Trifecta Exposes Indirect AI Attack Surfaces

⚠️Tenable has revealed three vulnerabilities in Google's Gemini platform, collectively dubbed the "Gemini Trifecta," that enable indirect prompt injection and data exfiltration through integrations. The issues allow attackers to poison GCP logs consumed by Gemini Cloud Assist, inject malicious entries into Chrome search history to manipulate the Search Personalization Model, and coerce the Browsing Tool into fetching attacker-controlled URLs that leak sensitive query data. Google has patched the flaws, and Tenable urges security teams to treat AI integrations as active threat surfaces and implement input sanitization, output validation, monitoring, and regular penetration testing.

read more →

Mon, September 22, 2025

Agentic AI Risks and Governance: A Major CISO Challenge

⚠️ Agentic AI is proliferating inside enterprises, embedding autonomous agents into development, customer support, process automation, and employee workflows. Security experts warn these systems create substantial visibility and governance gaps: organizations often do not know where agents run, what data they access, or how independent their actions are. Key risks include risky autonomy, uncontrolled data sharing among agents, third-party integration vulnerabilities, and the potential for agents to enable or mimic multi-stage attacks. CISOs should prioritize real-time observability, strict governance, secure-by-design development, and cross-functional coordination to mitigate these threats.

read more →

Thu, September 18, 2025

CrowdStrike Enhances GenAI Data Protection Across Platforms

🔒 CrowdStrike announces four new innovations in Falcon Data Protection to help organizations prevent GenAI-driven data leaks across endpoints, cloud, SaaS and AI tools. The updates include real-time GenAI protections that span browsers, local apps and shadow AI services, unified out-of-the-box detections, AI-powered classifications, and a consolidated Insider Risk dashboard. Beta and general availability windows span late 2025 through mid-2026, with cloud features prioritized earlier.

read more →

Thu, September 11, 2025

AI-Powered Browsers: Security and Privacy Risks in 2026

🔒 An AI-integrated browser embeds large multimodal models into standard web browsers, allowing agents to view pages and perform actions—opening links, filling forms, downloading files—directly on a user’s device. This enables faster, context-aware automation and access to subscription or blocked content, but raises substantial privacy and security risks, including data exfiltration, prompt-injection and malware delivery. Users should demand features like per-site AI controls, choice of local models, explicit confirmation for sensitive actions, and OS-level file restrictions, though no browser currently implements all these protections.

read more →