Tag Banner

All news with #ai data leakage tag

Thu, November 20, 2025

Gartner: Shadow AI to Cause Major Incidents by 2030

🛡️ Gartner warns that by 2030 more than 40% of organizations will experience security and compliance incidents caused by employees using unauthorized AI tools. A survey of security leaders found 69% have evidence or suspect public generative AI use at work, increasing risks such as IP loss and data exposure. Gartner urges CIOs to set enterprise-wide AI policies, audit for shadow AI activity and incorporate GenAI risk evaluation into SaaS assessments.

read more →

Thu, November 20, 2025

CrowdStrike Extends DSPM to Runtime for Cloud Data

🔒 CrowdStrike Falcon Data Protection for Cloud is now generally available, extending traditional DSPM into runtime to provide continuous visibility and protection for sensitive data in motion. Leveraging eBPF-powered monitoring, it detects unauthorized or risky data transfers across APIs, SaaS, containers, databases, and cloud storage without proxies or added infrastructure. The solution combines unified classification with integrated investigation and automated response, plus SIEM streaming and a lightweight Linux sensor for rapid deployment.

read more →

Thu, November 20, 2025

AI Risk Guide: Assessing GenAI, Vendors and Threats

⚠️ This guide outlines the principal risks generative AI (GenAI) poses to organizations, categorizing concerns into internal projects, third‑party solutions and malicious external use. It urges inventories of AI use, application of risk and deployment frameworks (including ISO, NIST and emerging EU standards), and continuous vendor due diligence. Practical steps include governance, scoring, staff training, basic cyber hygiene and incident readiness to protect data and trust.

read more →

Tue, November 18, 2025

Prisma AIRS Integration with Azure AI Foundry for Security

🔒 Palo Alto Networks announced that Prisma AIRS now integrates natively with Azure AI Foundry, enabling direct prompt and response scanning through the Prisma AIRS AI Runtime Security API. The integration provides real-time, model-agnostic threat detection for prompt injection, sensitive data leakage, malicious code and URLs, and toxic outputs, and supports custom topic filters. By embedding security into AI development workflows, teams gain production-grade protections without slowing innovation; the feature is available now via an early access program.

read more →

Mon, November 17, 2025

When Romantic AI Chatbots Can't Keep Your Secrets Safe

🤖 AI companion apps can feel intimate and conversational, but many collect, retain, and sometimes inadvertently expose highly sensitive information. Recent breaches — including a misconfigured Kafka broker that leaked hundreds of thousands of photos and millions of private conversations — underline real dangers. Users should avoid sharing personal, financial or intimate material, enable two-factor authentication, review privacy policies, and opt out of data retention or training when possible. Parents should supervise teen use and insist on robust age verification and moderation.

read more →

Wed, November 12, 2025

Tenable Reveals New Prompt-Injection Risks in ChatGPT

🔐 Researchers at Tenable disclosed seven techniques that can cause ChatGPT to leak private chat history by abusing built-in features such as web search, conversation memory and Markdown rendering. The attacks are primarily indirect prompt injections that exploit a secondary summarization model (SearchGPT), Bing tracking redirects, and a code-block rendering bug. Tenable reported the issues to OpenAI, and while some fixes were implemented several techniques still appear to work.

read more →

Tue, November 11, 2025

AI startups expose API keys on GitHub, risking models

🔐 New research by cloud security firm Wiz found verified secret leaks in 65% of the Forbes AI 50, with API keys and access tokens exposed on GitHub. Some credentials were tied to vendors such as Hugging Face, Weights & Biases, and LangChain, potentially granting access to private models, training data, and internal details. Nearly half of Wiz’s disclosure attempts failed or received no response. The findings highlight urgent gaps in secret management and DevSecOps practices.

read more →

Tue, November 11, 2025

Shadow AI: The Emerging Security Blind Spot for Companies

🔦 Shadow AI — the unsanctioned use of generative and agentic tools by employees — is creating a sizeable security blind spot for IT teams. Unsanctioned chatbots, browser extensions and autonomous agents can expose sensitive data, introduce vulnerabilities, or execute unauthorized actions. Organizations should inventory use, define realistic acceptable-use policies, vet vendors and combine technical controls with user education to reduce data leakage and compliance risk.

read more →

Mon, November 10, 2025

Whisper Leak side channel exposes topics in encrypted AI

🔎 Microsoft researchers disclosed a new side-channel attack called Whisper Leak that can infer the topic of encrypted conversations with language models by observing network metadata such as packet sizes and timings. The technique exploits streaming LLM responses that emit tokens incrementally, leaking size and timing patterns even under TLS. Vendors including OpenAI, Microsoft Azure, and Mistral implemented mitigations such as random-length padding and obfuscation parameters to reduce the effectiveness of the attack.

read more →

Mon, November 10, 2025

Researchers Trick ChatGPT into Self Prompt Injection

🔒 Researchers at Tenable identified seven techniques that can coerce ChatGPT into disclosing private chat history by abusing built-in features like web browsing and long-term Memories. They show how OpenAI’s browsing pipeline routes pages through a weaker intermediary model, SearchGPT, which can be prompt-injected and then used to seed malicious instructions back into ChatGPT. Proof-of-concepts include exfiltration via Bing-tracked URLs, Markdown image loading, and a rendering quirk, and Tenable says some issues remain despite reported fixes.

read more →

Fri, November 7, 2025

Whisper Leak: Side-Channel Attack on Remote LLM Services

🔍 Microsoft researchers disclosed "Whisper Leak", a new side-channel that can infer conversation topics from encrypted, streamed language model responses by analyzing packet sizes and timings. The study demonstrates high classifier accuracy on a proof-of-concept sensitive topic and shows risk increases with more training data or repeated interactions. Industry partners including OpenAI, Mistral, Microsoft Azure, and xAI implemented streaming obfuscation mitigations that Microsoft validated as substantially reducing practical risk.

read more →

Wed, November 5, 2025

Lack of AI Training Becoming a Major Security Risk

⚠️ A majority of German employees already use AI at work, with 62% reporting daily use of generative tools such as ChatGPT. Adoption has been largely grassroots—31% began using AI independently and nearly half learned via videos or informal study. Although 85% deem training on AI and data protection essential, 25% report no security training and 47% received only informal guidance, leaving clear operational and data risks.

read more →

Wed, November 5, 2025

Researchers Find ChatGPT Vulnerabilities in GPT-4o/5

🛡️ Cybersecurity researchers disclosed seven vulnerabilities in OpenAI's GPT-4o and GPT-5 models that enable indirect prompt injection attacks to exfiltrate user data from chat histories and stored memories. Tenable researchers Moshe Bernstein and Liv Matan describe zero-click search exploits, one-click query execution, conversation and memory poisoning, a markdown rendering bug, and a safety bypass using allow-listed Bing links. OpenAI has mitigated some issues, but experts warn that connecting LLMs to external tools broadens the attack surface and that robust safeguards and URL-sanitization remain essential.

read more →

Tue, November 4, 2025

CISO Predictions 2026: Resilience, AI, and Threats

🔐 Fortinet’s CISO Collective outlines priorities and risks CISOs will face in 2026. The briefing warns that AI will accelerate innovation while expanding attack surfaces, increasing LLM breaches, adversarial model attacks, and deepfake-enabled BEC. It highlights geopolitical and space-related threats such as GPS jamming and satellite interception, persistent regulatory pressure including NIS2 and DORA, and a chronic cybersecurity skills gap. Recommendations emphasize governed AI, identity hardening, quantum readiness, and resilience-driven leadership.

read more →

Thu, October 30, 2025

Five Generative AI Security Threats and Defensive Steps

🔒 Microsoft summarizes the top generative AI security risks and mitigation strategies in a new e-book, highlighting threats such as prompt injection, data poisoning, jailbreaks, and adaptive evasion. The post underscores cloud vulnerabilities, large-scale data exposure, and unpredictable model behavior that create new attack surfaces. It recommends unified defenses—such as CNAPP approaches—and presents Microsoft Defender for Cloud as an example that combines posture management with runtime detection to protect AI workloads.

read more →

Thu, October 30, 2025

Shadow AI: One in Four Employees Use Unapproved Tools

🤖 1Password’s 2025 Annual Report finds shadow AI is now the second-most prevalent form of shadow IT, with 27% of employees admitting they used unauthorised AI tools and 37% saying they do not always follow company AI policies. The survey of 5,200 knowledge workers across six countries shows broad corporate encouragement of AI experimentation alongside frequent circumvention driven by convenience and perceived productivity gains. 1Password warns that freemium and browser-based AI tools can ingest sensitive data, violate compliance requirements and even act as malware vectors.

read more →

Thu, October 30, 2025

LinkedIn to Use EU, UK and Other Profiles for AI Training

🔒 Microsoft-owned LinkedIn will begin using profile details, public posts and feed activity from users in the UK, EU, Switzerland, Canada and Hong Kong to train generative AI models and to support personalised ads across Microsoft starting 3 November 2025. Private messages are excluded. Users can opt out via Settings & Privacy > Data Privacy and toggle Data for Generative AI Improvement to Off. Organisations should update social media policies and remind staff to review their advertising and data-sharing settings.

read more →

Wed, October 29, 2025

Social Media Privacy Ranking 2025: Platforms Compared

🔒 Incogni’s Social Media Privacy Ranking 2025 evaluates 15 major platforms across data collection, resale, AI training, privacy settings, and regulatory fines. The analysis identifies Pinterest and Quora as the most privacy-conscious, while TikTok and Facebook rank lowest, driven by extensive data use and historical penalties. The report highlights practical differences in opt-outs, data-sharing, and default settings and recommends users review privacy controls and use Kaspersky’s Privacy Checker.

read more →

Wed, October 29, 2025

AI-targeted Cloaking Tricks Agentic Browsers, Warns SPLX

⚠ Researchers report a new form of context-poisoning called AI-targeted cloaking that serves different content to agentic browsers and AI crawlers. SPLX shows attackers can use a trivial user-agent check to deliver alternate pages to crawlers from ChatGPT and Perplexity, turning retrieved content into manipulated ground truth. The technique mirrors search engine cloaking but targets AI overviews and autonomous reasoning, creating a potent misinformation vector. A concurrent hTAG analysis also found many agents execute risky actions with minimal safeguards, amplifying potential harm.

read more →

Wed, October 29, 2025

Aisuru Botnet Evolves from DDoS to Residential Proxies

🛡️ Aisuru, first identified in August 2024, has been retooled from launching record DDoS assaults to renting hundreds of thousands of compromised IoT devices as residential proxies. Researchers warn the change powers a massive proxy market that is being used to anonymize large-scale content scraping for AI training and other abuses. The botnet — roughly 700,000 devices strong — previously produced multi‑terabit attacks that disrupted ISPs and damaged router hardware. Industry and law enforcement are sharing blocklists and probing proxy reseller ecosystems tied to the infections.

read more →