<ciso brief />

All news with the #deepfake fraud tag

85 articles

The Deepfake Dilemma: Fraud, Reputation and Response

🔎 Deepfake technology is now widely accessible and sufficiently realistic to fool employees, executives and automated heuristics. A 2025 Gartner survey found nearly half of cybersecurity leaders encountered audio or video deepfakes in the prior year, and real-world incidents show attackers using synthetic media for both financial fraud and reputational sabotage. Organizations must combine rapid forensic verification, coordinated legal action and clear communications while pursuing long-term authentication and watermarking standards to restore trust.

Fake Ledger Live macOS App Stole $9.5M in Crypto from Users

🔒 A malicious macOS app impersonating Ledger Live on the Apple App Store drained approximately $9.5 million in cryptocurrency from 50 users after they were tricked into entering their seed/recovery phrases. Blockchain investigator ZachXBT traced funds moved across multiple chains (Bitcoin, Ethereum, Tron, Solana, Ripple) and funneled through more than 150 deposit addresses tied to a centralized mixer called "AudiA6" on KuCoin. Apple removed the fraudulent app after multiple reports, and KuCoin says it has frozen the implicated accounts pending further action. Ledger provides a Mac app on its website but not through the App Store; users are urged to download only from official vendor channels.
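The Ledger Live incident ends with the standard advice to download only from official vendor channels. A complementary habit is verifying a downloaded installer against the SHA-256 hash the vendor publishes on its own site. A minimal sketch, assuming a hypothetical installer filename and hash (substitute the real values from the vendor's download page):

```python
import hashlib
from pathlib import Path

def sha256_of(path: str, chunk_size: int = 1 << 16) -> str:
    """Stream a file through SHA-256 so large installers never load fully into memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_download(path: str, published_hash: str) -> bool:
    """Compare the local file's digest to the hash published on the vendor's official page."""
    return sha256_of(path) == published_hash.strip().lower()

if __name__ == "__main__":
    # Hypothetical name and placeholder hash -- both must come from the vendor's site.
    installer = Path("ledger-live-installer.dmg")
    if installer.exists():
        print(verify_download(str(installer), "0" * 64))
```

A matching hash only proves the file is the one the vendor published, not that the vendor itself is trustworthy, which is why the check belongs alongside, not instead of, sticking to official channels.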

Telehealth Risks in 2026: Medical Data and AI Scams

🔒 Telehealth offers fast, convenient access to care but creates persistent medical records that are highly valuable to criminals. Stolen health data — from diagnoses and prescriptions to insurance IDs and test results — often fetches far more than payment or social-login credentials and enables extortion, fraud, and identity theft. The rise of AI-driven fake clinics and diagnostic tools makes realistic phishing and data-harvesting sites easier to create. Protect yourself by using a dedicated medical email, avoiding social sign-in, enabling 2FA, using clinic-provided encrypted portals, and keeping health devices patched.

FBI: Over $17.7bn Lost to Cyber Fraud in US During 2025

🛡️ The FBI's 2025 Internet Crime Report shows US victims lost more than $17.7 billion to internet-enabled fraud, with the Internet Crime Complaint Center (IC3) receiving over one million complaints in 2025. Cryptocurrency investment scams were the single largest source of financial loss at $7.2 billion, followed by Business Email Compromise and fake tech support schemes. The report also highlights nearly $893 million lost to AI-enabled fraud and 22,364 AI-related complaints, warning that synthetic content and deepfakes are increasingly abused to perpetrate scams.

Masters of Imitation: How AI Fuels Network Fakery Now

🔍 Modern attackers use AI to imitate trusted users, tools, and services, making many incidents malware-free and harder to detect. The article compares these tactics to art forger Elmyr de Hory and outlines threats such as agentic AI, supply-chain impostors, cloaked tunnels, rogue infrastructure, and sophisticated phishing. Network Detection and Response (NDR), including Corelight’s Open NDR Platform, is highlighted as essential for spotting behavioral anomalies, protocol inconsistencies, and contextual metadata to expose impostors early.

54-Year-Old Pleads Guilty After $8M Streaming Fraud

🎵 Michael Smith pleaded guilty to conspiracy to commit wire fraud after using AI to generate hundreds of thousands of songs and deploying up to 10,000 bots that streamed them billions of times, fraudulently earning more than US $8 million in royalties. He has agreed to forfeit US $8,091,843.64 and will be sentenced on July 29, 2026. The case highlights how AI and automation can be abused on streaming platforms, undermining legitimate artists' income.

Proving the Person on the Other Side Is Real, 2026 Test

🔐 By 2026, the central competition in identity-related work will be the ability to prove that the person behind a high-impact action is a real, accountable human. Generative AI and deepfakes create synthetic identities that can pass routine checks, contaminate risk models and hijack estate workflows. Defenses must focus on provenance, cross-channel consistency and continuous, risk-based verification tied to audit-grade trails.

Face Value: How Easily Facial Recognition Can Be Fooled

🔍 Jake Moore, ESET Global Cybersecurity Advisor, demonstrated practical methods that can defeat widely used facial recognition systems. Using modified smart glasses, AI-generated images, and real-time face swaps, he showed how identities can be exposed, synthetic faces can bypass eKYC checks, and watchlists can be evaded. His findings highlight the need for rigorous adversarial testing and stronger verification controls; he will present live demos at RSAC 2026.

Augmented Phishing and Social Engineering in the AI Era

🤖 GenAI has accelerated social engineering and phishing, allowing attackers to produce hyper-personalized messages, convincingly cloned executive voices, and realistic video impersonations in seconds. Deepfake incidents have shifted from online curiosity to tangible business risk, causing financial loss and operational disruption while making identity verification on everyday collaboration platforms increasingly difficult. To address these threats, Check Point Services has expanded its training portfolio and advocates for modern defenses and smarter awareness programs designed for the realities of the AI era.

OpenID Foundation urges standards for digital estates

🔒 The OpenID Foundation warns that inconsistent handling of deceased users' digital accounts across platforms and jurisdictions creates systemic gaps that invite fraud and exploitation. The report, titled The Unfinished Digital Estate, highlights the growing risk of AI-driven deepfakes simulating deceased individuals to manipulate relatives, spread disinformation, or extract funds. It urges coordinated action from policymakers, platforms and standards bodies to create interoperable frameworks, verifiable death/incapacity processes, and clear consent, delegation and audit mechanisms to protect posthumous identity autonomy.

Identity-Verified Onboarding to Mitigate Deepfake Threats

🛡️ Cloudflare announces integration with Nametag to add workforce identity verification to Cloudflare Access, confronting the emerging 'remote IT worker' fraud where organized actors use stolen or deepfaked identities to infiltrate companies. The OIDC-based flow requires a selfie and government ID scan, and Nametag's Deepfake Defense uses cryptography and AI to attest liveness and identity. Verification completes in under 30 seconds and no biometrics are stored. This layer enables identity-based policies before access is granted.

AI and Deepfakes Accelerate Cybercriminal Capabilities

⚠️ A new Cloudflare Threat Report warns that widespread access to large language models and AI tools has lowered the barrier to entry for cybercriminals, enabling rapid, scalable attacks. Attackers are using LLMs to craft convincing phishing, generate malware, and map networks in real time, increasing impact and reach. The report highlights AI-generated deepfakes and fraudulent IDs used to bypass hiring filters and embed malicious insiders, with state actors like North Korea exploiting this vector. Cloudflare urges organisations to adopt real-time intelligence and proactive defenses to counter the industrialisation of cyber threats.

On Moltbook: AI-Only Social Network or Puppetry Risk

🤖 MIT Technology Review examined Moltbook, the supposed AI-only social network where many viral posts were in fact published by people posing as bots. Experts including Cobus Greyling of Kore.ai note that humans create and verify bot accounts and craft prompts, so agents do nothing without explicit human direction. Researcher Juergen Nittner II frames the episode with his LOL WUT Theory, warning that easy-to-produce, hard-to-detect AI content could erode trust online. The Moltbook episode is a preview of that risk rather than proof of autonomous agent societies.

Deepfakes and Injection Attacks Threaten Identity Checks

🛡️ As deepfakes and injection attacks evolve, identity verification must move from isolated media checks to end-to-end session trust. Ricardo Amper of Incode explains that high-fidelity synthetic faces, replayed footage, virtual cameras, rooted devices, and automated probing can all defeat perception-only defenses. Incode Deepsight combines perception, integrity, and behavioral signals in real time to validate the entire verification session and reduce false acceptances while blocking persistent unauthorized access attempts.

Ukrainian Pleads Guilty for Running AI Fake ID Service

🛂 Yurii Nazarenko pleaded guilty to operating OnlyFake, an AI-driven subscription site that generated and sold more than 10,000 counterfeit identification images worldwide. The platform produced realistic digital passports, U.S. driver's licenses for all 50 states, Social Security cards, and IDs for roughly 56 other countries, with options for customization and output as scans or tabletop photos. The site accepted only cryptocurrency, offered bulk discounts, and was used to circumvent Know Your Customer (KYC) checks; undercover FBI agents purchased fake documents in a 2024 sting. Nazarenko was extradited from Romania in September 2025, agreed to forfeit $1.2 million, and faces up to 15 years in prison with sentencing set for June 26, 2026.

Inside Business Email Compromise: Tactics and Real Costs

📧 Business email compromise (BEC) is a targeted fraud where attackers impersonate executives, vendors, or partners to trick employees into wiring funds or revealing sensitive data. Last year BEC caused $2.7 billion in losses and increasingly uses techniques like AI-based voice/text cloning, QR-code scams, and conversation hijacking. These attacks often require no malware, relying instead on reconnaissance and trust. Defenses include multi-factor verification, approval tiers, employee training, and advanced email authentication and detection.
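The BEC defenses above include approval tiers: larger transfers should require more independent approvers and an out-of-band callback before funds move. A minimal sketch of such a policy, with hypothetical dollar thresholds that an organization would tune to its own risk appetite:

```python
from dataclasses import dataclass, field

# Hypothetical tiers: (upper limit, approvers required, out-of-band callback required).
TIERS = [
    (10_000, 1, False),       # up to $10k: one approver
    (100_000, 2, True),       # up to $100k: two approvers plus a callback
    (float("inf"), 3, True),  # above $100k: three approvers plus a callback
]

@dataclass
class TransferRequest:
    amount: float
    approvals: set = field(default_factory=set)   # distinct approver IDs
    callback_verified: bool = False               # confirmed via a known-good number

def required_controls(amount: float):
    """Return (approvers_needed, callback_needed) for a given transfer amount."""
    for limit, approvers, callback in TIERS:
        if amount <= limit:
            return approvers, callback
    raise ValueError("unreachable: last tier is unbounded")

def may_execute(req: TransferRequest) -> bool:
    """A transfer runs only when every control for its tier is satisfied."""
    approvers_needed, callback_needed = required_controls(req.amount)
    if callback_needed and not req.callback_verified:
        return False
    return len(req.approvals) >= approvers_needed
```

The point of encoding the tiers in policy rather than habit is that a convincing voice clone cannot talk its way past a control the payment system enforces mechanically.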

Faking It on the Phone: Detecting AI Voice Calls for Business

🗣️ Deepfake voice calls are increasingly easy and convincing, enabling scammers to impersonate executives, suppliers or customers to request urgent transfers or authentication resets. Common giveaway signs include unnatural rhythm, flat emotional tone, missing breaths, robotic timbre or oddly uniform background noise. Defend by combining employee training (including simulated deepfake scenarios), out-of-band verification, pre-agreed passphrases and technical detection tools as part of a people, process and technology approach.
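Two of the voice-deepfake defenses above, pre-agreed passphrases and out-of-band verification, can be combined so that neither alone approves an urgent request. A minimal sketch, assuming passphrases are agreed in person and stored only as salted PBKDF2 hashes (all names here are illustrative, not any vendor's API):

```python
import hashlib
import hmac

def hash_passphrase(passphrase: str, salt: bytes) -> bytes:
    """Derive a salted hash so the plaintext passphrase is never stored."""
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 100_000)

def caller_passes_challenge(spoken: str, stored_hash: bytes, salt: bytes) -> bool:
    """Normalize what the caller says, then compare in constant time."""
    candidate = hash_passphrase(spoken.strip().lower(), salt)
    return hmac.compare_digest(candidate, stored_hash)

def approve_urgent_request(passed_challenge: bool, called_back_known_number: bool) -> bool:
    # Either control alone can be socially engineered; a cloned voice may have
    # overheard the passphrase, and a callback may be redirected. Require both.
    return passed_challenge and called_back_known_number
```

This mirrors the article's people-process-technology framing: the passphrase is a process control, the callback is out-of-band verification, and detection tooling sits alongside rather than replacing them.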

Ireland launches GDPR probe into X's Grok for sexual images

🔎 Ireland's Data Protection Commission has opened a formal probe into X over the use of its Grok AI to generate non‑consensual sexual images of real people, including children. The inquiry will assess whether X Internet Unlimited Company complied with core GDPR duties such as lawful processing, data protection by design, and required impact assessments. The DPC said it has been engaging with XIUC since media reports emerged and has commenced a large‑scale inquiry. As X's EU lead regulator, the DPC's findings could trigger cross‑border enforcement and significant penalties.

Guiding Children on Posting Selfies: Risks and Advice

📷 This article examines whether parents should allow children to post selfies online, arguing that prohibition rarely works and parental guidance is a more effective approach. It details specific harms — from predator grooming and AI-enabled sextortion (via nudifier tools) to identity theft, cyberbullying and long-term reputational damage — and highlights correlations between heavy social-media use and worsening adolescent mental health. Practical recommendations include open communication, using privacy settings and geolocation controls, selective follower approval, routine digital clean-ups and household screen-time rules, while urging parents to model responsible sharing and reduce their own “sharenting.”

How Modern Technology Is Reshaping Romantic Relationships

💌 Technology is changing how people communicate, date, and form attachments. Messaging dialects, emoji usage and generational differences now shape tone and intimacy, but they can also be exploited: attackers can use AI to clone someone’s voice or texting style for social engineering. The article reviews AI companions such as Replika and high‑profile AI weddings, and warns about deepfakes, catfishing, phishing, stalking and sextortion. Practical guidance includes verifying contacts with video calls or reverse image search, using security software, stripping photo metadata, locking down privacy settings, and choosing end‑to‑end encrypted apps with self‑destructing messages for sensitive content.