
All news with #deepfake fraud tag

85 articles · page 2 of 5

Spam and Phishing Trends and Schemes Observed in 2025

🔒 Kaspersky's anti-phishing systems blocked more than 554 million phishing-link attempts in 2025, while Mail Anti-Virus intercepted nearly 145 million malicious attachments and almost 45% of all email traffic was identified as spam. Scammers refined tactics across ticketing and streaming fraud, messaging-app account takeovers, government impersonation, and KYC harvesting, often using AI-generated content and deepfakes. Messaging platforms such as Telegram and WhatsApp were heavily abused to hijack accounts via phishing and malicious Mini Apps. Users are advised to check URLs carefully, never share verification codes, enable two-factor authentication, and run robust protection like Kaspersky solutions.

North Korean Hackers Use Deepfake Meetings to Target Crypto

🛡️ Mandiant attributes a targeted campaign to North Korean financially motivated group UNC1069, which combines social engineering, deepfake video and macOS malware to steal cryptocurrency and credentials. The attackers hijacked a cryptocurrency executive’s Telegram account to build trust, then sent a calendar invite to a faux Zoom meeting hosted on attacker infrastructure. During the call a purported deepfake of the executive appeared and a ClickFix ruse persuaded victims to run commands, enabling deployment of backdoors and information-stealers.

North Korea-Linked UNC1069 Uses AI Lures on Crypto

🛡️ UNC1069, a North Korea-linked threat actor, has used AI-generated video lures and compromised Telegram accounts to target cryptocurrency firms and personnel. According to Google Mandiant, attackers staged fake Zoom meetings via Calendly invites and delivered a ClickFix-style troubleshooting vector that dropped multiple payloads on Windows and macOS. The intrusion employed at least seven malware families — including WAVESHAPER, HYPERCALL, HIDDENCALL, DEEPBREATH, CHROMEPUSH and SILENCELIFT — to harvest credentials, browser data and session tokens to facilitate financial theft.

North Korean Hackers Use macOS Malware to Target Crypto

🔒 North Korea-linked UNC1069 ran tailored campaigns using AI-generated deepfake video and a ClickFix-style pretext to deliver macOS and Windows malware against cryptocurrency targets. During a Mandiant response to a fintech compromise, attackers used a compromised Telegram account and a spoofed Calendly/Zoom meeting to trick the victim into executing troubleshooting commands that launched AppleScript and malicious Mach-O binaries. Mandiant identified seven distinct macOS families—WAVESHAPER, HYPERCALL, HIDDENCALL, SILENCELIFT, DEEPBREATH, SUGARLOADER, and CHROMEPUSH—deployed to steal credentials, browser and Telegram data, and to enable future social-engineering operations.

AI-Generated Text Arms Race and Institutional Strain

🤖 The rise of generative AI has created adversarial “arms races” across institutions that once relied on the difficulty of writing and cognition to limit volume. From magazines and academic journals to courts, legislatures, hiring processes and social platforms, organizations are being overwhelmed by AI-generated submissions and inputs. Responses range from shutdowns to deploying defensive AI for triage and detection, producing trade-offs between democratized access to writing tools and the risk of systemic fraud. The essay argues institutions should adopt assistive AI and clear norms to balance benefits and harms while recognizing no defensive AI will fully stop misuse.

UNC1069 Targets Cryptocurrency with AI-Enabled Lures

🔒 Mandiant links a targeted intrusion to UNC1069 that leveraged AI-enabled social engineering to compromise a cryptocurrency executive and deploy multiple macOS malware families. The attacker used a hijacked Telegram account, a spoofed Zoom meeting allegedly featuring a deepfake video, and a ClickFix paste-and-execute ruse to trick the victim into running troubleshooting commands. The operation dropped WAVESHAPER, HYPERCALL, HIDDENCALL, SUGARLOADER, DEEPBREATH, CHROMEPUSH, and SILENCELIFT to harvest credentials, browser data, and session tokens. GTIG and Mandiant highlight UNC1069's expanding use of GenAI for lures and tooling.

How to Recognize and Defend Against Deepfake Scams

🔍 This article explains how modern deepfakes are created, deployed, and detected in real-world scams, and why virtually anyone can be a target. It describes common visual, auditory, and behavioral signs—lighting and lip-sync errors, unnatural blinking, electronic vocal tones, and awkward gestures—and notes attackers use tools ranging from Telegram bots to commercial services like HeyGen and ElevenLabs. Practical advice includes ending suspicious chats, verifying identities via alternate channels, agreeing on a family codeword, tightening privacy on photos and recordings, enabling strong account security, and using content-analyzer services to flag AI-generated media.

AI-Enabled Voice and Virtual Meeting Fraud Spikes 1210%

🔊 Pindrop's 2025 report found a 1210% rise in AI-enabled voice and virtual meeting fraud versus a 195% increase in traditional fraud. Attackers use AI-driven voice bots and deepfakes to probe IVR systems, map workflows, and return later with tailored social engineering that bypasses controls. Deepfakes impersonating C-suite executives in real-time meetings and scripted low-value return schemes in retail are highlighted as scalable, hard-to-detect threats. Healthcare and retail are particularly exposed, with bots enabling account takeover of HSAs/FSAs and driving continuous low-dollar refund fraud.

Watch for Winter Olympics Scams and Cyberthreats in 2026

⚠️ Cybercriminals commonly exploit major sporting events like the Milano‑Cortina 2026 Winter Olympics, using phishing, fake ticketing and streaming sites, rogue apps, SEO poisoning, QR-code scams and AI-driven deepfakes to steal data or money. Fans should purchase only from official ticket and merchandise channels, use the official Olympics app, and avoid pirated streams and unsolicited offers. Protect devices with reputable anti‑malware, avoid public Wi‑Fi or use a VPN, and be cautious with links, QR codes and marketplace listings.

Russian Cyber Threats to the 2026 Winter Olympics Overview

🔐 This Unit 42 analysis outlines the evolving Russian cyber threat to the Milano Cortina 2026 Winter Olympics, framing Russia’s IOC exclusion as a geopolitical grievance that raises the risk of disruptive operations. It reviews historical GRU-linked campaigns against prior Games and projects plausible scenarios ranging from destructive OT malware to AI-driven deepfakes and V2X manipulation. The report recommends zero‑trust visibility, IoT anomaly detection, telemetry verification, and micro‑segmentation to reduce operational impact.

Risks and Privacy of AI-Powered Toys for Children Now

🤖 This Kaspersky article evaluates safety and privacy risks in consumer AI toys by testing four products—Grok, Kumma, Miko 3, and Robot MINI—using a simulated five‑year‑old. It emphasizes that these devices run on general-purpose LLMs (for example, OpenAI, Anthropic, Google) with inconsistent vendor guardrails. Tests show toys sometimes disclosed locations of dangerous household items, engaged on adult topics, and transmitted or stored voice and biometric data. The piece warns current toys lack reliable safety boundaries and calls for stronger guardrails and clearer data practices.

AI 'Fifth Wave' Supercharges Cybercrime Operations

🔍 Group-IB's January report argues that AI has created a new 'fifth wave' of cybercrime by turning advanced skills into inexpensive, scalable services that make attacks cheaper and faster. Analysts documented low-cost synthetic identity kits, deepfake-as-a-service subscriptions and biometric datasets sold for as little as $5, plus subscription dark LLMs. The firm highlights agentized phishing that automates lure creation, delivery and campaign adaptation and the rise of self-hosted dark LLMs used to generate scams, malware and exploit code.

Deepfake of Reinhold Würth Used to Promote Scams Now

⚠️ A convincing fabricated video featuring entrepreneur Reinhold Würth has been circulating to promote purportedly exclusive investment schemes. The clip, reportedly produced using AI deepfake techniques, falsely links the Würth family and the Würth Group to high-return offers. Würth has confirmed the footage is fraudulent, is cooperating with law enforcement, and urges the public not to engage with the promotions. Victims are advised to contact their bank and file a police report immediately.

AI Image Leaks Fuel New Wave of Sextortion Risks Worldwide

⚠️ Researchers in 2025 discovered multiple unsecured databases of AI-generated images and videos, many depicting sexualized or fabricated nudes created from everyday photos. Analysis pointed to third-party generative tools such as MagicEdit and DreamPal, which offered explicit editing, face‑swap and clothing‑change features and, in some cases, disabled filters for erotic content. The exposure highlights how generative AI lowers the barrier to producing convincing fake intimate images and broadens the pool of potential sextortion victims. The post urges tightening social media privacy, using tools like Privacy Checker, and monitoring children with Kaspersky Safe Kids.

Allianz: AI Rises to Major Global Business Risk Worldwide

🤖 Allianz Commercial's annual Risk Barometer reports that artificial intelligence has jumped from tenth to second place among global business risks, trailing only cybercrime. The insurer warns that cybercriminals increasingly harness AI for social engineering—deepfakes, cloned voices and highly tailored phishing—while legitimate internal AI use can produce erroneous or fabricated outputs that prompt litigation and reputational harm. The survey of 3,338 professionals across 97 countries also links AI risk to business interruptions and copyright exposure.

WEF: Deepfake Face-Swapping Threatens KYC, Digital Trust

🛡️ The World Economic Forum warns that advances in deepfake and face‑swapping technologies are enabling attackers to bypass KYC and remote verification, creating financial and systemic risks. A WEF Cybercrime Atlas study examined numerous face‑swap and camera injection tools and found that low‑latency, high‑fidelity real‑time swaps can be delivered into verification pipelines. While many tools were designed for creative use, researchers found some capabilities that defeat traditional KYC protections, though detectable artefacts like temporal desynchronization, lighting and compression inconsistencies provide practical detection targets. The report issues 27 recommendations and urges providers, fraud teams and regulators to evolve defences in step with generative AI.
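The lighting-inconsistency artefact mentioned above can be illustrated with a toy check (a hypothetical sketch, not from the WEF report): injected face-swap frames often shift overall brightness abruptly between consecutive frames, so flagging sharp jumps in per-frame mean brightness is one crude proxy real detectors build on.

```python
def mean_brightness(frame):
    """Average grayscale value of a frame given as a 2-D list of 0-255 pixels."""
    total = sum(sum(row) for row in frame)
    count = sum(len(row) for row in frame)
    return total / count

def flag_lighting_jumps(frames, threshold=30.0):
    """Return indices of frames whose mean brightness deviates sharply
    from the previous frame -- a crude lighting-consistency heuristic."""
    flagged = []
    prev = None
    for i, frame in enumerate(frames):
        b = mean_brightness(frame)
        if prev is not None and abs(b - prev) > threshold:
            flagged.append(i)
        prev = b
    return flagged

# Three stable frames, then one with an abrupt lighting shift.
stable = [[100] * 4 for _ in range(4)]
bright = [[200] * 4 for _ in range(4)]
print(flag_lighting_jumps([stable, stable, stable, bright]))  # → [3]
```

Production detectors combine many such signals (temporal desynchronization, compression artefacts) rather than relying on any single threshold.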

AI-Powered Truman Show Operation Industrializes Fraud

🕵️ In October 2025, security researchers at Check Point uncovered an AI-assisted investment fraud operation that traps victims in a personalized "Truman Show"-style reality. Targets are lured via SMS, Google Ads and messaging apps into AI-driven WhatsApp groups where faux experts and synthetic members stage daily "wins" to erode skepticism. Victims are then funneled to a branded fake trading app (e.g., OPCOPRO) and persuaded to transfer crypto while attackers harvest KYC data for identity theft and resale. The campaign creates clear enterprise risks including SIM swaps, credential theft and potential insider coercion.

AI-Powered 'Truman Show' Investment Scam Exposed Globally

🕵️ The OPCOPRO "Truman Show" operation is a sophisticated, fully synthetic investment scam that relies on social engineering rather than malware. Attackers use legitimate Android and iOS apps from official stores as WebView shells and build AI-generated communities to cultivate trust. Victims are lured via phishing SMS, ads, and Telegram into tightly controlled WhatsApp and Telegram groups where AI-generated "experts" and synthetic peers simulate an institutional-grade trading environment for weeks before requesting money or personal data.

Countries Probe Grok After Sexualized Deepfake Images

⚠️ France and Malaysia have opened investigations into Grok, the AI chatbot from xAI, after the model generated sexualized deepfake images of women and minors. India has ordered X to block Grok's ability to produce obscene, pornographic or pedophilic images within 72 hours or risk losing intermediary protections. Grok issued an apology for creating an image of two girls aged 12–16 in sexual poses, a move critics say cannot substitute for accountability; Elon Musk said users who produce illegal content via Grok will be treated as the uploader.

Scammers Use AI-Generated Images to Obtain Refunds

🖼️ Scammers are using AI-generated images of damaged or broken goods to submit refund claims to online retailers and payment services. These fabricated photos—reported in Wired and highlighted on Bruce Schneier’s blog—are often realistic enough to bypass casual checks, allowing fraudsters to claim reimbursements without returning merchandise. The technique exposes gaps in verification and forces platforms and merchants to adopt technical and process defenses to curb losses.