CISO Brief

All news with the #deepfake fraud tag

85 articles · page 3 of 5

Nomani Investment Scam Surges 62% Using AI Deepfake Ads

🔍 ESET says the Nomani investment scam rose 62% in 2025 as operators expanded beyond Facebook to platforms such as YouTube and deployed AI-generated deepfake video testimonials to lure victims. The firm blocked over 64,000 unique malicious URLs, with most detections in Czechia, Japan, Slovakia, Spain, and Poland. Attackers improved deepfake quality, shortened ad runs, and used cloaking and native ad formats such as in-platform forms to harvest credentials and payment details; they even followed up with fake Europol/INTERPOL recovery schemes to extract more funds.

SEC Charges Firms Over Alleged $14M AI-Themed Crypto Scam

⚖️ The U.S. Securities and Exchange Commission has filed charges alleging an elaborate cryptocurrency fraud that stole more than $14 million from retail investors. The complaint names the trading platforms Morocoin Tech, Berge Blockchain, and Cirkor, along with investment clubs that lured victims with fake AI-generated investment tips on WhatsApp. Investors were steered into bogus Security Token Offerings and fake trading platforms that later froze accounts and demanded advance fees. The SEC is seeking injunctions, civil penalties, and repayment with prejudgment interest.

Scammers Use AI to Forge Art Documentation and Certificates

🖼️ Fraudsters are using AI and large language models to create highly convincing fake invoices, appraisal certificates and certificates of authenticity for artworks, making forgeries harder to detect. Brokers and appraisers, including Marsh, report that chatbots can invent plausible experts and documentation or hallucinate false references that owners accept as real. Insurers and valuation firms are now deploying AI-based metadata analysis and anomaly detection to flag manipulated provenance and guide human review.

The AI Fix #81: ChatGPT, Deepfakes and AI Agents Highlights

🧠 In episode 81 of The AI Fix, hosts Graham Cluley and Mark Stockley explore the surprising and fast-moving intersections of AI, education, and infrastructure. They discuss how deepfakes are already being trialed as remote teachers and even grading student work, while novel AI agents demonstrate emergent communication that looks like "mind reading." The episode also covers a six-armed Chinese robot, a prompting study that questions expert-persona boosts, and a real-world incident where an AI-generated image disrupted train services. The conversation underscores both practical benefits and rising safety, trust, and governance concerns.

Smashing Security 447 — AI Abuse, Stalking and Museum Heist

🤖 On episode 447 of the Smashing Security podcast Graham Cluley and guest Jenny Radcliffe explore how generative AI can enable stalking — reporting that Grok was used to doxx people, outline stalking strategies, and share revenge‑porn tips. They also recount the audacious Louvre crown jewels heist, where thieves abused assumptions about what ‘looks normal’. Graham additionally interviews Rob Edmondson about how Microsoft 365 misconfigurations and over‑privileged accounts create serious security exposures. The episode emphasizes practical lessons in threat modelling and access hygiene.

Gartner Urges Enterprises to Block AI Browsers Now

⚠️ Gartner analysts Dennis Xu, Evgeny Mirolyubov and John Watts strongly recommend that enterprises block AI browsers for the foreseeable future, citing both known vulnerabilities and additional risks inherent to an immature technology. They warn of irreversible, non‑auditable data loss when browsers send active web content, tab data and browsing history to cloud services, and of prompt‑injection attacks that can cause fraudulent actions. Concrete flaws—such as unencrypted OAuth tokens in ChatGPT Atlas and the Comet 'CometJacking' issue—underscore that traditional controls are insufficient; Gartner advises blocking installs with existing network and endpoint controls, restricting pilots to small, low‑risk groups, and updating AI policies.

FBI Alerts on AI-Assisted Fake Kidnapping Video Scams

⚠️ The FBI is warning of AI-assisted fake kidnapping scams that use fabricated images, video, and audio to extort victims. Criminal actors typically send texts claiming a loved one has been abducted and follow with multimedia that appears genuine but often contains subtle inaccuracies. Examples include missing tattoos, incorrect body proportions, and other mismatches, and attackers may use time-limited messages to pressure victims. Observers note that the technique's effectiveness is currently uncertain, but it is likely to be automated and scaled as AI tools improve.

FBI Warns of Virtual Kidnapping Scams Using Altered Photos

🔒 The FBI has issued a public service announcement warning that criminals are manipulating images shared on social media to support virtual kidnapping ransom schemes. Scammers contact victims by text, claim a relative has been abducted, and send altered photo or video proof-of-life, sometimes using timed messages to prevent scrutiny. The FBI urges vigilance: avoid sharing travel details, establish a family code word, and capture screenshots or recordings for investigators. BleepingComputer identified multiple social media examples and reports of number spoofing.

German Fraud Ring Used Fake Celebrity Ads for Investments

🔍 Investigators say an alleged international fraud ring used fake celebrity advertising to market a purported 'secret financial product,' duping at least 120 people across Germany out of more than €1.3 million. Authorities carried out coordinated searches in Germany and Israel, focusing on Tel Aviv and Düsseldorf, and targeted publishers accused of running misleading campaigns. The scheme promoted AI-optimized investment strategies and automated crypto trading via large social-media campaigns and fake news sites; victims were typically left with a total loss of their invested capital, and seized evidence is still being analyzed.

Global Execs Rank Disinformation, AI and Cyber Risks

🧭 Business leaders across 116 economies told the World Economic Forum that misinformation/disinformation, cyber insecurity and the adverse outcomes of AI rank among the top near-term threats to national stability. The WEF’s Executive Opinion Survey 2025 canvassed 11,000 executives, who placed technological risks alongside economic and societal concerns. Respondents flagged AI-driven deepfakes, model exploitation and AI-assisted cyber techniques as amplifiers of both disinformation campaigns and critical-system threats.

North Korea Recruits Engineers to Rent Identities for Fraud

🔍 Security researchers revealed a North Korean scheme in which the Lazarus-linked Famous Chollima group recruits developers to rent out their identities and act as frontmen for remote jobs, enabling espionage and illicit fundraising. The actors spam GitHub and other platforms, use AI-assisted tools and deepfake techniques, and request identity data and remote access to engineers' machines. Analysts deployed a sandboxed ANY.RUN honeypot and observed use of AnyDesk, Astrill VPN, OTP extensions, and AI interview assistants to conceal origin and streamline infiltration.

2026 Predictions: Autonomous AI and the Year of the Defender

🛡️ Palo Alto Networks forecasts 2026 as the Year of the Defender, with enterprises countering AI-driven threats with AI-enabled defenses. The report outlines six predictions — identity deepfakes, autonomous agents as insider threats, data poisoning, executive legal exposure, accelerated quantum urgency, and the browser as an AI workspace. It urges autonomy with control, unified DSPM/AI‑SPM platforms, and crypto agility to secure the AI economy.

AI and Deepfakes Drive Surge in Sophisticated Identity Fraud

🔍 Sumsub’s 2025 Identity Fraud Report finds that global identity fraud attempts fell slightly to 2.2%, but highly sophisticated attacks rose 180%. These multi-vector schemes combine synthetic identities, AI-driven deepfakes, layered social engineering, device tampering and cross-channel manipulation, making them far harder to detect. The report warns organisations to replace manual controls with real-time behavioural and telemetry analysis to counter this shift from quantity to quality in fraud.

AI-generated fake sites deliver malicious Syncro builds

⚠️ Kaspersky describes a campaign in which attackers used the AI-powered web builder Lovable to mass-generate convincing fake vendor pages that host malicious installers. Those pages distribute a custom, attacker-signed build of the legitimate remote administration tool Syncro, which installs silently and grants full remote access. Because the payload is a legitimate admin tool altered for abuse, detection is difficult and victims risk data theft and loss of cryptocurrency funds.

AI-Driven GLP-1 Scams Impersonate European Health Authorities

⚠️ Criminal networks are exploiting shortages of GLP-1 drugs like Ozempic, Wegovy and Mounjaro, using AI to generate convincing counterfeit websites, emails and documents that impersonate regulators and health services across Europe. They are hijacking the identities of the NHS, AEMPS, ANSM, BfArM and AIFA to market fake weight-loss products and harvest payments. Check Point Research documents the tactics, scale and public-safety implications of this rapidly evolving scam epidemic.

The AI Fix #77: Genome LLM, Ethics, Robots and Romance

🔬 In episode 77 of The AI Fix, Graham Cluley and Mark Stockley survey a week of unsettling and sometimes absurd AI stories. They discuss a bioRxiv preprint showing a genome-trained LLM generating novel bacteriophage sequences, debates over whether AI should be allowed to decide life-or-death outcomes, and a woman who legally ‘wed’ a ChatGPT persona she named "Klaus." The episode also covers a robot's public face-plant in Russia, MIT quietly retracting a flawed cybersecurity paper, and reflections on how early AI efforts were cobbled together.

AI and Voter Engagement: Transforming Political Campaigning

🗳️ This essay examines how AI could reshape political campaigning by enabling scaled forms of relational organizing and new channels for constituent dialogue. It contrasts the connective affordances of Facebook in 2008, which empowered person-to-person mobilization, with today’s platforms (TikTok, Reddit, YouTube) that favor broadcast or topical interaction. The authors show how AI assistants can draft highly personalized outreach and synthesize constituent feedback, survey global experiments from Japan’s Team Mirai to municipal pilots, and warn about deepfakes, artificial identities, and manipulation risks.

Generative AI Drives Rise in Deepfakes and Digital Forgeries

🔍 A new report from Entrust analyzing over one billion identity verifications between September 2024 and September 2025 warns that fraudsters increasingly use generative AI to produce hyper-realistic digital forgeries. Physical counterfeits still account for 47% of attempts, but digital forgeries now represent 35%, while deepfakes comprise 20% of biometric fraud attempts. The report also highlights a 40% annual rise in injection attacks that feed fake images directly into verification systems.

Smashing Security Ep. 443: Tinder, Buffett Deepfake

🎧 In episode 443 of Smashing Security, host Graham Cluley and guest Ron Eddings examine Tinder’s proposal to scan users’ camera rolls and the emergence of convincing Warren Buffett deepfakes offering investment advice. They discuss the privacy, consent and fraud implications of platform-level image analysis and the risks posed by synthetic media. The conversation also covers whether agentic AI could replace human co-hosts, the idea of EDR for robots, and practical steps to mitigate these threats. Cultural topics such as Lily Allen’s new album and the release of Claude Code round out the episode.

AI-Generated Receipts Spur New Detection Arms Race

🔍 AI can now produce highly convincing receipts that reproduce paper texture, detailed itemization, and forged signatures, making manual review unreliable. Expense platforms and employers are deploying AI-driven detectors that analyze image metadata and transactional patterns to flag likely fakes. A simple evasion, photographing or screenshotting the generated image to strip its provenance metadata, undermines those checks, so vendors also examine contextual signals such as repeated server names, timing anomalies, and broader travel details, fueling an ongoing security arms race.
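The contextual signals described in this item can be illustrated with a toy heuristic. The field names, thresholds, and flag wording below are illustrative assumptions, not any vendor's actual detection rules:

```python
from collections import Counter
from datetime import datetime

def contextual_flags(claims):
    """Toy heuristic over a batch of expense claims.

    Each claim is a dict with 'merchant', 'server', and an ISO-8601
    'timestamp'. Returns human-readable red flags of the kind an
    expense platform might surface for human review.
    """
    flags = []

    # Signal 1: the same server name recurring across receipts that are
    # supposedly from unrelated meals.
    server_counts = Counter(c["server"] for c in claims if c.get("server"))
    for name, n in server_counts.items():
        if n >= 3:
            flags.append(f"server '{name}' appears on {n} receipts")

    # Signal 2: timing anomalies -- receipts from different merchants
    # only minutes apart.
    times = sorted(
        (datetime.fromisoformat(c["timestamp"]), c["merchant"]) for c in claims
    )
    for (t1, m1), (t2, m2) in zip(times, times[1:]):
        if m1 != m2 and (t2 - t1).total_seconds() < 600:
            flags.append(f"receipts from '{m1}' and '{m2}' only minutes apart")

    return flags
```

Real systems combine such contextual checks with metadata and model-based image analysis; this sketch only shows the shape of the contextual layer.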