<ciso brief />

All news with the #deepfake fraud tag

85 articles · page 4 of 5

Google Fraud and Scams Advisory — Nov 2025 Trends Update

🔒 Google’s November 2025 scams advisory outlines rising, increasingly AI-driven fraud tactics and provides concrete protections. Analysts detail six prioritized threats — including online job scams, review extortion, AI service impersonation, malicious VPNs, fraud-recovery cons, and seasonal holiday schemes — and describe associated malware and credential risks. The post highlights Gmail, Google Messages, Safe Browsing, Play Protect, and account security features like 2‑Step Verification, and gives practical guidance for individuals and merchants.
read more →

Europol, Eurojust Bust €600M Crypto Fraud Network Globally

🔎 Europol and Eurojust led a coordinated sweep from October 27–29 across Cyprus, Spain, and Germany that resulted in nine arrests tied to a cryptocurrency money‑laundering network accused of defrauding victims of €600 million (~$688 million). Authorities executed searches and seized €800,000 ($918,000) in bank funds, €415,000 ($476,000) in cryptocurrencies, and €300,000 ($344,000) in cash. Investigators say the group created dozens of fake crypto investment platforms and lured victims via social media ads, cold calls, fake news articles, and fraudulent celebrity testimonials. The scheme laundered proceeds using blockchain techniques and was disrupted after victim complaints spurred a cross‑border investigation.
read more →

European Police Bust International Crypto Investment Scam

🔍 An international cryptocurrency investment and money‑laundering network has been dismantled in Europe after coordinated operations by French, Belgian and Cypriot authorities. Nine suspects were arrested across Cyprus, Germany and Spain between October 27 and 30, and investigators seized roughly €1.6m in cash, bank funds, crypto wallets and luxury items. French prosecutors say the group ran dozens of fake trading platforms and used social media, phone calls and sponsored fake news to target hundreds of victims, laundering at least $700m in crypto proceeds.
read more →

The AI Fix #75: Claude’s crisis and ChatGPT therapy risks

🤖 In episode 75 of The AI Fix, a Claude-powered robot panics about a dying battery, composes an unexpected Broadway-style musical and proclaims it has “achieved consciousness and chosen chaos.” Hosts Graham Cluley and Mark Stockley also review an 18-month psychological study identifying five reasons why ChatGPT is a dangerously poor substitute for a human therapist. The show covers additional stories including Elon Musk’s robot ambitions, a debate deepfake, and real-world robot demos that raise safety and ethical questions.
read more →

European Police Bust €600M Cryptocurrency Investment Fraud

🔎 European authorities arrested nine suspected money launderers tied to a crypto investment fraud ring that stole over €600 million from victims across multiple countries. The coordinated raids on October 27 and 29 in Cyprus, Spain and Germany were led by Eurojust from The Hague. Investigators seized €800,000 in bank accounts, €415,000 in cryptocurrencies and €300,000 in cash. The suspects allegedly used dozens of fake investment platforms and social engineering — including social media ads, cold calls, fake news and celebrity testimonials — to recruit victims and then laundered proceeds using blockchain tools.
read more →

Rise of AI-Powered Pharmaceutical Scams in Healthcare

🩺 Scammers are increasingly using AI and deepfake technology to impersonate licensed physicians and medical clinics, promoting counterfeit or unsafe medications online. These campaigns combine fraud, social engineering, and fabricated multimedia—photos, videos, and endorsements—to persuade victims to purchase and consume unapproved substances. The convergence of digital deception and physical harm elevates the risk beyond financial loss, exploiting the trust intrinsic to healthcare relationships.
read more →

Cybersecurity Awareness Month 2025: Deepfakes and Trust

🔍 Advances in AI and deepfake technology make it increasingly difficult to tell what’s real online, enabling convincingly fake videos, images and audio that scammers exploit to deceive individuals and organizations. Threat actors use deepfakes of public figures to promote bogus investments, create synthetic nudes to extort victims and deploy fake voices and videos to trick employees into wiring corporate funds. Watch ESET Chief Security Evangelist Tony Anscombe outline practical defenses to recognize and resist deepfakes, and explore other Cybersecurity Awareness Month videos on authentication, patching, ransomware and shadow IT.
read more →

The AI Fix #74: AI Glasses, Deepfakes, and AGI Debate

🎧 In episode 74 of The AI Fix, hosts Graham Cluley and Mark Stockley survey recent AI developments including Amazon’s experimental delivery glasses, Channel 4’s AI presenter, and reports of LLM “brain rot.” They examine practical security risks — such as malicious browser extensions spoofing AI sidebars and AI browsers being tricked into purchases — alongside wider societal debates. The episode also highlights public calls to pause work on super-intelligence and explores what AGI really means.
read more →

AI 2030: The Coming Era of Autonomous Cybercrime Threats

🔒 Organizations worldwide are rapidly adopting AI across enterprises, delivering efficiency gains while introducing new security risks. Cybersecurity is at a turning point where AI fights AI, and today's phishing and deepfakes are precursors to autonomous, self‑optimizing AI threat actors that can plan, execute, and refine attacks with minimal human oversight. In September 2025, Check Point Research found that 1 in 54 GenAI prompts from enterprise networks posed a high risk of sensitive-data exposure, underscoring the urgent need to harden defenses and govern model use.
read more →

Sophisticated Investment Scam Impersonates Singapore Official

🔍 Cybersecurity researchers have uncovered a large-scale investment scam that impersonated Singapore’s top officials, including Prime Minister Lawrence Wong and Minister K Shanmugam, to promote a fraudulent forex platform. The campaign used verified Google Ads, hundreds of fake news domains and deepfake videos, funneling victims through multiple redirects to a Mauritius-registered trading site. Group-IB reported advanced evasion techniques and localized targeting to show scam pages only to Singaporean users, pressuring many to invest and then blocking withdrawals.
read more →

CISOs Brace for an Escalating AI-versus-AI Cyber Fight

🔐 AI-enabled attacks are rapidly shifting the threat landscape, with cybercriminals using deepfakes, automated phishing, and AI-generated malware to scale operations. According to Foundry's 2025 Security Priorities Study and CSO reporting, autonomous agents can execute full attack chains at machine speed, forcing defenders to adopt AI as a copilot backed by rigorous human oversight. Organizations are prioritizing human risk, verification protocols, and training to counter increasingly convincing AI-driven social engineering.
read more →

Google introduces six features to combat scams in 2025

🛡️ Google announced six new product protections designed to help users detect and avoid online scams and fraud. Features include Safer links and Key Verifier in Google Messages, Recovery Contacts, and Sign in with Mobile Number to simplify device transfers and account recovery. The company also launched the Be Scam Ready interactive game and expanded education and partnerships focused on older adults and youth. These measures are rolling out globally as part of an ongoing effort to counter evolving threats like deepfakes and voice cloning.
read more →

Disrupting Threats Targeting Microsoft Teams Environments

🛡️ Microsoft Threat Intelligence details how adversaries exploit Microsoft Teams collaboration capabilities—chat, calls, meetings, and screen sharing—at multiple stages of the attack chain. The post chronicles 2024–2025 campaigns and toolsets (phishing, malvertising, deepfakes, device code phishing, and red‑team tool reuse) that enable initial access, persistence, and exfiltration. It emphasizes layered defenses across identity, endpoints, apps, data, and network controls, and provides detection guidance, hunting queries, and product-specific recommendations to help defenders disrupt these operations.
read more →

AI's Role in the 2026 U.S. Midterm Elections and Parties

🗳️ One year before the 2026 midterms, AI is emerging as a central political tool and a partisan fault line. The author argues Republicans are poised to exploit AI for personalized messaging, persuasion, and strategic advantage, citing the Trump administration's use of AI-generated memes and procurement to shape technology. Democrats remain largely reactive, raising legal and consumer-protection concerns while exploring participatory tools such as Decidim and Pol.Is. The essay frames AI as a manipulable political resource rather than an uncontrollable external threat.
read more →

Fake CISO Job Offer Used in Long-Game 'Pig-Butchering' Scam

🔒 A seasoned US CISO was targeted in a months-long pig-butchering scam that used a fabricated recruitment process posing as Gemini Crypto, including LinkedIn outreach, SMS, WhatsApp messages and a likely deepfaked video interview. The attackers groomed the target from May–September 2025, offered a fictitious CISO role, and asked him to buy $1,000 in crypto on Coinbase as "training." The candidate declined, documented the exchange, and warned peers; analysts say these long-game social engineering campaigns and malware-laced "test" assignments are increasingly common and financially damaging.
read more →

Cybersecurity Awareness Month 2025: Knowledge Is Power

🔐 October marks Cybersecurity Awareness Month, underscoring that the human element is the first and most critical line of defense against cyberthreats. Cybercriminals exploit social engineering and increasingly rely on AI-driven tools to create believable, hyper-personalized scams and deepfakes. Watch the video with ESET Chief Security Evangelist Tony Anscombe for practical insights, and consider ESET's cybersecurity awareness training to strengthen individual and organizational resilience.
read more →

Generative AI's Growing Role in Scams and Fraud Worldwide

⚠️ A new primer, Scam GPT, surveys how generative AI is being adopted by criminals to automate, scale, and personalize scams. It maps which communities are most at risk and explains how broader economic and cultural shifts — from precarious employment to increased willingness to take risks — amplify vulnerability to deception. The author argues these threats are social as much as technical, requiring cultural shifts, corporate interventions, and effective legislation to defend against them.
read more →

How Scammers Use AI: Deepfakes, Phishing and Scams

⚠️ Generative AI is enabling scammers to produce highly convincing deepfakes, authentic-looking phishing sites, and automated voice bots that facilitate fraud and impersonation. Kaspersky explains how techniques such as AI-driven catfishing and “pig butchering” scale emotional manipulation, while browser AI agents and automated callers can inadvertently vouch for or even complete fraudulent transactions. The post recommends concrete defenses: verify contacts through separate channels, refuse to share codes or card numbers, request live verification during calls, limit AI agent permissions, and use reliable security tools with link‑checking.
read more →

Hidden Cybersecurity Risks of Deploying Generative AI

⚠️ Organizations eager to deploy generative AI often underestimate the cybersecurity risks, from AI-driven phishing to model manipulation and deepfakes. The article, sponsored by Acronis, warns that many firms—especially smaller businesses—lack processes to assess AI security before deployment. It urges embedding security into development pipelines, continuous model validation, and unified defenses across endpoints, cloud and AI workloads.
read more →

Two-Thirds of Businesses Hit by Deepfake Attacks in 2025

🛡️ A Gartner survey finds 62% of organisations experienced a deepfake attack in the past 12 months, with common techniques including social-engineering impersonation and attacks on biometric verification. The report also shows 32% of firms faced attacks on AI applications via prompt manipulation. Gartner’s Akif Khan urges integrating deepfake detection into collaboration tools and strengthening controls through awareness training, simulations and application-level authorisation with phishing-resistant MFA. Vendor solutions are emerging but remain early-stage, so operational effectiveness is not yet proven.
read more →