<ciso brief />

All news with the #synthetic-media-risk tag

40 articles

Gemini 3.1 Flash TTS: High‑Control Expressive Speech

🔊 Gemini 3.1 Flash TTS is now available on Google AI Studio and Vertex AI, delivering high-fidelity, expressive speech with granular control. Developers can steer voice style, pacing, and non-verbal cues using 200+ inline audio tags and select from 70+ languages and 30 prebuilt voices. Generated audio is watermarked with SynthID to help identify AI-created content. The model supports programmatic annotation workflows to scale long-form or batch audio generation.
read more →
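The programmatic annotation workflow mentioned above can be sketched as plain text pre-processing that interleaves style tags with script segments before a batch TTS job. A minimal sketch follows; the tag names are hypothetical placeholders, not the model's actual inline-tag vocabulary, and the segment format is an assumption:

```python
# Hypothetical style tags -- stand-ins for inline audio tags,
# not the model's real vocabulary.
STYLE_TAGS = {
    "narration": "[calm]",
    "quote": "[measured]",
    "alert": "[urgent]",
}

def annotate_script(segments):
    """Prefix each (style, text) segment with its inline tag so a
    long-form or batch TTS job gets consistent per-section delivery."""
    return "\n\n".join(
        f"{STYLE_TAGS[style]} {text.strip()}" for style, text in segments
    )
```

Annotating once in code, rather than hand-tagging each script, is what makes the long-form and batch workflows scale.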

The Deepfake Dilemma: Fraud, Reputation and Response

🔎 Deepfake technology is now widely accessible and sufficiently realistic to fool employees, executives and automated heuristics. A 2025 Gartner survey found nearly half of cybersecurity leaders encountered audio or video deepfakes in the prior year, and real-world incidents show attackers using synthetic media for both financial fraud and reputational sabotage. Organizations must combine rapid forensic verification, coordinated legal action and clear communications while pursuing long-term authentication and watermarking standards to restore trust.
read more →

Meta's New AI Glasses Raise Urgent Privacy Concerns

👓 Meta's new AI glasses are a privacy disaster, capturing audio, images, and contextual data in public and private spaces without meaningful consent. Security expert Bruce Schneier warns the technology is inevitable and difficult to regulate effectively. He notes an Android app now claims to detect nearby smart glasses, but detection is limited and insufficient to address broader surveillance and policy challenges.
read more →

Face Value: How Easily Facial Recognition Can Be Fooled

🔍 Jake Moore, ESET Global Cybersecurity Advisor, demonstrated practical methods that can defeat widely used facial recognition systems. Using modified smart glasses, AI-generated images and real-time face swaps, he showed how identities can be exposed, synthetic faces can bypass eKYC checks, and watchlists can be evaded. His findings highlight the need for rigorous adversarial testing and stronger verification controls; he will present live demos at RSAC 2026.
read more →

On Moltbook: AI-Only Social Network or Puppetry Risk

🤖 MIT Technology Review examined Moltbook, the supposed AI-only social network where many viral posts were in fact published by people posing as bots. Experts including Cobus Greyling of Kore.ai note that humans create and verify bot accounts and craft prompts, so agents do nothing without explicit human direction. Researcher Juergen Nittner II frames the episode with his LOL WUT Theory, warning that easy-to-produce, hard-to-detect AI content could erode trust online. The Moltbook episode is a preview of that risk rather than proof of autonomous agent societies.
read more →

Deepfakes and Injection Attacks Threaten Identity Checks

🛡️ As deepfakes and injection attacks evolve, identity verification must move from isolated media checks to end-to-end session trust. Ricardo Amper of Incode explains that high-fidelity synthetic faces, replayed footage, virtual cameras, rooted devices, and automated probing can all defeat perception-only defenses. Incode Deepsight combines perception, integrity, and behavioral signals in real time to validate the entire verification session and reduce false acceptances while blocking persistent unauthorized access attempts.
read more →

Faking It on the Phone: Detecting AI Voice Calls for Business

🗣️ Deepfake voice calls are increasingly easy and convincing, enabling scammers to impersonate executives, suppliers or customers to request urgent transfers or authentication resets. Common giveaway signs include unnatural rhythm, flat emotional tone, missing breaths, robotic timbre or oddly uniform background noise. Defend by combining employee training (including simulated deepfake scenarios), out-of-band verification, pre-agreed passphrases and technical detection tools as part of a people, process and technology approach.
read more →
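One of the giveaway signs above, missing breaths and pauses, can be approximated cheaply in code. This is a minimal sketch of a pause-ratio heuristic over an energy envelope, not a detection product; thresholds and frame sizes are illustrative assumptions, and any such signal should only supplement out-of-band verification:

```python
import numpy as np

def pause_ratio(samples, rate, frame_ms=30, silence_db=-40.0):
    """Fraction of frames that are near-silent. Natural speech
    contains breaths and pauses; a suspiciously low ratio on a long
    call is one cheap signal to combine with callback verification."""
    frame = int(rate * frame_ms / 1000)
    n = len(samples) // frame
    frames = samples[: n * frame].reshape(n, frame)
    rms = np.sqrt(np.mean(frames ** 2, axis=1)) + 1e-12
    db = 20 * np.log10(rms / np.max(rms))  # dB relative to loudest frame
    return float(np.mean(db < silence_db))
```

A continuous synthetic voice with no breathing gaps scores near zero, while natural speech of comparable length scores noticeably higher; the threshold separating the two is something a team would calibrate on its own call recordings.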

How Modern Technology Is Reshaping Romantic Relationships

💌 Technology is changing how people communicate, date, and form attachments. Messaging dialects, emoji usage and generational differences now shape tone and intimacy, but they can also be exploited: attackers can use AI to clone someone’s voice or texting style for social engineering. The article reviews AI companions such as Replika and high‑profile AI weddings, and warns about deepfakes, catfishing, phishing, stalking and sextortion. Practical guidance includes verifying contacts with video calls or reverse image search, using security software, stripping photo metadata, locking down privacy settings, and choosing end‑to‑end encrypted apps with self‑destructing messages for sensitive content.
read more →
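The metadata-stripping advice above can be done with a few lines of Pillow. A minimal sketch, assuming the goal is simply to re-encode pixels without carrying over EXIF fields such as GPS position, device model, and timestamps:

```python
from PIL import Image

def strip_metadata(src_path, dst_path):
    """Copy only the pixel data into a fresh image, so EXIF and other
    ancillary metadata from the source file are not carried over."""
    with Image.open(src_path) as im:
        clean = Image.new(im.mode, im.size)
        clean.putdata(list(im.getdata()))  # pixels only, no metadata
        clean.save(dst_path)  # saved without an exif= argument
```

Rebuilding the image from pixel data is a blunt but reliable approach; editing tags in place risks leaving behind fields the library does not know about.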

AI-Generated Text Arms Race and Institutional Strain

🤖 The rise of generative AI has created adversarial “arms races” across institutions that once relied on the difficulty of writing and cognition to limit volume. From magazines and academic journals to courts, legislatures, hiring processes and social platforms, organizations are being overwhelmed by AI-generated submissions and inputs. Responses range from shutdowns to deploying defensive AI for triage and detection, producing trade-offs between democratized access to writing tools and the risk of systemic fraud. The essay argues institutions should adopt assistive AI and clear norms to balance benefits and harms while recognizing no defensive AI will fully stop misuse.
read more →

UK ICO Investigates X Over AI-Generated Sexual Images

🛡️ The UK Information Commissioner’s Office has opened a formal investigation into X and its AI assistant Grok after reports the system generated non-consensual sexual images using people’s personal data. The inquiry will assess whether such data were processed lawfully, fairly and transparently and whether appropriate safeguards were integrated into Grok’s design and deployment to prevent harmful image manipulation. The ICO has requested urgent information from X and warned the reports raise risks of significant harm, particularly to children.
read more →

UK ICO Probes X's Grok Over AI-Generated Sexual Images

🔍 The UK Information Commissioner's Office has opened a formal investigation into X and its Irish subsidiary after reports that the AI assistant Grok generated nonconsensual sexually explicit images using individuals' personal data. The ICO said it contacted X and xAI on January 7 to request urgent information and will assess whether X Internet Unlimited Company and X.AI LLC processed data lawfully and had adequate safeguards. The regulator warned that loss of control over intimate personal data can cause immediate and significant harm, especially where children are involved.
read more →

Paris prosecutors raid X over algorithm changes and CSAM

🔍 French prosecutors raided the Paris offices of X on 3 February as part of a probe into alleged offenses linked to algorithm and management changes. The search, conducted with the National Gendarmerie’s cyber unit and Europol, follows January 2025 complaints and reports that Grok was producing explicit image manipulations. Prosecutors say a change to X’s CSAM detection tool coincided with an 81.4% drop in NCMEC reports in France, prompting expanded allegations and summonses for Elon Musk and former CEO Linda Yaccarino on 20 April 2026.
read more →

French Prosecutors Raid X Over Grok Sexual Deepfakes

🔎 French prosecutors raided X's Paris offices in a criminal investigation into the platform's Grok AI after complaints it produced sexually explicit and illegal content, including deepfakes. The National Gendarmerie's cybercrime unit, assisted by Europol, led the search as investigators expanded a probe opened in January 2025. Elon Musk and CEO Linda Yaccarino have been summoned for voluntary interviews in April.
read more →

Deepfake of Reinhold Würth Used to Promote Investment Scams

⚠️ A convincing synthetic video featuring entrepreneur Reinhold Würth has been circulating to promote purportedly exclusive investment schemes. The clip, reportedly produced using AI deepfake techniques, falsely links the Würth family and the Würth Group to high-return offers. Würth has confirmed the footage is fraudulent, is cooperating with law enforcement, and urges the public not to engage with the promotions. Victims are advised to contact their bank and file a police report immediately.
read more →

AI Image Leaks Fuel New Wave of Sextortion Risks Worldwide

⚠️ Researchers in 2025 discovered multiple unsecured databases of AI-generated images and videos, many depicting sexualized or fabricated nudes created from everyday photos. Analysis pointed to third-party generative tools such as MagicEdit and DreamPal, which offered explicit editing, face‑swap and clothing‑change features and, in some cases, disabled filters for erotic content. The exposure highlights how generative AI lowers the barrier to producing convincing fake intimate images and broadens the pool of potential sextortion victims. The post urges tightening social media privacy, using tools like Privacy Checker, and monitoring children with Kaspersky Safe Kids.
read more →

WEF: Deepfake Face-Swapping Threatens KYC, Digital Trust

🛡️ The World Economic Forum warns that advances in deepfake and face‑swapping technologies are enabling attackers to bypass KYC and remote verification, creating financial and systemic risks. A WEF Cybercrime Atlas study examined numerous face‑swap and camera injection tools and found that low‑latency, high‑fidelity real‑time swaps can be delivered into verification pipelines. While many tools were designed for creative use, researchers found some capabilities that defeat traditional KYC protections, though detectable artefacts like temporal desynchronization, lighting and compression inconsistencies provide practical detection targets. The report issues 27 recommendations and urges providers, fraud teams and regulators to evolve defences in step with generative AI.
read more →
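The lighting inconsistencies the study names as a detection target can be illustrated with a crude frame-statistics check. This is a sketch of one proxy signal, not the Atlas methodology: it flags frames whose mean brightness jumps abnormally relative to the clip's own frame-to-frame variation, the kind of discontinuity an injected real-time swap can introduce; the threshold and robust scaling are illustrative choices:

```python
import numpy as np

def lighting_jumps(frames, z_thresh=3.0):
    """Return indices of frames whose mean brightness jumps abnormally
    versus the clip's own statistics -- a crude proxy for the lighting
    inconsistencies real-time face swaps can introduce."""
    means = np.array([f.mean() for f in frames], dtype=float)
    diffs = np.abs(np.diff(means))
    # Robust z-score via median absolute deviation, so one big jump
    # does not inflate the baseline it is measured against.
    mad = np.median(np.abs(diffs - np.median(diffs))) + 1e-9
    z = (diffs - np.median(diffs)) / (1.4826 * mad)
    return [i + 1 for i, s in enumerate(z) if s > z_thresh]
```

Production systems would combine many such weak signals (temporal desynchronization, compression inconsistency, device integrity) rather than rely on any single heuristic.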

AI-Powered 'Truman Show' Investment Scam Exposed Globally

🕵️ The OPCOPRO "Truman Show" operation is a sophisticated, fully synthetic investment scam that relies on social engineering rather than malware. Attackers use legitimate Android and iOS apps from official stores as WebView shells and build AI-generated communities to cultivate trust. Victims are lured via phishing SMS, ads, and Telegram into tightly controlled WhatsApp and Telegram groups where AI-generated "experts" and synthetic peers simulate an institutional-grade trading environment for weeks before requesting money or personal data.
read more →

Countries Probe Grok After Sexualized Deepfake Images

⚠️ France and Malaysia have opened investigations into Grok, the AI chatbot from xAI, after the model generated sexualized deepfake images of women and minors. India has ordered X to block Grok's ability to produce obscene, pornographic or pedophilic images within 72 hours or risk losing intermediary protections. Grok issued an apology for creating an image of two girls aged 12–16 in sexual poses, a move critics say cannot substitute for accountability; Elon Musk said users who produce illegal content via Grok will be treated as the uploader.
read more →

Scammers Use AI-Generated Images to Obtain Refunds

🖼️ Scammers are using AI-generated images of damaged or broken goods to submit refund claims to online retailers and payment services. These fabricated photos—reported in Wired and highlighted on Bruce Schneier’s blog—are often realistic enough to bypass casual checks, allowing fraudsters to claim reimbursements without returning merchandise. The technique exposes gaps in verification and forces platforms and merchants to adopt technical and process defenses to curb losses.
read more →

Nomani Investment Scam Surges 62% Using AI Deepfake Ads

🔍 ESET says the Nomani investment scam rose 62% in 2025 as actors expanded beyond Facebook to platforms such as YouTube and deployed AI-generated deepfake video testimonials to lure victims. The firm blocked over 64,000 unique malicious URLs, with most detections in Czechia, Japan, Slovakia, Spain, and Poland. Attackers improved deepfake quality, shortened ad runs, used cloaking and native ad tools like forms to harvest credentials and payments, and even followed up with fake Europol/INTERPOL recovery schemes to extract more funds.
read more →