<ciso brief />

All news with the #synthetic media risk tag

40 articles · page 2 of 2

The AI Fix #81: ChatGPT, Deepfakes and AI Agents Highlights

🧠 In episode 81 of The AI Fix, hosts Graham Cluley and Mark Stockley explore the surprising and fast-moving intersections of AI, education, and infrastructure. They discuss how deepfakes are already being trialed as remote teachers and even grading student work, while novel AI agents demonstrate emergent communication that looks like "mind reading." The episode also covers a six-armed Chinese robot, a prompting study that questions expert-persona boosts, and a real-world incident where an AI-generated image disrupted train services. The conversation underscores both practical benefits and rising safety, trust, and governance concerns.

AI and Deepfakes Drive Surge in Sophisticated Identity Fraud

🔍 Sumsub’s 2025 Identity Fraud Report finds that global identity fraud attempts fell slightly to 2.2%, but highly sophisticated attacks rose 180%. These multi-vector schemes combine synthetic identities, AI-driven deepfakes, layered social engineering, device tampering and cross-channel manipulation, making them far harder to detect. The report warns organisations to replace manual controls with real-time behavioural and telemetry analysis to counter this shift from quantity to quality in fraud.

Generative AI Drives Rise in Deepfakes and Digital Forgeries

🔍 A new report from Entrust analyzing over one billion identity verifications between September 2024 and September 2025 warns that fraudsters increasingly use generative AI to produce hyper-realistic digital forgeries. Physical counterfeits still account for 47% of attempts, but digital forgeries now represent 35%, while deepfakes comprise 20% of biometric frauds. The report also highlights a 40% annual rise in injection attacks that feed fake images directly into verification systems.

When Romantic AI Chatbots Can't Keep Your Secrets Safe

🤖 AI companion apps can feel intimate and conversational, but many collect, retain, and sometimes inadvertently expose highly sensitive information. Recent breaches — including a misconfigured Kafka broker that leaked hundreds of thousands of photos and millions of private conversations — underline real dangers. Users should avoid sharing personal, financial or intimate material, enable two-factor authentication, review privacy policies, and opt out of data retention or training when possible. Parents should supervise teen use and insist on robust age verification and moderation.

Smashing Security Ep. 443: Tinder, Buffett Deepfake

🎧 In episode 443 of Smashing Security, host Graham Cluley and guest Ron Eddings examine Tinder’s proposal to scan users’ camera rolls and the emergence of convincing Warren Buffett deepfakes offering investment advice. They discuss the privacy, consent and fraud implications of platform-level image analysis and the risks posed by synthetic media. The conversation also covers whether agentic AI could replace human co-hosts, the idea of EDR for robots, and practical steps to mitigate these threats. Cultural topics such as Lily Allen’s new album and the release of Claude Code round out the episode.

Rise of AI-Powered Pharmaceutical Scams in Healthcare

🩺 Scammers are increasingly using AI and deepfake technology to impersonate licensed physicians and medical clinics, promoting counterfeit or unsafe medications online. These campaigns combine fraud, social engineering, and fabricated multimedia—photos, videos, and endorsements—to persuade victims to purchase and consume unapproved substances. The convergence of digital deception and physical harm elevates the risk beyond financial loss, exploiting the trust intrinsic to healthcare relationships.

AI-Designed Bioweapons: The Detection vs Creation Arms Race

🧬 Researchers used open-source AI to design variants of ricin and other toxic proteins, then converted those designs into DNA sequences and submitted them to commercial DNA-order screening tools. From 72 toxins and three AI packages they generated roughly 75,000 designs and found wide variation in how four screening programs flagged potential threats. Three of the packages were patched and improved after the test, but many AI-designed variants—often likely non-functional because of misfolding—exposed gaps in detection. The authors warn this imbalance could produce an arms race where design outpaces reliable screening.

TwelveLabs Marengo 3.0 Now on Amazon Bedrock Platform

🎥 TwelveLabs' Marengo Embed 3.0 is now available on Amazon Bedrock, providing a unified video-native multimodal embedding that represents video, images, audio, and text in a single vector space. The release doubles processing capacity—up to 4 hours and 6 GB per file—expands language support to 36 languages, and improves sports analysis and multimodal search precision. It supports synchronous low-latency text and image inference and asynchronous processing for video, audio, and large files.
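For developers curious what calling an embedding model on Bedrock looks like in practice, the sketch below builds a text-embedding request and invokes it through the Bedrock runtime with boto3. The model ID and request/response field names here are assumptions for illustration, not taken from the article; check the TwelveLabs model card on Bedrock for the actual identifier and schema.

```python
import json

# Hypothetical model ID -- confirm the real identifier in the Bedrock console.
MARENGO_MODEL_ID = "twelvelabs.marengo-embed-3-0-v1:0"

def build_text_embed_request(text: str) -> dict:
    """Build a synchronous text-embedding request body.

    Field names are illustrative; the actual schema is defined by
    the TwelveLabs model documentation on Bedrock.
    """
    return {"inputType": "text", "inputText": text}

def embed_text(text: str, region: str = "us-east-1") -> list:
    """Invoke the model via the Bedrock runtime (requires AWS credentials)."""
    import boto3  # third-party; pip install boto3
    client = boto3.client("bedrock-runtime", region_name=region)
    resp = client.invoke_model(
        modelId=MARENGO_MODEL_ID,
        body=json.dumps(build_text_embed_request(text)),
    )
    # Assumed response shape: {"embedding": [...]}
    return json.loads(resp["body"].read())["embedding"]
```

Video and audio inputs would instead go through Bedrock's asynchronous invocation path, since large files exceed the synchronous payload limits.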

Cybersecurity Awareness Month 2025: Deepfakes and Trust

🔍 Advances in AI and deepfake technology make it increasingly difficult to tell what’s real online, enabling convincingly fake videos, images and audio that scammers exploit to deceive individuals and organizations. Threat actors use deepfakes of public figures to promote bogus investments, create synthetic nudes to extort victims and deploy fake voices and videos to trick employees into wiring corporate funds. Watch ESET Chief Security Evangelist Tony Anscombe outline practical defenses to recognize and resist deepfakes, and explore other Cybersecurity Awareness Month videos on authentication, patching, ransomware and shadow IT.

AI and the Future of American Politics: 2026 Outlook

🔍 The essay examines how AI is reshaping U.S. politics heading into the 2026 midterms, with campaign professionals, organizers, and ordinary citizens adopting automated tools to write messaging, target voters, run deliberative platforms, and mobilize supporters. Campaign vendors from Quiller to BattlegroundAI are streamlining fundraising, ad creation, and research, while civic groups and unions experiment with AI for outreach and internal organizing. Absent meaningful regulation, these capabilities scale rapidly and raise risks ranging from decontextualized persuasion and registration interference to state surveillance and selective suppression of political speech.

Citizen Lab: AI Influence Operation Against Iran Exposed

🛡️ Citizen Lab has identified a coordinated network of more than 50 inauthentic accounts on X, labeled PRISONBREAK, conducting an AI-enabled influence operation aimed at provoking Iranian audiences to revolt against the Islamic Republic. The network was created in 2023, with most observable activity beginning in January 2025 and intensifying around June 2025, partially synchronized with Israeli military actions. Organic engagement was limited overall, though some posts achieved tens of thousands of views after seeding to large public communities and likely paid promotion. After reviewing alternatives, Citizen Lab assesses the most consistent hypothesis is direct involvement by an unidentified Israeli government agency or a closely supervised subcontractor.

Citizen Lab: AI-Enabled Influence Operation Targets Iran

🔎 Citizen Lab reports a coordinated AI-enabled influence operation, dubbed PRISONBREAK, that used more than 50 inauthentic X profiles to push narratives aimed at inciting revolt within Iran. Created in 2023, the network became active mainly from January 2025 and produced bursts of activity synchronized with IDF operations in June 2025. Citizen Lab notes limited organic engagement, though some posts reached tens of thousands of views, and assesses the most consistent attribution is to an Israeli government agency or a closely supervised subcontractor.

How Scammers Use AI: Deepfakes, Phishing and Scams

⚠️ Generative AI is enabling scammers to produce highly convincing deepfakes, authentic-looking phishing sites, and automated voice bots that facilitate fraud and impersonation. Kaspersky explains how techniques such as AI-driven catfishing and “pig butchering” scale emotional manipulation, while browser AI agents and automated callers can inadvertently vouch for or even complete fraudulent transactions. The post recommends concrete defenses: verify contacts through separate channels, refuse to share codes or card numbers, request live verification during calls, limit AI agent permissions, and use reliable security tools with link‑checking.

Cloudflare Adds AI Crawl Control to Project Galileo

🛡️ Cloudflare is extending Project Galileo to include Bot Management and AI Crawl Control, giving participating journalists, independent publishers, and non-profits free tools to monitor and manage AI crawlers. These services help distinguish legitimate search crawlers from AI scrapers, analyze crawler behavior by type and provider, and apply tailored rules to protect content. The goal is to help news organizations preserve traffic, protect intellectual property, and negotiate fair compensation with AI companies.
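The kind of distinction these tools draw can be illustrated with a toy user-agent check. The crawler tokens below are ones their operators publicly document; everything else about this sketch is simplified, and real bot management products like Cloudflare's rely on behavioral and network signals, since user-agent strings are trivially spoofed.

```python
# Self-declared AI-crawler tokens, as published by their operators.
AI_CRAWLERS = {"GPTBot", "ClaudeBot", "CCBot", "Google-Extended", "Bytespider"}
# Traditional search-engine crawlers that drive referral traffic.
SEARCH_CRAWLERS = {"Googlebot", "Bingbot"}

def classify_crawler(user_agent: str) -> str:
    """Roughly classify a crawler by user-agent substring.

    A toy sketch only: production systems verify crawlers with
    reverse-DNS checks, IP ranges, and behavioral analysis.
    """
    ua = user_agent.lower()
    for token in AI_CRAWLERS:
        if token.lower() in ua:
            return "ai-crawler"
    for token in SEARCH_CRAWLERS:
        if token.lower() in ua:
            return "search-crawler"
    return "unknown"
```

A publisher could route "ai-crawler" matches to a block or rate-limit rule while leaving search crawlers untouched, which is essentially the policy choice these free tools expose.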

The AI Fix — Episode 68: Merch, Hoaxes and AI Rights

🎧 In episode 68 of The AI Fix, hosts Graham Cluley and Mark Stockley blend news, commentary and light-hearted banter while launching a new merch store. The discussion covers real-world harms from AI-generated hoaxes that sent Manila firefighters to a non-existent fire, Albania appointing an AI-made minister, and reports of the so-called 'godfather of AI' being spurned by ChatGPT. They also explore wearable telepathic interfaces like AlterEgo, the rise of AI rights advocacy, and listener support options including ad-free subscriptions and merch purchases.

Google Pixel 10 Adds C2PA Support for Media Provenance

📸 Google has added support for the C2PA Content Credentials standard to the Pixel Camera and Google Photos apps on the new Pixel 10, enabling tamper-evident provenance metadata for images, video, and audio. The Pixel Camera app achieved Assurance Level 2 in the C2PA Conformance Program, the highest mobile rating currently defined. Google says a combination of the Tensor G5, Titan M2 and Android hardware-backed features provides on-device signing keys, anonymous attestation, unique per-image certificates, and an offline time-stamping authority so provenance is verifiable, privacy-preserving, and usable even when the device is offline.
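Per the C2PA specification, the signed manifest travels inside a JUMBF box carried in JPEG APP11 (0xFFEB) segments. The sketch below only detects that container in raw JPEG bytes; it is not verification, which requires parsing the manifest and checking its cryptographic signatures with a C2PA SDK or the c2patool utility.

```python
def has_app11_segments(jpeg_bytes: bytes) -> bool:
    """Scan JPEG marker segments for APP11 (0xFFEB), where C2PA
    stores its JUMBF-wrapped manifest.

    Presence of APP11 is only a hint that Content Credentials may
    be embedded; it proves nothing about their validity.
    """
    if not jpeg_bytes.startswith(b"\xff\xd8"):  # SOI marker
        return False
    i = 2
    while i + 4 <= len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:
            break  # malformed stream
        marker = jpeg_bytes[i + 1]
        if marker == 0xDA:  # start of scan: metadata segments are done
            break
        # Segment length counts the two length bytes plus the payload.
        length = int.from_bytes(jpeg_bytes[i + 2:i + 4], "big")
        if marker == 0xEB:  # APP11
            return True
        i += 2 + length
    return False
```

On a Pixel 10 capture, a check like this would fire on every JPEG, but the interesting part is what follows: validating the signature chain back to the on-device keys the articles describe.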

Pixel 10 Adds C2PA Content Credentials for Photos Now

📸 Google is integrating C2PA Content Credentials into the Pixel 10 camera and Google Photos to help users distinguish authentic, unaltered images from AI-generated or edited media. Every JPEG captured on Pixel 10 will automatically include signed provenance metadata, and Google Photos will attach updated credentials when images are edited so a verifiable edit history is preserved. The system works offline and relies on on-device cryptography (Titan M2, Android StrongBox, Android Key Attestation), one-time keys, and trusted timestamps to provide tamper-resistant provenance while protecting user privacy.

Pixel 10 Adds C2PA Content Credentials and Trusted Imaging

📷 Google announced Pixel 10 phones will embed C2PA Content Credentials in every photo captured by the native Pixel Camera and display verification in Google Photos. The Pixel Camera app achieved Assurance Level 2 by combining Tensor G5, the certified Titan M2 security chip, and Android hardware-backed attestation. A privacy-first model uses anonymous enrollment, a strict no-logging policy, and a one-time certificate-per-image strategy to prevent linking. Pixel 10 also supports an on-device trusted timestamping mechanism so credentials remain verifiable offline.

Preventing Online Bullying as Students Return to School

📚 The online world often mirrors the schoolyard, and bullying can intensify when a new term begins. A 2023 Microsoft study highlights cyberbullying as a top parental concern, with harassment ranging from name‑calling and rumor‑spreading to sextortion and deepfake images. Watch for behavioral changes, keep open, nonjudgmental lines of communication, and review app privacy settings. If abuse occurs, calmly teach children to block, capture evidence and report incidents to platforms and schools.

The AI Fix Episode 63: Robots, GPT-5 and Ethics Debate

🎧 In episode 63 of The AI Fix, hosts Graham Cluley and Mark Stockley dissect a wide range of AI developments and controversies. Topics include Unitree Robotics referencing Black Mirror to market its A2 robot dog, concerns over shared ChatGPT conversations appearing in Google, and OpenAI releasing gpt-oss, its first open-weight model since GPT-2. The show also examines ethical issues around AI-created avatars of deceased individuals and separates the hype from the reality of GPT-5 claims.