All news with #content provenance tag
Thu, November 20, 2025
Nano Banana Pro: Gemini 3 Pro Image for Enterprise Use
🎨 Google is unveiling Nano Banana Pro (Gemini 3 Pro Image), a high-fidelity image generation and editing model available today in Vertex AI and Google Workspace, with a rollout to Gemini Enterprise coming soon. The model supports multi-language text rendering and on-image translation, connects to Google Search for context-aware outputs, and accepts up to 14 reference images and 4K inputs for production-grade assets. Built-in SynthID watermarking and planned copyright indemnification address commercial use and responsible deployment.
Tue, November 18, 2025
AI and Voter Engagement: Transforming Political Campaigning
🗳️ This essay examines how AI could reshape political campaigning by enabling scaled forms of relational organizing and new channels for constituent dialogue. It contrasts the connective affordances of Facebook in 2008, which empowered person-to-person mobilization, with today’s platforms (TikTok, Reddit, YouTube) that favor broadcast or topical interaction. The authors show how AI assistants can draft highly personalized outreach and synthesize constituent feedback, survey global experiments from Japan’s Team Mirai to municipal pilots, and warn about deepfakes, artificial identities, and manipulation risks.
Mon, November 17, 2025
Europol Removes Thousands of Extremist Gaming Links
🔍 A coordinated action led by the European Union Internet Referral Unit (EU IRU) on 13 November 2025 resulted in the referral of thousands of extremist links found across gaming and gaming-adjacent platforms. Authorities from eight participating countries flagged 5,408 jihadist links, 1,070 violent right‑wing extremist items and 105 racist or xenophobic posts. Investigators noted illicit content on live streams, video libraries, forums and hybrid storefronts, and described how creators repurpose in-game footage with coded language and imagery to evade detection. The initiative aims to reduce public exposure and bolster cross-border cooperation.
Thu, November 13, 2025
Smashing Security Ep. 443: Tinder, Buffett Deepfake
🎧 In episode 443 of Smashing Security, host Graham Cluley and guest Ron Eddings examine Tinder’s proposal to scan users’ camera rolls and the emergence of convincing Warren Buffett deepfakes offering investment advice. They discuss the privacy, consent and fraud implications of platform-level image analysis and the risks posed by synthetic media. The conversation also covers whether agentic AI could replace human co-hosts, the idea of EDR for robots, and practical steps to mitigate these threats. Cultural topics such as Lily Allen’s new album and the release of Claude Code round out the episode.
Wed, November 5, 2025
Scientists Need a Positive Vision for Artificial Intelligence
🔬 While many researchers view AI as exacerbating misinformation, authoritarian tools, labor exploitation, environmental costs, and concentrated corporate power, the essay argues that resignation is not an option. It highlights concrete, beneficial applications—language access, AI-assisted civic deliberation, climate dialogue, national-lab research models, and advances in biology—while acknowledging imperfections. Drawing on Rewiring Democracy, the authors call on scientists to reform industry norms, document abuses, responsibly deploy AI for public benefit, and retrofit institutions to manage disruption.
Mon, November 3, 2025
AI Summarization Optimization Reshapes Meeting Records
📝 AI notetakers are increasingly treated as authoritative meeting participants, and attendees are adapting speech to influence what appears in summaries. This practice—called AI summarization optimization (AISO)—uses cue phrases, repetition, timing, and formulaic framing to steer models toward including selected facts or action items. The essay outlines evidence of model vulnerability and recommends social, organizational, and technical defenses to preserve trustworthy records.
Fri, October 31, 2025
Will AI Strengthen or Undermine Democratic Institutions
🤖 Bruce Schneier and Nathan E. Sanders present five key insights from their book Rewiring Democracy, arguing that AI is rapidly embedding itself in democratic processes and can both empower citizens and concentrate power. They cite diverse examples — AI-written bills, AI avatars in campaigns, judicial use of models, and thousands of government use cases — and note many adoptions occur with little public oversight. The authors urge practical responses: reform the tech ecosystem, resist harmful applications, responsibly deploy AI in government, and renovate institutions vulnerable to AI-driven disruption.
Thu, October 30, 2025
Agent Registry for Discovering and Verifying Signed Bots
🔐 This post proposes a lightweight, crowd-curated registry for bots and agents to simplify discovery of public keys used for cryptographic Web Bot Auth signatures. It describes a simple list format of URLs that point to signature-agent cards—extended JWKS entries containing operator metadata and keys—and shows how registries enable origins and CDNs to validate agent signatures at scale. Examples and a demo integration illustrate practical adoption.
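For a sense of what such a signature-agent card could contain, here is a minimal sketch: a standard JWKS document (RFC 7517) extended with operator metadata. The `operator` field names and values below are illustrative assumptions, not the proposal's finalized schema:

```json
{
  "keys": [
    {
      "kty": "OKP",
      "crv": "Ed25519",
      "kid": "2025-key-1",
      "use": "sig",
      "x": "JrQLj5P_89iXES9-vFgrIy29clF9CC_oPPsw3c5D0bs"
    }
  ],
  "operator": {
    "name": "ExampleBot",
    "homepage": "https://bot.example.com",
    "contact": "abuse@bot.example.com"
  }
}
```

Under this sketch, a registry is little more than a curated list of URLs where origins and CDNs can fetch cards like this one and cache the keys used to verify Web Bot Auth signatures.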
Tue, October 28, 2025
AI-Driven Malicious SEO and the Fight for Web Trust
🛡️ The article explains how malicious SEO operations use keyword stuffing, purchased backlinks, cloaking, and mass-produced content to bury legitimate sites in search results. It warns that generative AI now amplifies this threat by producing tens of thousands of spam articles, spinning up fake social accounts and enabling more sophisticated cloaking. Defenders must deploy AI-based detection, graph-level backlink analysis and network behavioral analytics to spot coordinated abuse. The piece emphasizes proactive, ecosystem-wide monitoring to protect trust and legitimate businesses online.
Thu, October 23, 2025
Manipulating Meeting Notetakers: AI Summarization Risks
📝 In many organizations the most consequential meeting attendee is the AI notetaker, whose summaries often become the authoritative meeting record. Participants can tailor their speech—using cue phrases, repetition, timing, and formulaic phrasing—to increase the chance their points appear in summaries, a behavior the author calls AI summarization optimization (AISO). These tactics mirror SEO-style optimization and exploit model tendencies to overweight early or summary-style content. Without governance and technical safeguards, summaries may misrepresent debate and confer an invisible advantage to those who game the system.
Thu, October 16, 2025
CISOs Brace for an Escalating AI-versus-AI Cyber Fight
🔐 AI-enabled attacks are rapidly shifting the threat landscape, with cybercriminals using deepfakes, automated phishing, and AI-generated malware to scale operations. According to Foundry's 2025 Security Priorities Study and CSO reporting, autonomous agents can execute full attack chains at machine speed, forcing defenders to adopt AI as a copilot backed by rigorous human oversight. Organizations are prioritizing human risk, verification protocols, and training to counter increasingly convincing AI-driven social engineering.
Thu, October 16, 2025
Improving JavaScript Trustworthiness via WAICT for the Web
🔒 Cloudflare presents an early design for Web Application Integrity, Consistency, and Transparency (WAICT) to address the risks of mutable JavaScript in sensitive web apps. The proposal pairs expanded Subresource Integrity (SRI) and a signed integrity manifest with append-only transparency logs and third-party witnesses to provide verifiable inclusion and consistency proofs. Browser preload lists, proof-of-enrollment, and client-side cooldowns are used to avoid extra round trips and to limit stealthy changes. Cloudflare plans to participate as a service provider and to collaborate on standardization.
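For context, today's Subresource Integrity pins a single resource to a single hash; WAICT would lift that per-resource check into a signed, logged manifest covering the whole application. A minimal SRI example for comparison (the hash value is illustrative, not a real digest):

```html
<script
  src="https://cdn.example.com/app.js"
  integrity="sha384-1aB2c3D4e5F6g7H8i9J0kLmNoPqRsTuVwXyZaBcDeFgHiJkLmNoPqRsTuVwXyZ00"
  crossorigin="anonymous"></script>
```

The browser refuses to execute the script if its SHA-384 digest does not match the `integrity` attribute; WAICT's transparency logs and witnesses add the missing piece, namely proof that everyone is being served the same manifest.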
Wed, October 15, 2025
Ultimate Prompting Guide for Veo 3.1 on Vertex AI Preview
🎬 This guide introduces Veo 3.1, Google Cloud's improved generative video model available in preview on Vertex AI, and explains how to move beyond "prompt and pray" toward deliberate creative control. It highlights core capabilities—high-fidelity 720p/1080p output, variable clip lengths, synchronized dialogue and sound effects, and stronger image-to-video fidelity. The article presents a five-part prompting formula and detailed techniques for cinematography, soundstage direction, negative prompting, and timestamped scenes. It also describes advanced multi-step workflows that combine Gemini 2.5 Flash Image to produce consistent characters and controlled transitions, and notes SynthID watermarking and certain current limitations.
Mon, October 13, 2025
AI and the Future of American Politics: 2026 Outlook
🔍 The essay examines how AI is reshaping U.S. politics heading into the 2026 midterms, with campaign professionals, organizers, and ordinary citizens adopting automated tools to write messaging, target voters, run deliberative platforms, and mobilize supporters. Campaign vendors from Quiller to BattlegroundAI are streamlining fundraising, ad creation, and research, while civic groups and unions experiment with AI for outreach and internal organizing. Absent meaningful regulation, these capabilities scale rapidly and raise risks ranging from decontextualized persuasion and registration interference to state surveillance and selective suppression of political speech.
Mon, October 6, 2025
AI's Role in the 2026 U.S. Midterm Elections and Parties
🗳️ One year before the 2026 midterms, AI is emerging as a central political tool and a partisan fault line. The author argues Republicans are poised to exploit AI for personalized messaging, persuasion, and strategic advantage, citing the Trump administration's use of AI-generated memes and procurement to shape technology. Democrats remain largely reactive, raising legal and consumer-protection concerns while exploring participatory tools such as Decidim and Pol.is. The essay frames AI as a manipulable political resource rather than an uncontrollable external threat.
Wed, September 24, 2025
Cloudflare Launches Content Signals Policy for robots.txt
🛡️ Cloudflare introduced the Content Signals Policy, an extension to robots.txt that lets site operators express how crawlers may use content after it has been accessed. The policy defines three machine-readable signals — search, ai-input, and ai-train — each set to yes/no or left unset. Cloudflare will add a default signal set (search=yes, ai-train=no) to managed robots.txt for ~3.8M domains, serve commented guidance for free zones, and publish the spec under CC0. Cloudflare emphasizes signals are preferences, not technical enforcement, and recommends pairing them with WAF and Bot Management.
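Based on the description above, a robots.txt carrying Cloudflare's default signal set would include a directive along these lines (exact directive placement is an assumption here; the published spec is normative):

```
User-Agent: *
Content-Signal: search=yes, ai-train=no
Allow: /
```

Leaving `ai-input` unset expresses no preference for that use, and, as the article stresses, these signals state preferences rather than enforce them; blocking still requires WAF rules or bot management.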
Sun, September 21, 2025
Cloudflare 2025 Founders’ Letter: AI, Content, and Web
📣 Cloudflare’s 2025 Founders’ Letter reflects on 15 years of Internet change, highlighting the rise of encryption (thanks in part to Universal SSL), slow IPv6 adoption, and the rising cost of scarce IPv4 space. It warns that AI answer engines are shifting value away from traffic-based business models and threatening publishers. Cloudflare previews tools and partnerships — including AI Crawl Control — to help creators control access and negotiate compensation.
Thu, September 11, 2025
Google Pixel 10 Adds C2PA Support for Media Provenance
📸 Google has added support for the C2PA Content Credentials standard to the Pixel Camera and Google Photos apps on the new Pixel 10, enabling tamper-evident provenance metadata for images, video, and audio. The Pixel Camera app achieved Assurance Level 2 in the C2PA Conformance Program, the highest mobile rating currently defined. Google says a combination of the Tensor G5, Titan M2 and Android hardware-backed features provides on-device signing keys, anonymous attestation, unique per-image certificates, and an offline time-stamping authority so provenance is verifiable, privacy-preserving, and usable even when the device is offline.
Wed, September 10, 2025
Pixel 10 Adds C2PA Content Credentials for Photos Now
📸 Google is integrating C2PA Content Credentials into the Pixel 10 camera and Google Photos to help users distinguish authentic, unaltered images from AI-generated or edited media. Every JPEG captured on Pixel 10 will automatically include signed provenance metadata, and Google Photos will attach updated credentials when images are edited so a verifiable edit history is preserved. The system works offline and relies on on-device cryptography (Titan M2, Android StrongBox, Android Key Attestation), one-time keys, and trusted timestamps to provide tamper-resistant provenance while protecting user privacy.
Wed, September 10, 2025
Pixel 10 Adds C2PA Content Credentials and Trusted Imaging
📷 Google announced Pixel 10 phones will embed C2PA Content Credentials in every photo captured by the native Pixel Camera and display verification in Google Photos. The Pixel Camera app achieved Assurance Level 2 by combining Tensor G5, the certified Titan M2 security chip, and Android hardware-backed attestation. A privacy-first model uses anonymous enrollment, a strict no-logging policy, and a one-time certificate-per-image strategy to prevent linking. Pixel 10 also supports an on-device trusted timestamping mechanism so credentials remain verifiable offline.