Tag Banner

All news with the #content provenance tag

Thu, December 4, 2025

Cyber Agencies Urge Provenance Standards for Digital Trust

🔎 The UK’s National Cyber Security Centre (NCSC) and Canada’s Centre for Cyber Security (CCCS) have published a report on public content provenance aimed at improving digital trust in the AI era. It examines emerging provenance technologies, including trusted timestamps and cryptographically secured metadata, and identifies interoperability and usability gaps that hinder adoption. The guidance offers practical steps for organisations considering provenance solutions.
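The cryptographically secured metadata the report discusses can be illustrated with a toy sketch. This is a generic illustration, not a scheme from the report: real provenance systems (C2PA-style manifests, for instance) use asymmetric signatures and trusted timestamping authorities, and `SECRET`, `make_record`, and `verify_record` are hypothetical names chosen here for brevity.

```python
import hashlib
import hmac
import json

SECRET = b"demo-signing-key"  # placeholder; a real system uses a private key


def make_record(content: bytes, timestamp: str) -> dict:
    # Bind a content hash and a timestamp together, then sign the pair.
    record = {
        "sha256": hashlib.sha256(content).hexdigest(),
        "timestamp": timestamp,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["sig"] = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return record


def verify_record(content: bytes, record: dict) -> bool:
    # Recompute the signature over the record body, then re-hash the content.
    body = {k: v for k, v in record.items() if k != "sig"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, record["sig"])
            and body["sha256"] == hashlib.sha256(content).hexdigest())


rec = make_record(b"article text", "2025-12-04T00:00:00Z")
print(verify_record(b"article text", rec))   # True
print(verify_record(b"tampered text", rec))  # False
```

The interoperability gap the report flags shows up even in this sketch: verification only works if producer and consumer agree on the record layout and key distribution.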

read more →

Tue, December 2, 2025

AI Requires Difficult Choices: Regulatory Paths for Democracy

🧭 The piece argues that AI forces a societal reckoning similar to the arrival of social media: it can amplify individual agency but also concentrate control and harm democratic life. The authors identify four pivotal choices for executives and courts, Congress, states, and everyday users—centering on legal accountability, privacy and portability, reparative taxation, and consumer product choices. They urge proactive, aligned policy and civic action to avoid repeating past mistakes and to steer AI toward public-good outcomes.

read more →

Mon, December 1, 2025

Google Deletes X Post After Using Stolen Recipe Infographic

🧾 Google removed a promotional X post for NotebookLM after users noted an AI-generated infographic closely mirrored a stuffing recipe from the blog HowSweetEats. The card, produced using Google’s Nano Banana Pro image model, reproduced ingredient lists and structure that matched the original post. After being called out on X, Google quietly deleted the promotion; the episode highlights broader concerns about AI scraping and attribution. The company also confirmed it is testing ads in AI-generated answers alongside citations.

read more →

Thu, November 20, 2025

Nano Banana Pro: Gemini 3 Pro Image for Enterprise Use

🎨 Google is unveiling Nano Banana Pro (Gemini 3 Pro Image), a high-fidelity image generation and editing model available today in Vertex AI and Google Workspace, with a rollout to Gemini Enterprise coming soon. The model supports multi-language text rendering and on-image translation, connects to Google Search for context-aware outputs, and accepts up to 14 reference images and 4K inputs for production-grade assets. Built-in SynthID watermarking and planned copyright indemnification address commercial use and responsible deployment.

read more →

Tue, November 18, 2025

AI and Voter Engagement: Transforming Political Campaigning

🗳️ This essay examines how AI could reshape political campaigning by enabling scaled forms of relational organizing and new channels for constituent dialogue. It contrasts the connective affordances of Facebook in 2008, which empowered person-to-person mobilization, with today’s platforms (TikTok, Reddit, YouTube) that favor broadcast or topical interaction. The authors show how AI assistants can draft highly personalized outreach and synthesize constituent feedback, survey global experiments from Japan’s Team Mirai to municipal pilots, and warn about deepfakes, artificial identities, and manipulation risks.

read more →

Mon, November 17, 2025

Europol Removes Thousands of Extremist Gaming Links

🔍 A coordinated action led by the European Union Internet Referral Unit (EU IRU) on 13 November 2025 resulted in the referral of thousands of extremist links found across gaming and gaming-adjacent platforms. Authorities from eight participating countries flagged 5,408 jihadist links, 1,070 violent right-wing extremist items, and 105 racist or xenophobic posts. Investigators noted illicit content on live streams, video libraries, forums and hybrid storefronts, and described how creators repurpose in-game footage with coded language and imagery to evade detection. The initiative aims to reduce public exposure and bolster cross-border cooperation.

read more →

Thu, November 13, 2025

Smashing Security Ep. 443: Tinder, Buffett Deepfake

🎧 In episode 443 of Smashing Security, host Graham Cluley and guest Ron Eddings examine Tinder’s proposal to scan users’ camera rolls and the emergence of convincing Warren Buffett deepfakes offering investment advice. They discuss the privacy, consent and fraud implications of platform-level image analysis and the risks posed by synthetic media. The conversation also covers whether agentic AI could replace human co-hosts, the idea of EDR for robots, and practical steps to mitigate these threats. Cultural topics such as Lily Allen’s new album and the release of Claude Code round out the episode.

read more →

Wed, November 5, 2025

Scientists Need a Positive Vision for Artificial Intelligence

🔬 While many researchers view AI as exacerbating misinformation, authoritarian tools, labor exploitation, environmental costs, and concentrated corporate power, the essay argues that resignation is not an option. It highlights concrete, beneficial applications—language access, AI-assisted civic deliberation, climate dialogue, national-lab research models, and advances in biology—while acknowledging imperfections. Drawing on Rewiring Democracy, the authors call on scientists to reform industry norms, document abuses, responsibly deploy AI for public benefit, and retrofit institutions to manage disruption.

read more →

Mon, November 3, 2025

AI Summarization Optimization Reshapes Meeting Records

📝 AI notetakers are increasingly treated as authoritative meeting participants, and attendees are adapting speech to influence what appears in summaries. This practice—called AI summarization optimization (AISO)—uses cue phrases, repetition, timing, and formulaic framing to steer models toward including selected facts or action items. The essay outlines evidence of model vulnerability and recommends social, organizational, and technical defenses to preserve trustworthy records.

read more →

Fri, October 31, 2025

Will AI Strengthen or Undermine Democratic Institutions?

🤖 Bruce Schneier and Nathan E. Sanders present five key insights from their book Rewiring Democracy, arguing that AI is rapidly embedding itself in democratic processes and can both empower citizens and concentrate power. They cite diverse examples — AI-written bills, AI avatars in campaigns, judicial use of models, and thousands of government use cases — and note many adoptions occur with little public oversight. The authors urge practical responses: reform the tech ecosystem, resist harmful applications, responsibly deploy AI in government, and renovate institutions vulnerable to AI-driven disruption.

read more →

Thu, October 30, 2025

Agent Registry for Discovering and Verifying Signed Bots

🔐 This post proposes a lightweight, crowd-curated registry for bots and agents to simplify discovery of public keys used for cryptographic Web Bot Auth signatures. It describes a simple list format of URLs that point to signature-agent cards—extended JWKS entries containing operator metadata and keys—and shows how registries enable origins and CDNs to validate agent signatures at scale. Examples and a demo integration illustrate practical adoption.
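Based on the post's description, a signature-agent card — an extended JWKS entry pairing operator metadata with public keys — might look roughly like this. The field names and key value below are illustrative placeholders, not the actual Web Bot Auth schema:

```json
{
  "operator": {
    "name": "ExampleBot",
    "contact": "https://example.com/bot"
  },
  "keys": [
    {
      "kty": "OKP",
      "crv": "Ed25519",
      "kid": "2025-10-key-1",
      "x": "<base64url-encoded-public-key>"
    }
  ]
}
```

A registry, per the post, is then just a crowd-curated list of URLs pointing at cards like this one, which origins and CDNs can fetch to validate agent signatures at scale.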

read more →

Tue, October 28, 2025

AI-Driven Malicious SEO and the Fight for Web Trust

🛡️ The article explains how malicious SEO operations use keyword stuffing, purchased backlinks, cloaking and mass-produced content to bury legitimate sites in search results. It warns that generative AI now amplifies this threat by producing tens of thousands of spam articles, spinning up fake social accounts and enabling more sophisticated cloaking. Defenders must deploy AI-based detection, graph-level backlink analysis and network behavioral analytics to spot coordinated abuse. The piece emphasizes proactive, ecosystem-wide monitoring to protect trust and legitimate businesses online.

read more →

Thu, October 23, 2025

Manipulating Meeting Notetakers: AI Summarization Risks

📝 In many organizations the most consequential meeting attendee is the AI notetaker, whose summaries often become the authoritative meeting record. Participants can tailor their speech—using cue phrases, repetition, timing, and formulaic phrasing—to increase the chance their points appear in summaries, a behavior the author calls AI summarization optimization (AISO). These tactics mirror SEO-style optimization and exploit model tendencies to overweight early or summary-style content. Without governance and technical safeguards, summaries may misrepresent debate and confer an invisible advantage to those who game the system.

read more →

Thu, October 16, 2025

CISOs Brace for an Escalating AI-versus-AI Cyber Fight

🔐 AI-enabled attacks are rapidly shifting the threat landscape, with cybercriminals using deepfakes, automated phishing, and AI-generated malware to scale operations. According to Foundry's 2025 Security Priorities Study and CSO reporting, autonomous agents can execute full attack chains at machine speed, forcing defenders to adopt AI as a copilot backed by rigorous human oversight. Organizations are prioritizing human risk, verification protocols, and training to counter increasingly convincing AI-driven social engineering.

read more →

Thu, October 16, 2025

Improving JavaScript Trustworthiness via WAICT for the Web

🔒 Cloudflare presents an early design for Web Application Integrity, Consistency, and Transparency (WAICT) to address the risks of mutable JavaScript in sensitive web apps. The proposal pairs expanded Subresource Integrity (SRI) and a signed integrity manifest with append-only transparency logs and third-party witnesses to provide verifiable inclusion and consistency proofs. Browser preload lists, proof-of-enrollment, and client-side cooldowns are used to avoid extra round trips and to limit stealthy changes. Cloudflare plans to participate as a service provider and to collaborate on standardization.
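WAICT builds on Subresource Integrity, which already lets a page pin an individual script to a known hash; the browser refuses to execute the resource if the fetched bytes do not match. A standard SRI tag looks like this (the hash shown is the usual documentation example, not from the WAICT post):

```html
<script src="https://cdn.example.com/app.js"
        integrity="sha384-oqVuAfXRKap7fdgcCY5uykM6+R9GqQ8K/uxy9rx7HNQlGYl1kPzQho1wx4JwY8wC"
        crossorigin="anonymous"></script>
```

WAICT's contribution is everything SRI cannot do alone: a signed manifest covering the whole app, plus transparency logs and witnesses so that clients can prove the manifest they saw is the same one everyone else saw.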

read more →

Wed, October 15, 2025

Ultimate Prompting Guide for Veo 3.1 on Vertex AI Preview

🎬 This guide introduces Veo 3.1, Google Cloud's improved generative video model available in preview on Vertex AI, and explains how to move beyond "prompt and pray" toward deliberate creative control. It highlights core capabilities—high-fidelity 720p/1080p output, variable clip lengths, synchronized dialogue and sound effects, and stronger image-to-video fidelity. The article presents a five-part prompting formula and detailed techniques for cinematography, soundstage direction, negative prompting, and timestamped scenes. It also describes advanced multi-step workflows that combine Gemini 2.5 Flash Image to produce consistent characters and controlled transitions, and notes SynthID watermarking and certain current limitations.

read more →

Mon, October 13, 2025

AI and the Future of American Politics: 2026 Outlook

🔍 The essay examines how AI is reshaping U.S. politics heading into the 2026 midterms, with campaign professionals, organizers, and ordinary citizens adopting automated tools to write messaging, target voters, run deliberative platforms, and mobilize supporters. Campaign vendors from Quiller to BattlegroundAI are streamlining fundraising, ad creation, and research, while civic groups and unions experiment with AI for outreach and internal organizing. Absent meaningful regulation, these capabilities scale rapidly and raise risks ranging from decontextualized persuasion and registration interference to state surveillance and selective suppression of political speech.

read more →

Mon, October 6, 2025

AI's Role in the 2026 U.S. Midterm Elections and Parties

🗳️ One year before the 2026 midterms, AI is emerging as a central political tool and a partisan fault line. The author argues Republicans are poised to exploit AI for personalized messaging, persuasion, and strategic advantage, citing the Trump administration's use of AI-generated memes and procurement to shape technology. Democrats remain largely reactive, raising legal and consumer-protection concerns while exploring participatory tools such as Decidim and Pol.Is. The essay frames AI as a manipulable political resource rather than an uncontrollable external threat.

read more →

Wed, September 24, 2025

Cloudflare Launches Content Signals Policy for robots.txt

🛡️ Cloudflare introduced the Content Signals Policy, an extension to robots.txt that lets site operators express how crawlers may use content after it has been accessed. The policy defines three machine-readable signals — search, ai-input, and ai-train — each set to yes/no or left unset. Cloudflare will add a default signal set (search=yes, ai-train=no) to managed robots.txt for ~3.8M domains, serve commented guidance for free zones, and publish the spec under CC0. Cloudflare emphasizes signals are preferences, not technical enforcement, and recommends pairing them with WAF and Bot Management.
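Cloudflare's default for managed robots.txt files would render roughly as follows. The `Content-Signal` directive name follows Cloudflare's published examples; consult the CC0 spec for the authoritative syntax:

```
# Content signals are preferences, not enforcement.
User-Agent: *
Content-Signal: search=yes, ai-train=no
Allow: /
```

Leaving a signal unset (as `ai-input` is here) expresses no preference either way, which is why Cloudflare recommends pairing signals with WAF and Bot Management for crawlers that ignore them.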

read more →

Sun, September 21, 2025

Cloudflare 2025 Founders’ Letter: AI, Content, and Web

📣 Cloudflare’s 2025 Founders’ Letter reflects on 15 years of Internet change, highlighting encryption’s rise thanks in part to Universal SSL, slow IPv6 adoption, and the rising costs of scarce IPv4 space. It warns that AI answer engines are shifting value away from traffic-based business models and threatening publishers. Cloudflare previews tools and partnerships — including AI Crawl Control — to help creators control access and negotiate compensation.

read more →