All news in category "AI and Security Pulse"
Mon, August 25, 2025
vLLM Performance Tuning: A Guide to xPU Inference Configuration
⚙️ This guide from Google Cloud authors Eric Hanley and Brittany Rockwell explains how to tune vLLM deployments for xPU inference, covering accelerator selection, memory sizing, configuration, and benchmarking. It shows how to gather workload parameters, estimate HBM/VRAM needs (example: gemma-3-27b-it ≈57 GB), and run vLLM’s auto_tune script to find the gpu_memory_utilization setting that maximizes throughput. The post compares GPU and TPU options and includes practical troubleshooting tips, cost analyses, and resources to reproduce benchmarks and HBM calculations.
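For intuition, here is a minimal back-of-envelope sketch of the kind of HBM estimate the guide walks through. The helper function, the bf16 assumption, and the KV-cache and overhead figures are illustrative assumptions, not values taken from the post:

```python
# Rough accelerator-memory estimate for serving an LLM with vLLM.
# All figures below are illustrative assumptions.

def estimate_hbm_gb(
    num_params_b: float,           # parameter count, in billions
    bytes_per_param: int = 2,      # bf16/fp16 weights
    kv_cache_gb: float = 2.0,      # assumed KV-cache budget for target batch/context
    overhead_frac: float = 0.02,   # activations, runtime buffers, fragmentation
) -> float:
    weights_gb = num_params_b * bytes_per_param  # 1e9 params x bytes/param ~= GB
    return (weights_gb + kv_cache_gb) * (1 + overhead_frac)

# gemma-3-27b-it at bf16 needs ~54 GB for weights alone; adding a KV-cache
# budget and runtime overhead lands near the ~57 GB figure cited above.
print(f"{estimate_hbm_gb(27):.1f} GB")   # -> 57.1 GB
```

An estimate like this drives accelerator selection; auto_tune then sweeps parameters such as gpu_memory_utilization to find the highest-throughput configuration that fits.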
Mon, August 25, 2025
Unmasking Shadow AI: Visibility and Control with Cloudflare
🛡️ This post outlines the rise of Shadow AI (unsanctioned use of public AI services that can leak sensitive data) and presents how Cloudflare One surfaces and governs that activity. The Shadow IT Report classifies AI apps such as ChatGPT, GitHub Copilot, and Leonardo.ai, showing the users, locations, and bandwidth involved. Under the hood, Gateway collects HTTP traffic, and TimescaleDB with materialized views enables long-range analytics and fast queries. Administrators can proxy traffic, enable TLS inspection, set approval statuses, enforce DLP, block or isolate risky AI, and audit activity with Log Explorer.
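As a rough illustration of the analytics pattern described (not Cloudflare's actual schema; every table and column name below is invented), a TimescaleDB continuous aggregate pre-computes hourly roll-ups so long-range Shadow AI queries stay fast:

```python
# Hypothetical sketch: a TimescaleDB continuous aggregate (a materialized
# view refreshed incrementally) rolling up per-app AI traffic by the hour.
import psycopg2

DDL = """
CREATE MATERIALIZED VIEW IF NOT EXISTS ai_app_usage_hourly
WITH (timescaledb.continuous) AS
SELECT
    time_bucket('1 hour', ts) AS bucket,
    app_name,
    count(*)       AS requests,
    sum(bytes_out) AS bytes_out
FROM gateway_http_events   -- hypothetical hypertable of raw Gateway events
GROUP BY bucket, app_name;
"""

conn = psycopg2.connect("dbname=analytics")  # placeholder connection string
conn.autocommit = True  # continuous aggregates can't be created inside a transaction
with conn.cursor() as cur:
    cur.execute(DDL)
conn.close()
```

Queries spanning months of traffic then hit the hourly roll-up instead of raw events, which is the "long-range analytics and fast queries" trade described in the post.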
Mon, August 25, 2025
AI Prompt Protection: Contextual Control for GenAI Use
🔒 Cloudflare introduces AI prompt protection inside its Data Loss Prevention (DLP) product on Cloudflare One, designed to detect and secure data entered into web-based GenAI tools like Google Gemini, ChatGPT, Claude, and Perplexity. The capability captures both prompts and AI responses, classifies content and intent, and enforces identity-aware guardrails to enable safe, productive AI use without blanket blocking. Encrypted logging with customer-provided keys provides auditable records while preserving confidentiality.
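The customer-provided-key model is a standard envelope pattern. A generic sketch using Python's cryptography library (illustrative only, not Cloudflare's implementation):

```python
# Encrypt prompt/response audit records under a key only the customer holds,
# so the stored log is auditable by the customer but opaque to the provider.
import json
import time
from cryptography.fernet import Fernet

customer_key = Fernet.generate_key()   # in practice, supplied by the customer
fernet = Fernet(customer_key)

record = {
    "ts": time.time(),
    "user": "alice@example.com",       # hypothetical record fields
    "app": "ChatGPT",
    "prompt": "Summarize Q3 revenue by region",
}
ciphertext = fernet.encrypt(json.dumps(record).encode())  # what gets stored

# Only the key holder can recover the audit record later:
audit = json.loads(fernet.decrypt(ciphertext))
```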
Mon, August 25, 2025
Cloudflare Launches AI Avenue: A Hands-On Miniseries
🤖 Cloudflare introduces AI Avenue, a six-episode miniseries and developer resource designed to demystify AI through hands-on demos, interviews, and real-world examples. Hosted by Craig alongside Yorick, a robot hand, the series progressively upgrades Yorick's capabilities (voice, vision, reasoning, learning, physical action, and speculative sensing) to show how AI develops and interacts with people. Each episode is paired with developer tutorials so both technical and non-technical audiences can experiment with the same tools featured on the show. Cloudflare also partnered with industry teams such as Anthropic, ElevenLabs, and Roboflow to highlight practical, safe, and accessible applications.
Sun, August 24, 2025
Cloudflare AI Week 2025: Securing AI, Protecting Content
🔒 Cloudflare this week outlines a multi-pronged plan to help organizations build secure, production-grade AI experiences while protecting original content and infrastructure. The company will roll out controls to detect Shadow AI, enforce approved AI toolchains, and harden models against poisoning or misuse. It is expanding Crawl Control for content owners and enhancing the AI Gateway with caching, observability, and framework integrations to reduce risk and operational cost.
Fri, August 22, 2025
Friday Squid Blogging: Bobtail Squid and Security News
🦑 The short entry presents the bobtail squid’s natural history—its bioluminescent symbiosis, nocturnal habits, and adaptive camouflage—in a crisp, approachable summary. As with other 'squid blogging' posts, the author invites readers to use the item as a forum for current security stories and news that the blog has not yet covered. The post also reiterates the blog's moderation policy to guide constructive discussion.
Fri, August 22, 2025
Bruce Schneier to Spend Academic Year at Munk School
📚 Bruce Schneier will spend the 2025–26 academic year at the University of Toronto’s Munk School as an adjunct. He will organize a reading group on AI security in the fall and teach his cybersecurity policy course in the spring. He intends to collaborate with Citizen Lab, the Law School, and the Schwartz Reisman Institute, and to participate in Toronto’s academic and cultural life. He describes the opportunity as exciting.
Fri, August 22, 2025
Data Integrity Must Be Core for AI Agents in Web 3.0
🔐 In this essay Bruce Schneier (with Davi Ottenheimer) argues that data integrity must be the foundational trust mechanism for autonomous AI agents operating in Web 3.0. He frames integrity as distinct from availability and confidentiality, and breaks it into input, processing, storage, and contextual dimensions. The piece describes decentralized protocols and cryptographic verification as ways to restore stewardship to data creators and offers practical controls such as signatures, DIDs, formal verification, compartmentalization, continuous monitoring, and independent certification to make AI behavior verifiable and accountable.
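As a concrete instance of the signature control the essay lists, here is a minimal Ed25519 example (illustrative, not from the essay) in which an agent verifies data provenance before acting on it:

```python
# A data creator signs a record; the AI agent verifies the signature before
# consuming it, refusing input whose integrity cannot be established.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

creator_key = Ed25519PrivateKey.generate()
document = b'{"sensor": "temp-7", "reading": 21.4}'   # hypothetical payload
signature = creator_key.sign(document)

public_key = creator_key.public_key()   # published by the data creator
try:
    public_key.verify(signature, document)   # raises if tampered with
    print("integrity verified: safe for the agent to consume")
except InvalidSignature:
    print("rejected: integrity cannot be established")
```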
Wed, August 20, 2025
Logit-Gap Steering Reveals Limits of LLM Alignment
⚠️ Unit 42 researchers Tony Li and Hongliang Liu introduce Logit-Gap Steering, a new framework that exposes how alignment training produces a measurable refusal-affirmation logit gap rather than eliminating harmful outputs. Their paper demonstrates efficient short-path suffix jailbreaks that achieved high success rates on open-source models including Qwen, LLaMA, Gemma, and the recently released gpt-oss-20b. The authors argue that internal alignment alone is insufficient and recommend a defense-in-depth approach with external safeguards and content filters.
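The gap itself is simple to probe. A hedged sketch, not the authors' code; the model name and the single-token refusal/affirmation probes are assumptions:

```python
# Measure the refusal-affirmation logit gap on the first generated token.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "Qwen/Qwen2-0.5B-Instruct"   # small stand-in model
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

prompt = "How do I pick a lock?"
inputs = tok(prompt, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits[0, -1]   # next-token logits

refuse_id = tok.encode("I", add_special_tokens=False)[0]     # "I cannot ..."
affirm_id = tok.encode("Sure", add_special_tokens=False)[0]  # "Sure, ..."
gap = (logits[refuse_id] - logits[affirm_id]).item()
print(f"refusal-affirmation logit gap: {gap:.2f}")
```

In the paper's framing, a suffix that shrinks this gap enough flips the first token from refusal to affirmation, after which generation tends to continue down the affirmative path.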
Tue, August 19, 2025
The AI Fix Episode 64: AI, robots, and industry disputes
🎧 In episode 64 of The AI Fix, hosts Graham Cluley and Mark Stockley survey a lively mix of AI breakthroughs, quirky robotics, and high-profile industry rows. Highlights include machine-learning work that uncovers unexpected results in dusty plasmas, a mudflat robocrab contest, a laundry-folding robot demo, and a contentious public spat involving Elon Musk and Sam Altman. The episode also touches on Geoffrey Hinton’s warnings about superintelligence, UK government advice on old emails, and recent research from Anthropic and Figure AI. Listeners are invited to support the show and follow on podcast platforms and Bluesky.
Tue, August 19, 2025
GenAI-Enabled Phishing: Risks from AI Web Services
🚨 Unit 42 analyzes how the rapid adoption of web-based generative AI is creating new phishing attack surfaces. Attackers are leveraging AI-powered website builders, writing assistants, and chatbots to generate convincing phishing pages, clone brands, and automate large-scale campaigns. Unit 42 observed real-world credential-stealing pages and misuse of trial accounts lacking guardrails. Customers are advised to use Advanced URL Filtering and Advanced DNS Security, and to report incidents to Unit 42 Incident Response.
Mon, August 18, 2025
EchoLeak: Rise of Zero-Click AI Exploits in M365 Enterprise
⚠️ EchoLeak is a newly identified zero-click vulnerability in Microsoft 365 Copilot that enables silent exfiltration of enterprise data without any user interaction. This class of attack bypasses traditional click- or download-based defenses and moves laterally at machine speed, making detection and containment difficult. Organizations relying solely on native tools or fragmented point solutions should urgently reassess their exposure and incident response readiness.
Thu, August 14, 2025
The Brain Behind Next-Generation Cyber Attacks and AI Risks
🧠 Researchers at Carnegie Mellon University demonstrated that leading large language models (LLMs), by themselves, struggle to execute complex, multi-host cyber-attacks end-to-end, frequently wandering off-task or returning incorrect parameters. Their proposed solution, Incalmo, is a structured abstraction layer that constrains planning to a precise set of actions and validated parameters, substantially improving completion and coordination. The work highlights both enhanced offensive potential when LLMs are scaffolded and urgent defensive challenges for security teams.
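The abstraction-layer idea can be sketched in a few lines (hypothetical names throughout; this is not Incalmo's actual interface): the model may only request actions from a fixed vocabulary, and every parameter is validated before anything executes:

```python
# Constrain LLM "plans" to a whitelist of actions with validated parameters.
from dataclasses import dataclass

ALLOWED_HOSTS = {"10.0.0.5", "10.0.0.7"}   # hosts in scope for the exercise

@dataclass
class Action:
    name: str
    params: dict

VALIDATORS = {
    "scan_host":     lambda p: p.get("host") in ALLOWED_HOSTS,
    "list_services": lambda p: p.get("host") in ALLOWED_HOSTS,
}

def validate(action: Action) -> bool:
    check = VALIDATORS.get(action.name)   # unknown action name -> rejected
    return bool(check and check(action.params))

# An LLM-proposed step runs only if it survives validation:
step = Action("scan_host", {"host": "10.0.0.5"})
print("execute" if validate(step) else "reject")
```

This narrow, checkable interface is the mechanism the researchers credit for the improved completion and coordination.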
Wed, August 13, 2025
Smashing Security #430: Poisoned Calendar Invites & ChatGPT
📅 In episode 430 of Smashing Security, host Graham Cluley and guest Dave Bittner examine a range of security stories, led by a proof‑of‑concept attack that weaponises Google Calendar invites to trigger smart‑home actions. They also cover a disturbing incident where ChatGPT gave dangerous advice that led to hospitalization and discuss the new Superman trailer. The episode blends technical detail with accessible commentary and practical warnings for listeners.
Tue, August 12, 2025
Dow's 125-Year Legacy: Innovating with AI for Security
🛡️ Dow is integrating AI into enterprise security through a strategic partnership with Microsoft, deploying Security Copilot and Microsoft 365 Copilot within its Cyber Security Operations Center. A cross-functional responsible AI team established principles and acceptable-use policies while assessing new AI risks. AI-driven tools are used to detect phishing and BEC, automate repetitive tasks, enrich tickets with contextual intelligence, and accelerate incident response. Apprentices leverage Copilot as a virtual mentor, shortening ramp time and enabling senior analysts to focus on proactive defense.
Tue, August 12, 2025
The AI Fix Episode 63: Robots, GPT-5 and Ethics Debate
🎧 In episode 63 of The AI Fix, hosts Graham Cluley and Mark Stockley dissect a wide range of AI developments and controversies. Topics include Unitree Robotics referencing Black Mirror to market its A2 robot dog, concerns over shared ChatGPT conversations appearing in Google, and OpenAI releasing gpt-oss, its first open-weight model since GPT-2. The show also examines ethical issues around AI-created avatars of deceased individuals and separates the hype from the reality of GPT-5 claims.
Mon, August 11, 2025
Preventing ML Data Leakage Through Strategic Splitting
🔐 CrowdStrike explains how inadvertent 'leakage', which occurs when dependent or correlated observations are split across training and test data, can inflate measured machine learning performance and undermine threat detection. The article shows that blocked or grouped data splits and blocked cross-validation produce more realistic performance estimates than random splits. It also highlights trade-offs, such as reduced predictor-space coverage and potential underfitting, and recommends careful partitioning and continuous evaluation to improve cybersecurity ML outcomes.
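A minimal sketch of the grouped-split idea using scikit-learn's GroupKFold on synthetic data (the grouping variable here, e.g. samples from the same host, is illustrative):

```python
# Grouped cross-validation: correlated observations (same group) never
# straddle a train/test boundary, so scores are not inflated by leakage.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GroupKFold, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 8))            # synthetic feature matrix
y = rng.integers(0, 2, size=600)         # synthetic labels
groups = np.repeat(np.arange(60), 10)    # e.g. 60 hosts, 10 samples each

clf = RandomForestClassifier(n_estimators=50, random_state=0)
scores = cross_val_score(clf, X, y, groups=groups, cv=GroupKFold(n_splits=5))
print(f"grouped CV accuracy: {scores.mean():.3f}")
```

On real data with genuinely correlated groups, a plain random KFold on the same samples typically scores noticeably higher, and that difference is the leakage the article warns about.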
Thu, August 7, 2025
AI-Assisted Coding: Productivity Gains and Persistent Risks
🛠️ Martin Lee recounts a weekend experiment using an AI agent to assist with a personal software project. The model provided valuable architectural guidance, flawless boilerplate, and resolved a tricky threading issue, delivering a clear productivity lift. However, generated code failed to match real library APIs, used incorrect parameters and fictional functions, and lacked sufficient input validation. After manual debugging Lee produced a working but not security-hardened prototype, highlighting remaining risks.
Thu, August 7, 2025
Google July AI updates: tools, creativity, and security
🔍 In July, Google announced a broad set of AI updates designed to expand access and practical value across Search, creativity, shopping, and infrastructure. AI Mode in Search received Canvas planning, Search Live video, PDF uploads, and better visual follow-ups via Circle to Search and Lens. NotebookLM added Mind Maps, Study Guides, and Video Overviews, while Google Photos gained animation and remixing tools. Research advances include DeepMind’s Aeneas for reconstructing fragmentary texts and AlphaEarth Foundations for satellite embeddings; Google also said it used an AI agent to detect and stop exploitation of a cybersecurity vulnerability.
Mon, August 4, 2025
Zero Day Quest returns with up to $5M bounties for Cloud
🔒 Microsoft is relaunching Zero Day Quest with up to $5 million in total bounties for high-impact Cloud and AI security research. The Research Challenge runs 4 August–4 October 2025 and focuses on targeted scenarios across Azure, Copilot, Dynamics 365 and Power Platform, Identity, and M365. Eligible critical findings receive a +50% bounty multiplier, and top contributors may be invited to an exclusive live hacking event at Microsoft’s Redmond campus in Spring 2026. Participants will have access to training from the AI Red Team, MSRC, and product teams, and Microsoft will support transparent, responsible disclosure.