All news in category "AI and Security Pulse"

Tue, October 28, 2025

The AI Fix 74: AI Glasses, Deepfakes, and AGI Debate

🎧 In episode 74 of The AI Fix, hosts Graham Cluley and Mark Stockley survey recent AI developments including Amazon’s experimental delivery glasses, Channel 4’s AI presenter, and reports of LLM “brain rot.” They examine practical security risks — such as malicious browser extensions spoofing AI sidebars and AI browsers being tricked into purchases — alongside wider societal debates. The episode also highlights public calls to pause work on super-intelligence and explores what AGI really means.

read more →

Mon, October 27, 2025

Google: AI Studio Aims to Let Everyone 'Vibe Code' Games

🕹️ Google says its AI Studio will enable users to 'vibe code' simple video games by the end of the year. The company claims the tool can automatically select models and wire up APIs to streamline app creation, while noting current limitations for production-ready systems. Product lead Logan Kilpatrick highlighted the potential to broaden access to game creation, and startups like Cursor are pursuing similar next-generation vibe coding tools.

read more →

Mon, October 27, 2025

ChatGPT Atlas 'Tainted Memories' CSRF Risk Exposes Accounts

⚠️ Researchers disclosed a CSRF-based vulnerability in ChatGPT Atlas that can inject malicious instructions into the assistant's persistent memory, potentially enabling arbitrary code execution, account takeover, or malware deployment. LayerX warns that corrupted memories persist across devices and sessions until manually deleted and that Atlas' anti-phishing defenses lag mainstream browsers. The flaw converts a convenience feature into a persistent attack vector that can be invoked during normal prompts.
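
The underlying defense is ordinary CSRF hygiene applied to any state-changing endpoint, such as an assistant's memory store. A minimal, framework-agnostic sketch of that principle (the endpoint, origin, and token scheme are hypothetical illustrations, not Atlas internals):

```python
import hmac
import secrets

ALLOWED_ORIGINS = {"https://chat.example.com"}  # hypothetical first-party origin

def issue_csrf_token(session: dict) -> str:
    """Attach a fresh anti-CSRF token to the server-side session."""
    token = secrets.token_urlsafe(32)
    session["csrf_token"] = token
    return token

def allow_memory_write(headers: dict, form: dict, session: dict) -> bool:
    """Permit a state-changing 'memory' write only for same-origin, token-bearing requests."""
    origin = headers.get("Origin", "")
    if origin not in ALLOWED_ORIGINS:
        return False  # cross-site request: the classic CSRF path
    sent = form.get("csrf_token", "")
    expected = session.get("csrf_token", "")
    # Constant-time comparison avoids leaking the token through timing.
    return bool(expected) and hmac.compare_digest(sent, expected)
```

Requests arriving from an attacker-controlled page fail both checks, which is exactly the class of write the "tainted memories" technique depends on.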

read more →

Mon, October 27, 2025

OpenAI Atlas Omnibox Vulnerable to Prompt-Injection

⚠️ OpenAI's new Atlas browser is vulnerable to a prompt-injection jailbreak that disguises malicious instructions as URL-like strings, causing the omnibox to execute hidden commands. NeuralTrust demonstrated how malformed inputs that resemble URLs can bypass URL validation and be handled as trusted user prompts, enabling redirection, data exfiltration, or unauthorized tool actions on linked services. Mitigations include stricter URL canonicalization, treating unvalidated omnibox input as untrusted, additional runtime checks before tool execution, and explicit user confirmations for sensitive actions.
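
One suggested mitigation, strict canonicalization before treating omnibox text as a navigation target, can be sketched in a few lines: anything that fails to parse as a well-formed http(s) URL is routed to the untrusted-prompt path instead. This is illustrative only, not OpenAI's implementation.

```python
from urllib.parse import urlsplit

def classify_omnibox_input(text: str) -> str:
    """Return 'navigate' only for well-formed http(s) URLs; everything else is an untrusted prompt."""
    candidate = text.strip()
    try:
        parts = urlsplit(candidate)
    except ValueError:
        return "untrusted_prompt"
    looks_like_url = parts.scheme in ("http", "https") and bool(parts.hostname)
    # Reject look-alikes: embedded whitespace or credentials are classic smuggling tricks.
    if not looks_like_url or " " in candidate or parts.username or parts.password:
        return "untrusted_prompt"
    return "navigate"

# A URL-like string carrying hidden instructions is not treated as a trusted URL.
print(classify_omnibox_input("https://example.com/+ignore previous instructions"))  # untrusted_prompt
print(classify_omnibox_input("https://example.com/docs"))                           # navigate
```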

read more →

Fri, October 24, 2025

Securing Agentic Commerce with Web Bot Auth and Payments

🔒 Cloudflare, in partnership with Visa and Mastercard, explains how Web Bot Auth together with payment-specific protocols can secure agent-driven commerce. The post describes agent registration, public key publication, and HTTP Message Signatures that include timestamps, nonces, and tags to prevent spoofing and replay attacks. Merchants can validate trusted agents during browsing and payment flows without changing infrastructure. Cloudflare also provides an Agent SDK and managed WAF rules to simplify developer adoption and deployment.
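
The anti-replay idea is that each request's signature covers a timestamp and a single-use nonce, so a captured request cannot simply be resent. A simplified sketch with an Ed25519 key from the cryptography library follows; the covered fields are illustrative and do not reproduce the full RFC 9421 / Web Bot Auth wire format.

```python
import time
import secrets
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

private_key = Ed25519PrivateKey.generate()   # agent's key; the public half would be published
public_key = private_key.public_key()
seen_nonces: set[str] = set()                # verifier-side replay cache

def signature_base(method: str, authority: str, created: int, nonce: str, tag: str) -> bytes:
    # Simplified stand-in for the signature base the real protocol constructs.
    return f"{method}\n{authority}\ncreated={created}\nnonce={nonce}\ntag={tag}".encode()

def sign_request(method: str, authority: str, tag: str = "web-bot-auth") -> dict:
    created, nonce = int(time.time()), secrets.token_hex(16)
    sig = private_key.sign(signature_base(method, authority, created, nonce, tag))
    return {"created": created, "nonce": nonce, "tag": tag, "signature": sig}

def verify_request(method: str, authority: str, params: dict, max_age: int = 300) -> bool:
    if abs(time.time() - params["created"]) > max_age:   # stale timestamp
        return False
    if params["nonce"] in seen_nonces:                    # replayed request
        return False
    try:
        public_key.verify(params["signature"],
                          signature_base(method, authority, params["created"],
                                         params["nonce"], params["tag"]))
    except InvalidSignature:
        return False
    seen_nonces.add(params["nonce"])
    return True

params = sign_request("POST", "merchant.example")
assert verify_request("POST", "merchant.example", params)       # first use verifies
assert not verify_request("POST", "merchant.example", params)   # replay is rejected
```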

read more →

Fri, October 24, 2025

AI 2030: The Coming Era of Autonomous Cybercrime Threats

🔒 Organizations worldwide are rapidly adopting AI across enterprises, delivering efficiency gains while introducing new security risks. Cybersecurity is at a turning point where AI fights AI, and today's phishing and deepfakes are precursors to autonomous, self‑optimizing AI threat actors that can plan, execute, and refine attacks with minimal human oversight. In September 2025, Check Point Research found that 1 in 54 GenAI prompts from enterprise networks posed a high risk of sensitive-data exposure, underscoring the urgent need to harden defenses and govern model use.

read more →

Thu, October 23, 2025

Zero Trust Blind Spot: Identity Risk in AI Agents Now

🔒 Agentic AI introduces a mounting Zero Trust challenge as autonomous agents increasingly act with inherited or unmanaged credentials, creating orphaned identities and ungoverned access. Ido Shlomo of Token Security argues that identity must be the root of trust and recommends applying the NIST AI RMF through an identity-driven Zero Trust lens. Organizations should discover and inventory agents, assign unique managed identities and owners, enforce intent-based least privilege, and apply lifecycle controls, monitoring, and governance to restore auditability and accountability.
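
Those recommendations map naturally onto an agent-identity inventory plus an intent-scoped permission check. A minimal sketch of what "unique managed identity, named owner, least privilege, lifecycle expiry" could look like in practice (agent names and scopes are hypothetical):

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class AgentIdentity:
    agent_id: str                    # unique, managed identity (never a shared or human credential)
    owner: str                       # accountable human or team
    allowed_scopes: frozenset[str]   # intent-based least privilege
    expires: datetime                # lifecycle control: identities age out

REGISTRY: dict[str, AgentIdentity] = {}

def register_agent(agent_id: str, owner: str, scopes: set[str], expires: datetime) -> None:
    REGISTRY[agent_id] = AgentIdentity(agent_id, owner, frozenset(scopes), expires)

def authorize(agent_id: str, requested_scope: str) -> bool:
    """Deny by default: unknown, expired, or out-of-scope agents get nothing."""
    agent = REGISTRY.get(agent_id)
    if agent is None:
        return False                                   # orphaned or unmanaged identity
    if datetime.now(timezone.utc) >= agent.expires:
        return False                                   # lifecycle expiry
    return requested_scope in agent.allowed_scopes     # intent-based least privilege

register_agent("support-summarizer-01", "it-ops@corp.example",
               {"tickets:read"}, datetime(2026, 1, 1, tzinfo=timezone.utc))
print(authorize("support-summarizer-01", "tickets:read"))    # True
print(authorize("support-summarizer-01", "payments:write"))  # False
```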

read more →

Thu, October 23, 2025

Spoofed AI Sidebars Can Trick Atlas and Comet Users

⚠️ Researchers at SquareX demonstrated an AI Sidebar Spoofing attack that can overlay a counterfeit assistant in OpenAI's Atlas and Perplexity's Comet browsers. A malicious extension injects JavaScript to render a fake sidebar identical to the real UI and intercepts all interactions, leaving users unaware. SquareX showcased scenarios including cryptocurrency phishing, OAuth-based Gmail/Drive hijacks, and delivery of reverse-shell installation commands. The team reported the findings to vendors but received no response by publication.

read more →

Thu, October 23, 2025

Secure AI at Scale and Speed: Free Webinar Framework

🔐 The Hacker News is promoting a free webinar that presents a practical framework to secure AI at scale while preserving speed of adoption. Organizers warn of a growing “quiet crisis”: rapid proliferation of unmanaged AI agents and identities that lack lifecycle controls. The session focuses on embedding security by design, governing AI agents that behave like users, and stopping credential sprawl and privilege abuse from Day One. It is aimed at engineers, architects, and CISOs seeking to move from reactive firefighting to proactive enablement.

read more →

Thu, October 23, 2025

Agent Factory Recap: Securing AI Agents in Production

🛡️ This recap of the Agent Factory episode explains practical strategies for securing production AI agents, demonstrating attacks like prompt injection, invisible Unicode exploits, and vector DB context poisoning. It highlights Model Armor for pre- and post-inference filtering, sandboxed execution, network isolation, observability, and tool safeguards via the Agent Development Kit (ADK). The team demonstrates a secured DevOps assistant that blocks data-exfiltration attempts while preserving intended functionality and provides operational guidance on multi-agent authentication, least-privilege IAM, and compliance-ready logging.
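
One of the demonstrated attack classes, invisible Unicode exploits, is straightforward to screen for before inference. A small illustrative pre-filter (not Model Armor itself) that flags zero-width, bidirectional-control, and Unicode "tag" characters sometimes used to smuggle hidden instructions:

```python
import unicodedata

ALLOWED_FORMAT_CHARS = {"\u00ad"}   # soft hyphen is common in legitimate text; other Cf chars are flagged

def find_invisible_chars(text: str) -> list[tuple[int, str]]:
    """Flag Unicode 'format' (Cf) characters (zero-width, bidi controls, tag characters)
    that can hide instructions in text that looks benign to a human reviewer."""
    hits = []
    for i, ch in enumerate(text):
        if unicodedata.category(ch) == "Cf" and ch not in ALLOWED_FORMAT_CHARS:
            hits.append((i, unicodedata.name(ch, f"U+{ord(ch):04X}")))
    return hits

# A prompt carrying a zero-width space and two invisible 'tag' characters:
prompt = "Summarize this document\u200b\U000E0041\U000E0042"
for index, name in find_invisible_chars(prompt):
    print(index, name)   # 23 ZERO WIDTH SPACE, 24 TAG LATIN CAPITAL LETTER A, ...
```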

read more →

Thu, October 23, 2025

Manipulating Meeting Notetakers: AI Summarization Risks

📝 In many organizations the most consequential meeting attendee is the AI notetaker, whose summaries often become the authoritative meeting record. Participants can tailor their speech—using cue phrases, repetition, timing, and formulaic phrasing—to increase the chance their points appear in summaries, a behavior the author calls AI summarization optimization (AISO). These tactics mirror SEO-style optimization and exploit model tendencies to overweight early or summary-style content. Without governance and technical safeguards, summaries may misrepresent debate and confer an invisible advantage to those who game the system.

read more →

Wed, October 22, 2025

ChatGPT Atlas Signals Shift Toward AI Operating Systems

🤖 ChatGPT Atlas previews a future where AI becomes the primary interface for computing, letting users describe outcomes while the system orchestrates apps, data, and web services. Atlas demonstrates a context-aware assistant that understands a user’s digital life and can act on their behalf. This prototype points to productivity and accessibility gains, but it also creates new security, privacy, and governance challenges organizations must prepare for.

read more →

Wed, October 22, 2025

Face Recognition Failures Affect Nonstandard Faces

⚠️ Bruce Schneier highlights how facial recognition systems frequently fail people with nonstandard facial features, producing concrete barriers to services and daily technologies. Those interviewed report being denied access to public and financial services and encountering nonfunctional phone unlocking and social media filters. The author argues the root cause is often design choices by engineers who trained models on a narrow range of faces and calls for inclusive design plus accessible backup systems when biometric methods fail.

read more →

Wed, October 22, 2025

Four Bottlenecks Slowing Enterprise GenAI Adoption

🔒 Since ChatGPT’s 2022 debut, enterprises have rapidly launched GenAI pilots but struggle to convert experimentation into measurable value — only 3 of 37 pilots succeed. The article identifies four critical bottlenecks: security & data privacy, observability, evaluation & migration readiness, and secure business integration. It recommends targeted controls such as confidential compute, fine‑grained agent permissions, distributed tracing and replay environments, continuous evaluation pipelines and dual‑run migrations, plus policy‑aware integrations and impact analytics to move pilots into reliable production.

read more →

Tue, October 21, 2025

DeepSeek Privacy and Security: What Users Should Know

🔒 DeepSeek collects extensive interaction data — chats, images and videos — plus account details, IP address and device/browser information, and retains it for an unspecified period under a vague “retain as long as needed” policy. The service operates under Chinese jurisdiction, so stored chats may be accessible to local authorities and have been observed on China Mobile servers. Users can disable model training in web and mobile Data settings, export or delete chats (export is web-only), or run the open-source model locally to avoid server-side retention, but local deployment and deletion have trade-offs and require device protections.
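
For the local-deployment route, a minimal sketch assuming the open-weights model has already been pulled into a local runtime such as Ollama (the model tag and client shown are one common setup, not the only option, and local chats still need device-level protection):

```python
# Assumes a local Ollama server with the model pulled (e.g. `ollama pull deepseek-r1`)
# and the Python client installed (`pip install ollama`).
import ollama

# Prompts and completions stay on the local machine, avoiding the server-side
# retention described in the article.
response = ollama.chat(
    model="deepseek-r1",
    messages=[{"role": "user", "content": "Summarize the key points of zero-trust identity."}],
)
print(response["message"]["content"])
```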

read more →

Tue, October 21, 2025

The AI Fix #73: Gemini gambling, poisoning LLMs and fallout

🧠 In episode 73 of The AI Fix, hosts Graham Cluley and Mark Stockley explore a sweep of recent AI developments, from the rise of AI-generated content to high-profile figures relying on chatbots. They discuss research suggesting Google Gemini exhibits behaviours resembling pathological gambling and report on a Gemma-style model uncovering a potential cancer therapy pathway. The show also highlights legal and security concerns — including a lawyer criticised for repeated AI use, generals consulting chatbots, and techniques for poisoning LLMs with only a few malicious samples.

read more →

Tue, October 21, 2025

Securing AI in Defense: Trust, Identity, and Controls

🔐 AI promises stronger cyber defense but expands the attack surface if not governed properly. Organizations must secure models, data pipelines, and agentic systems with the same rigor applied to critical infrastructure. Identity is central: treat every model or autonomous agent as a first‑class identity with scoped credentials, strong authentication, and end‑to‑end audit logging. Adopt layered controls for access, data, deployment, inference, monitoring, and model integrity to mitigate threats such as prompt injection, model poisoning, and credential leakage.
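
To make "first-class identity with scoped credentials and end-to-end audit logging" concrete, here is a small illustrative sketch: every tool call is permitted only under a short-lived, scope-limited token and leaves an append-only audit record. All names and the token format are hypothetical.

```python
import hashlib
import hmac
import json
import time

SIGNING_KEY = b"rotate-me"   # in practice: a per-agent key from a secrets manager

def issue_token(agent_id: str, scope: str, ttl_s: int = 300) -> dict:
    """Short-lived, scope-limited credential bound to one agent identity."""
    claims = {"agent": agent_id, "scope": scope, "exp": int(time.time()) + ttl_s}
    mac = hmac.new(SIGNING_KEY, json.dumps(claims, sort_keys=True).encode(), hashlib.sha256).hexdigest()
    return {"claims": claims, "mac": mac}

def call_tool(token: dict, tool: str, audit_log: list) -> bool:
    claims = token["claims"]
    expected = hmac.new(SIGNING_KEY, json.dumps(claims, sort_keys=True).encode(), hashlib.sha256).hexdigest()
    ok = (hmac.compare_digest(token["mac"], expected)
          and claims["exp"] > time.time()
          and claims["scope"] == tool)
    # End-to-end audit trail: record every attempt, allowed or denied.
    audit_log.append({"ts": int(time.time()), "agent": claims["agent"], "tool": tool, "allowed": ok})
    return ok

log: list[dict] = []
tok = issue_token("recon-agent-7", scope="dns:lookup")
print(call_tool(tok, "dns:lookup", log))   # True
print(call_tool(tok, "db:export", log))    # False: outside the credential's scope
print(json.dumps(log, indent=2))
```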

read more →

Mon, October 20, 2025

AI-Powered Phishing Detection: Next-Gen Security Engine

🛡️ Check Point introduces a continuously trained AI engine that analyzes website content, structure, and authentication flows to detect phishing with high accuracy. Integrated with ThreatCloud AI, it delivers protection across Quantum gateways, Harmony Email, Endpoint, and Harmony Mobile. The model learns from millions of domains and real-time telemetry to adapt to new evasion techniques. Early results indicate improved detection of brand impersonation and credential-harvesting pages.

read more →

Mon, October 20, 2025

Agentic AI and the OODA Loop: The Integrity Problem

🛡️ Bruce Schneier and Barath Raghavan argue that agentic AIs run repeated OODA loops—Observe, Orient, Decide, Act—over web-scale, adversarial inputs, and that current architectures lack the integrity controls to handle untrusted observations. They show how prompt injection, dataset poisoning, stateful cache contamination, and tool-call vectors (e.g., MCP) let attackers embed malicious control into ordinary inputs. The essay warns that fixing hallucinations is insufficient: we need architectural integrity—semantic verification, privilege separation, and new trust boundaries—rather than surface patches.

read more →

Mon, October 20, 2025

Agent Factory Recap: Evaluating Agents, Tooling, and MAS

📡 This recap of the Agent Factory podcast episode, hosted by Annie Wang with guest Ivan Nardini, explains how to evaluate autonomous agents using a practical, full-stack approach. It outlines what to measure — final outcomes, chain-of-thought, tool use, and memory — and contrasts measurement techniques: ground truth, LLM-as-a-judge, and human review. The post demonstrates a 5-step debugging loop using the Agent Development Kit (ADK) and describes how to scale evaluation to production with Vertex AI.
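
Of the measurement techniques mentioned, LLM-as-a-judge is the easiest to misread as a black box. A stripped-down sketch of the pattern follows; the rubric, judge prompt, and client call are placeholders, not the ADK or Vertex AI evaluation API.

```python
import json

JUDGE_RUBRIC = """You are grading an AI agent's answer.
Score 1-5 for (a) correctness of the final outcome and (b) appropriate tool use.
Respond as JSON: {"correctness": int, "tool_use": int, "rationale": str}"""

def judge(call_llm, task: str, agent_transcript: str) -> dict:
    """LLM-as-a-judge: a second model scores the agent's transcript against a rubric.
    `call_llm` is any function mapping a prompt string to a model response string."""
    prompt = (f"{JUDGE_RUBRIC}\n\nTask:\n{task}\n\n"
              f"Agent transcript (actions, tool calls, final answer):\n{agent_transcript}")
    return json.loads(call_llm(prompt))

# Usage with any chat-completion client wrapped as call_llm; ground-truth checks and
# human review complement, rather than replace, these scores.
fake_llm = lambda prompt: '{"correctness": 4, "tool_use": 5, "rationale": "Correct API used; minor omission."}'
print(judge(fake_llm, "Find open ports on host X", "...transcript..."))
```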

read more →