All news in category "AI and Security Pulse"

Wed, October 29, 2025

BSI Warns of Growing AI Governance Gap in Business

⚠️ The British Standards Institution warns of a widening AI governance gap as many organisations accelerate AI adoption without adequate controls. An AI-assisted review of 100+ annual reports and two polls of 850+ senior leaders found strong investment intent but sparse governance: only 24% have a formal AI program and 47% use formal processes. The report highlights weaknesses in incident management, training-data oversight and inconsistent approaches across markets.

read more →

Wed, October 29, 2025

Cybersecurity Awareness Month 2025: Deepfakes and Trust

🔍 Advances in AI and deepfake technology make it increasingly difficult to tell what’s real online, enabling convincingly fake videos, images and audio that scammers exploit to deceive individuals and organizations. Threat actors use deepfakes of public figures to promote bogus investments, create synthetic nudes to extort victims and deploy fake voices and videos to trick employees into wiring corporate funds. Watch ESET Chief Security Evangelist Tony Anscombe outline practical defenses to recognize and resist deepfakes, and explore other Cybersecurity Awareness Month videos on authentication, patching, ransomware and shadow IT.

read more →

Wed, October 29, 2025

Identity Crisis at the Perimeter: AI-Driven Impersonation

🛡️ Organizations face an identity crisis as generative AI and vast troves of breached personal data enable realistic digital doppelgangers. Attackers now automate hyper-personalized phishing, smishing and vishing, clone voices, and run coordinated multi-channel campaigns that reference real colleagues and recent projects. The article urges a shift to “never trust, always verify,” with radical visibility, rapid detection and phishing-resistant authentication such as FIDO2. It also warns of emerging agentic AI and recommends strict least-privilege controls plus continuous red-teaming.

read more →

Wed, October 29, 2025

Top 7 Agentic AI Use Cases Transforming Cybersecurity

🔐 Agentic AI is presented as a practical cybersecurity capability that can operate without direct human supervision, handling high-volume, time-sensitive tasks at machine speed. Industry leaders from Zoom to Dell Technologies and Deloitte highlight seven priority use cases — from autonomous threat detection and SOC augmentation to real-time zero‑trust enforcement — that capitalize on AI's scale and speed. The technology aims to reduce alert fatigue, accelerate mitigation, and free human teams for strategic work.

read more →

Tue, October 28, 2025

AI-Driven Malicious SEO and the Fight for Web Trust

🛡️ The article explains how malicious SEO operations use keyword stuffing, purchased backlinks, cloaking and mass-produced content to bury legitimate sites in search results. It warns that generative AI now amplifies this threat by producing tens of thousands of spam articles, spinning up fake social accounts and enabling more sophisticated cloaking. Defenders must deploy AI-based detection, graph-level backlink analysis and network behavioral analytics to spot coordinated abuse. The piece emphasizes proactive, ecosystem-wide monitoring to protect trust and legitimate businesses online.

read more →

Tue, October 28, 2025

Google for Startups: AI Cohort Boosts LATAM Cybersecurity

🔐 Google selected 11 startups for its inaugural Google for Startups Accelerator: AI for Cybersecurity in Latin America, a ten-week program that pairs founders with Google's technical resources, mentorship, and product support. The cohort — drawn from Brazil, Chile, Colombia, and Mexico — focuses on AI-driven solutions across threat detection, compliance automation, fraud prevention, and protections for AI agents. Participants will receive hands-on guidance to scale, validate, and deploy tools that reduce cyber risk across the region.

read more →

Tue, October 28, 2025

GitHub Agent HQ: Native, Governed AI Agents in Flow

🤖 GitHub announced Agent HQ, a unified platform that makes coding agents native to the GitHub workflow. Over the coming months, partner agents from OpenAI, Anthropic, Google, Cognition, and xAI will become available as part of paid Copilot subscriptions. The release introduces a cross‑surface mission control, VS Code planning and customizable AGENTS.md files, and an enterprise control plane with governance, metrics, and code‑quality tooling to manage agent-driven work.

read more →

Tue, October 28, 2025

Enabling a Safe Agentic Web with reCAPTCHA Controls

🔐 Google Cloud outlines a pragmatic framework to secure the emerging agentic web while preserving smooth user experiences. The post details how reCAPTCHA and Google Cloud combine agent and user identity, continuous behavior analysis, and AI-resistant mitigations such as mobile-device attestations. It highlights enabling safe agentic commerce via protocols like AP2 and tighter integration with cloud AI services.

read more →

Tue, October 28, 2025

The AI Fix 74: AI Glasses, Deepfakes, and AGI Debate

🎧 In episode 74 of The AI Fix, hosts Graham Cluley and Mark Stockley survey recent AI developments including Amazon’s experimental delivery glasses, Channel 4’s AI presenter, and reports of LLM “brain rot.” They examine practical security risks — such as malicious browser extensions spoofing AI sidebars and AI browsers being tricked into purchases — alongside wider societal debates. The episode also highlights public calls to pause work on super-intelligence and explores what AGI really means.

read more →

Mon, October 27, 2025

Google: AI Studio Aims to Let Everyone 'Vibe Code' Games

🕹️ Google says its AI Studio will enable users to 'vibe code' simple video games by the end of the year. The company claims the tool can automatically select models and wire up APIs to streamline app creation, while noting current limitations for production-ready systems. Product lead Logan Kilpatrick highlighted the potential to broaden access to game creation, and startups like Cursor are pursuing similar next-generation vibe coding tools.

read more →

Mon, October 27, 2025

ChatGPT Atlas 'Tainted Memories' CSRF Risk Exposes Accounts

⚠️ Researchers disclosed a CSRF-based vulnerability in ChatGPT Atlas that can inject malicious instructions into the assistant's persistent memory, potentially enabling arbitrary code execution, account takeover, or malware deployment. LayerX warns that corrupted memories persist across devices and sessions until manually deleted and that Atlas' anti-phishing defenses lag mainstream browsers. The flaw converts a convenience feature into a persistent attack vector that can be invoked during normal prompts.
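The classic defense against CSRF, which the article implies Atlas lacks for its memory feature, is to bind every state-changing request to an unguessable, session-bound token that a cross-site page cannot forge. A minimal sketch of that pattern (the function names and HMAC scheme here are illustrative, not LayerX's or OpenAI's implementation):

```python
import hashlib
import hmac
import secrets

# Per-deployment signing key; illustrative only.
SECRET = secrets.token_bytes(32)

def issue_csrf_token(session_id: str) -> str:
    """Derive an unguessable token bound to the user's session."""
    return hmac.new(SECRET, session_id.encode(), hashlib.sha256).hexdigest()

def verify_request(session_id: str, submitted_token: str) -> bool:
    """Reject state-changing requests (e.g. memory writes) that cannot
    echo the session-bound token a cross-site attacker never sees."""
    expected = issue_csrf_token(session_id)
    return hmac.compare_digest(expected, submitted_token)
```

A forged cross-site request carries the victim's cookies but not the token, so `verify_request` fails and the memory write is refused.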

read more →

Mon, October 27, 2025

OpenAI Atlas Omnibox Vulnerable to Prompt-Injection

⚠️ OpenAI's new Atlas browser is vulnerable to a prompt-injection jailbreak that disguises malicious instructions as URL-like strings, causing the omnibox to execute hidden commands. NeuralTrust demonstrated how malformed inputs that resemble URLs can bypass URL validation and be handled as trusted user prompts, enabling redirection, data exfiltration, or unauthorized tool actions on linked services. Mitigations include stricter URL canonicalization, treating unvalidated omnibox input as untrusted, additional runtime checks before tool execution, and explicit user confirmations for sensitive actions.
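The core mitigation NeuralTrust describes, treating anything that does not strictly parse as a URL as untrusted input, can be sketched in a few lines. This is a simplified illustration, not Atlas code; `classify_omnibox_input` and the scheme allow-list are assumptions:

```python
from urllib.parse import urlsplit

# Only well-formed web URLs are eligible for navigation.
ALLOWED_SCHEMES = {"http", "https"}

def classify_omnibox_input(text: str) -> str:
    """Navigate only if the input is a canonical http(s) URL with a host;
    everything else is treated as an untrusted prompt, never a command."""
    parts = urlsplit(text.strip())
    if parts.scheme in ALLOWED_SCHEMES and parts.netloc:
        return "navigate"
    return "untrusted_prompt"
```

URL-like strings with embedded instructions, or schemes such as `javascript:`, fail the check and fall through to the untrusted path, where runtime checks and user confirmations can apply.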

read more →

Fri, October 24, 2025

Securing Agentic Commerce with Web Bot Auth and Payments

🔒 Cloudflare, in partnership with Visa and Mastercard, explains how Web Bot Auth together with payment-specific protocols can secure agent-driven commerce. The post describes agent registration, public key publication, and HTTP Message Signatures that include timestamps, nonces, and tags to prevent spoofing and replay attacks. Merchants can validate trusted agents during browsing and payment flows without changing infrastructure. Cloudflare also provides an Agent SDK and managed WAF rules to simplify developer adoption and deployment.
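The anti-replay idea behind those signed messages, a signature over the request plus a creation timestamp and a one-time nonce, can be sketched as follows. Note this is a simplified HMAC illustration; the actual Web Bot Auth / HTTP Message Signatures approach uses published public keys and structured signature headers, and the field names here are assumptions:

```python
import hashlib
import hmac
import secrets
import time

# Illustrative shared key; real agents publish public keys instead.
AGENT_KEY = secrets.token_bytes(32)
seen_nonces = set()

def sign_request(method: str, path: str) -> dict:
    """Agent side: sign the request together with a timestamp and nonce."""
    created = int(time.time())
    nonce = secrets.token_hex(16)
    base = f"{method} {path};created={created};nonce={nonce}"
    sig = hmac.new(AGENT_KEY, base.encode(), hashlib.sha256).hexdigest()
    return {"created": created, "nonce": nonce, "signature": sig}

def verify_request(method: str, path: str, hdr: dict, max_age: int = 300) -> bool:
    """Merchant side: reject stale timestamps, reused nonces, bad signatures."""
    if time.time() - hdr["created"] > max_age:   # outside the replay window
        return False
    if hdr["nonce"] in seen_nonces:              # nonce already spent: replay
        return False
    base = f"{method} {path};created={hdr['created']};nonce={hdr['nonce']}"
    expected = hmac.new(AGENT_KEY, base.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, hdr["signature"]):
        return False
    seen_nonces.add(hdr["nonce"])
    return True
```

Replaying a captured signature fails because the nonce has already been spent, and spoofing fails because the attacker cannot produce a valid signature.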

read more →

Fri, October 24, 2025

AI 2030: The Coming Era of Autonomous Cybercrime Threats

🔒 Organizations worldwide are rapidly adopting AI across enterprises, delivering efficiency gains while introducing new security risks. Cybersecurity is at a turning point where AI fights AI, and today's phishing and deepfakes are precursors to autonomous, self‑optimizing AI threat actors that can plan, execute, and refine attacks with minimal human oversight. In September 2025, Check Point Research found that 1 in 54 GenAI prompts from enterprise networks posed a high risk of sensitive-data exposure, underscoring the urgent need to harden defenses and govern model use.

read more →

Thu, October 23, 2025

Zero Trust Blind Spot: Identity Risk in AI Agents Now

🔒 Agentic AI introduces a mounting Zero Trust challenge as autonomous agents increasingly act with inherited or unmanaged credentials, creating orphaned identities and ungoverned access. Ido Shlomo of Token Security argues that identity must be the root of trust and recommends applying the NIST AI RMF through an identity-driven Zero Trust lens. Organizations should discover and inventory agents, assign unique managed identities and owners, enforce intent-based least privilege, and apply lifecycle controls, monitoring, and governance to restore auditability and accountability.
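The recommended pattern, inventorying every agent with a unique identity and an accountable owner, then denying anything outside its declared scopes, can be sketched like this (a minimal illustration under assumed names; not Token Security's product or the NIST AI RMF itself):

```python
from dataclasses import dataclass, field

@dataclass
class AgentIdentity:
    agent_id: str
    owner: str                              # accountable human owner
    scopes: set = field(default_factory=set)  # intent-based least privilege

# Central inventory: an agent absent from here is an orphaned identity.
REGISTRY: dict = {}

def register(agent_id: str, owner: str, scopes: set) -> None:
    REGISTRY[agent_id] = AgentIdentity(agent_id, owner, scopes)

def authorize(agent_id: str, action: str) -> bool:
    """Deny by default: unknown agents and out-of-scope actions both fail,
    and every decision is attributable to a registered owner."""
    agent = REGISTRY.get(agent_id)
    return agent is not None and action in agent.scopes
```

Unmanaged or orphaned agents simply never appear in the registry, so their requests fail closed rather than riding on inherited credentials.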

read more →

Thu, October 23, 2025

Spoofed AI Sidebars Can Trick Atlas and Comet Users

⚠️ Researchers at SquareX demonstrated an AI Sidebar Spoofing attack that can overlay a counterfeit assistant in OpenAI's Atlas and Perplexity's Comet browsers. A malicious extension injects JavaScript to render a fake sidebar identical to the real UI and intercepts all interactions, leaving users unaware. SquareX showcased scenarios including cryptocurrency phishing, OAuth-based Gmail/Drive hijacks, and delivery of reverse-shell installation commands. The team reported the findings to vendors but received no response by publication.

read more →

Thu, October 23, 2025

Secure AI at Scale and Speed: Free Webinar Framework

🔐 The Hacker News is promoting a free webinar that presents a practical framework to secure AI at scale while preserving speed of adoption. Organizers warn of a growing “quiet crisis”: rapid proliferation of unmanaged AI agents and identities that lack lifecycle controls. The session focuses on embedding security by design, governing AI agents that behave like users, and stopping credential sprawl and privilege abuse from Day One. It is aimed at engineers, architects, and CISOs seeking to move from reactive firefighting to proactive enablement.

read more →

Thu, October 23, 2025

Agent Factory Recap: Securing AI Agents in Production

🛡️ This recap of the Agent Factory episode covers practical strategies for securing production AI agents, demonstrating attacks such as prompt injection, invisible Unicode exploits, and vector-DB context poisoning. It highlights Model Armor for pre- and post-inference filtering, sandboxed execution, network isolation, observability, and tool safeguards via the Agent Development Kit (ADK). The team walks through a secured DevOps assistant that blocks data-exfiltration attempts while preserving intended functionality, and offers operational guidance on multi-agent authentication, least-privilege IAM, and compliance-ready logging.

read more →

Thu, October 23, 2025

Manipulating Meeting Notetakers: AI Summarization Risks

📝 In many organizations the most consequential meeting attendee is the AI notetaker, whose summaries often become the authoritative meeting record. Participants can tailor their speech—using cue phrases, repetition, timing, and formulaic phrasing—to increase the chance their points appear in summaries, a behavior the author calls AI summarization optimization (AISO). These tactics mirror SEO-style optimization and exploit model tendencies to overweight early or summary-style content. Without governance and technical safeguards, summaries may misrepresent debate and confer an invisible advantage to those who game the system.

read more →

Wed, October 22, 2025

ChatGPT Atlas Signals Shift Toward AI Operating Systems

🤖 ChatGPT Atlas previews a future where AI becomes the primary interface for computing, letting users describe outcomes while the system orchestrates apps, data, and web services. Atlas demonstrates a context-aware assistant that understands a user's digital life and can act on their behalf. This prototype points to productivity and accessibility gains, but it also creates new security, privacy, and governance challenges organizations must prepare for.

read more →