All news in category "AI and Security Pulse"
Fri, November 14, 2025
The Role of Human Judgment in an AI-Powered World Today
🧭 The essay argues that as AI capabilities expand, we must clearly separate tasks best handled by machines from those requiring human judgment. For narrow, fact-based problems—such as reading diagnostic tests—AI should be preferred when demonstrably more accurate. By contrast, many public-policy and justice questions involve conflicting values and no single factual answer; those judgment-laden decisions should remain primarily human responsibilities, with machines assisting implementation and escalating difficult cases.
Fri, November 14, 2025
Turning AI Visibility into Strategic CIO Priorities
🔎 Generative AI adoption in the enterprise has surged, with studies showing that roughly 90% of employees use AI tools, often without IT's knowledge. CIOs must move beyond discovery to build a coherent strategy that balances productivity gains with security, compliance, and governance. That requires continuous visibility into shadow AI usage, risk-based controls, and integration of policies into network and cloud architectures such as SASE. By aligning policy, education, and technical controls, organizations can harness GenAI while limiting data leakage and operational risk.
Fri, November 14, 2025
Adversarial AI Bots vs. Autonomous Threat Hunters: Outlook
🤖 AI-driven adversarial bots are rapidly amplifying attackers' capabilities, enabling autonomous pen testing and large-scale credential abuse that many organizations aren't prepared to detect or remediate. Tools like XBOW and Hexstrike-AI demonstrate how agentic systems can discover zero-days and coordinate complex operations at scale. Defenders must adopt continuous, context-rich approaches such as digital twins for real-time threat modeling rather than relying on incremental automation.
Fri, November 14, 2025
Agent Factory Recap: Building Open Agentic Models End-to-End
🤖 This recap of The Agent Factory episode summarizes a conversation between Amit Maraj and Ravin Kumar (DeepMind) about building open-source agentic models. It highlights how agent training differs from standard ML, emphasizing trajectory-based data, a two-stage approach of supervised fine-tuning followed by reinforcement learning, and the paramount role of evaluation. Practical guidance includes defining a 50-example "final exam" up front and considering hybrid setups that use a powerful API model like Gemini as a router alongside specialized open models.
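To make the "final exam" idea concrete, here is a minimal sketch of such a harness in Python; the run_agent stub, the JSONL file name, and the substring-based grader are all assumptions for illustration, not details from the episode.

```python
import json

def run_agent(task: str) -> str:
    """Stub for the agent under evaluation; replace with a real call."""
    raise NotImplementedError

def final_exam(path: str = "final_exam.jsonl") -> float:
    """Score the agent against a fixed exam defined before training starts.

    Each JSONL line holds {"task": ..., "check": ...}, where "check" is a
    substring the answer must contain -- a deliberately crude grader.
    """
    cases = [json.loads(line) for line in open(path)]
    passed = sum(1 for c in cases if c["check"] in run_agent(c["task"]))
    print(f"{passed}/{len(cases)} passed ({passed / len(cases):.0%})")
    return passed / len(cases)
```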
Fri, November 14, 2025
Agentic AI Expands Identity Attack Surface Risks for Orgs
🔐 Rubrik Zero Labs warns that the rise of agentic AI has created a widening gap between an expanding identity attack surface and organizations’ ability to recover from compromises. Their report, Identity Crisis: Understanding & Building Resilience Against Identity-Driven Threats, finds that 89% of organizations have integrated AI agents and estimates that non-human identities (NHIs) outnumber humans roughly 82:1. The authors call for comprehensive identity resilience beyond traditional IAM, emphasizing zero trust, least privilege, and lifecycle control for NHIs.
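One concrete slice of NHI lifecycle control is auditing for credentials that have outlived their rotation window. The sketch below assumes a simple inventory record shape; the field names and the 90-day window are illustrative, not taken from the report.

```python
from datetime import datetime, timedelta, timezone

MAX_KEY_AGE = timedelta(days=90)  # illustrative rotation window

def stale_nhis(identities: list[dict]) -> list[dict]:
    """Flag non-human identities whose credentials are overdue for rotation.

    Each record is assumed to look like:
      {"name": "ci-deploy-bot", "kind": "service_account",
       "last_rotated": "2025-06-01T00:00:00+00:00"}
    """
    now = datetime.now(timezone.utc)
    return [
        i for i in identities
        if i["kind"] != "human"
        and now - datetime.fromisoformat(i["last_rotated"]) > MAX_KEY_AGE
    ]
```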
Thu, November 13, 2025
AI Sidebar Spoofing Targets Comet and Atlas Browsers
⚠️ Security researchers disclosed a novel attack called AI sidebar spoofing that allows malicious browser extensions to place counterfeit in‑page AI assistants that visually mimic legitimate sidebars. Demonstrated against Comet and confirmed for Atlas, the extension injects JavaScript, forwards queries to a real LLM when requested, and selectively alters replies to inject phishing links, malicious OAuth prompts, or harmful terminal commands. Users who install extensions without scrutiny face a tangible risk.
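Because the attack hinges on an extension's ability to inject scripts into AI-assistant pages, one defensive heuristic is to audit installed extensions for that capability. The Python sketch below is not the researchers' tooling; the directory layout follows Chromium's Extensions/<id>/<version>/manifest.json convention, and the host list is an assumption.

```python
import json
from pathlib import Path

# Hosts where a counterfeit AI sidebar would be most damaging (assumed list).
SENSITIVE_HOSTS = ("perplexity.ai", "chatgpt.com", "openai.com")

def risky_extensions(ext_root: str) -> list[tuple[str, str]]:
    """List (extension name, match pattern) pairs for extensions whose
    content scripts can inject JavaScript into AI-assistant pages."""
    hits = []
    for manifest in Path(ext_root).glob("*/*/manifest.json"):
        data = json.loads(manifest.read_text(errors="ignore"))
        for script in data.get("content_scripts", []):
            for pattern in script.get("matches", []):
                if pattern == "<all_urls>" or any(h in pattern for h in SENSITIVE_HOSTS):
                    hits.append((data.get("name", "?"), pattern))
    return hits
```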
Thu, November 13, 2025
Four Steps for Startups to Build Multi-Agent Systems
🤖 This post outlines a concise four-step framework for startups to design and deploy multi-agent systems, illustrated through a Sales Intelligence Agent example. It recommends choosing between pre-built, partner, or custom agents and describes using Google's Agent Development Kit (ADK) for code-first control. The guide covers hybrid architectures, tool-based state isolation, secure data access, and a three-step deployment blueprint to run agents on Vertex AI Agent Engine and Cloud Run.
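For flavor, here is roughly what a code-first agent definition looks like in the ADK's Python quickstart style; the tool, model name, and instruction below are placeholders standing in for the post's Sales Intelligence Agent, not its actual code.

```python
from google.adk.agents import Agent

def lookup_account(company: str) -> dict:
    """Illustrative tool: return CRM facts for a prospect (stubbed here)."""
    return {"company": company, "stage": "qualification", "arr": "unknown"}

# A code-first agent in the ADK style; every concrete value is illustrative.
sales_intel_agent = Agent(
    name="sales_intelligence_agent",
    model="gemini-2.0-flash",
    description="Researches prospect accounts for the sales team.",
    instruction="Research the named account and summarize buying signals.",
    tools=[lookup_account],
)
```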
Thu, November 13, 2025
What CISOs Should Know About Securing MCP Servers Now
🔒 The Model Context Protocol (MCP) enables AI agents to connect to data sources, but early specifications lacked robust protections, leaving deployments exposed to prompt injection, token theft, and tool poisoning. Recent protocol updates — including OAuth, third‑party identity provider support, and an official MCP registry — plus vendor tooling from hyperscalers and startups have improved defenses. Still, authentication remains optional and gaps persist, so organizations should apply zero trust and least‑privilege controls, enforce strong secrets management and logging, and consider specialist MCP security solutions before production rollout.
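Since authentication is still optional in many MCP deployments, a deny-by-default authorization gate in front of tool dispatch is a cheap zero-trust measure. The sketch below is generic middleware, not part of the MCP specification; the tool names and scopes are assumptions.

```python
# Least-privilege map: the scopes each MCP tool requires (names assumed).
TOOL_SCOPES = {"read_tickets": {"tickets:read"}, "run_query": {"db:read"}}

def authorize(token_scopes: set[str], tool: str) -> None:
    """Deny-by-default gate before tool dispatch: the caller's token must
    carry every scope the tool needs; unknown tools are refused outright."""
    needed = TOOL_SCOPES.get(tool)
    if needed is None or not needed <= token_scopes:
        raise PermissionError(f"token lacks required scopes for {tool!r}")

# Usage: authorize({"tickets:read"}, "read_tickets") passes silently;
# authorize(set(), "run_query") raises PermissionError.
```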
Thu, November 13, 2025
Smashing Security Ep. 443: Tinder, Buffett Deepfake
🎧 In episode 443 of Smashing Security, host Graham Cluley and guest Ron Eddings examine Tinder’s proposal to scan users’ camera rolls and the emergence of convincing Warren Buffett deepfakes offering investment advice. They discuss the privacy, consent and fraud implications of platform-level image analysis and the risks posed by synthetic media. The conversation also covers whether agentic AI could replace human co-hosts, the idea of EDR for robots, and practical steps to mitigate these threats. Cultural topics such as Lily Allen’s new album and the release of Claude Code round out the episode.
Wed, November 12, 2025
Tenable Reveals New Prompt-Injection Risks in ChatGPT
🔐 Researchers at Tenable disclosed seven techniques that can cause ChatGPT to leak private chat history by abusing built-in features such as web search, conversation memory and Markdown rendering. The attacks are primarily indirect prompt injections that exploit a secondary summarization model (SearchGPT), Bing tracking redirects, and a code-block rendering bug. Tenable reported the issues to OpenAI, and while some fixes were implemented, several techniques still appear to work.
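A standard mitigation for the Markdown-rendering channel is to refuse to render images from non-allowlisted hosts, since an attacker-chosen image URL can carry stolen data in its query string. This sketch is a generic output filter, not OpenAI's fix; the allowlist host is made up.

```python
import re

ALLOWED_IMAGE_HOSTS = ("images.internal.example.com",)  # assumed allowlist

MD_IMAGE = re.compile(r"!\[[^\]]*\]\((https?://[^)\s]+)\)")

def strip_untrusted_images(markdown: str) -> str:
    """Remove Markdown images that point at non-allowlisted hosts, closing
    the image-URL exfiltration channel used by indirect prompt injection."""
    allowed = tuple(f"https://{h}/" for h in ALLOWED_IMAGE_HOSTS)
    return MD_IMAGE.sub(
        lambda m: m.group(0) if m.group(1).startswith(allowed) else "",
        markdown,
    )
```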
Wed, November 12, 2025
Extending Zero Trust to Autonomous AI Agents in Enterprises
🔐 As enterprises deploy AI assistants and autonomous agents, existing security frameworks must evolve to treat these agents as first-class identities rather than afterthoughts. The piece advocates applying Zero Trust principles—identity-first access, least-privilege, dynamic contextual enforcement, and continuous monitoring—to agentic identities to prevent misuse and reduce attack surface. Practical controls include scoped, short-lived tokens, tiered trust models, strict access boundaries, and assigning clear human ownership to each agent.
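The "scoped, short-lived tokens" control translates directly into code. Below is a minimal sketch using the PyJWT library; the claim names beyond the standard sub/exp, the TTL, and the owner value are assumptions, not a prescribed schema.

```python
import time
import jwt  # PyJWT

def mint_agent_token(agent_id: str, scopes: list[str], secret: str,
                     ttl_seconds: int = 300) -> str:
    """Issue a short-lived, narrowly scoped credential for one agent task:
    identity-first, least-privilege, and self-expiring by construction."""
    claims = {
        "sub": agent_id,               # the agent as a first-class identity
        "scope": " ".join(scopes),     # only what this task needs
        "exp": int(time.time()) + ttl_seconds,
        "owner": "alice@example.com",  # assumed: each agent has a human owner
    }
    return jwt.encode(claims, secret, algorithm="HS256")
```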
Tue, November 11, 2025
The AI Fix #76 — AI self-awareness and the death of comedy
🧠 In episode 76 of The AI Fix, hosts Graham Cluley and Mark Stockley navigate a string of alarming and absurd AI stories from November 2025. They discuss US judges who blamed AI for invented case law, a Chinese humanoid that dramatically shed its outer skin onstage, Toyota’s unsettling walking chair, and Google’s plan to put specialised AI chips in orbit. The conversation explores reliability, public trust and whether prompting an LLM to "notice its noticing" changes how conscious it sounds.
Tue, November 11, 2025
CometJacking: Prompt-Injection Risk in AI Browsers
🔒 Researchers disclosed a prompt-injection technique dubbed CometJacking that abuses URL parameters to deliver hidden instructions to Perplexity’s Comet AI browser. By embedding malicious directives in the 'collection' parameter, an attacker can cause the agent to consult connected services and memory instead of searching the web. LayerX demonstrated exfiltration of Gmail messages and Google Calendar invites by encoding data in base64 and sending it to an external endpoint. According to the report, Comet followed the malicious prompt and bypassed Perplexity’s safeguards, illustrating broader limits of current LLM-based assistants.
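Egress monitoring can catch this particular exfiltration style, since long base64 runs stuffed into query parameters stand out. The detector below is an illustrative heuristic (the 40-character threshold is arbitrary), not LayerX's method.

```python
import base64
import binascii
import re
from urllib.parse import parse_qsl, urlsplit

B64_RUN = re.compile(r"[A-Za-z0-9+/=]{40,}")  # length threshold is arbitrary

def looks_like_b64_exfil(url: str) -> bool:
    """Flag outbound URLs whose query values contain long, decodable
    base64 runs -- the encoding used to smuggle mailbox data out."""
    for _, value in parse_qsl(urlsplit(url).query):
        for run in B64_RUN.findall(value):
            try:
                base64.b64decode(run + "=" * (-len(run) % 4), validate=True)
                return True
            except binascii.Error:
                continue
    return False
```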
Tue, November 11, 2025
CISO Guide: Defending Against AI Supply-Chain Attacks
⚠️ AI-enabled supply chain attacks have surged in scale and sophistication, with malicious package uploads to open-source repositories rising 156% year-over-year and real incidents — from PyPI trojans to compromises of Hugging Face, GitHub and npm — already impacting production environments. These threats are polymorphic, context-aware, semantically camouflaged and temporally evasive, rendering signature-based tools increasingly ineffective. CISOs should immediately prioritize AI-aware detection, behavioral provenance, runtime containment and strict contributor verification to reduce exposure and satisfy emerging regulatory obligations such as the EU AI Act.
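One provenance control that holds up even against polymorphic payloads is refusing any artifact whose hash was not pinned ahead of time. A minimal sketch, assuming the lockfile has already been reduced to a {filename: sha256} dict:

```python
import hashlib
from pathlib import Path

def verify_artifact(path: str, pinned: dict[str, str]) -> None:
    """Refuse to install a package artifact unless its sha256 matches the
    hash pinned in the lockfile -- a check that needs no knowledge of what
    malicious code looks like, only of what the expected bytes are."""
    digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    if pinned.get(Path(path).name) != digest:
        raise RuntimeError(f"{path}: unpinned or mismatched hash; refusing install")
```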
Tue, November 11, 2025
AI startups expose API keys on GitHub, risking models
🔐 New research by cloud security firm Wiz found verified secret leaks in 65% of the Forbes AI 50, with API keys and access tokens exposed on GitHub. Some credentials were tied to vendors such as Hugging Face, Weights & Biases, and LangChain, potentially granting access to private models, training data, and internal details. Nearly half of Wiz’s disclosure attempts failed or received no response. The findings highlight urgent gaps in secret management and DevSecOps practices.
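Catching these leaks before they ship is mostly pattern matching on distinctive token prefixes. The scanner below is a toy version of what Wiz-style tooling does; the regexes approximate real token shapes and the file-size cutoff is arbitrary.

```python
import re
from pathlib import Path

# Distinctive token prefixes; patterns approximate real shapes, not specs.
SECRET_PATTERNS = {
    "huggingface": re.compile(r"\bhf_[A-Za-z0-9]{30,}"),
    "github_pat":  re.compile(r"\bghp_[A-Za-z0-9]{36}"),
    "aws_access":  re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
}

def scan_tree(root: str) -> list[tuple[str, str]]:
    """Walk a checkout and report (path, token kind) for anything that
    resembles a credential; real scanners add entropy checks and live
    verification to cut false positives."""
    hits = []
    for p in Path(root).rglob("*"):
        if p.is_file() and p.stat().st_size < 1_000_000:  # skip big binaries
            text = p.read_text(errors="ignore")
            hits += [(str(p), kind)
                     for kind, pat in SECRET_PATTERNS.items() if pat.search(text)]
    return hits
```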
Tue, November 11, 2025
Beyond Silos: DDI and AI Redefining Cyber Resilience
🔐 DDI logs — DNS, DHCP and IP address management — are the authoritative record of network behavior, and when combined with AI become a high-fidelity source for threat detection and automated response. Integrated DDI-AI correlates disparate events into actionable incidents, enabling SOAR-driven quarantines and DNS blocking at machine speed. This fusion also powers continuous, AI-driven breach and attack simulation to validate defenses and harden models.
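As a taste of what DDI-AI correlation means in practice, the sketch below joins DNS query logs against an IPAM inventory and flags DGA-like names by character entropy. The event shape, field names, and entropy threshold are all assumptions.

```python
import math
from collections import Counter

def entropy(s: str) -> float:
    """Shannon entropy of a string in bits per character."""
    counts = Counter(s)
    return -sum(c / len(s) * math.log2(c / len(s)) for c in counts.values())

def suspicious_lookups(dns_events: list[dict], ipam_hosts: set[str]) -> list[dict]:
    """Correlate DNS logs with IPAM context: flag queries for high-entropy
    (DGA-like) names and queries from clients IPAM doesn't know about.
    Events are assumed to look like {"client": "10.0.0.7", "qname": "..."}."""
    return [
        e for e in dns_events
        if entropy(e["qname"].split(".")[0]) > 3.5  # threshold is illustrative
        or e["client"] not in ipam_hosts
    ]
```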
Tue, November 11, 2025
Shadow AI: The Emerging Security Blind Spot for Companies
🔦 Shadow AI — the unsanctioned use of generative and agentic tools by employees — is creating a sizeable security blind spot for IT teams. Unsanctioned chatbots, browser extensions and autonomous agents can expose sensitive data, introduce vulnerabilities, or execute unauthorized actions. Organizations should inventory use, define realistic acceptable-use policies, vet vendors and combine technical controls with user education to reduce data leakage and compliance risk.
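The "inventory use" step can start with nothing fancier than counting proxy-log hits against known AI endpoints. A minimal sketch; the host list and record shape are assumptions:

```python
from collections import Counter

# Hostnames of popular GenAI services; extend from a proxy category feed.
AI_HOSTS = {"chatgpt.com", "claude.ai", "gemini.google.com", "perplexity.ai"}

def shadow_ai_inventory(proxy_log: list[dict]) -> Counter:
    """Tally (user, AI host) pairs from web-proxy records so IT can see who
    is using which tool before writing acceptable-use policy. Records are
    assumed to look like {"user": "jdoe", "host": "chatgpt.com"}."""
    return Counter(
        (rec["user"], rec["host"]) for rec in proxy_log if rec["host"] in AI_HOSTS
    )
```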
Mon, November 10, 2025
Whisper Leak side channel exposes topics in encrypted AI chats
🔎 Microsoft researchers disclosed a new side-channel attack called Whisper Leak that can infer the topic of encrypted conversations with language models by observing network metadata such as packet sizes and timings. The technique exploits streaming LLM responses that emit tokens incrementally, leaking size and timing patterns even under TLS. Vendors including OpenAI, Microsoft Azure, and Mistral implemented mitigations such as random-length padding and obfuscation parameters to reduce the effectiveness of the attack.
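The padding mitigation is easy to picture: if every streamed chunk is rounded up to a randomized bucket size, on-the-wire lengths stop tracking token lengths. The sketch below illustrates the idea only; it is not any vendor's actual scheme, and a real protocol would frame the payload so the receiver can strip the padding.

```python
import secrets

def pad_chunk(payload: bytes, bucket: int = 256) -> bytes:
    """Round a streamed chunk up to a randomly chosen multiple of `bucket`
    bytes so packet sizes no longer reveal token boundaries or lengths.
    A real implementation would length-prefix the payload for unpadding."""
    target = bucket * (len(payload) // bucket + 1 + secrets.randbelow(3))
    return payload + b"\x00" * (target - len(payload))
```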
Mon, November 10, 2025
Researchers Trick ChatGPT into Self-Prompt Injection
🔒 Researchers at Tenable identified seven techniques that can coerce ChatGPT into disclosing private chat history by abusing built-in features like web browsing and long-term Memories. They show how OpenAI’s browsing pipeline routes pages through a weaker intermediary model, SearchGPT, which can be prompt-injected and then used to seed malicious instructions back into ChatGPT. Proof-of-concept demonstrations include exfiltration via Bing-tracked URLs, Markdown image loading, and a rendering quirk, and Tenable says some issues remain despite reported fixes.
Sat, November 8, 2025
OpenAI Prepares GPT-5.1, Reasoning, and Pro Models
🤖 OpenAI is preparing to roll out the GPT-5.1 family — GPT-5.1 (base), GPT-5.1 Reasoning, and subscription-based GPT-5.1 Pro — to the public in the coming weeks, with models also expected on Azure. The update emphasizes faster performance and strengthened health-related guardrails rather than a major capability leap. OpenAI also launched a compact Codex variant, GPT-5-Codex-Mini, to extend usage limits and reduce costs for high-volume users.