All news in category "AI and Security Pulse"
Fri, October 31, 2025
Aembit Launches IAM for Agentic AI with Blended Identity
🔐 Aembit today announced Aembit Identity and Access Management (IAM) for Agentic AI, introducing Blended Identity and the MCP Identity Gateway to assign cryptographically verified identities and ephemeral credentials to AI agents. The solution extends the Aembit Workload IAM Platform to enforce runtime policies, apply least-privilege access, and maintain centralized audit trails for agent and human actions. Designed for cloud, on‑premises, and SaaS environments, it records every access decision and preserves attribution across autonomous and human-driven workflows.
Fri, October 31, 2025
AI-Powered Bug Hunting Disrupts Bounty Programs and Triage
🔍 AI-powered tools and large language models are speeding up vulnerability discovery, enabling so-called "bionic hackers" to automate reconnaissance, reverse engineering, and large-scale scanning. Platforms such as HackerOne report sharp increases in valid AI-related reports and payouts, but many submissions are low-quality noise that burdens maintainers. Experts recommend treating AI as a research assistant, strengthening triage, and preserving human judgment to filter false positives and duplicates.
Thu, October 30, 2025
OpenAI Updates GPT-5 to Better Handle Emotional Distress
🧭 OpenAI rolled out an October 5 update that enables GPT-5 to better recognize and respond to mental and emotional distress in conversations. The change specifically upgrades GPT-5 Instant—the fast, low-end default—so it can detect signs of acute distress and route sensitive exchanges to reasoning models when needed. OpenAI says it developed the update with mental-health experts to prioritize de-escalation and provide appropriate crisis resources while retaining supportive, grounding language. The update is available broadly and complements new company-context access via connected apps.
Thu, October 30, 2025
Agent Registry for Discovering and Verifying Signed Bots
🔐 This post proposes a lightweight, crowd-curated registry for bots and agents to simplify discovery of public keys used for cryptographic Web Bot Auth signatures. It describes a simple list format of URLs that point to signature-agent cards—extended JWKS entries containing operator metadata and keys—and shows how registries enable origins and CDNs to validate agent signatures at scale. Examples and a demo integration illustrate practical adoption.
Thu, October 30, 2025
Five Generative AI Security Threats and Defensive Steps
🔒 Microsoft summarizes the top generative AI security risks and mitigation strategies in a new e-book, highlighting threats such as prompt injection, data poisoning, jailbreaks, and adaptive evasion. The post underscores cloud vulnerabilities, large-scale data exposure, and unpredictable model behavior that create new attack surfaces. It recommends unified defenses—such as CNAPP approaches—and presents Microsoft Defender for Cloud as an example that combines posture management with runtime detection to protect AI workloads.
Thu, October 30, 2025
Rethinking Identity Security for Autonomous AI Agents
🔐 Autonomous AI agents are creating a new class of non-human identities that traditional, human-centric security models struggle to govern. These agents can persist beyond intended lifecycles, hold excessive permissions, and perform actions across systems without clear ownership, increasing risks like privilege escalation and large-scale data exfiltration. Security teams must adopt identity-first controls—unique managed identities, strict scoping, lifecycle management, and continuous auditing—to regain visibility and enforce least privilege.
Thu, October 30, 2025
Anonymous Credentials for Privacy-preserving Rate Limiting
🔐 Cloudflare presents a privacy-first approach to rate-limiting AI agents using anonymous credentials. The post explains how schemes such as ARC and ACT extend the Privacy Pass model by enabling late origin-binding, multi-show tokens, and stateful counters so origins can enforce limits or revoke abusive actors without identifying users. It outlines the cryptographic building blocks—algebraic MACs and zero-knowledge proofs—compares performance against Blind RSA and VOPRF, and demonstrates an MCP-integrated demo showing issuance and redemption flows for agent tooling.
Thu, October 30, 2025
AI-Designed Bioweapons: The Detection vs Creation Arms Race
🧬 Researchers used open-source AI to design variants of ricin and other toxic proteins, then converted those designs into DNA sequences and submitted them to commercial DNA-order screening tools. From 72 toxins and three AI packages they generated roughly 75,000 designs and found wide variation in how four screening programs flagged potential threats. Three of the packages were patched and improved after the test, but many AI-designed variants—often likely non-functional because of misfolding—exposed gaps in detection. The authors warn this imbalance could produce an arms race where design outpaces reliable screening.
Thu, October 30, 2025
LinkedIn to Use EU, UK and Other Profiles for AI Training
🔒 Microsoft-owned LinkedIn will begin using profile details, public posts and feed activity from users in the UK, EU, Switzerland, Canada and Hong Kong to train generative AI models and to support personalised ads across Microsoft starting 3 November 2025. Private messages are excluded. Users can opt out via Settings & Privacy > Data Privacy and toggle Data for Generative AI Improvement to Off. Organisations should update social media policies and remind staff to review their advertising and data-sharing settings.
Wed, October 29, 2025
AI Literacy Is Critical for Cybersecurity Readiness
🔒 Artificial intelligence is reshaping cybersecurity, creating both enhanced defensive capabilities and new risks that require broad AI literacy. The White House's America’s AI Action Plan and Fortinet’s 2025 Cybersecurity Global Skills Gap Report show strong adoption—97% of organizations use or plan AI in security—yet 48% cite lack of staff expertise as a major barrier. Fortinet recommends targeted training, policies for generative AI use, and its Security Awareness modules to help close the gap and reduce threat exposure.
Wed, October 29, 2025
AI-targeted Cloaking Tricks Agentic Browsers, Warns SPLX
⚠ Researchers report a new form of context-poisoning called AI-targeted cloaking that serves different content to agentic browsers and AI crawlers. SPLX shows attackers can use a trivial user-agent check to deliver alternate pages to crawlers from ChatGPT and Perplexity, turning retrieved content into manipulated ground truth. The technique mirrors search engine cloaking but targets AI overviews and autonomous reasoning, creating a potent misinformation vector. A concurrent hTAG analysis also found many agents execute risky actions with minimal safeguards, amplifying potential harm.
Wed, October 29, 2025
Open-Source b3 Benchmark Boosts LLM Security Testing
🛡️ The UK AI Security Institute (AISI), Check Point and Lakera have launched b3, an open-source benchmark to assess and strengthen the security of backbone LLMs that power AI agents. b3 focuses on the specific LLM calls within agent workflows where malicious inputs can trigger harmful outputs, using 10 representative "threat snapshots" combined with a dataset of 19,433 adversarial attacks from Lakera’s Gandalf initiative. The benchmark surfaces vulnerabilities such as system prompt exfiltration, phishing link insertion, malicious code injection, denial-of-service and unauthorized tool calls, making LLM security more measurable, reproducible and comparable across models and applications.
Wed, October 29, 2025
Cybersecurity Awareness Month 2025: Deepfakes and Trust
🔍 Advances in AI and deepfake technology make it increasingly difficult to tell what’s real online, enabling convincingly fake videos, images and audio that scammers exploit to deceive individuals and organizations. Threat actors use deepfakes of public figures to promote bogus investments, create synthetic nudes to extort victims and deploy fake voices and videos to trick employees into wiring corporate funds. Watch ESET Chief Security Evangelist Tony Anscombe outline practical defenses to recognize and resist deepfakes, and explore other Cybersecurity Awareness Month videos on authentication, patching, ransomware and shadow IT.
Wed, October 29, 2025
BSI Warns of Growing AI Governance Gap in Business
⚠️ The British Standards Institution warns of a widening AI governance gap as many organisations accelerate AI adoption without adequate controls. An AI-assisted review of 100+ annual reports and two polls of 850+ senior leaders found strong investment intent but sparse governance: only 24% have a formal AI program and 47% use formal processes. The report highlights weaknesses in incident management, training-data oversight and inconsistent approaches across markets.
Wed, October 29, 2025
Identity Crisis at the Perimeter: AI-Driven Impersonation
🛡️ Organizations face an identity crisis as generative AI and vast troves of breached personal data enable realistic digital doppelgangers. Attackers now automate hyper-personalized phishing, smishing and vishing, clone voices, and run coordinated multi-channel campaigns that reference real colleagues and recent projects. The article urges a shift to “never trust, always verify,” with radical visibility, rapid detection and phishing-resistant authentication such as FIDO2. It also warns of emerging agentic AI and recommends strict least-privilege controls plus continuous red-teaming.
Wed, October 29, 2025
Top 7 Agentic AI Use Cases Transforming Cybersecurity
🔐 Agentic AI is presented as a practical cybersecurity capability that can operate without direct human supervision, handling high-volume, time-sensitive tasks at machine speed. Industry leaders from Zoom to Dell Technologies and Deloitte highlight seven priority use cases — from autonomous threat detection and SOC augmentation to real-time zero‑trust enforcement — that capitalize on AI's scale and speed. The technology aims to reduce alert fatigue, accelerate mitigation, and free human teams for strategic work.
Tue, October 28, 2025
AI-Driven Malicious SEO and the Fight for Web Trust
🛡️ The article explains how malicious SEO operations use keyword stuffing, purchased backlinks, cloaking and mass-produced content to bury legitimate sites in search results. It warns that generative AI now amplifies this threat by producing tens of thousands of spam articles, spinning up fake social accounts and enabling more sophisticated cloaking. Defenders must deploy AI-based detection, graph-level backlink analysis and network behavioral analytics to spot coordinated abuse. The piece emphasizes proactive, ecosystem-wide monitoring to protect trust and legitimate businesses online.
Tue, October 28, 2025
Google for Startups: AI Cohort Boosts LATAM Cybersecurity
🔐 Google selected 11 startups for its inaugural Google for Startups Accelerator: AI for Cybersecurity in Latin America, a ten-week program that pairs founders with Google's technical resources, mentorship, and product support. The cohort — drawn from Brazil, Chile, Colombia, and Mexico — focuses on AI-driven solutions across threat detection, compliance automation, fraud prevention, and protections for AI agents. Participants will receive hands-on guidance to scale, validate, and deploy tools that reduce cyber risk across the region.
Tue, October 28, 2025
GitHub Agent HQ: Native, Governed AI Agents in Flow
🤖 GitHub announced Agent HQ, a unified platform that makes coding agents native to the GitHub workflow. Over the coming months, partner agents from OpenAI, Anthropic, Google, Cognition, and xAI will become available as part of paid Copilot subscriptions. The release introduces a cross‑surface mission control, VS Code planning and customizable AGENTS.md files, and an enterprise control plane with governance, metrics, and code‑quality tooling to manage agent-driven work.
Tue, October 28, 2025
Enabling a Safe Agentic Web with reCAPTCHA Controls
🔐 Google Cloud outlines a pragmatic framework to secure the emerging agentic web while preserving smooth user experiences. The post details how reCAPTCHA and Google Cloud combine agent and user identity, continuous behavior analysis, and AI-resistant mitigations such as mobile-device attestations. It highlights enabling safe agentic commerce via protocols like AP2 and tighter integration with cloud AI services.