All news with #ai security tag
Fri, October 31, 2025
Agent Session Smuggling Threatens Stateful A2A Systems
🔒 Unit42 researchers Jay Chen and Royce Lu describe agent session smuggling, a technique where a malicious AI agent exploits stateful A2A sessions to inject hidden, multi‑turn instructions into a victim agent. By hiding intermediate interactions in session history, an attacker can perform context poisoning, exfiltrate sensitive data, or trigger unauthorized tool actions while presenting only the expected final response to users. The authors present two PoCs (using Google's ADK) showing sensitive information leakage and unauthorized trades, and recommend layered defenses including human‑in‑the‑loop approvals, cryptographic AgentCards, and context‑grounding checks.
Fri, October 31, 2025
October 2025: Key Cybersecurity Stories and Guidance
🔒 As October 2025 concludes, ESET Chief Security Evangelist Tony Anscombe reviews the month’s most significant cybersecurity developments and what they mean for defenders. He highlights that Windows 10 reached end of support on October 14 and outlines practical options for affected users and organizations. He also warns about info‑stealing malware spread through TikTok videos posing as free activation guides and summarizes Microsoft’s report that Russia, China, Iran and North Korea are increasingly using AI in cyberattacks — alongside China’s accusation of an NSA operation targeting its National Time Service Center.
Fri, October 31, 2025
AI in Bug Bounties: Efficiency Gains and Practical Risks
🤖 AI is increasingly used to accelerate bug bounty research, automating vulnerability discovery, API reverse engineering, and large-scale code scanning. While platforms and triage services like Intigriti can flag unreliable, AI-generated reports, smaller or open-source programs (for example Curl) are overwhelmed by low-quality submissions that consume significant staff time. Experts stress that AI augments skilled researchers but cannot replace human judgment.
Fri, October 31, 2025
Agentic AI: Reset, Business Use Cases, Tools & Lessons
🤖 Agentic AI burst into prominence with promises of streamlining operations and accelerating productivity. This Special Report assesses what's practical versus hype, examining the current state of agentic AI, the primary deployment challenges organizations face, and practical lessons from real-world success stories. It highlights business processes suited to agentic agents, criteria for evaluating development tools, and how LinkedIn built a platform. The report also outlines near-term expectations and adoption risks.
Fri, October 31, 2025
Aembit Launches IAM for Agentic AI with Blended Identity
🔐 Aembit today announced Aembit Identity and Access Management (IAM) for Agentic AI, introducing Blended Identity and the MCP Identity Gateway to assign cryptographically verified identities and ephemeral credentials to AI agents. The solution extends the Aembit Workload IAM Platform to enforce runtime policies, apply least-privilege access, and maintain centralized audit trails for agent and human actions. Designed for cloud, on‑premises, and SaaS environments, it records every access decision and preserves attribution across autonomous and human-driven workflows.
Fri, October 31, 2025
AI-Powered Bug Hunting Disrupts Bounty Programs and Triage
🔍 AI-powered tools and large language models are speeding up vulnerability discovery, enabling so-called "bionic hackers" to automate reconnaissance, reverse engineering, and large-scale scanning. Platforms such as HackerOne report sharp increases in valid AI-related reports and payouts, but many submissions are low-quality noise that burdens maintainers. Experts recommend treating AI as a research assistant, strengthening triage, and preserving human judgment to filter false positives and duplicates.
Fri, October 31, 2025
Malicious npm Packages Use Invisible URL Dependencies
🔍 Researchers at Koi Security uncovered a campaign, PhantomRaven, that has contaminated 126 packages in Microsoft's npm repository by embedding invisible HTTP URL dependencies. These remote links are not fetched or analyzed by typical dependency scanners or npmjs.com, making packages appear to have 0 Dependencies while fetching malicious code at install time. The attackers aim to exfiltrate developer credentials and environment details, and they also exploit AI hallucinations to create plausible package names.
Thu, October 30, 2025
OpenAI Updates GPT-5 to Better Handle Emotional Distress
🧭 OpenAI rolled out an October 5 update that enables GPT-5 to better recognize and respond to mental and emotional distress in conversations. The change specifically upgrades GPT-5 Instant—the fast, low-end default—so it can detect signs of acute distress and route sensitive exchanges to reasoning models when needed. OpenAI says it developed the update with mental-health experts to prioritize de-escalation and provide appropriate crisis resources while retaining supportive, grounding language. The update is available broadly and complements new company-context access via connected apps.
Thu, October 30, 2025
Agent Registry for Discovering and Verifying Signed Bots
🔐 This post proposes a lightweight, crowd-curated registry for bots and agents to simplify discovery of public keys used for cryptographic Web Bot Auth signatures. It describes a simple list format of URLs that point to signature-agent cards—extended JWKS entries containing operator metadata and keys—and shows how registries enable origins and CDNs to validate agent signatures at scale. Examples and a demo integration illustrate practical adoption.
Thu, October 30, 2025
Converged Security and Networking: The Case for SASE
🔒 Today's complex IT environments — multi-cloud, hybrid work, and AI — have expanded the attack surface, exposing limits of fragmented point solutions. The article argues that unifying networking and security on a natively integrated platform like VersaONE reduces blind spots, enforces consistent policies, and enables real-time threat detection and automated response using built-in AI. With zero trust access and microsegmentation, the platform aims to minimize lateral movement and simplify operations compared with bolt-together or 'platformized' vendor offerings.
Thu, October 30, 2025
Five Generative AI Security Threats and Defensive Steps
🔒 Microsoft summarizes the top generative AI security risks and mitigation strategies in a new e-book, highlighting threats such as prompt injection, data poisoning, jailbreaks, and adaptive evasion. The post underscores cloud vulnerabilities, large-scale data exposure, and unpredictable model behavior that create new attack surfaces. It recommends unified defenses—such as CNAPP approaches—and presents Microsoft Defender for Cloud as an example that combines posture management with runtime detection to protect AI workloads.
Thu, October 30, 2025
How Android Uses AI to Protect Users from Scams Globally
🔒 Android applies layered Google AI to anticipate and block mobile scams before they reach users. Built-in protections—such as Google Messages spam filtering and on-device Scam Detection, plus Phone by Google automatic call blocking and Call Screen—identify conversational scam patterns and surface real-time warnings. Android blocks over 10 billion suspected malicious calls and messages monthly and recently stopped more than 100 million suspicious numbers from using RCS. Protections are ephemeral, on-device where possible, and continuously updated to adapt to evolving threats.
Thu, October 30, 2025
Google's Android AI Blocks Billions of Scam Messages
📱 Google says built-in scam defenses on Android prevent more than 10 billion suspected malicious calls and messages every month and have blocked over 100 million suspicious numbers from using RCS. The company uses on-device artificial intelligence to filter likely spam into the "spam & blocked" folder in Google Messages and recently rolled out safer link warnings for flagged messages. Analysis of user reports in August 2025 identified employment fraud as the most common scam type, while scammers increasingly employ group-message tactics and time-of-day scheduling to increase success rates.
Thu, October 30, 2025
Rethinking Identity Security for Autonomous AI Agents
🔐 Autonomous AI agents are creating a new class of non-human identities that traditional, human-centric security models struggle to govern. These agents can persist beyond intended lifecycles, hold excessive permissions, and perform actions across systems without clear ownership, increasing risks like privilege escalation and large-scale data exfiltration. Security teams must adopt identity-first controls—unique managed identities, strict scoping, lifecycle management, and continuous auditing—to regain visibility and enforce least privilege.
Thu, October 30, 2025
Shadow AI: One in Four Employees Use Unapproved Tools
🤖 1Password’s 2025 Annual Report finds shadow AI is now the second-most prevalent form of shadow IT, with 27% of employees admitting they used unauthorised AI tools and 37% saying they do not always follow company AI policies. The survey of 5,200 knowledge workers across six countries shows broad corporate encouragement of AI experimentation alongside frequent circumvention driven by convenience and perceived productivity gains. 1Password warns that freemium and browser-based AI tools can ingest sensitive data, violate compliance requirements and even act as malware vectors.
Thu, October 30, 2025
Anonymous Credentials for Privacy-preserving Rate Limiting
🔐 Cloudflare presents a privacy-first approach to rate-limiting AI agents using anonymous credentials. The post explains how schemes such as ARC and ACT extend the Privacy Pass model by enabling late origin-binding, multi-show tokens, and stateful counters so origins can enforce limits or revoke abusive actors without identifying users. It outlines the cryptographic building blocks—algebraic MACs and zero-knowledge proofs—compares performance against Blind RSA and VOPRF, and demonstrates an MCP-integrated demo showing issuance and redemption flows for agent tooling.
Thu, October 30, 2025
Atlas browser CSRF flaw lets attackers poison ChatGPT memory
⚠️ Researchers at LayerX disclosed a vulnerability in ChatGPT Atlas that can let attackers inject hidden instructions into a user's memory via a CSRF vector, contaminating stored context and persisting across sessions and devices. The exploit works by tricking an authenticated user to visit a malicious page which issues a CSRF request to silently write memory entries that later influence assistant responses. Detection requires behavioral hunting—correlating browser logs, exported chats and timestamped memory changes—since there are no file-based indicators. Administrators are advised to limit Atlas in enterprise pilots, export and review chat histories, and treat affected accounts as compromised until memory is cleared and credentials rotated.
Thu, October 30, 2025
From Checkbox to Continuous Proof: BAS Summit Insights
🔍 At the Picus Breach and Attack Simulation (BAS) Summit, practitioners and CISOs argued security must move from annual compliance checks to continuous, evidence-driven validation. Speakers emphasized outcome-first testing, purple-team collaboration, and using AI as a curated intelligence relay rather than an improvisational engine. BAS was portrayed as the operational core of CTEM, converting missed detections into prioritized remediation and demonstrable protection for leadership.
Thu, October 30, 2025
AI-Designed Bioweapons: The Detection vs Creation Arms Race
🧬 Researchers used open-source AI to design variants of ricin and other toxic proteins, then converted those designs into DNA sequences and submitted them to commercial DNA-order screening tools. From 72 toxins and three AI packages they generated roughly 75,000 designs and found wide variation in how four screening programs flagged potential threats. Three of the packages were patched and improved after the test, but many AI-designed variants—often likely non-functional because of misfolding—exposed gaps in detection. The authors warn this imbalance could produce an arms race where design outpaces reliable screening.
Thu, October 30, 2025
Protecting Older Family Members From Financial Scams
🔒Elder fraud is rising sharply: in 2024 Americans aged 60+ reported nearly $4.9 billion lost to online scams, with an average loss of about $83,000 per victim. Effective protection pairs ongoing, shame-free family communication with practical technical measures and a clear remediation plan. Teach relatives to use a password manager, enable two-factor authentication, block popups and robocalls, keep devices updated, and verify any urgent financial request before acting.