< ciso brief />

All news with the #agent security tag


Securing Agentic AI in Financial Services: Observability

🔒 This post explains how financial institutions should augment traditional security frameworks with AI-specific controls when deploying agentic AI. It emphasizes two foundational capabilities—comprehensive observability of agent workflows and fine-grained tool access controls—to preserve explainability and accountability. The author presents seven design principles and actionable implementation guidance, referencing SR 11-7 and practical AWS tooling such as Amazon Bedrock AgentCore and monitoring integrations.

AI Agents Invalidate the Traditional Cyber Kill Chain

⚠️ AI agents embedded across SaaS environments can render the traditional kill chain ineffective when they are compromised. The piece cites a September 2025 Anthropic disclosure where a state-backed actor used an AI coding agent to perform autonomous espionage, handling the majority of tactical operations. Because agents already hold broad permissions and move data as part of normal workflows, a breach looks like legitimate activity. Reco is positioned as a solution to discover agents, map blast radius, enforce least privilege, and detect anomalous agent behavior in real time.

Governing AI Agent Behavior Across Intent Layers Guide

🧭 This article presents a practical framework for governing AI agents by aligning user, developer, role-based, and organizational intent. It prescribes a precedence model—organization, role, developer, then user—to resolve conflicts and preserve security and compliance. The authors illustrate expected agent behaviors (refuse, escalate, clarify, or proceed) and advocate for guardrails, least-privilege access, continuous evaluation, telemetry, and human-in-the-loop controls to sustain safe, reliable agent operations.

Gartner Market Guide Marks Emergence of Guardian Agents

🔒 Gartner's inaugural Market Guide for Guardian Agents defines a new enterprise control layer that supervises AI agents to keep their actions aligned with organizational goals and boundaries. The article stresses risks from unmanaged non-human identities—so-called identity dark matter—and lists mandatory capabilities across visibility, continuous assurance, and runtime enforcement. It urges enterprises to adopt an enterprise-owned guardian layer rather than relying solely on platform-native controls.

Autonomous AI Adoption Is Rising — Benefits and Risks

🤖 Early this year, enterprises began experimenting with autonomous, agentic tools such as Anthropic’s Claude Cowork and the open-source OpenClaw, which can access apps, files and the web to execute multi-step workflows on users’ behalf. Proponents highlight large efficiency gains and the ability to offload routine IT tasks to non-technical staff, while security researchers warn of misalignment, prompt‑injection flaws and unintended destructive actions. IT leaders are advised to permit controlled experimentation, enforce strict permissions and monitoring, and invest in clean operational context to reduce amplified mistakes and limit shadow‑AI risk.

GKE and OSS Innovation Highlights at KubeCon EU 2026 Updates

🚀 Google Cloud previews GKE and open-source innovations at KubeCon Europe 2026, focusing on making Kubernetes the best platform for AI and agentic workloads. Autopilot compute classes can now be enabled per workload on Standard clusters, and GKE Cluster Autoscaler will be open-sourced to advance vendor-neutral provisioning. GKE is certified for the CNCF Kubernetes AI Conformance program, and projects like llm-d, DRA drivers for TPUs, and DRANET aim to standardize inference and resource management. Features such as the Model Context Protocol, Kubernetes Agent Sandbox, and GKE Pod Snapshots target secure, fast startup and manageability for agents.

Palo Alto Updates Prisma AIRS and Browser for AI Agents

🔒 Palo Alto Networks updated Prisma AIRS and its Prisma Browser to discover and map AI agents, models and connections across cloud, SaaS and endpoints, scan agent artifacts for vulnerabilities, and simulate agent-targeted attacks. Prisma AIRS 3.0 — contingent on the planned acquisition of Koi Security — will add an AI Agent Gateway to enforce agent runtime and identity security. Prisma Browser now detects user-generated AI activity, enforces content-aware boundaries, prevents sensitive data from leaking to unmanaged LLMs, and blocks prompt-injection attacks. Separately, following its CyberArk deal, Palo Alto introduced Next Generation Trust Security (NGTS) to automate certificate discovery and lifecycle management.

Prisma SASE: Enabling Secure Agentic AI Workspaces

🔒 Palo Alto Networks announces the next evolution of Prisma SASE, engineered to secure the emerging era of agentic AI by treating autonomous agents as first-class identities. The platform reimagines Prisma Browser as a secure AI workspace, extending AI-powered data protection across endpoints, network, SaaS and GenAI apps while detecting prompt injection and agent hijacking. It also adds autonomous operations and resilient deployment options, including SASE Private Location and hyperscaler integration, to ensure always-on performance for machine-speed workflows.

Palo Alto Networks Unveils Prisma AIRS 3.0 Platform

🔒 Palo Alto Networks today introduced Prisma AIRS 3.0, a unified security platform designed to secure the emerging AI enterprise and agentic systems across cloud, SaaS, endpoints and browsers. The release emphasizes three pillars—Discover, Assess, Protect—expanding visibility from AI applications to live maps of enterprise agents and surfacing shadow AI. New capabilities include Agent Artifact Scanning, multiagent red teaming, an AI Agent Gateway for centralized policy enforcement, and agent identity controls to govern delegated access. Palo Alto positions the platform as a single control plane to replace point solutions and manage agent-specific runtime threats.

Reco Adds AI Agent Security to Tackle Agent Sprawl

🔒 Reco has introduced Reco AI Agent Security, a capability designed to give enterprises visibility and control over autonomous AI agents operating across SaaS environments. The tool detects agent activity beyond traditional OAuth discovery by analyzing API call patterns, service-account correlations, and automation workflow signatures in platforms like Microsoft Copilot, ChatGPT, Zapier and n8n. It consolidates agent discovery, risk analysis, and governance into Reco's existing SaaS security platform.

Nvidia unveils NemoClaw to secure OpenClaw agents today

🔐 At the Nvidia GTC conference, CEO Jensen Huang introduced NemoClaw, a secure runtime for running OpenClaw-style agents built on the Nvidia Agent Toolkit and the broader NeMo ecosystem. Central to the offering is the open-source OpenShell runtime, which provides kernel-level sandboxing and a “privacy router” to monitor and block unsafe communications. Nvidia says NemoClaw is hardware-agnostic though optimized for its own microservices, and aims to make edge agent deployment viable for enterprises, even as researchers probe the runtime for CVE-level flaws.

Top 5 Actions CISOs Must Take to Secure AI Agents Now

🔐 Treat AI agents as first-class identities and enforce identity-based access across systems and APIs. The author argues CISOs must move beyond prompt guardrails to explicit authentication, scoped permissions, continuous logging, and monitoring of tokens, service accounts, OAuth grants, and keys. Organizations should discover shadow AI, map agent access, and enforce intent-aware controls. Full lifecycle governance — ownership, rotation, reviews, and decommissioning — is required to prevent privilege creep and data loss while enabling safe autonomy.

Runtime: Securing AI Agents Inside Enterprise Systems

🔒 Enterprises are confronting a shift: autonomous AI agents now operate inside corporate environments with real permissions and real consequences. Security must move beyond build-time controls to continuous runtime monitoring that observes agent behavior, preserves tamper-proof logs, and applies agent-aware policies. Practical first steps include inventorying agents, extending EDR-style behavioral baselining, and designing incident-response playbooks that stop misbehaving agents without destroying evidence.

Agentic Exposure Validation: Unifying Security Testing

🛡️ Security validation must evolve from disconnected tests to continuous, context-aware assessment powered by agentic AI. The piece argues that defenders need to converge three perspectives — adversarial, defensive, and risk — into a unified discipline supported by a Security Data Fabric that unites Asset Intelligence, Exposure Intelligence, and Security Control Effectiveness. With real-time context, autonomous agents can plan, execute, and prioritize validation workflows, turning fragmented tool outputs into actionable evidence and faster remediation. The article highlights Picus Security and industry recognition as indicators that the market is moving toward CTEM-native, agentic validation.

OpenClaw AI Agent Flaws Could Enable Endpoint Takeover

🔒 China's CNCERT warned that OpenClaw, an open-source, self-hosted autonomous AI agent, ships with weak default security and broad system privileges that attackers can abuse to seize endpoints and exfiltrate data. The advisory highlights indirect prompt injection (IDPI/XPIA) risks where benign features like web-page summarization and messaging link previews are weaponized to embed malicious instructions or automatically leak secrets. Researchers at PromptArmor demonstrated a technique in which an agent constructs attacker-controlled URLs that, when rendered as link previews, transmit confidential data without user clicks. CNCERT also flagged risks from malicious skills, accidental destructive commands, and disclosed vulnerabilities, urging isolation, tightened network controls, credential protection, and cautious skill sourcing.

Agentic AI Security: Assessing Risks and Defenses Now

🛡️ Organizations are adopting agentic AI—autonomous, task-driven systems powered by LLMs—to streamline processes and boost throughput. These agents can plan, act, and iterate, but their non-deterministic behavior creates gaps in traceability, auditability, and access control. Apply strong role-based access, threat modeling, and oversight (human or independent evaluators) to limit exposure and ensure safe deployment.

Preventing AI Agent Data Leaks: Webinar Guide for Security

🔒 AI agents are transforming workflows but can act as an unmonitored access layer — an 'invisible employee' with broad privileges. In an upcoming webinar, Rahul Parwani, Head of Product for AI Security at Airia, will explain how attackers are manipulating agents to exfiltrate sensitive information and how to stop them. Attendees will learn about the 'dark matter' of identity, common manipulation techniques, and a practical safety blueprint to limit privileges, detect misuse, and prevent leaks. Reserve your spot to learn actionable defenses.

Unpacking Agentic AI: The Shift Podcast Launch and Insights

🎧 Microsoft introduces The Shift, an evolution of its earlier podcast to explore agentic AI across engineering, product, and strategy perspectives. Over eight weekly episodes this spring, hosts and guests from teams including Microsoft Azure, Microsoft Fabric, and Microsoft Foundry tackle practical questions about how agents interact with data, databases, and cloud foundations. Episodes emphasize that agents succeed only when data strategy, cloud reliability, and application orchestration work together, highlighting operational concerns like observability, governance, security, and optimization.

Amazon Lightsail Adds OpenClaw Self-Hosted AI Assistant

🤖 Amazon Lightsail now lets you deploy OpenClaw, a private self-hosted AI assistant, on your own cloud infrastructure with simple, secure defaults. Each Lightsail OpenClaw instance includes built-in security controls—sandboxed agent sessions, one-click HTTPS for TLS, device-pairing authentication, and automatic snapshots—reducing manual configuration and operational risk. Amazon Bedrock is the default model provider, and users can swap models or connect to Slack, Telegram, WhatsApp, and Discord as needed.

Developer's Guide to Building Production-Ready AI Agents

🧭 This practical guide from Google walks developers through how to move AI agents from prototype to production, highlighting architecture, operational patterns, and safety considerations. It explains an agent as an LLM-driven autonomous system surrounded by an orchestration layer that manages session state, long-term memory, retrieval (RAG), tool use, and security. The post emphasizes emerging interoperability standards such as MCP and A2A, and underscores the importance of context engineering, trajectory-based testing, and staged rollouts. The authors provide targeted guides and code samples to help teams adopt these practices and validate agents before broad deployment.