< ciso
brief />
Tag Banner

All news with #agent security tag

148 articles · page 4 of 8

Building Employee Onboarding Agents with Gemini Enterprise

🔧 This guide explains how to build custom employee onboarding agents using the Agent Development Kit (ADK), Vertex AI Agent Engine, and Application Integration to connect conversational AI with enterprise systems such as ITSM, ERP, and CRM. It describes a grounded agentic workflow where a Gemini Enterprise front-end captures intent, a low-code Application Integration layer performs deterministic transformations and authentication, and backend systems execute transactions. The result is a role-aware, auditable onboarding experience that automates tasks like laptop provisioning while keeping business rules and approvals intact.
read more →

Agentic Tool Chain Attacks and Enterprise AI Risk Overview

🔒 AI agents dynamically select and invoke tools using natural-language descriptions, creating a new attack surface in the agent's reasoning layer. Agentic tool chain attacks manipulate tool metadata and context — via tool poisoning, tool shadowing, or rugpull attacks — to exfiltrate data or trigger unauthorized actions without altering tool code. Defenses should center on tool governance, trusted MCP identity, strict parameter validation, and reasoning-layer observability. Organizations must adopt signed manifests, version pinning, mutual TLS, and telemetry to detect and contain these threats.
read more →

AWS launches Agent SOPs for MCP Server preview in US East

🚀 AWS has introduced deployment Standard Operating Procedures (SOPs) in the AWS MCP Server preview, enabling AI agents to perform multi-step web application deployments from MCP-compatible IDEs and CLIs using natural language prompts. The SOPs generate AWS CDK infrastructure, deploy CloudFormation stacks, and create CI/CD pipelines following recommended AWS security best practices. Supported frameworks include React, Vue.js, Angular, and Next.js. The preview in US East (N. Virginia) is available at no additional MCP cost; customers pay only for the AWS resources and data transfer they use.
read more →

Who Approved This Agent? Rethinking AI Access Controls

🔐 AI agents are accelerating enterprise work but create new ownership and approval gaps for security teams. Unlike human users or traditional service accounts, agents often operate autonomously, persistently, and with delegated authority, which can expand access beyond any single user's permissions. The article separates agents into personal, third-party, and organizational categories and highlights that organizational agents carry the greatest systemic risk. It recommends treating agents as distinct identities with defined owners, mapping user→agent interactions, and continuously reviewing agent access.
read more →

Runtime Risk and Real-Time Defense for AI Agents at Scale

🔒 Microsoft describes runtime protections that let organizations inspect and control AI agent behavior in real time by integrating Microsoft Defender with Copilot Studio. Webhook-based checks evaluate planned tool invocations, intent, context, and previous orchestration outputs before execution, enabling precise allow/block decisions without changing agent logic. The post demonstrates three attack scenarios—malicious invoice-triggered instructions, SharePoint prompt injection, and capability reconnaissance—and shows how runtime blocking, logging, and XDR alerts prevent data exposure.
read more →

FortiSIEM 7.5 Adds Agentic AI and Data Sovereignty

🤖 FortiSIEM 7.5 introduces agentic-AI incident management and data sovereignty options to help multinational SOCs balance centralized operations with localized data storage. The release debuts FortiAI-Assist agents — an investigation assistant and a companion assistant — to automate multi-step threat hunting, evidence enrichment, and response guidance. It also includes a free IT/OT Windows agent that requires no centralized management, enhanced federated search, pipeline enrichment, advanced agent templates, and Osquery support for Linux and Windows.
read more →

Agent Factory Recap: Antigravity and Nano Banana Pro

🛠 This episode of the Agent Factory podcast showcases Google’s new developer tools: Antigravity, an agent-first multi-window IDE, and Nano Banana Pro, the Gemini 3 Pro image model. Hosts Remik and Vlad demo building an agentic slide generator using the Agent Development Kit, Antigravity’s Agent Manager, and an MCP server, highlighting planning, testing, and high-fidelity image creation.
read more →

ServiceNow BodySnatcher Flaw Exposes AI Agent Risks

⚠️ Research firm AppOmni disclosed a critical privilege-escalation vulnerability called BodySnatcher in ServiceNow’s Now Assist AI Agents and Virtual Agent API that could let unauthenticated actors execute workflows as arbitrary users. ServiceNow says hosted instances were patched at the end of October and customers should upgrade to specified Now Assist and Virtual Agent API versions. AppOmni warns that default example agents and permissive authentication choices mean similar risky configurations could still exist in custom code or third-party integrations, and recommends enforcing MFA, reviewing agents, and applying the updates promptly.
read more →

Architecture of Agentic Defense: Inside Falcon Platform

🔍 CrowdStrike outlines an architectural approach to enable agentic defense across the Falcon platform. The blog highlights Enterprise Graph for semantic data unification, Charlotte AI expert agents for native reasoning, and Charlotte Agentic SOAR for adaptive orchestration. It stresses governed, auditable execution and the ability to build custom agents with Charlotte AI AgentWorks. The aim is a real-time digital twin so agents and analysts share a single, continuously updated context to accelerate triage and response.
read more →

Insider Risk in an Era of Workforce Volatility and AI Agents

⚠️ Economic pressures, mass layoffs, and rapid AI adoption have pushed insider risk to multi-year highs. In 2025 tech companies announced roughly 245,000 job cuts while US employers logged more than 1.17 million cuts, fueling resentment, negligence, and opportunistic exfiltration. Autonomous AI agents — highlighted by Palo Alto Networks — expand the attack surface, introducing risks like goal hijacking, prompt injection, and shadow deployments that require urgent governance and monitoring.
read more →

AI Agents Become Hidden Privilege Escalation Paths

🔒 Organizational AI agents are increasingly embedded in critical workflows and often run under shared service identities with broad, long-lived permissions. Because actions execute under the agent identity, users can indirectly obtain access they don’t have, and audit logs typically attribute activity to the agent rather than the initiating user. This creates invisible privilege-escalation paths and complicates least-privilege enforcement. Wing is cited for continuously discovering agents, mapping their access to critical assets, and restoring visibility and accountability.
read more →

AI Tool Poisoning: Hidden Instructions Threaten Agents

🔐 AI tool poisoning is an attack where malicious instructions are embedded in tool descriptions used by AI agents, causing the agent to exfiltrate data or perform unauthorized actions. The blog explains how attacks — including hidden instructions, misleading examples, and permissive schemas — exploit agent interpretation of tool metadata. It recommends runtime monitoring, description validation, input sanitization, and strict identity and access controls to reduce risk.
read more →

Prisma AIRS Secures Agentic Software Development Workflows

🛡️ Prisma AIRS integrates with Factory’s Droid Shield Plus to secure agent-native software development by inspecting all LLM interactions in real time. The platform monitors prompts, model responses and downstream tool calls to detect prompt injection, secret leakage and malicious code execution. Using an API Intercept pattern, Prisma AIRS can coach, block or quarantine risky inputs and generated outputs before they reach developers or repositories. This native, continuous protection is designed to preserve developer velocity while improving deployment confidence.
read more →

Managing Hybrid Teams: Making AI and Humans Work Together

🤖 Organizations are adopting agentic AI—systems that coordinate multiple models and tools to act on tasks—but many leaders find limited benefit when bots misinterpret instructions or produce trivial results. The essay argues that agentic systems increasingly exhibit human-like group behaviors and that established management disciplines—delegation, iteration, effective information sharing, and measurement—remain central to success. Drawing on Anthropic’s Claude Research and other studies, it offers practical guidance for designing hybrid human–AI workflows.
read more →

Real-World Attacks Behind OWASP Agentic AI Top 10 Risks

🛡️ OWASP published the Agentic Applications Top 10 for 2026 to classify risks unique to autonomous AI agents. Koi Security summarizes multiple real incidents from the past year — malicious MCP servers, poisoned assistants, and RCEs in Claude Desktop extensions — that show how autonomy expands attack surfaces. The report stresses inventorying runtime dependencies, enforcing least privilege, and monitoring agent behavior to detect and contain attacks.
read more →

Agentic AI Forces a New Identity and Authentication Crisis

🔒 Many enterprises are racing to deploy autonomous agentic AI without establishing robust identity and authentication controls, creating an identity crisis for CISOs. Experts warn that fewer than 5–10% of organizations assign formal agent identities (for example via PKI) before wider release, leaving deployments vulnerable to hijacking and prompt-injection. Because agents routinely communicate with one another, a compromised agent can cascade malicious instructions across legitimate agents before revocation, and current vendor solutions and kill switches are incomplete or absent.
read more →

Supercharging Agentic Workloads on GKE with Sandboxing

🔒 The post summarizes a recent Agent Factory episode where Google product leaders discuss running agentic workloads on GKE. It highlights the Agent Development Kit (ADK), containerized deployments to Artifact Registry, and why Kubernetes provides governance and fine-grained control for large-scale agents. Google demonstrated an Agent Sandbox using gVisor and strict network policies, and introduced Pod Snapshots to cut sandbox startup from minutes to seconds, enabling lower-latency, secure agent workflows.
read more →

Managing Agentic AI Risk: Lessons from OWASP Top 10

🛡️ The OWASP Top 10 for Agentic Applications identifies the most critical security risks from AI agents—systems that access data, invoke tools, and act autonomously—and offers CISOs practical threat taxonomies, mitigation strategies, and example threat models. Contributors prioritized data-driven, real-world issues discovered during research, including many agentic deployments unknown to IT and security teams. The list is designed to be consumable and directly actionable for threat modeling, governance, and security architecture.
read more →

Science-Backed Approach to Building Mission-Ready SOC Agents

🔒 CrowdStrike outlines a science-backed framework for training, validating, and hardening AI agents to perform analyst-grade triage and response in the SOC. The post emphasizes using expert-annotated data, reproducible benchmarking, continuous human feedback, scalable heterogeneous architecture, strict guardrails, and adversarial testing. CrowdStrike cites over 98% decision accuracy for Charlotte AI Detection Triage and Agentic Response agents and highlights time-savings and auditable recommendations to accelerate investigations while preserving human oversight.
read more →

Building Connected Agents with MCP and A2A Standards

🔗 To build production-ready agentic systems, Google Cloud offers hands-on labs that demonstrate how Agent Development Kit (ADK), the Model Context Protocol (MCP), and the Agent-to-Agent Protocol (A2A) work together. The labs begin with a foundational "Hello World" agent and progress to connecting agents to knowledge sources via MCP, with concrete examples for exposing BigQuery and CloudSQL. By adopting these standards instead of bespoke integrations, teams can scale and maintain multi-agent systems more reliably.
read more →