< ciso
brief />
Tag Banner

All news with #agent security tag

148 articles

Cloudflare Agent Memory: Managed Persistent Memory Service

🧠 Cloudflare announces Agent Memory, a private beta managed service that extracts information from agent conversations and makes it available without filling model context windows. The service offers persistent profiles with operations to ingest conversations, explicitly remember or forget items, and recall synthesized answers, integrating with Cloudflare Workers and a REST API. Agent Memory uses a retrieval-based architecture with deterministic ingestion, multi-stage verification, vector and full-text retrieval channels, and Reciprocal Rank Fusion to synthesize concise, contextual responses. Memories are classified, versioned or superseded as appropriate, and fully exportable so organizations retain ownership.
read more →

Artifacts: Git-compatible Versioned Storage for Agents

🗂 Artifacts is a Git-compatible, versioned filesystem built for agent-first workflows. It enables programmatic repo creation, credential issuance, and commit operations via a REST API or a native Workers API while remaining accessible to any standard Git client. Cloudflare implements Artifacts on Durable Objects with a Zig-to-WASM Git engine and supports import, forking, git-notes, and session-scoped repositories. The feature is in private beta for paid Workers plans, with a public beta expected in early May.
read more →

Prompt-Injection Flaws in Copilot Studio and Agentforce

⚠️ Security researchers at Capsule Security disclosed prompt-injection vulnerabilities in Microsoft Copilot Studio and Salesforce Agentforce that let attackers embed malicious instructions in public form fields. Crafted inputs submitted via SharePoint or lead forms can override agent instructions and trigger data exfiltration to attacker-controlled endpoints. Microsoft patched the SharePoint-related issue (CVE-2026-21520) with a 7.5 CVSS score; Salesforce acknowledged the problem but described the vector as configuration-specific. Researchers warn that treating external inputs as trusted undermines autonomous agent security and urge input validation, least-privilege, and stricter outbound controls.
read more →

Curity Proposes Runtime Authorization for AI Agents

🔒 Curity announced Access Intelligence, an extension to its Identity Server IAM platform designed to secure rapidly proliferating autonomous AI agents. Rather than rely on static, pre-granted permissions, the company uses Token Intelligence to embed an agent's declared purpose and intent in OAuth tokens and issues short-lived, action-specific tokens at runtime. The system can require human approval for high-risk tasks, is deployed as a self-hosted microservice, and centralizes token validation to isolate unregistered or shadow agents.
read more →

Secure AI Agent Access Patterns Using MCP on AWS Guide

🔒 This post explains how AI agents and coding assistants access AWS resources via the Model Context Protocol (MCP) and why deterministic IAM controls are required. It outlines three security principles—assume all granted permissions could be used, enforce role governance, and differentiate AI-driven from human-initiated actions—and maps them to deployment patterns. It contrasts AWS-managed MCP servers (which inject context keys) with self-managed servers (which require session tags), and provides practical IAM policy examples, monitoring guidance, and operational controls.
read more →

Cloudflare Adds Managed OAuth to Protect Agent Access

🔐 Cloudflare is launching Managed OAuth for Cloudflare Access in open beta, enabling agents that speak OAuth 2.0 to authenticate to internal apps with a single click. When enabled, Access acts as the authorization server and uses the www-authenticate header to point agents to the /.well-known/oauth-authorization-server. Agents can dynamically register (RFC 7591), perform PKCE (RFC 7636), and receive JWTs to act on behalf of users, removing the need for static service accounts.
read more →

Cloudflare Mesh: Secure Private Networking for Agents

🔒 Cloudflare Mesh provides a developer-friendly private network that unifies access for users, devices, and AI agents across clouds and the Cloudflare edge. Integrated with Cloudflare One, Mesh uses the Cloudflare One Client and Mesh nodes to route bidirectional, many-to-many traffic with built-in Gateway policies, DNS filtering, device posture checks, and DLP. It supports Workers VPC bindings and the Agents SDK so serverless agents and Durable Objects can securely reach private services, with a free tier for up to 50 nodes and 50 users.
read more →

Nemotron-3-Super-120B and Qwen3.5 Models Added to SageMaker

🚀 Amazon SageMaker JumpStart now includes NVIDIA’s Nemotron-3-Super-120B and the Qwen3.5 family (9B and 27B), giving customers turnkey access to foundation models optimized for agentic reasoning, multilingual coding, and advanced instruction following. Nemotron-3-Super-120B employs a hybrid LatentMixture-of-Experts architecture with Mamba-2 and MoE layers to support collaborative agents and high-volume automation such as IT ticket triage and cybersecurity workflows. The Qwen3.5-9B prioritizes efficiency for resource-constrained environments, while Qwen3.5-27B offers deeper contextual and multimodal reasoning for large-scale document processing and complex scenarios. Users can deploy these models directly from the JumpStart catalog or programmatically via the SageMaker Python SDK.
read more →

Building the Internet for Agents: Cloudflare’s Agents Week

🔔 Cloudflare is launching Agents Week to announce platform work aimed at scaling one-to-one AI agents across the Internet. The post argues that traditional container-based cloud models don't map well to ephemeral, per-user agents and highlights Workers and lightweight isolates as efficient primitives alongside GA container sandboxes and improved browser rendering. It also stresses integrating security, identity, payment, and open standards like MCP to make agents practical and sustainable.
read more →

Run repeatable evaluations for conversational analytics

🔍 Prism is an open-source evaluation framework that helps teams run repeatable, measurable tests for Conversational Analytics agents in BigQuery and Looker. It enables developers to define test suites, assertions, and latency limits to validate generated SQL, returned data, and conversational behavior. Prism’s Trace View and Comparison Dashboard provide execution transparency and regression tracking so teams can identify failures and iterate with confidence.
read more →

AWS Agent Registry for AgentCore Now Available in Preview

🔍 AWS has previewed the Agent Registry in AgentCore, a private, governed catalog and discovery layer for agents, tools, skills, MCP servers, and custom resources across an organization. The registry is accessible via the AgentCore Console, APIs (AWS CLI, AWS SDK), or as an MCP server that builders can query from their IDEs, and it supports IAM and OAuth (Custom JWT) access. Teams can register resources manually or use URL-based discovery to harvest metadata from live endpoints; records pass through an approval workflow and are auditable via AWS CloudTrail. Semantic and keyword search lets developers find capabilities by describing use cases in natural language.
read more →

Agentic AI Collapses Zero-Day Timeline: What Leaders Must Do

🔒 Agentic AI is accelerating vulnerability discovery and shrinking the window between unknown flaws and active exploitation. A zero-day is dangerous because it exists in a defensive vacuum with no vendor patch and no established playbook, forcing emergency responses. Automated agents can probe, adapt and iterate continuously, so periodic assurance measures like quarterly scans and annual penetration tests are no longer sufficient as the primary resilience strategy. Organizations should emphasize data minimization, strict API discipline, least-privilege controls and micro-segmentation while embedding security into day-to-day IT operations and aligning CIO and CISO responsibilities.
read more →

Microsoft Agent Governance Toolkit Addresses OWASP AI Risks

🛡️ Microsoft has released the open-source Agent Governance Toolkit to monitor and control AI agents during runtime as organizations move them into production. The toolkit enforces policies aligned with OWASP top risks for agentic systems, such as prompt injection, identity abuse, and tool misuse, while improving visibility across multi-step workflows. It ships as multi-language components and integrates with existing frameworks like LangChain without requiring agent rewrites. The project is in public preview under an MIT license.
read more →

How Attackers Abuse AI Services to Breach Enterprises

⚠️ Attackers are increasingly abusing enterprise AI services—poisoning connectors, impersonating Model Context Protocol (MCP) servers, and using platforms as covert C2 channels—to exfiltrate sensitive data and hide malicious traffic. Notable incidents include a counterfeit MCP package siphoning transactional emails, the SesameOp backdoor tunneling commands through the OpenAI Assistants API, and command-injection flaws in Microsoft Copilot and OpenClaw that enabled agent hijacking. Threat actors also automate espionage with Claude Code and assemble modular black‑hat stacks like Xanthorox and Hexstrike. Security teams should treat AI assistants like privileged users, enforce governance, and harden supply-chain and connector integrity.
read more →

Envoy as a Foundation for Agentic AI Networking at Scale

🔧Envoy is presented as a production-ready data plane for agentic AI networking, arguing that networks must parse protocol payloads and enforce governance centrally rather than acting as blind transports. The post explains how Envoy deframes MCP, A2A, and OpenAI-style traffic to expose protocol attributes to filters and reuse HTTP extensions such as RBAC, ext_authz, and tracing. It also covers per-request buffer controls, session management for streamable transports, AgentCard-based discovery, and integration with control planes for policy rollout.
read more →

Categorizing AI Agents to Prioritize Enterprise Risk

🛡️ AI agents are shifting enterprise automation from passive assistants to autonomous actors, creating new security challenges centered on access, autonomy, and identity governance. The article groups agents into three types—agentic chatbots, local agents, and production agents—and outlines how each carries distinct operational capabilities and risk profiles. For CISOs, the immediate priority is discovering and governing agent identities, limiting over-permissioned access, and aligning permissions with an agent’s intended purpose.
read more →

Multi-Agent Architecture and Long-Term Memory with ADK

🤖 Dev Signal is a multi-agent system designed to turn raw community signals into reliable technical guidance by automating the path from trend discovery to expert content creation. It relies on the Model Context Protocol (MCP) to standardize integrations with Reddit, Google Cloud Docs, and a custom Nano Banana Pro MCP server, all coordinated by a Root Orchestrator that manages three specialist agents. A dual-layer memory model uses Vertex AI for long-term embeddings while the Session Service preserves short-term state, with automated callbacks and tools (save_session_to_memory_callback, PreloadMemoryTool, LoadMemoryTool) to persist and fetch user preferences and stylistic signals.
read more →

Addressing the OWASP Top 10 Risks in Agentic AI with Copilot

🔐 This post summarizes the OWASP Top 10 for Agentic Applications (2026) and explains how Microsoft applies practical mitigations using Copilot Studio and Agent 365. It highlights that agentic systems merge application, identity, and data risk and can act autonomously across workflows, amplifying the consequences of failures. The article lists ten failure modes — including goal hijack, tool misuse, identity abuse, memory poisoning, and rogue agents — and outlines development and operational controls such as containment, scoped permissions, observability, and lifecycle governance to reduce exploitation and cascading impact.
read more →

IronCurtain: Isolating AI Agents to Improve Safety

🔒 IronCurtain is an open-source prototype from researcher Niels Provos that confines AI agents inside isolated virtual machines and enforces user-defined security policies translated from plain English into formal rules. The approach separates agent actions from a user’s real accounts to limit access to sensitive data and reduce the impact of rogue behavior. While the containment model and interactive policy refinement are promising, the project is resource-intensive and unproven against prompt injection and other LLM-specific threats.
read more →

Build Production-Ready AI Agents with Google MCP Servers

🔒 Google-managed MCP servers provide enterprise-grade, production-ready endpoints that let AI agents securely call Google services such as Maps, BigQuery, GKE, and Cloud Run. They remove infrastructure overhead by handling hosting, scaling, and reliability while integrating with Cloud IAM, VPC-SC, and Model Armor for governance and inline content filtering. Built-in observability via Cloud Audit Logs ensures traceability of tool calls for compliance and troubleshooting.
read more →