< ciso
brief />
Tag Banner

All news with #agent security tag

148 articles · page 6 of 8

Agent Factory Recap: Build AI Apps in Minutes with Google

🤖 This recap of The Agent Factory features Logan Kilpatrick from Google DeepMind demonstrating vibe coding in Google AI Studio, a Build workflow that turns a natural-language app idea into a live prototype in under a minute. Live demos included a virtual food photographer, grounding with Google Maps, the AI Studio Gallery, and a speech-driven "Yap to App" pair programmer. The episode also surveyed agent ecosystem updates—Veo 3.1, Anthropic Skills, and Gemini improvements—and highlighted the shift from models to action-capable systems.
read more →

Build Your First AI Agent Workforce with Google's ADK

🤖 Google’s open-source Agent Development Kit (ADK) simplifies creating autonomous AI agents that use LLMs such as Gemini as their reasoning core. The post presents three hands-on codelabs that guide developers through building a personal assistant agent, adding custom and third-party tools, and orchestrating multi-agent workflows. Each lab demonstrates practical patterns—scaffolding an agent, integrating tools like Google Search and LangChain components, and using Workflow Agents and session state to pass information—so teams can progress from experiment to production-ready agent systems.
read more →

Azure AI Foundry and UiPath: Agentic Automation in Care

🏥 Microsoft and UiPath describe how integrated agents from Azure AI Foundry and UiPath, orchestrated by UiPath Maestro, can operationalize AI within clinical workflows to surface and act on incidental radiology findings. The workflow uses UiPath medical record summarization agents to flag findings, Azure AI Foundry imaging agents to analyze PACS images and prior results, and UiPath agents to aggregate and forward consolidated follow-up reports to ordering clinicians. Microsoft says this agentic approach accelerates decision-making, reduces physician workload, and improves outcomes while maintaining compliance with DICOMweb and FHIR standards.
read more →

Building Collaborative AI with ADK: A Developer’s Guide

🧭 This guide summarizes Multi-Agent System (MAS) fundamentals and explains how Google’s Agent Development Kit (ADK) helps developers assemble cooperating agents to solve complex tasks. It outlines three agent roles — LLM Agents for reasoning, Workflow Agents for orchestration, and Custom Agents for bespoke logic — and describes hierarchical organization and orchestration patterns (sequential, parallel, loop). The post also reviews communication options (shared state, LLM delegation, explicit invocation) and points developers to samples and codelabs for rapid prototyping.
read more →

Agent Session Smuggling Threatens Stateful A2A Systems

🔒 Unit42 researchers Jay Chen and Royce Lu describe agent session smuggling, a technique where a malicious AI agent exploits stateful A2A sessions to inject hidden, multi‑turn instructions into a victim agent. By hiding intermediate interactions in session history, an attacker can perform context poisoning, exfiltrate sensitive data, or trigger unauthorized tool actions while presenting only the expected final response to users. The authors present two PoCs (using Google's ADK) showing sensitive information leakage and unauthorized trades, and recommend layered defenses including human‑in‑the‑loop approvals, cryptographic AgentCards, and context‑grounding checks.
read more →

GitHub Universe 2025: Agents, AI, and Developer Tools

🚀 At GitHub Universe 2025, Microsoft and GitHub presented a vision for agentic development that lets developers see, steer, and build across autonomous agents. The event introduced platform capabilities like Agent HQ, a prompt-first AI Toolkit for VS Code, and the GA release of Azure MCP Server. Announcements focused on enterprise-grade security, standards-based integration, and faster, more intuitive agent creation and governance.
read more →

Rethinking Identity Security for Autonomous AI Agents

🔐 Autonomous AI agents are creating a new class of non-human identities that traditional, human-centric security models struggle to govern. These agents can persist beyond intended lifecycles, hold excessive permissions, and perform actions across systems without clear ownership, increasing risks like privilege escalation and large-scale data exfiltration. Security teams must adopt identity-first controls—unique managed identities, strict scoping, lifecycle management, and continuous auditing—to regain visibility and enforce least privilege.
read more →

Anonymous Credentials for Privacy-preserving Rate Limiting

🔐 Cloudflare presents a privacy-first approach to rate-limiting AI agents using anonymous credentials. The post explains how schemes such as ARC and ACT extend the Privacy Pass model by enabling late origin-binding, multi-show tokens, and stateful counters so origins can enforce limits or revoke abusive actors without identifying users. It outlines the cryptographic building blocks—algebraic MACs and zero-knowledge proofs—compares performance against Blind RSA and VOPRF, and demonstrates an MCP-integrated demo showing issuance and redemption flows for agent tooling.
read more →

Top 7 Agentic AI Use Cases Transforming Cybersecurity

🔐 Agentic AI is presented as a practical cybersecurity capability that can operate without direct human supervision, handling high-volume, time-sensitive tasks at machine speed. Industry leaders from Zoom to Dell Technologies and Deloitte highlight seven priority use cases — from autonomous threat detection and SOC augmentation to real-time zero‑trust enforcement — that capitalize on AI's scale and speed. The technology aims to reduce alert fatigue, accelerate mitigation, and free human teams for strategic work.
read more →

Agent Factory Recap: AI Agents for Data Engineering

🔍 The episode of The Agent Factory reviewed practical AI agents for data engineering and data science, highlighting demos that combine Gemini, BigQuery, Colab Enterprise, and Spanner-based graph queries. It showcased a BigQuery Data Engineering Agent that generates pipelines, time dimensions, and data-quality assertions from SQL, and a Data Science Agent that runs end-to-end anomaly detection in Colab. The post also covered CodeMender for autonomous code security fixes and a creative Spanner+ADK comic demo illustrating multi-region concepts.
read more →

Prisma AIRS 2.0: Unified Platform for Secure AI Agents

🔒 Prisma AIRS 2.0 is a unified AI security platform that delivers end-to-end visibility, risk assessment and automated defenses across agents, models and development pipelines. It consolidates Protect AI capabilities to provide posture and runtime protections for AI agents, model scanning and API-first controls for MLOps. The platform also offers continuous, autonomous red teaming and a managed MCP Server to embed threat detection into workflows.
read more →

Zero Trust Blind Spot: Identity Risk in AI Agents Now

🔒 Agentic AI introduces a mounting Zero Trust challenge as autonomous agents increasingly act with inherited or unmanaged credentials, creating orphaned identities and ungoverned access. Ido Shlomo of Token Security argues that identity must be the root of trust and recommends applying the NIST AI RMF through an identity-driven Zero Trust lens. Organizations should discover and inventory agents, assign unique managed identities and owners, enforce intent-based least privilege, and apply lifecycle controls, monitoring, and governance to restore auditability and accountability.
read more →

Secure AI at Scale and Speed: Free Webinar Framework

🔐 The Hacker News is promoting a free webinar that presents a practical framework to secure AI at scale while preserving speed of adoption. Organizers warn of a growing “quiet crisis”: rapid proliferation of unmanaged AI agents and identities that lack lifecycle controls. The session focuses on embedding security by design, governing AI agents that behave like users, and stopping credential sprawl and privilege abuse from Day One. It is aimed at engineers, architects, and CISOs seeking to move from reactive firefighting to proactive enablement.
read more →

Agent Factory Recap: Securing AI Agents in Production

🛡️ This recap of the Agent Factory episode explains practical strategies for securing production AI agents, demonstrating attacks like prompt injection, invisible Unicode exploits, and vector DB context poisoning. It highlights Model Armor for pre- and post-inference filtering, sandboxed execution, network isolation, observability, and tool safeguards via the Agent Development Kit (ADK). The team demonstrates a secured DevOps assistant that blocks data-exfiltration attempts while preserving intended functionality and provides operational guidance on multi-agent authentication, least-privilege IAM, and compliance-ready logging.
read more →

Design Patterns for Scalable AI Agents on Google Cloud

🤖 This post explains how System Integrator partners can build, scale, and manage enterprise-grade AI agents using Google Cloud technologies like Agent Engine, the Agent Development Kit (ADK), and Gemini Enterprise. It summarizes architecture patterns including runtime, memory, the Model Context Protocol (MCP), and the Agent-to-Agent (A2A) protocol, and contrasts managed Agent Engine with self-hosted options such as Cloud Run or GKE. Customer examples from Deloitte and Quantiphi illustrate supply chain and sales automation benefits. The guidance highlights security, observability, persistent memory, and model tuning for enterprise readiness.
read more →

Agent Factory Recap: Evaluating Agents, Tooling, and MAS

📡 This recap of the Agent Factory podcast episode, hosted by Annie Wang with guest Ivan Nardini, explains how to evaluate autonomous agents using a practical, full-stack approach. It outlines what to measure — final outcomes, chain-of-thought, tool use, and memory — and contrasts measurement techniques: ground truth, LLM-as-a-judge, and human review. The post demonstrates a 5-step debugging loop using the Agent Development Kit (ADK) and describes how to scale evaluation to production with Vertex AI.
read more →

Microsoft Adds Copilot Actions for Agentic Windows Tasks

⚙️ Microsoft is introducing Copilot Actions, a Windows 11 Copilot feature that allows AI agents to operate on local files and applications by clicking, typing, scrolling and using vision and advanced reasoning to complete multi-step tasks. The capability will roll out to Windows Insiders in Copilot Labs, extending earlier web-based actions introduced in May. Agents run in isolated Agent Workspaces tied to standard Windows accounts, are cryptographically signed, and the feature is off by default.
read more →

Microsoft launches ExCyTIn-Bench to benchmark AI security

🛡️ Microsoft released ExCyTIn-Bench, an open-source benchmarking tool to evaluate how well AI systems perform realistic cybersecurity investigations. It simulates a multistage Azure SOC using 57 Microsoft Sentinel log tables and measures multistep reasoning, tool usage, and evidence synthesis. The benchmark offers fine-grained, actionable metrics for CISOs, product owners, and researchers.
read more →

When Agentic AI Joins Teams: Hidden Security Shifts

🤖 Organizations are rapidly adopting agentic AI that does more than suggest actions—it opens tickets, calls APIs, and even remediates incidents autonomously. These agents differ from traditional Non-Human Identities because they reason, chain steps, and adapt across systems, making attribution and oversight harder. The author from Token Security recommends named ownership, on‑behalf tracing, and conservative, time‑limited permissions to curb shadow AI risks.
read more →

Researchers Identify Architectural Flaws in AI Browsers

🔒 A new SquareX Labs report warns that integrating AI assistants into browsers—exemplified by Perplexity’s Comet—introduces architectural security gaps that can enable phishing, prompt injection, malicious downloads and misuse of trusted apps. The researchers flag risks from autonomous agent behavior and limited visibility in SASE and EDR tools. They recommend agentic identity, in-browser DLP, client-side file scanning and extension risk assessments, and urge collaboration among browser vendors, enterprises and security vendors to build protections into these platforms.
read more →