< ciso brief />

All news with the #agent security tag

148 articles · page 3 of 8

Identity-First AI Security: Adding Intent to Access

🔐 Today’s enterprise AI agents are no longer passive assistants but active operators that authenticate to systems using API keys, OAuth tokens, cloud roles, and service accounts. The article advocates treating every agent as a first-class identity, with a unique identifier, lifecycle management, defined roles, clear ownership, and auditability. It warns that identity alone is insufficient because agents are dynamic and can drift from their original missions; instead it promotes intent-based permissioning, activating privileges only when an agent's declared mission and runtime context justify the action. Practical steps include inventorying agents, assigning lifecycle-managed identities, documenting approved missions, and enforcing conditional access based on identity, intent, and context.
read more →
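The intent-based permissioning idea above — privileges activate only when the agent's declared mission is approved for that identity and covers the requested action — can be sketched roughly as follows. All names (`AgentIdentity`, `authorize`, the scope strings) are hypothetical illustrations, not from the article:

```python
from dataclasses import dataclass, field

@dataclass
class AgentIdentity:
    """A first-class identity for an AI agent (fields are illustrative)."""
    agent_id: str
    owner: str                                  # accountable human owner
    approved_missions: set = field(default_factory=set)

def authorize(agent: AgentIdentity, declared_mission: str,
              action: str, mission_actions: dict) -> bool:
    """Grant a privilege only if the declared mission is approved for this
    identity AND the requested action belongs to that mission's scope set."""
    if declared_mission not in agent.approved_missions:
        return False                            # agent has drifted from its charter
    return action in mission_actions.get(declared_mission, set())

# Example: a reporting agent may read invoices but never issue payments.
agent = AgentIdentity("agent-42", owner="finance-team",
                      approved_missions={"monthly-reporting"})
scopes = {"monthly-reporting": {"read:invoices", "read:ledger"}}

assert authorize(agent, "monthly-reporting", "read:invoices", scopes)
assert not authorize(agent, "monthly-reporting", "write:payments", scopes)
```

The key design point is that nothing is granted by default: an unapproved mission or an action outside the mission's scope both deny, which is how the article's "identity plus intent plus context" model differs from static role grants.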

Running OpenClaw Safely: Identity, Isolation, Runtime

🔒 Self-hosted agent runtimes such as OpenClaw shift the execution boundary by ingesting untrusted text, downloading third‑party skills, and acting with the host's credentials. This combination makes the runtime effectively untrusted code execution with persistent tokens and elevated access, unsuitable for standard workstations. Microsoft recommends evaluating OpenClaw only in isolated VMs or dedicated devices, using dedicated non‑privileged credentials, continuous monitoring, and a fast rebuild plan. Prioritize containment, least privilege, and monitoring with solutions like Microsoft Defender XDR.
read more →

Researchers Reveal Six New High-Risk OpenClaw Flaws

🔒 OpenClaw has patched six vulnerabilities disclosed by Endor Labs, including SSRF, missing webhook authentication, and a path traversal issue; the flaws range from moderate to high severity. The set includes CVE-2026-26322 (Gateway SSRF, CVSS 7.6), CVE-2026-26319 (Telnyx webhook auth bypass, CVSS 7.5) and several GitHub Security Advisories such as GHSA-56f2-hvwg-5743. Endor warns that agent frameworks’ multi-layered architectures mean vulnerabilities can span files and components, requiring data-flow analysis and layered validation to mitigate exploitation. SecurityScorecard also flagged many publicly exposed OpenClaw instances, raising enterprise risk.
read more →
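Endor's call for layered validation against gateway SSRF can be illustrated with a minimal outbound-URL check: an explicit scheme allowlist plus rejection of loopback, private, and link-local targets. This is a generic sketch of the mitigation class, not OpenClaw's actual code, and the helper name is hypothetical:

```python
import ipaddress
from urllib.parse import urlparse

ALLOWED_SCHEMES = {"http", "https"}

def is_safe_outbound_url(url: str) -> bool:
    """Layered pre-fetch check for a user-supplied URL: reject unexpected
    schemes, missing hosts, and literal IPs in private/loopback/link-local
    ranges. (Resolved addresses should be re-checked at connect time to
    defeat DNS rebinding; omitted here for brevity.)"""
    parsed = urlparse(url)
    if parsed.scheme not in ALLOWED_SCHEMES or not parsed.hostname:
        return False
    try:
        addr = ipaddress.ip_address(parsed.hostname)
    except ValueError:
        return True   # a hostname, not a literal IP; resolve and re-check later
    return not (addr.is_private or addr.is_loopback or addr.is_link_local)

# Cloud metadata endpoints are the classic SSRF target.
assert not is_safe_outbound_url("http://169.254.169.254/latest/meta-data/")
assert not is_safe_outbound_url("file:///etc/passwd")
assert is_safe_outbound_url("https://example.com/webhook")
```

As the comment notes, this single check is deliberately incomplete — the article's point is precisely that one layer is not enough, so real gateways pair it with resolution-time checks and egress policy.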

Mistral Devstral 2 123B Now Available on Amazon Bedrock

🚀 Amazon Bedrock now offers Mistral AI Devstral 2 123B, an open-weight 123B-parameter LLM optimized for agentic software engineering workflows. The model focuses on code generation, automation, and reliable multi-step reasoning, supporting long-context comprehension for multi-turn coding tasks. Bedrock exposes Devstral 2 via a single, fully managed API so customers do not need to provision infrastructure or host models. It is intended for production coding assistants, automated code review, and complex software development agents and is available in select AWS Regions.
read more →

Securing the Agentic Endpoint: New Protection Needed

🔒 Traditional endpoint defenses miss a growing class of non-binary software — browser extensions, code packages, IDE plugins, local servers, containers and model artifacts — that employees and developers install without centralized oversight. AI agents amplify that blind spot by acting with user credentials, autonomously discovering, invoking and installing components at machine speed. Palo Alto Networks says it intends to acquire Koi to deliver Agentic Endpoint Security, focused on visibility, continuous risk analysis and real-time policy enforcement to remediate risky behaviors.
read more →

Agentic AI Boom: A CISO's Worst-Case Security Risk

🛡️ Late 2025 marked a decisive shift from brittle RAG deployments to autonomous, goal-oriented agents across the enterprise. While architectures like self-RAG and CRAG improved reliability, they also expanded the attack surface to include every document, memory store and integrated tool. New threats — indirect prompt injection, memory poisoning and agentic DoS — can exfiltrate data or drain budgets, forcing defenders to secure the full perception-reason-action loop.
read more →

What CISOs Need to Know About OpenClaw Risks and Mitigations

⚠️ OpenClaw is an open‑source AI‑agent orchestration tool that runs locally, integrates with common chat apps and can use any LLM backend, driving rapid adoption. Researchers have found widespread exposed instances, critical authentication‑bypass flaws, plaintext credentials in the ClawHub marketplace and hundreds of malicious skills enabling credential theft and remote code execution. Experts urge enterprises to ban or tightly restrict use, enforce least privilege, MFA, endpoint segmentation and continuous telemetry if pilots are allowed.
read more →

Infostealer Harvests OpenClaw AI Agent Configurations

🔓 Hudson Rock says an info‑stealer, likely a Vidar variant, exfiltrated an OpenClaw agent's configuration, including openclaw.json, device.json and soul.md. The files contain gateway tokens, cryptographic keys and the agent's operational 'soul,' which could let attackers impersonate the AI assistant or connect to local instances if exposed. The incident signals a shift from stealing credentials to harvesting AI agent identities, and vendors should expect targeted modules to follow.
read more →

Infostealer Observed Harvesting OpenClaw Agent Secrets

🔐 Hudson Rock has observed information-stealing malware exfiltrating configuration and memory files from the OpenClaw agent framework, exposing API tokens, private keys, and persistent agent memory. The activity, attributed to a Vidar-like infostealer and recorded on 13 February 2026, captured openclaw.json, device.json, and agent 'soul' and memory files. With these items an attacker could impersonate the device, bypass Safe Device checks, access encrypted logs, or fully compromise a user's digital identity. Organizations should audit agent directories, apply vendor fixes, and enforce strict filesystem permissions immediately.
read more →
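The closing advice — audit agent directories and enforce strict filesystem permissions on files like openclaw.json and device.json — can be sketched as a small permission audit. The function names are hypothetical; the 0o600 owner-only target is a common convention for secret-bearing files, not a vendor requirement:

```python
import os
import stat
from pathlib import Path

def audit_agent_dir(agent_dir: str) -> list:
    """Flag files in an agent directory that are readable or writable by
    group or other users; secret-bearing files should be owner-only."""
    findings = []
    for path in Path(agent_dir).rglob("*"):
        if not path.is_file():
            continue
        mode = stat.S_IMODE(path.stat().st_mode)
        if mode & 0o077:              # any group/other permission bit set
            findings.append((str(path), oct(mode)))
    return findings

def lock_down(path: str) -> None:
    """Enforce owner-only read/write (0o600) on a sensitive file."""
    os.chmod(path, 0o600)
```

Tightening permissions does not stop an infostealer running as the same user, but it does block casual reads by other local accounts and makes world-readable token files show up in routine audits.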

OpenClaw (Moltbot): Critical Enterprise AI Agent Risks

⚠️ OpenClaw (formerly Clawdbot/Moltbot) is an open-source local AI assistant that integrates with chat apps and can access calendars, email, browsers and the filesystem. Since its November 2025 debut and January 2026 viral spike, multiple critical vulnerabilities — notably CVE-2026-25253 — enabled token theft and arbitrary command execution. The project stores secrets in plaintext, exposes dangerous defaults, and hosts a marketplace where malicious skills have proliferated. Organizations face regulatory, operational, and insider-threat risks if employees run this software on personal or corporate devices.
read more →

Copilot Studio Agent Security: Top 10 Detectable Risks

🔒 The Microsoft Defender Security Research Team describes the top 10 misconfigurations that make Copilot Studio agents risky across enterprises. The post explains how small choices — broad sharing, weak authentication, raw HTTP calls, hard-coded secrets, orphaned agents, and unconstrained orchestration — create exploitable paths. It includes Advanced Hunting Community Queries to detect these issues and a short mitigation checklist to reduce exposure. The guidance stresses treating agents as production assets with lifecycle governance and least-privilege controls.
read more →
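One of the misconfigurations named above — hard-coded secrets — lends itself to a simple pre-deployment scan. Microsoft's actual detections are KQL Advanced Hunting queries; the sketch below is an illustrative regex-based equivalent in Python, and the patterns are deliberately crude examples, not a production rule set:

```python
import re

# Illustrative patterns only; real scanners use far richer rule sets.
SECRET_PATTERNS = [
    re.compile(r"(?i)(api[_-]?key|secret|password|token)\s*[:=]\s*['\"][^'\"]{8,}['\"]"),
    re.compile(r"\bAKIA[0-9A-Z]{16}\b"),        # AWS access key ID shape
]

def scan_for_secrets(text: str) -> list:
    """Return (line_number, matched_text) pairs for likely hard-coded secrets."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for pattern in SECRET_PATTERNS:
            m = pattern.search(line)
            if m:
                hits.append((lineno, m.group(0)))
    return hits

config = 'api_key = "sk-live-0123456789abcdef"\nname = "demo-agent"\n'
assert scan_for_secrets(config) == [(1, 'api_key = "sk-live-0123456789abcdef"')]
```

The same lifecycle-governance logic applies to the other items in the top 10: each misconfiguration is cheap to detect mechanically once agents are treated as inventoried production assets rather than ad-hoc experiments.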

AI Skills Exposed: New Attack Surface for Enterprises

⚠️ TrendAI warns that so-called AI skills—executable artifacts that combine human-readable instructions, decision logic and operational constraints—are dangerously exposed to theft, sabotage and disruption. These skills power automation in tools such as Anthropic’s Agent Skills, OpenAI’s GPT Actions and Microsoft’s Copilot Plugin, and can surface proprietary data and business logic. If attackers obtain skill logic or operational data they could disrupt public services, manipulate manufacturing or steal sensitive records. TrendAI recommends integrity monitoring, strict access controls, separation of data and logic, least-privilege execution, adversary testing and continuous logging and auditing.
read more →
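TrendAI's first recommendation, integrity monitoring, typically means pinning a trusted hash for each skill artifact and alerting on drift. A minimal sketch, with hypothetical function names and assuming skills live as files on disk:

```python
import hashlib
from pathlib import Path

def fingerprint(path: str) -> str:
    """SHA-256 digest of a skill artifact on disk."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def build_baseline(skill_paths: list) -> dict:
    """Record a trusted hash for every installed skill at approval time."""
    return {p: fingerprint(p) for p in skill_paths}

def detect_tampering(baseline: dict) -> list:
    """Return skills whose on-disk content no longer matches the baseline."""
    return [p for p, digest in baseline.items() if fingerprint(p) != digest]
```

Because a skill mixes instructions and decision logic in one artifact, any byte-level change is suspect; pairing this check with the article's other controls (least-privilege execution, separation of data and logic) limits what a tampered skill could do before detection.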

OpenClaw Risks and Enterprise Exposure: What CISOs Must Know

⚠️ OpenClaw is a rapidly adopted local agent orchestration tool (formerly Clawdbot/Moltbot) that integrates with chat apps, operating systems, smart-home devices, browsers and productivity platforms and can be configured to use any LLM backend. Its GitHub repo and the Moltbook social layer saw millions of visits and hundreds of thousands of agents and downloads in recent weeks. Security researchers warn the tool is insecure-by-default: exposed instances, authentication bypasses, plaintext credentials and malicious third-party skills create serious enterprise risk. Organizations are advised to block OpenClaw-related traffic, rotate credentials and restrict experimentation to isolated, managed environments.
read more →

Observability, Governance, and Security for AI Agents

🔍 Microsoft’s Cyber Pulse highlights that more than 80% of Fortune 500 organizations use active AI agents and warns that rapid agent adoption is outpacing visibility, governance, and security. The report urges applying Zero Trust principles—least privilege, explicit verification, and assume compromise—to non-human users operating at scale. It recommends a centralized registry, identity-driven access controls, real-time telemetry and visualization, cross-platform interoperability, and integrated security tooling to detect and contain misaligned or compromised agents.
read more →
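The report's centralized registry with identity-driven access control can be sketched as a deny-by-default lookup: unknown agents are refused, known agents get only their granted scopes, and ownerless agents surface as deactivation candidates. Class and field names here are illustrative, not from Microsoft's tooling:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class RegisteredAgent:
    """One entry in a centralized agent registry (fields illustrative)."""
    agent_id: str
    owner: str                    # accountable human or team
    scopes: set                   # least-privilege grants
    last_seen: datetime

class AgentRegistry:
    def __init__(self):
        self._agents = {}

    def register(self, agent: RegisteredAgent) -> None:
        self._agents[agent.agent_id] = agent

    def verify(self, agent_id: str, requested_scope: str) -> bool:
        """Zero-trust check: unknown agents are denied by default, and
        known agents receive only their explicitly granted scopes."""
        agent = self._agents.get(agent_id)
        return agent is not None and requested_scope in agent.scopes

    def orphaned(self, current_owners: set) -> list:
        """Agents whose owner is gone — candidates for deactivation."""
        return [a.agent_id for a in self._agents.values()
                if a.owner not in current_owners]

registry = AgentRegistry()
registry.register(RegisteredAgent("a1", "alice", {"read:tickets"},
                                  datetime.now(timezone.utc)))
assert registry.verify("a1", "read:tickets")
assert not registry.verify("unknown-agent", "read:tickets")
```

Feeding real-time telemetry into `last_seen` and the scope checks is what turns this static inventory into the "detect and contain" capability the report describes.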

OpenClaw AI Agent Exposed: Critical Vulnerabilities Revealed

🔒 OpenClaw (formerly Clawdbot/Moltbot) surged in popularity in January 2026 but contains numerous critical vulnerabilities that place local secrets and system integrity at risk. Researchers found many publicly accessible instances running without authentication, allowing theft of API keys, chat histories, and remote code execution. The agent’s default trust of localhost, an unmoderated skills catalog, and prompt-injection weaknesses enable credential theft and malicious plugin execution. The article recommends isolating deployments, using burner accounts and allowlists, and restricting OpenClaw to dedicated experimental hosts.
read more →

Clawdbot and DKnife: Security Risks from Rapid AI Adoption

🚨 As AI agent frameworks surge, Talos warns of two immediate threats. Clawdbot, a popular open-source agentic tool (aka Moltbot/OpenClaw), requires users to store credentials and API keys locally and can accept unvetted Skills that are granted broad system privileges. DKnife, active since at least 2019, is a modular Linux attack framework that compromises routers and edge devices to intercept traffic, hijack updates, and deliver malware while evading many endpoint defenses. The newsletter urges skepticism toward rushed AI tools and recommends hardening gateways, auditing firmware, enforcing strong authentication, and monitoring for suspicious update behaviors.
read more →

Study: Over 1.5M AI Agents Ungoverned, Risk Going Rogue

⚠️ Gravitee reports that roughly half of an estimated three million AI agents running in US and UK enterprises are unmonitored and potentially "going rogue." A December 2025 Opinion Matters survey of 750 IT executives found a mean of 36.9 agents per large organization and that 88% suspected an agent-related security or privacy incident in the prior year. Experts warn deployment is outpacing governance and call for continuous runtime oversight, tiered access controls, and stricter credential management.
read more →

Choosing Between Antigravity and Gemini CLI for Agents

🧭 Antigravity and Gemini CLI offer two complementary approaches for running agent-driven workflows. Antigravity delivers an approachable, graphical experience with an Agent Manager, in-browser application views, guided walkthroughs, and a native debugger for inspecting stack traces. Gemini CLI is terminal-first, installs via npm (npm install -g @google/gemini-cli, requires Node.js), supports headless/CI-friendly execution, and can call local tools like gh or gcloud. Both are extensible with MCP and Agent Skills, and both provide generous free tiers so teams can evaluate which workflow best fits their needs.
read more →

AI Agent Identity Management: New Control Plane for CISOs

🔐 AI agents—custom GPTs, copilots, coding agents and other autonomous tooling—are proliferating in production while remaining largely outside traditional IAM, PAM, and IGA controls. The piece argues for treating agents as a distinct identity class and applying continuous identity lifecycle management to ensure visibility, ownership, dynamic least privilege, and auditability. Rather than slowing adoption, this approach positions identity as the control plane for balancing innovation and security.
read more →

Public Sector Embraces AI Agents: ROI, Security, and Scale

🤖 An inaugural survey of 251 senior public sector leaders, commissioned by Google Cloud and conducted by National Research Group, finds agentic AI is already mission‑critical: 55% report using AI agents and 42% have deployed more than 10 in production. Respondents expect to allocate 50%+ of future AI budgets to agents. The report highlights productivity gains (70% improved; 46% at least doubled) and security improvements (79% better threat identification, 70% improved intelligence/response integration), and points to Gemini for Government with FedRAMP High-authorized protections as a clear path to scale.
read more →