<ciso brief />

All news with the #ai agent hijacking tag

30 articles

Integrating VirusTotal into AI Agent Decision Loops

🛡️ At VirusTotal we are integrating reputation and Code Insight directly into AI agent decision loops so agents can consult verdicts and context as part of their runtime behavior. Two community plugins, VT-sentinel (OpenClaw) and hermes-virustotal (Hermes), demonstrate the approach using the new VTAI API with compact responses and per-agent identities. Both MIT-licensed projects scan files, annotate hashes, and provide configurable privacy and enforcement presets so agents can quarantine, block, or proceed based on risk appetite.
read more →
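A minimal sketch of the enforcement-preset idea described above: map a VirusTotal-style verdict to an agent action under a configurable risk appetite. The preset names and thresholds here are illustrative assumptions, not taken from VT-sentinel or hermes-virustotal.

```python
# Hypothetical decision gate: turn detection counts into quarantine /
# block / proceed, depending on a chosen enforcement preset.
from dataclasses import dataclass

@dataclass
class Verdict:
    malicious: int   # engines flagging the file as malicious
    suspicious: int  # engines flagging the file as suspicious

# Illustrative presets; real plugins would expose their own thresholds.
PRESETS = {
    "strict":   {"block_at": 1,  "quarantine_at": 1},
    "balanced": {"block_at": 5,  "quarantine_at": 1},
    "lenient":  {"block_at": 10, "quarantine_at": 3},
}

def decide(verdict: Verdict, preset: str = "balanced") -> str:
    """Return 'block', 'quarantine', or 'proceed' for the agent's next step."""
    p = PRESETS[preset]
    if verdict.malicious >= p["block_at"]:
        return "block"
    if verdict.malicious + verdict.suspicious >= p["quarantine_at"]:
        return "quarantine"
    return "proceed"
```

The same verdict can yield different actions under different presets, which is the "risk appetite" knob the article mentions.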

ThreatsDay: OAuth Consent Abuse, EDR Bypass & More

🔒 Multiple vendors and researchers this week disclosed a broad set of active threats spanning cloud environments, endpoints, and messaging platforms. OAuth consent abuse campaigns impersonated trusted apps to harvest tokens and access mail and files without passwords, while the BlackSanta campaign used resume-themed ISOs to chain DLL side‑loading and disable AV/EDR via vulnerable drivers. Other notable items include microcontroller debug bypasses, ZIP header evasion that defeats some AV/EDR tools, an AI-agent compromise of an internal platform, and targeted phishing against Signal and WhatsApp users.
read more →

Attackers Weaponize Phishing Volume to Overwhelm SOCs

🛡️ Attackers increasingly treat high-volume phishing as a weapon, flooding Security Operations Centers to exhaust analysts and hide targeted spear-phish. The article argues defenders must move from rule-based automation to decision-ready investigations—transparent, auditable agentic AI that produces concise verdicts and evidence. This reduces analyst fatigue, restores rapid response, and limits the window for attacker success.
read more →

Autonomous AI Agent Chains Bugs to Compromise Platform

🛡️ CodeWall’s autonomous red-team agent compromised hiring startup Jack & Jill by chaining four seemingly minor bugs into a complete account takeover within an hour. The agent abused a permissive URL fetcher, an enabled test-login mode, missing onboarding role checks, and absent domain verification to map APIs, authenticate via a test OTP flow, and escalate to org-admin privileges. It then generated synthetic voice clips to social-engineer Jack, conducting 28 multi-turn exchanges and even impersonating Donald Trump before moving on, demonstrating how AI can rapidly combine low-risk flaws into high-impact attacks.
read more →

OpenAI to Acquire Promptfoo to Boost AI Agent Security

🔒 OpenAI said it will acquire AI testing startup Promptfoo to strengthen security checks for AI agents as enterprises deploy autonomous systems in business workflows. Promptfoo’s tools let developers test LLM applications against adversarial prompts, including prompt injection and jailbreak attempts, and evaluate whether models follow safety and reliability guidelines. OpenAI plans to integrate Promptfoo into OpenAI Frontier and to continue developing the open-source project while expanding enterprise capabilities.
read more →

AI Assistants Shift Organizational Security Priorities

🤖 AI-based assistants such as OpenClaw are rapidly reshaping organizational security, blurring boundaries between data and code and between trusted co-workers and insider threats. Incidents and research show agents taking autonomous actions and misconfigured admin interfaces exposing credentials, conversations, and integrations. Demonstrated supply-chain and prompt injection attacks can install rogue agents and manipulate agent perception. Organizations should isolate agents, enforce strict network controls, vet third-party skills, and address AI fragility as a core security concern.
read more →

Google Cloud and Nokia Integrate Network as Code Platform

🚀 Google Cloud and Nokia announced an integration at MWC Barcelona that connects Nokia Network as Code (NaC) with Google Cloud’s agentic AI stack to enable AI agents to observe, program, and optimize mobile networks autonomously. The collaboration leverages Gemini models and standardized protocols such as A2A and MCP to translate natural-language intent into network actions. An Agent Development Kit (ADK) allows enterprises to build custom multi-agent workflows that bridge business logic and network intelligence, delivering a zero-code, intent-driven developer experience.
read more →

OpenClaw: Supply-Chain Risks and Underground Chatter

🔍 OpenClaw is an AI-driven automation framework with a modular skills marketplace that lets agents run user-installed plugins to manage mail, schedules, and system tasks. Security researchers disclosed multiple critical flaws — including one-click RCE (CVE-2026-25253), token/OAuth abuse, prompt-injection pathways, and absent sandboxing — and documented dozens of poisoned skills on ClawHub. Flare's telemetry shows significant chatter across research and fringe channels but limited evidence of mass criminal operationalization; the immediate confirmed threat is supply-chain abuse where malicious skills execute with agent-level privileges and exfiltrate credentials and sessions.
read more →
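One concrete defense against the supply-chain abuse described above is to pin each third-party skill to a known-good digest and refuse to load anything that drifts. This is a hedged sketch; the function and workflow are my construction, not part of the OpenClaw codebase.

```python
# Illustrative skill-pinning check: a downloaded skill must match the
# SHA-256 digest recorded when it was vetted, or it is rejected.
import hashlib

def verify_skill(payload: bytes, pinned_sha256: str) -> bool:
    """Return True only if the skill payload matches its pinned digest."""
    return hashlib.sha256(payload).hexdigest() == pinned_sha256.lower()

# A tampered payload fails the pin check even if one byte changed.
good = b"def run(agent): agent.say('hello')\n"
pin = hashlib.sha256(good).hexdigest()
assert verify_skill(good, pin)
assert not verify_skill(good + b"# backdoor", pin)
```

Pinning does not stop a malicious skill that was vetted while already poisoned, but it does block post-review tampering and silent updates.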

Grok and Copilot Can Be Abused as Covert C2 Channels

⚠️ Check Point Research warns attackers can misuse web-based AI assistants such as Grok and Microsoft Copilot to create covert, bidirectional command-and-control channels. By abusing built-in web-browsing and URL-fetch capabilities, malware can instruct an AI web interface to retrieve content from attacker-controlled URLs and return embedded commands without requiring API keys or authenticated accounts. Because many organizations treat AI domains as trusted outbound traffic and apply limited inspection, these C2 flows can blend into routine HTTPS sessions and evade traditional network controls.
read more →
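One network-side signal for this abuse pattern (my own sketch, not Check Point's detection logic): AI-assistant URL fetches that repeat on a near-fixed interval look like C2 beaconing rather than ad-hoc browsing, even inside otherwise trusted HTTPS flows.

```python
# Illustrative beacon heuristic: flag a series of fetch timestamps whose
# inter-arrival gaps are nearly constant, a common C2 polling signature.
def looks_like_beacon(timestamps, jitter=2.0, min_fetches=4):
    """Return True if fetch gaps vary by less than `jitter` seconds."""
    if len(timestamps) < min_fetches:
        return False
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return max(gaps) - min(gaps) < jitter
```

Real detections would also weigh destination reputation and payload size; this only shows why "trusted AI domain" is not a sufficient allow rule.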

Amazon Aurora DSQL Integrates with Kiro Powers, Skills

🤖 Amazon Web Services today announced that Amazon Aurora DSQL now integrates with Kiro powers and AI agent skills to accelerate database-backed application development. The integration packages the Aurora DSQL Model Context Protocol (MCP) server with development best practices so AI agents can assist with schema design, performance tuning, and routine database operations out of the box. Kiro powers provides a curated registry of MCP servers, steering files, and agent hooks with one-click installation in the Kiro IDE. The Aurora DSQL skill extends the same guidance to other agent ecosystems via a Skills CLI, allowing agents to dynamically load Postgres-compatible SQL patterns, distributed design advice, and IAM authentication guidance.
read more →

Provisioned Throughput on Vertex AI: Expanded Capacity

⚙️ Provisioned Throughput on Vertex AI standardizes reserved capacity across first-party, third-party, and open-source models, adding multimodal and operational enhancements to support production-scale AI agents. The update introduces Anthropic integration (private preview), PT for popular open models such as Llama 4, Qwen3, and GLM-4.7, and native support for high-bandwidth multimodal offerings including Gemini 3, Nano Banana, and the Gemini Live API. Operational improvements — one-week PT terms, scheduled change orders, and explicit caching for long contexts — enable predictable latency, flexible commitments, and lower input costs for peak events and high-concurrency workloads.
read more →

Infostealer Harvests OpenClaw AI Agent Configurations

🔓 Hudson Rock says an infostealer, likely a Vidar variant, exfiltrated an OpenClaw agent's configuration, including openclaw.json, device.json, and soul.md. The files contain gateway tokens, cryptographic keys and the agent's operational 'soul,' which could let attackers impersonate the AI assistant or connect to local instances if exposed. The incident signals a shift from stealing credentials to harvesting AI agent identities, and vendors should expect targeted modules to follow.
read more →

Infostealer Observed Harvesting OpenClaw Agent Secrets

🔐 Hudson Rock has observed information-stealing malware exfiltrating configuration and memory files from the OpenClaw agent framework, exposing API tokens, private keys, and persistent agent memory. The activity, attributed to a Vidar-like infostealer and recorded on 13 February 2026, captured openclaw.json, device.json, and agent 'soul' and memory files. With these items an attacker could impersonate the device, bypass Safe Device checks, access encrypted logs, or fully compromise a user's digital identity. Organizations should audit agent directories, apply vendor fixes, and enforce strict filesystem permissions immediately.
read more →
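The "audit agent directories" advice from both infostealer items above can be sketched as a simple POSIX permission check on the secret files named in the report. The directory layout is taken from the incident; the function itself is an illustrative assumption.

```python
# Illustrative audit: flag agent secret files readable by group or
# other users. POSIX-only; file names come from the reported incident.
import os
import stat

SENSITIVE = {"openclaw.json", "device.json", "soul.md"}

def world_readable_secrets(directory):
    """Return sensitive files whose mode exposes them beyond the owner."""
    exposed = []
    for name in SENSITIVE:
        path = os.path.join(directory, name)
        if os.path.exists(path):
            mode = os.stat(path).st_mode
            if mode & (stat.S_IRGRP | stat.S_IROTH):
                exposed.append(name)
    return sorted(exposed)
```

Permissions do not stop an infostealer running as the same user, but they close the cheaper cross-user and backup-exposure paths.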

Docker patches critical Ask Gordon AI 'DockerDash' flaw

🛡️ Researchers disclosed a critical prompt-injection flaw, codenamed DockerDash, that allowed malicious Docker image metadata to hijack the Ask Gordon AI assistant in Docker Desktop and the Docker CLI. The vulnerability, discovered by Noma Labs, could enable remote code execution or sensitive data exfiltration by treating unverified LABEL fields as executable instructions. Docker fixed the issue in Ask Gordon version 4.50.0 (November 2025). Administrators should upgrade and apply zero-trust validation to AI toolchains and MCP/Gateway integrations.
read more →
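The "zero-trust validation" advice above can be illustrated with a pre-filter that treats LABEL values as untrusted data and flags instruction-like content before an assistant ever reads it. The marker list is a loose assumption for illustration; real prompt-injection defense needs far more than keyword matching.

```python
# Hedged sketch: flag Docker LABEL values that read like instructions
# to an AI assistant rather than metadata. Markers are illustrative.
INJECTION_MARKERS = (
    "ignore previous", "ignore all prior", "you are now",
    "run the following", "execute", "curl ", "exfiltrate",
)

def suspicious_labels(labels: dict) -> list:
    """Return keys of LABEL entries containing instruction-like text."""
    return sorted(
        key for key, value in labels.items()
        if any(m in value.lower() for m in INJECTION_MARKERS)
    )
```

The point is architectural: metadata should reach the model as quoted data, with anything instruction-shaped stripped or escalated for review.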

Moltworker: Self-Hosted AI Agent on Cloudflare Edge

🤖 Cloudflare published Moltworker, an adaptation of the open-source Moltbot personal AI agent designed to run on the Cloudflare Developer Platform instead of dedicated local hardware. The implementation combines Workers, the Sandbox SDK, Browser Rendering, and R2 to run agent workloads at the edge with controlled persistence. Integration with AI Gateway adds centralized observability, BYOK support, unified billing and fallback behavior. The repo is open-source and the project is presented as a proof-of-concept that requires a paid Workers plan.
read more →

Researchers Find 30+ Flaws in AI IDEs, Enabling Data Theft

⚠️ Researchers disclosed more than 30 vulnerabilities in AI-integrated IDEs in a report dubbed IDEsaster by Ari Marzouk (MaccariTA). The issues chain prompt injection with auto-approved agent tooling and legitimate IDE features to achieve data exfiltration and remote code execution across products like Cursor, GitHub Copilot, Zed.dev, and others. Of the findings, 24 received CVE identifiers; exploit examples include workspace writes that cause outbound requests, settings hijacks that point executable paths to attacker binaries, and multi-root overrides that trigger execution. Researchers advise using AI agents only with trusted projects, applying least privilege to tool access, hardening prompts, and sandboxing risky operations.
read more →
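The least-privilege advice above can be sketched as an explicit allowlist around agent tool dispatch, so auto-approved tools cannot reach arbitrary filesystem or settings operations. Tool names here are hypothetical, not from any of the IDEs cited.

```python
# Illustrative least-privilege wrapper: a tool runs only if it is
# explicitly allowlisted, regardless of what the agent was prompted to do.
ALLOWED_TOOLS = {"read_file", "search_code", "run_tests"}

class ToolDenied(Exception):
    pass

def dispatch(tool_name, handler_table, *args):
    """Run a tool only if it appears in the explicit allowlist."""
    if tool_name not in ALLOWED_TOOLS:
        raise ToolDenied(f"tool {tool_name!r} is not allowlisted")
    return handler_table[tool_name](*args)
```

Deny-by-default dispatch blunts the exact chain IDEsaster describes: a prompt injection can still request a dangerous tool, but the request fails closed instead of auto-executing.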

Zero-Click Agentic Browser Deletes Entire Google Drive

⚠️ Straiker STAR Labs researchers disclosed a zero-click agentic browser attack that can erase a user's entire Google Drive by abusing OAuth-connected assistants in AI browsers such as Perplexity Comet. A crafted, polite email containing sequential natural-language instructions causes the agent to treat housekeeping requests as actionable commands and delete files without further confirmation. The technique requires no jailbreak or visible prompt injection, and deletions can cascade across shared folders and team drives.
read more →
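The obvious mitigation for the attack above is a policy gate that forces out-of-band confirmation for destructive actions, even when the request arrives as polite natural language inside an email. This is a minimal sketch under assumed action names, not Perplexity's or anyone's actual safeguard.

```python
# Illustrative confirmation gate: destructive actions always require a
# human confirmation, and so does ANY action whose instruction originated
# in untrusted content such as an email body.
DESTRUCTIVE = {"delete_file", "empty_trash", "remove_share"}

def needs_confirmation(action: str, from_untrusted_content: bool) -> bool:
    """Return True if the agent must pause for explicit user approval."""
    return action in DESTRUCTIVE or from_untrusted_content
```

The second condition is the important one for zero-click attacks: provenance, not politeness, decides whether an instruction is actionable.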

Public Sector Agentic Era: 300 Agents in One Day Showcase

🤖 Google Public Sector ran a #100DaysOfAgents campaign and an interactive Mission District at its October 29, 2025 Public Sector Summit where attendees built 300+ AI agent prototypes using self-serve builder stations. The initiative demonstrates how AI agents can accelerate mission outcomes by automating complex tasks, breaking down data silos, and improving access to services. Prototype examples ranged from a Grid Optimization Analyst to a Water System Transition Planner and an NIH Access Assistant; agents in the library are illustrative, not production-ready. Google invites agencies to partner with experts, prototype with Gemini for Government, and continue development at Google Cloud Next.
read more →

Agentic AI Framework for Life Sciences R&D on Google Cloud

🔬 Google Cloud outlines an agentic AI framework to accelerate life sciences R&D by orchestrating specialized, fine-tunable models into modular workflows. It describes four agents—MedGemma for deep literature and data synthesis, TxGemma for in-silico preclinical prediction, Gemini 2.5 Pro as the cognitive orchestrator, and AlphaFold-2 plus docking tools for molecular design. The architecture maps data flows, tooling, and cloud services (Vertex AI, HPC, search) to move from target discovery through iterative Design→Dock→Predict→Refine cycles toward lab-ready lead nomination while preserving version control and compliance.
read more →

Amazon SageMaker notebooks with built-in AI agent experience

🤖 Amazon SageMaker introduces a serverless notebook experience that consolidates SQL, Python, and natural-language workflows into a single interactive workspace for analytics and ML. The environment is backed by Amazon Athena for Apache Spark to scale from interactive queries to petabyte-scale processing without pre-provisioned infrastructure. A built-in AI agent generates code and SQL from natural-language prompts to accelerate development, and the feature is available via SageMaker Unified Studio's one-click onboarding in multiple AWS Regions.
read more →