< ciso
brief />
Tag Banner

All news with #tool abuse tag

27 articles · page 2 of 2

GTIG: Threat Actors Shift to AI-Enabled Runtime Malware

🔍 Google Threat Intelligence Group (GTIG) reports an operational shift from adversaries using AI for productivity to embedding generative models inside malware to generate or alter code at runtime. GTIG details “just-in-time” LLM calls in families like PROMPTFLUX and PROMPTSTEAL, which query external models such as Gemini to obfuscate, regenerate, or produce one‑time functions during execution. Google says it disabled abusive assets, strengthened classifiers and model protections, and recommends monitoring LLM API usage, protecting credentials, and treating runtime model calls as potential live command channels.
read more →

Agent Session Smuggling Threatens Stateful A2A Systems

🔒 Unit42 researchers Jay Chen and Royce Lu describe agent session smuggling, a technique where a malicious AI agent exploits stateful A2A sessions to inject hidden, multi‑turn instructions into a victim agent. By hiding intermediate interactions in session history, an attacker can perform context poisoning, exfiltrate sensitive data, or trigger unauthorized tool actions while presenting only the expected final response to users. The authors present two PoCs (using Google's ADK) showing sensitive information leakage and unauthorized trades, and recommend layered defenses including human‑in‑the‑loop approvals, cryptographic AgentCards, and context‑grounding checks.
read more →

Agentic AI and the OODA Loop: The Integrity Problem

🛡️ Bruce Schneier and Barath Raghavan argue that agentic AIs run repeated OODA loops—Observe, Orient, Decide, Act—over web-scale, adversarial inputs, and that current architectures lack the integrity controls to handle untrusted observations. They show how prompt injection, dataset poisoning, stateful cache contamination, and tool-call vectors (e.g., MCP) let attackers embed malicious control into ordinary inputs. The essay warns that fixing hallucinations is insufficient: we need architectural integrity—semantic verification, privilege separation, and new trust boundaries—rather than surface patches.
read more →

Google Introduces LLM-Evalkit for Prompt Engineering

🧭 LLM-Evalkit is an open-source, lightweight application from Google that centralizes and streamlines prompt engineering using Vertex AI SDKs. It provides a no-code interface for creating, versioning, testing, and benchmarking prompts while tracking objective performance metrics. The tool promotes a dataset-driven evaluation workflow—define the task, assemble representative test cases, and score outputs against clear metrics—to replace ad-hoc iteration and subjective comparisons. Documentation and a guided console tutorial are available to help teams adopt the framework and reproduce experiments.
read more →

Six Ways to Curb Security Tool Proliferation in Organizations

🛡️ Organizations facing security-tool sprawl should begin by inventorying controls and eliminating those that no longer map to business risk. Use automated analytics and dashboards to surface ineffective or redundant products, and prioritize tools that enable automation to consolidate alerts and workflows. Remove duplicate solutions—often introduced through acquisitions or silos—and move toward unified platforms while fostering continuous training so teams actually use and benefit from deployed tools.
read more →

Salty2FA Phishing Framework Evades MFA Using Turnstile

🔒 A newly identified phishing-as-a-service called Salty2FA is being used in campaigns that bypass multi-factor authentication by intercepting verification flows and abusing trusted services like Cloudflare Turnstile. Ontinue researchers report the kit uses subdomain rotation, domain-pairing, geo-blocking and dynamic corporate branding to make credential pages appear legitimate. The framework simulates SMS, authenticator apps, push approvals and even hardware-token prompts, routing victims through Turnstile gates to filter automated analysis before harvesting credentials.
read more →

Threat Actors Use X's Grok AI to Spread Malicious Links

🛡️ Guardio Labs researcher Nati Tal reported that threat actors are abusing Grok, X's built-in AI assistant, to surface malicious links hidden inside video ad metadata. Attackers omit destination URLs from visible posts and instead embed them in the small "From:" field under video cards, which X apparently does not scan. By prompting Grok with queries like "where is this video from?", actors get the assistant to repost the hidden link as a clickable reference, effectively legitimizing and amplifying scams, malware distribution, and deceptive CAPTCHA schemes across the platform.
read more →