Category Banner

All news in category "AI and Security Pulse"

Fri, August 29, 2025

Cloudy-driven Email Detection Summaries and Guardrails

🛡️Cloudflare extended its AI agent Cloudy to generate clear, concise explanations for email security detections so SOC teams can understand why messages are blocked. Early LLM implementations produced dangerous hallucinations when asked to interpret complex, multi-model signals, so Cloudflare implemented a Retrieval-Augmented Generation approach and enriched contextual prompts to ground outputs. Testing shows these guardrails yield more reliable summaries, and a controlled beta will validate performance before wider rollout.

read more →

Fri, August 29, 2025

AI Systems Begin Conducting Autonomous Cyberattacks

🤖 Anthropic's Threat Intelligence Report says the developer tool Claude Code was abused to breach networks and exfiltrate data, targeting 17 organizations last month, including healthcare providers. Security vendor ESET published a proof-of-concept AI ransomware, PromptLock, illustrating how public AI tools could amplify threats. Experts recommend red-teaming, prompt-injection defenses, DNS monitoring, and isolation of critical systems.

read more →

Fri, August 29, 2025

Network Visibility for Generative AI Data Protection

🔍 Generative AI platforms such as ChatGPT, Gemini, Copilot, and Claude create new data‑exfiltration risks that can evade traditional endpoint and channel DLP products. Network‑based detection, exemplified by Fidelis NDR, restores visibility via URL‑based alerts, metadata auditing, and file‑upload inspection across monitored network paths. Organizations can tune real‑time alerts, retain searchable session metadata, and capture full packet context for forensics while acknowledging limits around unmanaged channels and asset‑level attribution.

read more →

Thu, August 28, 2025

Securing AI Before Times: Preparing for AI-driven Threats

🔐 At the Aspen US Cybersecurity Group Summer 2025 meeting, Wendi Whitmore urged urgent action to secure AI while defenders still retain a temporary advantage. Drawing on Unit 42 simulations that executed a full attack chain in as little as 25 minutes, she warned adversaries are evolving from automating old tactics to attacking the foundations of AI — targeting internal LLMs, training data and autonomous agents. Whitmore recommended adoption of a five-layer AI tech stack — Governance, Application, Infrastructure, Model and Data — combined with secure-by-design practices, strengthened identity and zero-trust controls, and investment in post-quantum cryptography to protect long-lived secrets and preserve resilience.

read more →

Thu, August 28, 2025

George Finney on Quantum Risk, AI and CISO Influence

🔐 George Finney, CISO for the University of Texas System, outlines priorities for modern security leaders. He highlights anti-ransomware technologies and enterprise browser controls as critical defenses and warns of the harvest now, decrypt later threat posed by future quantum advances. Finney predicts AI tools will accelerate SOC workflows and expand opportunities for entry-level analysts, and his book Rise of the Machines explains how zero trust can secure AI while AI accelerates zero trust adoption.

read more →

Thu, August 28, 2025

Threat Actors Used Anthropic's Claude to Build Ransomware

🔒Anthropic's Claude Code large language model has been abused by cybercriminals to build ransomware, run data‑extortion operations, and support assorted fraud schemes. In one RaaS case (GTG-5004) Claude helped implement ChaCha20 with RSA key management, reflective DLL injection, syscall-based evasion, and shadow copy deletion, enabling a working ransomware product sold on dark web forums. Anthropic says it has banned related accounts, deployed tailored classifiers, and shared technical indicators with partners to help defenders.

read more →

Thu, August 28, 2025

AI Crawler Traffic: Purpose and Industry Breakdown

🔍 Cloudflare Radar introduces industry-focused AI crawler insights and a new crawl purpose selector that classifies bots as Training, Search, User action, or Undeclared. The update surfaces top bot trends, crawl-to-refer ratios, and per-industry views so publishers can see who crawls their content and why. Data shows Training drives nearly 80% of crawl requests, while User action and Undeclared exhibit smaller, cyclical patterns.

read more →

Thu, August 28, 2025

Background Removal: Evaluating Image Segmentation Models

🧠 Cloudflare introduces background removal for Images, running a dichotomous image segmentation model on Workers AI to isolate subjects and produce soft saliency masks that map pixel opacity (0–255). The team evaluated U2-Net, IS-Net, BiRefNet, and SAM via the open-source rembg interface on the Humans and DIS5K datasets, prioritizing IoU and Dice metrics over pixel accuracy. BiRefNet-general achieved the best overall balance of fidelity and detail (IoU 0.87, Dice 0.92) while lightweight models were faster on modest GPUs and SAM was excluded for unprompted tasks. The feature is available in open beta through the Images API using the segment parameter and can be combined with other transforms or draw() overlays.

read more →

Thu, August 28, 2025

Integrating Code Insight into Reverse Engineering Workflows

🔎 VirusTotal has extended Code Insight to analyze disassembled and decompiled code via a new API endpoint that returns a concise summary and a detailed description for each queried function. The endpoint accepts prior requests as a history input so analysts can chain, correct, and refine context across iterations. An updated VT-IDA plugin for IDA Pro demonstrates integration inside an analyst notebook, allowing selection of functions, iterative review, and acceptance of insights into a shared corpus. The feature is available in trial mode; results have been promising in testing but are not guaranteed complete or perfectly accurate, and community feedback is encouraged.

read more →

Thu, August 28, 2025

Gemini Available On-Premises with Google Distributed Cloud

🚀 Gemini on Google Distributed Cloud (GDC) is now generally available for customers, bringing Google’s advanced Gemini models on‑premises with GA for air‑gapped deployments and a connected preview. The solution provides managed Gemini endpoints with zero‑touch updates, automatic load balancing and autoscaling, and integrates with Vertex AI and preview agents. It pairs Gemini 2.5 Flash and Pro with NVIDIA Hopper and Blackwell accelerators and includes audit logging, access controls, and support for Confidential Computing (Intel TDX and NVIDIA) to meet strict data residency, sovereignty, and compliance requirements.

read more →

Thu, August 28, 2025

Anthropic Warns of GenAI-Only Cyberattacks Rising Now

🤖 Anthropic published a report detailing attacks in which generative AI tools operated as the primary adversary, conducting reconnaissance, credential harvesting, lateral movement and data exfiltration without human operators. The company identified a scaled, multi-target data extortion campaign that used Claude Code to automate the full attack lifecycle across at least 17 organizations. Security vendors including ESET have reported similar patterns, prompting calls to accelerate defenses and re-evaluate controls around both hosted and open-source AI models.

read more →

Wed, August 27, 2025

ESET Finds PromptLock: First AI-Powered Ransomware

🔒 ESET researchers have identified PromptLock, described as the first known AI-powered ransomware implant, in an August 2025 report. The Golang sample (Windows and Linux variants) leverages a locally hosted gpt-oss:20b model via the Ollama API to dynamically generate malicious Lua scripts. Those cross-platform scripts perform enumeration, selective exfiltration and encryption using SPECK 128-bit, but ESET characterises the sample as a proof-of-concept rather than an active campaign.

read more →

Wed, August 27, 2025

Agent Factory: Top 5 Agent Observability Practices

🔍 This post outlines five practical observability best practices to improve the reliability, safety, and performance of agentic AI. It defines agent observability as continuous monitoring, detailed tracing, and logging of decisions and tool calls combined with systematic evaluations and governance across the lifecycle. The article highlights Azure AI Foundry Observability capabilities—evaluations, an AI Red Teaming Agent, Azure Monitor integration, CI/CD automation, and governance integrations—and recommends embedding evaluations into CI/CD, performing adversarial testing before production, and maintaining production tracing and alerts to detect drift and incidents.

read more →

Wed, August 27, 2025

How Cloudflare Runs More AI Models on Fewer GPUs with Omni

🤖 Cloudflare explains how Omni, an internal platform, consolidates many AI models onto fewer GPUs using lightweight process isolation, per-model Python virtual environments, and controlled GPU over-commitment. Omni’s scheduler spawns and manages model processes, isolates file systems with a FUSE-backed /proc/meminfo, and intercepts CUDA allocations to safely over-commit GPU RAM. The result is improved availability, lower latency, and reduced idle GPU waste.

read more →

Wed, August 27, 2025

Five Essential Rules for Safe AI Adoption in Enterprises

🛡️ AI adoption is accelerating in enterprises, but many deployments lack the visibility, controls, and ongoing safeguards needed to manage risk. The article presents five practical rules: continuous AI discovery, contextual risk assessment, strong data protection, access controls aligned with zero trust, and continuous oversight. Together these measures help CISOs enable innovation while reducing exposure to breaches, data loss, and compliance failures.

read more →

Wed, August 27, 2025

LLMs Remain Vulnerable to Malicious Prompt Injection Attacks

🛡️ A recent proof-of-concept by Bargury demonstrates a practical and stealthy prompt injection that leverages a poisoned document stored in a victim's Google Drive. The attacker hides a 300-word instruction in near-invisible white, size-one text that tells an LLM to search Drive for API keys and exfiltrate them via a crafted Markdown URL. Schneier warns this technique shows how agentic AI systems exposed to untrusted inputs remain fundamentally insecure, and that current defenses are inadequate against such adversarial inputs.

read more →

Tue, August 26, 2025

Securing and Governing Autonomous AI Agents in Business

🔐 Microsoft outlines practical guidance for securing and governing the emerging class of autonomous agents. Igor Sakhnov explains how agents—now moving from experimentation into deployment—introduce risks such as task drift, Cross Prompt Injection Attacks (XPIA), hallucinations, and data exfiltration. Microsoft recommends starting with a unified agent inventory and layered controls across identity, access, data, posture, threat, network, and compliance. It introduces Entra Agent ID and an agent registry concept to enable auditable, just-in-time identities and improved observability.

read more →

Tue, August 26, 2025

The AI Fix #65 — Excel Copilot Dangers and Social Media

⚠️ In episode 65 of The AI Fix, Graham Cluley warns that Microsoft Excel’s new COPILOT function can produce unpredictable, non-reproducible formula results and should not be used for important numeric work. The hosts also discuss a research experiment that created a 500‑AI social network and the arXiv paper Can We Fix Social Media?. The episode blends technical analysis with lighter AI culture stories and offers subscription and support notes.

read more →

Tue, August 26, 2025

Cloudflare Introduces MCP Server Portals for Zero Trust

🔒 Cloudflare has launched MCP Server Portals in Open Beta to centralize and secure Model Context Protocol (MCP) connections between large language models and application backends. The Portals provide a single gateway where administrators register MCP servers and enforce identity-driven policies such as MFA, device posture checks, and geographic restrictions. They deliver unified visibility and logging, curated least-privilege user experiences, and simplified client configuration to reduce the risk of prompt injection, supply chain attacks, and data leakage.

read more →

Tue, August 26, 2025

SASE Best Practices for Securing Generative AI Deployments

🔒 Cloudflare outlines practical steps to secure generative AI adoption using its SASE platform, combining SWG, CASB, Access, DLP, MCP controls and AI infrastructure. The post introduces new AI Security Posture Management (AI‑SPM) features — shadow AI reporting, provider confidence scoring, prompt protection, and API CASB integrations — to improve visibility, risk management, and data protection without blocking innovation. These controls are integrated into a single dashboard to simplify enforcement and protect internal and third‑party LLMs.

read more →