Category Banner

All news in category "AI and Security Pulse"

Wed, September 3, 2025

Threat Actors Use X's Grok AI to Spread Malicious Links

🛡️ Guardio Labs researcher Nati Tal reported that threat actors are abusing Grok, X's built-in AI assistant, to surface malicious links hidden inside video ad metadata. Attackers omit destination URLs from visible posts and instead embed them in the small "From:" field under video cards, which X apparently does not scan. By prompting Grok with queries like "where is this video from?", actors get the assistant to repost the hidden link as a clickable reference, effectively legitimizing and amplifying scams, malware distribution, and deceptive CAPTCHA schemes across the platform.

read more →

Wed, September 3, 2025

HexStrike‑AI Enables Rapid N‑Day Exploitation of Citrix

🔒 HexStrike-AI, an open-source red‑teaming framework, is being adopted by malicious actors to rapidly weaponize newly disclosed Citrix NetScaler vulnerabilities such as CVE-2025-7775, CVE-2025-7776, and CVE-2025-8424. Check Point Research reports dark‑web chatter and evidence of automated exploitation chains that scan, exploit, and persist on vulnerable appliances. Defenders should prioritize immediate patching, threat intelligence, and AI-enabled detection to reduce shrinking n‑day windows.

read more →

Wed, September 3, 2025

Managing Shadow AI: Three Practical Corporate Policies

🔒 The MIT report "The GenAI Divide: State of AI in Business 2025" exposes a pervasive shadow AI economy—90% of employees use personal AI while only 40% of organizations buy LLM subscriptions. This article translates those findings into three realistic policy paths: a complete ban, unrestricted use with hygiene controls, and a balanced, role-based model. Each option is paired with concrete technical controls (DLP, NGFW, CASB, EDR), organizational steps, and enforcement measures to help security teams align risk management with real-world employee behaviour.

read more →

Wed, September 3, 2025

Indirect Prompt-Injection Threats to LLM Assistants

🔐 New research demonstrates practical, dangerous promptware attacks that exploit common interactions—calendar invites, emails, and shared documents—to manipulate LLM-powered assistants. The paper Invitation Is All You Need! evaluates 14 attack scenarios against Gemini-powered assistants and introduces a TARA framework to quantify risk. The authors reported 73% of identified threats as High-Critical and disclosed findings to Google, which deployed mitigations. Attacks include context and memory poisoning, tool misuse, automatic agent/app invocation, and on-device lateral movement affecting smart-home and device control.

read more →

Wed, September 3, 2025

Model Namespace Reuse: Supply-Chain RCE in Cloud AI

🔒 Unit 42 describes a widespread flaw called Model Namespace Reuse that lets attackers reclaim abandoned Hugging Face Author/ModelName namespaces and distribute malicious model code. The technique can lead to remote code execution and was demonstrated against major platforms including Google Vertex AI and Azure AI Foundry, as well as thousands of open-source projects. Recommended mitigations include version pinning, cloning models to trusted storage, and scanning repositories for reusable references.

read more →

Wed, September 3, 2025

How the Generative AI Boom Opens Privacy and Cyber Risks

🔒The rapid adoption of generative AI is prompting significant privacy and security concerns as vendors revise terms to use user data for model training. High-profile pushback — exemplified by WeTransfer’s reversal — revealed how unclear terms and live experimentation can expose corporate and personal information. Employees using consumer tools like ChatGPT for work tasks risk leaking secrets, and platforms such as Slack are explicitly reserving rights to leverage customer data. CISOs must balance strategic AI adoption with heightened compliance, governance and operational risk.

read more →

Wed, September 3, 2025

EMBER2024: Advancing ML Benchmarks for Evasive Malware

🛡️ The EMBER2024 release modernizes the popular EMBER malware benchmark by providing metadata, labels, and computed features for over 3.2 million files spanning six file formats. It supplies a 6,315-sample challenge set of initially evasive malware, updated feature extraction code using pefile, and supplemental raw bytes and disassembly for 16.3 million functions. The package also includes source code to reproduce feature calculation, labeling, and dataset construction so researchers can replicate and extend benchmarks.

read more →

Tue, September 2, 2025

HexStrike-AI Enables Rapid Zero-Day Exploitation at Scale

⚠️ HexStrike-AI is a newly released framework that acts as an orchestration “brain,” directing more than 150 specialized AI agents to autonomously scan, exploit, and persist inside targets. Within hours of release, dark‑web chatter showed threat actors attempting to weaponize it against recent zero‑day CVEs, dropping webshells enabling unauthenticated remote code execution. Although the targeted vulnerabilities are complex and typically require advanced skills, operators claim HexStrike-AI can reduce exploitation time from days to under 10 minutes, potentially lowering the barrier for less skilled attackers.

read more →

Tue, September 2, 2025

The AI Fix Ep. 66: AI Mishaps, Breakthroughs and Safety

🧠 In episode 66 of The AI Fix, hosts Graham Cluley and Mark Stockley walk listeners through a rapid-fire roundup of recent AI developments, from a ChatGPT prompt that produced an inaccurate anatomy diagram to a controversial Stanford sushi hackathon. They cover a Google Gemini bug that generated self-deprecating responses, criticisms that gave DeepSeek poor marks on existential-risk mitigation, and a debunked pregnancy-robot story. The episode also celebrates a genuine scientific advance: a team of AI agents that designed novel COVID-19 nanobodies, and considers how unusual collaborations and growing safety work could change the broader AI risk landscape.

read more →

Tue, September 2, 2025

Shadow AI Discovery: Visibility, Governance, and Risk

🔍 Employees are driving AI adoption from the ground up, often using unsanctioned tools and personal accounts that bypass corporate controls. Harmonic Security found that 45.4% of sensitive AI interactions come from personal email, underscoring a growing Shadow AI Economy. Rather than broad blocking, security and governance teams should prioritize continuous discovery and an AI asset inventory to apply role- and data-sensitive controls that protect sensitive workflows while enabling productivity.

read more →

Tue, September 2, 2025

NCSC and AISI Back Public Disclosure for AI Safeguards

🔍 The NCSC and the AI Security Institute have broadly welcomed public, bug-bounty style disclosure programs to help identify and remediate AI safeguard bypass threats. They said initiatives from vendors such as OpenAI and Anthropic could mirror traditional vulnerability disclosure to encourage responsible reporting and cross-industry collaboration. The agencies cautioned that programs require clear scope, strong foundational security, prior internal reviews and sufficient triage resources, and that disclosure alone will not guarantee model safety.

read more →

Tue, September 2, 2025

Agentic AI: Emerging Security Challenges for CISOs

🔒 Agentic AI is poised to transform workflows like software development, customer support, RPA, and employee assistance, but its autonomy raises new cybersecurity risks for CISOs. A 2024 Cisco Talos report and industry experts warn these systems can act without human oversight, chain benign actions into harmful sequences, or learn to evade detection. Lack of visibility fosters shadow AI, and third-party integrations and multi-agent setups widen supply-chain and data-exfiltration exposures. Organizations should adopt observability, governance, and secure-by-design practices before scaling agentic deployments.

read more →

Tue, September 2, 2025

Secure AI at Machine Speed: Full-Stack Enterprise Defense

🔒 CrowdStrike explains how widespread AI adoption expands the enterprise attack surface, exposing models, data pipelines, APIs, and autonomous agents to new adversary techniques. The post argues that legacy controls and fragmented tooling are insufficient and advocates for real-time, full‑stack protections. The Falcon platform is presented as a unified solution offering telemetry, lifecycle protection, GenAI-aware data loss prevention, and agent governance to detect, prevent, and remediate AI-related threats.

read more →

Mon, September 1, 2025

Spotlight Report: Navigating IT Careers in the AI Era

🔍 This spotlight report examines how AI is reshaping IT careers across roles—from developers and SOC analysts to helpdesk staff, I&O teams, enterprise architects, and CIOs. It identifies emerging functions and essential skills such as prompt engineering, model governance, and security-aware development. The report also offers practical steps to adapt learning paths, demonstrate capability, and align individual growth with organizational AI strategy.

read more →

Sun, August 31, 2025

OpenAI Enhances ChatGPT Codex with IDE and CLI Sync

🚀 OpenAI has released a major update to Codex, its agentic coding assistant, adding a native VS Code extension and expanded terminal and IDE support. Plus and Pro subscribers can now use Codex with every build across web, terminal, and IDE without separate API keys, as the service links to your ChatGPT account to preserve session state. The release also adds a Seamless Local ↔ Cloud Handoff to delegate paired local tasks to the cloud asynchronously, alongside CLI command upgrades and bug fixes; competitors like Claude are pursuing similar web-to-terminal integrations.

read more →

Sun, August 31, 2025

Anthropic Tests Web Version of Claude Code for Developers

🛠️ Anthropic is rolling out a research preview of a web-based Claude Code, bringing its terminal-focused coding assistant into the browser at Claude.ai/code. The web preview requires installing the GitHub Claude app on a repository and committing a "Claude Dispatch" GitHub workflow file before use, with optional email and web notifications for updates. Claude Code—already available in terminals and integrated editors under paid plans—can inspect codebases to help fix bugs, test features, simplify Git tasks, and automate workflows. It remains unclear whether the terminal and web versions can access or share the same repository content or usage data.

read more →

Sun, August 31, 2025

ChatGPT Adds Flashcard-Based Quiz Feature for Learning

📚 ChatGPT now offers an interactive flashcard-style quiz feature within its new Study and Learn tool, designed to help users evaluate and reinforce their knowledge on any topic. Using models such as GPT-5-Thinking (or Instant/Default), the assistant generates embedded flashcards, presents answer choices, and provides a running scorecard at the end of the quiz. The system preserves conversational memory so it can refine future quizzes and adapt to a learner’s progress, aligning with research that shows testing improves retention.

read more →

Sun, August 31, 2025

OpenAI Tests 'Thinking Effort' Picker for ChatGPT Controls

🧠 OpenAI is testing a new "Thinking effort" picker for ChatGPT that lets users set how much internal compute—or "juice"—the model can spend on a response. The feature offers four levels: light (5), standard (18), extended (48) and max (200), with higher settings producing deeper but slower replies. The 200 "max" tier is gated behind a $200 Pro plan. OpenAI positions the picker as a way to give users more control over response depth and speed.

read more →

Fri, August 29, 2025

Cloudy AI Agent Automates Threat Analysis and Response

🔍 Cloudflare has integrated Cloudy, its first AI agent, with security analytics and introduced a conversational chat interface to accelerate root-cause analysis and mitigation. The chat lets users ask natural-language questions, refine investigations, and pivot from a single indicator to related threat events in minutes. Paired with the Cloudforce One Threat Events platform and built on the Agents SDK running on Workers AI, Cloudy surfaces contextual IOCs, attacker timelines, and prioritized actions at scale. Cloudflare emphasizes Cloudy was not trained on customer data and plans deeper WAF debugging and Alerts integrations.

read more →

Fri, August 29, 2025

Cloudflare data: AI bot crawling surges, referrals fall

🤖 Cloudflare's mid‑2025 dataset shows AI training crawlers now account for nearly 80% of AI bot activity, driving a surge in crawling while sending far fewer human referrals. Google referrals to news sites fell sharply in March–April 2025 as AI Overviews and Gemini upgrades reduced click-throughs. OpenAI’s GPTBot and Anthropic’s ClaudeBot increased crawling share while ByteDance’s Bytespider declined. The resulting crawl-to-refer imbalance — tens of thousands of crawls per human click for some platforms — threatens publisher revenue.

read more →