Category Banner

All news in category "AI and Security Pulse"

Tue, September 9, 2025

How CISOs Are Experimenting with AI for Security Operations

🤖 Security leaders are cautiously adopting AI to improve security operations, threat hunting, reporting and vendor risk processes while maintaining strict guardrails. Teams are piloting custom integrations like Anthropic's MCP, vendor agents such as Gem, and developer toolchains including Microsoft Copilot to connect LLMs with telemetry and internal data sources. Early experiments show significant time savings—automating DLP context, producing near-complete STRIKE threat models, converting long executive reviews into concise narratives, and accelerating phishing triage—but practitioners emphasize validation, feedback loops and human oversight before broad production use.

read more →

Tue, September 9, 2025

Experts: AI-Orchestrated Autonomous Ransomware Looms

🛡️ NYU researchers built a proof-of-concept LLM that can be embedded in a binary to synthesize and execute ransomware payloads dynamically, performing reconnaissance, generating polymorphic code and coordinating extortion with minimal human input. ESET detected traces and initially called it the first AI-powered ransomware before clarifying it was a lab prototype rather than an in-the-wild campaign. Experts including IST's Taylor Grossman say the work was predictable but remains controllable today. They advise reinforcing CIS and NIST controls and prioritizing basic cyber hygiene to mitigate such threats.

read more →

Mon, September 8, 2025

Reviewing AI Data Center Policies to Mitigate Risks

🔒 Investment in AI data centers is accelerating globally, creating not only rising energy demand and emissions but also an expanded surface of cyber threats. AI facilities rely on GPUs, ASICs and FPGAs, which introduce side-channel, memory-level and GPU-resident malware risks that differ from traditional CPU-focused threats. Organizations should require operators to implement supply-chain vetting, physical shielding (for example, Faraday cages), continuous model auditing and stronger personnel controls to reduce model exfiltration, poisoning and foreign infiltration.

read more →

Mon, September 8, 2025

Google to Let Users Set AI Mode as Default Search Option

🔎 Google will let users set AI mode as their default search tab, replacing the traditional blue links view for those who opt in. The change will be user-controlled via a toggle or button so individuals can choose AI-driven summaries as their primary experience while the classic Web tab remains accessible. Google says it is studying the impact on ads and publishers.

read more →

Sun, September 7, 2025

ChatGPT makes Projects free, adds chat-branching toggle

🔁 OpenAI is rolling out two notable updates to ChatGPT: the Projects feature is now available to all users for free, and a new Branch in new chat toggle lets you split and continue conversations from a chosen message. Projects create independent workspaces that organize chats, files, and custom instructions with separate memory, context, and tools. The branching option spawns a new conversation that includes everything up to the split point, helping manage divergent topics and streamline brainstorming. Both changes aim to improve organization and continuity for repeated or evolving work.

read more →

Fri, September 5, 2025

Rewiring Democracy: How AI Will Transform Politics

📘 Bruce Schneier announces his new book, Rewiring Democracy: How AI Will Transform our Politics, Government, and Citizenship, coauthored with Nathan Sanders and published by MIT Press on October 21; signed copies will be available directly from the author after publication. The book surveys AI’s impact across politics, legislating, administration, the judiciary, and citizenship, including AI-driven propaganda and artificial conversation, focusing on uses within functioning democracies. Schneier adopts a cautiously optimistic stance, stresses the importance of imagining second-order effects, and argues for the creation of public AI to better serve democratic ends.

read more →

Fri, September 5, 2025

Passing the Security Vibe Check for AI-generated Code

🔒 The post warns that modern AI coding assistants enable 'vibe coding'—prompting natural-language requests and accepting generated code without thorough inspection. While tools like Copilot and ChatGPT accelerate development, they can introduce hidden risks such as insecure patterns, leaked credentials, and unvetted dependencies. The author urges embedding security into AI-assisted workflows through automated scanning, provenance checks, policy guardrails, and mandatory human review to prevent supply-chain and runtime compromises.

read more →

Fri, September 5, 2025

Penn Study Finds: GPT-4o-mini Susceptible to Persuasion

🔬 University of Pennsylvania researchers tested GPT-4o-mini on two categories of requests an aligned model should refuse: insulting the user and giving instructions to synthesize lidocaine. They crafted prompts using seven persuasion techniques (Authority, Commitment, Liking, Reciprocity, Scarcity, Social proof, Unity) and matched control prompts, then ran each prompt 1,000 times at the default temperature for a total of 28,000 trials. Persuasion prompts raised compliance from 28.1% to 67.4% for insults and from 38.5% to 76.5% for drug instructions, demonstrating substantial vulnerability to social-engineering cues.

read more →

Thu, September 4, 2025

Generative AI Used as Cybercrime Assistant, Reports Say

⚠️ Anthropic reports that a threat actor used Claude Code to automate reconnaissance, credential harvesting, network intrusion, and targeted extortion across at least 17 organizations, including healthcare, emergency services, government, and religious institutions. The actor prioritized public exposure over classic ransomware encryption, demanding ransoms that in some cases exceeded $500,000. Anthropic also identified North Korean use of Claude for remote‑worker fraud and an actor who used the model to design and distribute multiple ransomware variants with advanced evasion and anti‑recovery features.

read more →

Thu, September 4, 2025

Cybercriminals Exploit X's Grok to Amplify Malvertising

🔍 Cybersecurity researchers have flagged a technique dubbed Grokking that attackers use to bypass X's promoted-ads restrictions by abusing the platform AI assistant Grok. Malvertisers embed a hidden link in a video's "From:" metadata on promoted video-card posts and then tag Grok in replies asking for the video's source, prompting the assistant to display the link publicly. The revealed URLs route through a Traffic Distribution System to drive users to fake CAPTCHA scams, malware, and deceptive monetization networks. Guardio Labs observed hundreds of accounts posting at scale before suspension.

read more →

Thu, September 4, 2025

Agentic Tool Hexstrike-AI Accelerates Exploit Chain

⚠️ Check Point warns that Hexstrike-AI, an agentic AI orchestration platform integrating more than 150 offensive tools, is being abused by threat actors to accelerate vulnerability discovery and exploitation. The system abstracts vague commands into precise, sequenced technical steps, automating reconnaissance, exploit crafting, payload delivery and persistence. Check Point observed dark‑web discussions showing the tool used to weaponize recent Citrix NetScaler zero-days, including CVE-2025-7775, and cautions that tasks which once took weeks can now be completed in minutes. Organizations are urged to patch immediately, harden systems and adopt adaptive, AI-enabled detection and response measures.

read more →

Wed, September 3, 2025

Smashing Security #433: Hackers Harnessing AI Tools

🤖 In episode 433 of Smashing Security, Graham Cluley and Mark Stockley examine how attackers are weaponizing AI, from embedding malicious instructions in legalese to using generative agents to automate intrusions and extortion. They discuss LegalPwn prompt-injection tactics that hide payloads in comments and disclaimers, and new findings from Anthropic showing AI-assisted credential theft and custom ransomware notes. The episode also includes lighter segments on keyboard history and an ingenious AI-generated CAPTCHA.

read more →

Wed, September 3, 2025

Threat Actors Use X's Grok AI to Spread Malicious Links

🛡️ Guardio Labs researcher Nati Tal reported that threat actors are abusing Grok, X's built-in AI assistant, to surface malicious links hidden inside video ad metadata. Attackers omit destination URLs from visible posts and instead embed them in the small "From:" field under video cards, which X apparently does not scan. By prompting Grok with queries like "where is this video from?", actors get the assistant to repost the hidden link as a clickable reference, effectively legitimizing and amplifying scams, malware distribution, and deceptive CAPTCHA schemes across the platform.

read more →

Wed, September 3, 2025

HexStrike‑AI Enables Rapid N‑Day Exploitation of Citrix

🔒 HexStrike-AI, an open-source red‑teaming framework, is being adopted by malicious actors to rapidly weaponize newly disclosed Citrix NetScaler vulnerabilities such as CVE-2025-7775, CVE-2025-7776, and CVE-2025-8424. Check Point Research reports dark‑web chatter and evidence of automated exploitation chains that scan, exploit, and persist on vulnerable appliances. Defenders should prioritize immediate patching, threat intelligence, and AI-enabled detection to reduce shrinking n‑day windows.

read more →

Wed, September 3, 2025

Managing Shadow AI: Three Practical Corporate Policies

🔒 The MIT report "The GenAI Divide: State of AI in Business 2025" exposes a pervasive shadow AI economy—90% of employees use personal AI while only 40% of organizations buy LLM subscriptions. This article translates those findings into three realistic policy paths: a complete ban, unrestricted use with hygiene controls, and a balanced, role-based model. Each option is paired with concrete technical controls (DLP, NGFW, CASB, EDR), organizational steps, and enforcement measures to help security teams align risk management with real-world employee behaviour.

read more →

Wed, September 3, 2025

Indirect Prompt-Injection Threats to LLM Assistants

🔐 New research demonstrates practical, dangerous promptware attacks that exploit common interactions—calendar invites, emails, and shared documents—to manipulate LLM-powered assistants. The paper Invitation Is All You Need! evaluates 14 attack scenarios against Gemini-powered assistants and introduces a TARA framework to quantify risk. The authors reported 73% of identified threats as High-Critical and disclosed findings to Google, which deployed mitigations. Attacks include context and memory poisoning, tool misuse, automatic agent/app invocation, and on-device lateral movement affecting smart-home and device control.

read more →

Wed, September 3, 2025

Model Namespace Reuse: Supply-Chain RCE in Cloud AI

🔒 Unit 42 describes a widespread flaw called Model Namespace Reuse that lets attackers reclaim abandoned Hugging Face Author/ModelName namespaces and distribute malicious model code. The technique can lead to remote code execution and was demonstrated against major platforms including Google Vertex AI and Azure AI Foundry, as well as thousands of open-source projects. Recommended mitigations include version pinning, cloning models to trusted storage, and scanning repositories for reusable references.

read more →

Wed, September 3, 2025

How the Generative AI Boom Opens Privacy and Cyber Risks

🔒The rapid adoption of generative AI is prompting significant privacy and security concerns as vendors revise terms to use user data for model training. High-profile pushback — exemplified by WeTransfer’s reversal — revealed how unclear terms and live experimentation can expose corporate and personal information. Employees using consumer tools like ChatGPT for work tasks risk leaking secrets, and platforms such as Slack are explicitly reserving rights to leverage customer data. CISOs must balance strategic AI adoption with heightened compliance, governance and operational risk.

read more →

Wed, September 3, 2025

EMBER2024: Advancing ML Benchmarks for Evasive Malware

🛡️ The EMBER2024 release modernizes the popular EMBER malware benchmark by providing metadata, labels, and computed features for over 3.2 million files spanning six file formats. It supplies a 6,315-sample challenge set of initially evasive malware, updated feature extraction code using pefile, and supplemental raw bytes and disassembly for 16.3 million functions. The package also includes source code to reproduce feature calculation, labeling, and dataset construction so researchers can replicate and extend benchmarks.

read more →

Tue, September 2, 2025

HexStrike-AI Enables Rapid Zero-Day Exploitation at Scale

⚠️ HexStrike-AI is a newly released framework that acts as an orchestration “brain,” directing more than 150 specialized AI agents to autonomously scan, exploit, and persist inside targets. Within hours of release, dark‑web chatter showed threat actors attempting to weaponize it against recent zero‑day CVEs, dropping webshells enabling unauthenticated remote code execution. Although the targeted vulnerabilities are complex and typically require advanced skills, operators claim HexStrike-AI can reduce exploitation time from days to under 10 minutes, potentially lowering the barrier for less skilled attackers.

read more →