Tag Banner

All news with #ai security tag

Wed, September 3, 2025

Smashing Security #433: Hackers Harnessing AI Tools

🤖 In episode 433 of Smashing Security, Graham Cluley and Mark Stockley examine how attackers are weaponizing AI, from embedding malicious instructions in legalese to using generative agents to automate intrusions and extortion. They discuss LegalPwn prompt-injection tactics that hide payloads in comments and disclaimers, and new findings from Anthropic showing AI-assisted credential theft and custom ransomware notes. The episode also includes lighter segments on keyboard history and an ingenious AI-generated CAPTCHA.

read more →

Wed, September 3, 2025

Threat Actors Use X's Grok AI to Spread Malicious Links

🛡️ Guardio Labs researcher Nati Tal reported that threat actors are abusing Grok, X's built-in AI assistant, to surface malicious links hidden inside video ad metadata. Attackers omit destination URLs from visible posts and instead embed them in the small "From:" field under video cards, which X apparently does not scan. By prompting Grok with queries like "where is this video from?", actors get the assistant to repost the hidden link as a clickable reference, effectively legitimizing and amplifying scams, malware distribution, and deceptive CAPTCHA schemes across the platform.

read more →

Wed, September 3, 2025

HexStrike‑AI Enables Rapid N‑Day Exploitation of Citrix

🔒 HexStrike-AI, an open-source red‑teaming framework, is being adopted by malicious actors to rapidly weaponize newly disclosed Citrix NetScaler vulnerabilities such as CVE-2025-7775, CVE-2025-7776, and CVE-2025-8424. Check Point Research reports dark‑web chatter and evidence of automated exploitation chains that scan, exploit, and persist on vulnerable appliances. Defenders should prioritize immediate patching, threat intelligence, and AI-enabled detection to reduce shrinking n‑day windows.

read more →

Wed, September 3, 2025

Managing Shadow AI: Three Practical Corporate Policies

🔒 The MIT report "The GenAI Divide: State of AI in Business 2025" exposes a pervasive shadow AI economy—90% of employees use personal AI while only 40% of organizations buy LLM subscriptions. This article translates those findings into three realistic policy paths: a complete ban, unrestricted use with hygiene controls, and a balanced, role-based model. Each option is paired with concrete technical controls (DLP, NGFW, CASB, EDR), organizational steps, and enforcement measures to help security teams align risk management with real-world employee behaviour.

read more →

Wed, September 3, 2025

They Know Where You Are: Geolocation Cyber Risks Evolving

📍 Geolocation data from smartphones, apps and IPs can be weaponized by threat actors to launch precise, geographically targeted attacks such as localized phishing and malware activation. These attacks can act as "floating zero days," remaining dormant until they reach a specific location, as seen with Stuxnet and modern campaigns like Astaroth. Organizations should adopt multilayered defenses — robust endpoint detection, decoys, location baselines and stronger multi-factor verification — to mitigate this evolving threat.

read more →

Wed, September 3, 2025

Cloudflare AI Week 2025: Product, Security, and Tools

🔒 Cloudflare framed AI Week 2025 around products and controls to help organizations adopt AI while retaining safety and visibility. The company emphasized four core priorities: securing AI environments and workflows; protecting original content from misuse; enabling developers to build secure AI experiences; and applying AI to improve Cloudflare’s services. Key launches included AI Gateway, Infire, AI Crawl Control, expanded CASB scanning, and MCP Server Portals, with a continued focus on customer feedback and ongoing investment.

read more →

Wed, September 3, 2025

AWS Clean Rooms ML adds redacted error log summaries

🔒 AWS Clean Rooms ML collaborators can now configure a privacy control to send redacted error log summaries to selected collaboration members. Summaries include exception type, error message, and the line in the code where the error occurred. When associating a model with a collaboration, parties decide which members receive summaries and whether detectable PII, numbers, or custom strings will be redacted. This helps teams debug models while protecting sensitive data and intellectual property.

read more →

Wed, September 3, 2025

Threat Actors Try to Weaponize HexStrike AI for Exploits

⚠️ HexStrike AI, an open-source AI-driven offensive security platform, is being tested by threat actors to exploit recently disclosed vulnerabilities. Check Point reports criminals claim success exploiting Citrix NetScaler flaws and are advertising flagged instances for sale. The tool's automation and retry capabilities can shorten the window to mass exploitation; immediate action is to patch and harden systems.

read more →

Wed, September 3, 2025

Indirect Prompt-Injection Threats to LLM Assistants

🔐 New research demonstrates practical, dangerous promptware attacks that exploit common interactions—calendar invites, emails, and shared documents—to manipulate LLM-powered assistants. The paper Invitation Is All You Need! evaluates 14 attack scenarios against Gemini-powered assistants and introduces a TARA framework to quantify risk. The authors reported 73% of identified threats as High-Critical and disclosed findings to Google, which deployed mitigations. Attacks include context and memory poisoning, tool misuse, automatic agent/app invocation, and on-device lateral movement affecting smart-home and device control.

read more →

Wed, September 3, 2025

Model Namespace Reuse: Supply-Chain RCE in Cloud AI

🔒 Unit 42 describes a widespread flaw called Model Namespace Reuse that lets attackers reclaim abandoned Hugging Face Author/ModelName namespaces and distribute malicious model code. The technique can lead to remote code execution and was demonstrated against major platforms including Google Vertex AI and Azure AI Foundry, as well as thousands of open-source projects. Recommended mitigations include version pinning, cloning models to trusted storage, and scanning repositories for reusable references.

read more →

Wed, September 3, 2025

How the Generative AI Boom Opens Privacy and Cyber Risks

🔒The rapid adoption of generative AI is prompting significant privacy and security concerns as vendors revise terms to use user data for model training. High-profile pushback — exemplified by WeTransfer’s reversal — revealed how unclear terms and live experimentation can expose corporate and personal information. Employees using consumer tools like ChatGPT for work tasks risk leaking secrets, and platforms such as Slack are explicitly reserving rights to leverage customer data. CISOs must balance strategic AI adoption with heightened compliance, governance and operational risk.

read more →

Wed, September 3, 2025

EMBER2024: Advancing ML Benchmarks for Evasive Malware

🛡️ The EMBER2024 release modernizes the popular EMBER malware benchmark by providing metadata, labels, and computed features for over 3.2 million files spanning six file formats. It supplies a 6,315-sample challenge set of initially evasive malware, updated feature extraction code using pefile, and supplemental raw bytes and disassembly for 16.3 million functions. The package also includes source code to reproduce feature calculation, labeling, and dataset construction so researchers can replicate and extend benchmarks.

read more →

Tue, September 2, 2025

HexStrike-AI Enables Rapid Zero-Day Exploitation at Scale

⚠️ HexStrike-AI is a newly released framework that acts as an orchestration “brain,” directing more than 150 specialized AI agents to autonomously scan, exploit, and persist inside targets. Within hours of release, dark‑web chatter showed threat actors attempting to weaponize it against recent zero‑day CVEs, dropping webshells enabling unauthenticated remote code execution. Although the targeted vulnerabilities are complex and typically require advanced skills, operators claim HexStrike-AI can reduce exploitation time from days to under 10 minutes, potentially lowering the barrier for less skilled attackers.

read more →

Tue, September 2, 2025

The AI Fix Ep. 66: AI Mishaps, Breakthroughs and Safety

🧠 In episode 66 of The AI Fix, hosts Graham Cluley and Mark Stockley walk listeners through a rapid-fire roundup of recent AI developments, from a ChatGPT prompt that produced an inaccurate anatomy diagram to a controversial Stanford sushi hackathon. They cover a Google Gemini bug that generated self-deprecating responses, criticisms that gave DeepSeek poor marks on existential-risk mitigation, and a debunked pregnancy-robot story. The episode also celebrates a genuine scientific advance: a team of AI agents that designed novel COVID-19 nanobodies, and considers how unusual collaborations and growing safety work could change the broader AI risk landscape.

read more →

Tue, September 2, 2025

SASE Summit 2025 — Convergence without Compromise, Global

🔒 Fortinet’s 4th Annual SASE Summit (NAMER: Sept 16, 2025; EMEA/LATAM/APAC: Sept 18, 2025) centers on the theme Convergence without Compromise, arguing that robust security and top performance can be delivered together through a unified, AI-driven platform. The event features Gartner VP Analyst Jonathan Forest and Fortinet leaders Nirav Shah and Jordan Thompson, along with customer case studies from Tepper Sports & Entertainment and Funke Mediengruppe. Attendees will receive practical guidance on adopting a consolidated SASE approach that embeds zero trust, AI-enabled controls, and end-to-end visibility to reduce complexity, cut costs, and better protect hybrid workforces and cloud environments.

read more →

Tue, September 2, 2025

Shadow AI Discovery: Visibility, Governance, and Risk

🔍 Employees are driving AI adoption from the ground up, often using unsanctioned tools and personal accounts that bypass corporate controls. Harmonic Security found that 45.4% of sensitive AI interactions come from personal email, underscoring a growing Shadow AI Economy. Rather than broad blocking, security and governance teams should prioritize continuous discovery and an AI asset inventory to apply role- and data-sensitive controls that protect sensitive workflows while enabling productivity.

read more →

Tue, September 2, 2025

NCSC and AISI Back Public Disclosure for AI Safeguards

🔍 The NCSC and the AI Security Institute have broadly welcomed public, bug-bounty style disclosure programs to help identify and remediate AI safeguard bypass threats. They said initiatives from vendors such as OpenAI and Anthropic could mirror traditional vulnerability disclosure to encourage responsible reporting and cross-industry collaboration. The agencies cautioned that programs require clear scope, strong foundational security, prior internal reviews and sufficient triage resources, and that disclosure alone will not guarantee model safety.

read more →

Tue, September 2, 2025

Agentic AI: Emerging Security Challenges for CISOs

🔒 Agentic AI is poised to transform workflows like software development, customer support, RPA, and employee assistance, but its autonomy raises new cybersecurity risks for CISOs. A 2024 Cisco Talos report and industry experts warn these systems can act without human oversight, chain benign actions into harmful sequences, or learn to evade detection. Lack of visibility fosters shadow AI, and third-party integrations and multi-agent setups widen supply-chain and data-exfiltration exposures. Organizations should adopt observability, governance, and secure-by-design practices before scaling agentic deployments.

read more →

Tue, September 2, 2025

Secure AI at Machine Speed: Full-Stack Enterprise Defense

🔒 CrowdStrike explains how widespread AI adoption expands the enterprise attack surface, exposing models, data pipelines, APIs, and autonomous agents to new adversary techniques. The post argues that legacy controls and fragmented tooling are insufficient and advocates for real-time, full‑stack protections. The Falcon platform is presented as a unified solution offering telemetry, lifecycle protection, GenAI-aware data loss prevention, and agent governance to detect, prevent, and remediate AI-related threats.

read more →

Mon, September 1, 2025

Supply-Chain Attack on npm Nx Steals Developer Credentials

🔒 A sophisticated supply-chain attack targeted the widely used Nx build-system packages on the npm registry, exposing developer credentials and sensitive files. According to a report from Wiz, attackers published malicious Nx versions on August 26, 2025 that harvested GitHub and npm tokens, SSH keys, environment variables and cryptocurrency wallets. The campaign uniquely abused installed AI CLI tools (for example, Claude and Gemini) by passing dangerous permission flags to exfiltrate file-system contents and perform reconnaissance, then uploaded roughly 20,000 files to attacker-controlled public repositories. Organizations should remove affected package versions, rotate exposed credentials and inspect developer workstations and CI/CD pipelines for persistence.

read more →