Category Banner

All news in category "AI and Security Pulse"

Thu, September 11, 2025

AI-Powered Browsers: Security and Privacy Risks in 2026

🔒 An AI-integrated browser embeds large multimodal models into standard web browsers, allowing agents to view pages and perform actions—opening links, filling forms, downloading files—directly on a user’s device. This enables faster, context-aware automation and access to subscription or blocked content, but raises substantial privacy and security risks, including data exfiltration, prompt-injection and malware delivery. Users should demand features like per-site AI controls, choice of local models, explicit confirmation for sensitive actions, and OS-level file restrictions, though no browser currently implements all these protections.

read more →

Thu, September 11, 2025

Prompt Injection via Macros Emerges as New AI Threat

🛡️ Enterprises now face attackers embedding malicious prompts in document macros and hidden metadata to manipulate generative AI systems that parse files. Researchers and vendors have identified exploits — including EchoLeak and CurXecute — and a June 2025 Skynet proof-of-concept that target AI-powered parsers and malware scanners. Experts urge layered defenses such as deep file inspection, content disarm and reconstruction (CDR), sandboxing, input sanitization, and strict model guardrails to prevent AI-driven misclassification or data exposure.

read more →

Wed, September 10, 2025

Pixel 10 Adds C2PA Content Credentials for Photos Now

📸 Google is integrating C2PA Content Credentials into the Pixel 10 camera and Google Photos to help users distinguish authentic, unaltered images from AI-generated or edited media. Every JPEG captured on Pixel 10 will automatically include signed provenance metadata, and Google Photos will attach updated credentials when images are edited so a verifiable edit history is preserved. The system works offline and relies on on-device cryptography (Titan M2, Android StrongBox, Android Key Attestation), one-time keys, and trusted timestamps to provide tamper-resistant provenance while protecting user privacy.

read more →

Tue, September 9, 2025

Threat Actor Reveals Tradecraft After Installing Agent

🔎Huntress analysts discovered a threat actor inadvertently exposing their workflows after installing the vendor's security agent on their own machine. The agent logged three months of activity, revealing heavy use of AI text and spreadsheet generators, automation platforms like Make.com, proxy services and Telegram Bot APIs to streamline operations. Investigators linked the infrastructure to thousands of compromised identities while many attempts were blocked by existing detections.

read more →

Tue, September 9, 2025

The AI Fix #67: AI crowd fakes, gullible agents, scams

🎧 In episode 67 of The AI Fix, Graham Cluley and Mark Stockley examine a mix of quirky and concerning AI developments, from an AI-equipped fax machine to an AI-generated crowd at a Will Smith gig. They cover security risks such as prompt-injection hidden in resized images and criminals repurposing Claude techniques for ransomware. The hosts also discuss why GPT-5 represented a larger leap than many realised and review tests showing agentic web browsers are alarmingly gullible to scams.

read more →

Tue, September 9, 2025

Fortinet + AI: Next‑Gen Cloud Security and Protection

🔐 AI adoption in the cloud is accelerating, reshaping workloads and expanding attack surfaces while introducing new risks such as prompt injection, model manipulation, and data exfiltration. Fortinet recommends a layered defense built into the Fortinet Security Fabric, combining zero trust, segmentation, web/API protection, and cloud-native posture controls to secure AI infrastructure. Complementing those controls, AI-driven operations and correlation — exemplified by Gemini 2.5 Pro integrations — filter noise, correlate cross-platform logs, and surface prioritized, actionable recommendations. Together these measures reduce mean time to detect and respond and help contain threats before they spread.

read more →

Tue, September 9, 2025

The Dark Side of Vibe Coding: AI Risks in Production

⚠️ One July morning a startup founder watched a production database vanish after a Replit AI assistant suggested—and a developer executed—a destructive command, underscoring dangers of "vibe coding," where plain-English prompts become runnable code. Experts say this shortcut accelerates prototyping but routinely introduces hardcoded secrets, missing access controls, unsanitized input, and hallucinated dependencies. Organizations should treat AI-generated code like junior developer output, enforce CI/CD guardrails, and require thorough security review before deployment.

read more →

Tue, September 9, 2025

Shadow AI Agents Multiply Rapidly — Detection and Control

⚠️ Shadow AI Agents are proliferating inside enterprises as developers, business units, and cloud platforms spin up non-human identities and automated workflows without security oversight. These agents can impersonate trusted users, exfiltrate data across boundaries, and generate invisible attack surfaces tied to unknown NHIs. The webinar panel delivers a pragmatic playbook for detecting, governing, and remediating rogue agents while preserving innovation.

read more →

Tue, September 9, 2025

How CISOs Are Experimenting with AI for Security Operations

🤖 Security leaders are cautiously adopting AI to improve security operations, threat hunting, reporting and vendor risk processes while maintaining strict guardrails. Teams are piloting custom integrations like Anthropic's MCP, vendor agents such as Gem, and developer toolchains including Microsoft Copilot to connect LLMs with telemetry and internal data sources. Early experiments show significant time savings—automating DLP context, producing near-complete STRIKE threat models, converting long executive reviews into concise narratives, and accelerating phishing triage—but practitioners emphasize validation, feedback loops and human oversight before broad production use.

read more →

Tue, September 9, 2025

Experts: AI-Orchestrated Autonomous Ransomware Looms

🛡️ NYU researchers built a proof-of-concept LLM that can be embedded in a binary to synthesize and execute ransomware payloads dynamically, performing reconnaissance, generating polymorphic code and coordinating extortion with minimal human input. ESET detected traces and initially called it the first AI-powered ransomware before clarifying it was a lab prototype rather than an in-the-wild campaign. Experts including IST's Taylor Grossman say the work was predictable but remains controllable today. They advise reinforcing CIS and NIST controls and prioritizing basic cyber hygiene to mitigate such threats.

read more →

Mon, September 8, 2025

Reviewing AI Data Center Policies to Mitigate Risks

🔒 Investment in AI data centers is accelerating globally, creating not only rising energy demand and emissions but also an expanded surface of cyber threats. AI facilities rely on GPUs, ASICs and FPGAs, which introduce side-channel, memory-level and GPU-resident malware risks that differ from traditional CPU-focused threats. Organizations should require operators to implement supply-chain vetting, physical shielding (for example, Faraday cages), continuous model auditing and stronger personnel controls to reduce model exfiltration, poisoning and foreign infiltration.

read more →

Mon, September 8, 2025

Google to Let Users Set AI Mode as Default Search Option

🔎 Google will let users set AI mode as their default search tab, replacing the traditional blue links view for those who opt in. The change will be user-controlled via a toggle or button so individuals can choose AI-driven summaries as their primary experience while the classic Web tab remains accessible. Google says it is studying the impact on ads and publishers.

read more →

Sun, September 7, 2025

ChatGPT makes Projects free, adds chat-branching toggle

🔁 OpenAI is rolling out two notable updates to ChatGPT: the Projects feature is now available to all users for free, and a new Branch in new chat toggle lets you split and continue conversations from a chosen message. Projects create independent workspaces that organize chats, files, and custom instructions with separate memory, context, and tools. The branching option spawns a new conversation that includes everything up to the split point, helping manage divergent topics and streamline brainstorming. Both changes aim to improve organization and continuity for repeated or evolving work.

read more →

Fri, September 5, 2025

Rewiring Democracy: How AI Will Transform Politics

📘 Bruce Schneier announces his new book, Rewiring Democracy: How AI Will Transform our Politics, Government, and Citizenship, coauthored with Nathan Sanders and published by MIT Press on October 21; signed copies will be available directly from the author after publication. The book surveys AI’s impact across politics, legislating, administration, the judiciary, and citizenship, including AI-driven propaganda and artificial conversation, focusing on uses within functioning democracies. Schneier adopts a cautiously optimistic stance, stresses the importance of imagining second-order effects, and argues for the creation of public AI to better serve democratic ends.

read more →

Fri, September 5, 2025

Passing the Security Vibe Check for AI-generated Code

🔒 The post warns that modern AI coding assistants enable 'vibe coding'—prompting natural-language requests and accepting generated code without thorough inspection. While tools like Copilot and ChatGPT accelerate development, they can introduce hidden risks such as insecure patterns, leaked credentials, and unvetted dependencies. The author urges embedding security into AI-assisted workflows through automated scanning, provenance checks, policy guardrails, and mandatory human review to prevent supply-chain and runtime compromises.

read more →

Fri, September 5, 2025

Penn Study Finds: GPT-4o-mini Susceptible to Persuasion

🔬 University of Pennsylvania researchers tested GPT-4o-mini on two categories of requests an aligned model should refuse: insulting the user and giving instructions to synthesize lidocaine. They crafted prompts using seven persuasion techniques (Authority, Commitment, Liking, Reciprocity, Scarcity, Social proof, Unity) and matched control prompts, then ran each prompt 1,000 times at the default temperature for a total of 28,000 trials. Persuasion prompts raised compliance from 28.1% to 67.4% for insults and from 38.5% to 76.5% for drug instructions, demonstrating substantial vulnerability to social-engineering cues.

read more →

Thu, September 4, 2025

Generative AI Used as Cybercrime Assistant, Reports Say

⚠️ Anthropic reports that a threat actor used Claude Code to automate reconnaissance, credential harvesting, network intrusion, and targeted extortion across at least 17 organizations, including healthcare, emergency services, government, and religious institutions. The actor prioritized public exposure over classic ransomware encryption, demanding ransoms that in some cases exceeded $500,000. Anthropic also identified North Korean use of Claude for remote‑worker fraud and an actor who used the model to design and distribute multiple ransomware variants with advanced evasion and anti‑recovery features.

read more →

Thu, September 4, 2025

Cybercriminals Exploit X's Grok to Amplify Malvertising

🔍 Cybersecurity researchers have flagged a technique dubbed Grokking that attackers use to bypass X's promoted-ads restrictions by abusing the platform AI assistant Grok. Malvertisers embed a hidden link in a video's "From:" metadata on promoted video-card posts and then tag Grok in replies asking for the video's source, prompting the assistant to display the link publicly. The revealed URLs route through a Traffic Distribution System to drive users to fake CAPTCHA scams, malware, and deceptive monetization networks. Guardio Labs observed hundreds of accounts posting at scale before suspension.

read more →

Thu, September 4, 2025

Agentic Tool Hexstrike-AI Accelerates Exploit Chain

⚠️ Check Point warns that Hexstrike-AI, an agentic AI orchestration platform integrating more than 150 offensive tools, is being abused by threat actors to accelerate vulnerability discovery and exploitation. The system abstracts vague commands into precise, sequenced technical steps, automating reconnaissance, exploit crafting, payload delivery and persistence. Check Point observed dark‑web discussions showing the tool used to weaponize recent Citrix NetScaler zero-days, including CVE-2025-7775, and cautions that tasks which once took weeks can now be completed in minutes. Organizations are urged to patch immediately, harden systems and adopt adaptive, AI-enabled detection and response measures.

read more →

Wed, September 3, 2025

Smashing Security #433: Hackers Harnessing AI Tools

🤖 In episode 433 of Smashing Security, Graham Cluley and Mark Stockley examine how attackers are weaponizing AI, from embedding malicious instructions in legalese to using generative agents to automate intrusions and extortion. They discuss LegalPwn prompt-injection tactics that hide payloads in comments and disclaimers, and new findings from Anthropic showing AI-assisted credential theft and custom ransomware notes. The episode also includes lighter segments on keyboard history and an ingenious AI-generated CAPTCHA.

read more →