All news in category "AI and Security Pulse"

Tue, September 23, 2025

The AI Fix Episode 69: Oddities, AI Songs and Risks

🎧 In episode 69 of The AI Fix, Graham Cluley and Mark Stockley mix lighthearted oddities with substantive AI developments. The hosts discuss viral “brain rot” videos, an AI‑generated J‑Pop song, Norway’s experiment trusting $1.9 trillion to an AI investor, and Florida’s use of robotic rabbits to deter Burmese pythons. The show also highlights its first AI feedback, a merch sighting, and data on ChatGPT adoption, while reflecting on uneven geographic and enterprise AI uptake and recent academic research.

read more →

Tue, September 23, 2025

Two-Thirds of Businesses Hit by Deepfake Attacks in 2025

🛡️ A Gartner survey finds 62% of organisations experienced a deepfake attack in the past 12 months, with common techniques including social-engineering impersonation and attacks on biometric verification. The report also shows 32% of firms faced attacks on AI applications via prompt manipulation. Gartner’s Akif Khan urges integrating deepfake detection into collaboration tools and strengthening controls through awareness training, simulations and application-level authorisation with phishing-resistant MFA. Vendor solutions are emerging but remain early-stage, so operational effectiveness is not yet proven.

read more →

Tue, September 23, 2025

Self-Driving IT Security: Preparing for Autonomous Defense

🛡️ IT security is entering a new era where autonomy augments human defenders, moving beyond scripted automation to adaptive, AI-driven responses. Traditional playbooks and scripts are limited because they only follow defined rules, while attackers continuously change tactics. Organizations must adopt self-driving security systems that combine real-time telemetry, machine learning, and human oversight to improve detection, reduce response time, and manage risk.
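
To make the idea concrete, here is a minimal sketch of a confidence-gated response loop, assuming a hypothetical scoring model and a human approver; the thresholds and actions are illustrative only, not drawn from the article.

```python
def autonomous_response(event: dict, model, approve) -> str:
    """Adaptive loop: score telemetry with an ML model, auto-act on
    high-confidence detections, and keep a human in the loop for the rest."""
    score = model(event)                      # hypothetical verdict in [0, 1]
    if score >= 0.95:
        return f"auto-contained host {event['host']}"
    if score >= 0.60:
        return approve(event)                 # human decides borderline cases
    return "logged for baseline tuning"

# Usage with stubs standing in for a real model and an on-call approver:
print(autonomous_response({"host": "web-01"}, lambda e: 0.97, lambda e: "escalated"))
```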

read more →

Tue, September 23, 2025

CISO’s Guide to Rolling Out Generative AI at Scale

🔐 Selecting an AI platform is necessary but insufficient; successful enterprise adoption hinges on how the system is introduced, integrated, and supported. CISOs must publish a clear, accessible AI use policy that defines permitted behaviors, off-limits data, and auditing expectations. Provision access by default using SSO and SCIM, pair rollout with vendor-led demos and role-focused training, and provide living user guides. Build an AI champions network, harvest practical productivity use cases, limit unmanaged public tools, and keep governance proactive and supportive.
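
As one concrete illustration of default provisioning, the sketch below creates a user through a generic SCIM 2.0 endpoint (per RFC 7644); the base URL, token, and field choices are hypothetical placeholders, since AI platforms differ in their SCIM support.

```python
import requests

# Hypothetical SCIM 2.0 endpoint and bearer token for an AI platform tenant.
SCIM_BASE = "https://ai-platform.example.com/scim/v2"
TOKEN = "REDACTED"

def provision_user(email: str, given: str, family: str) -> str:
    """Create a user via SCIM so access exists by default from day one."""
    payload = {
        "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
        "userName": email,
        "name": {"givenName": given, "familyName": family},
        "emails": [{"value": email, "primary": True}],
        "active": True,
    }
    resp = requests.post(
        f"{SCIM_BASE}/Users",
        json=payload,
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["id"]  # SCIM resource id for later PATCH or deactivation
```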

read more →

Tue, September 23, 2025

Six Novel Ways to Apply AI in Cybersecurity Defense

🛡️ AI is being applied across security operations in novel ways to predict, simulate, and deter attacks. Experts from BforeAI, NopalCyber, Hughes, XYPRO, AirMDR, and Kontra outline six approaches — predictive scoring, GAN-driven attack simulation, AI analyst assistants, micro-deviation detection, automated triage and response, and proactive generative deception — that aim to reduce alert fatigue, accelerate investigations, and increase attacker costs. Successful deployments depend on accurate ground truth data, continuous model updates, and significant compute and engineering investment.
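
As one hedged reading of "micro-deviation detection," the sketch below flags small statistical drifts in per-entity telemetry using a z-score against a rolling baseline; it is a toy heuristic, not any of the named vendors' implementations.

```python
import statistics

def micro_deviation(baseline: list[float], current: float, z_threshold: float = 3.0) -> bool:
    """Flag a metric (e.g., per-account API calls/hour) that drifts from its baseline."""
    mean = statistics.fmean(baseline)
    stdev = statistics.pstdev(baseline) or 1e-9  # avoid division by zero on flat baselines
    z = (current - mean) / stdev
    return abs(z) >= z_threshold

# Example: a service account that normally makes ~40-60 calls/hour suddenly makes 300.
history = [52, 48, 55, 60, 41, 47, 58, 50]
print(micro_deviation(history, 300))  # True -> raise a low-noise review task
```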

read more →

Mon, September 22, 2025

AI-powered phishing uses fake CAPTCHA pages to evade detection

🤖 AI-driven phishing campaigns are increasingly using convincing fake CAPTCHA pages to bypass security filters and trick users into revealing credentials. Trend Micro found these AI-generated pages hosted on developer platforms such as Lovable, Netlify, and Vercel, with activity observed since January and a renewed spike in August. Attackers exploit low-friction hosting, platform credibility, and AI coding assistants to rapidly clone brand-like pages that first present a CAPTCHA, then redirect victims to credential-harvesting forms. Organizations should combine behavioural detection, hosting-provider safeguards, and phishing-resistant authentication to reduce risk.

read more →

Mon, September 22, 2025

Protect AI Development Using Falcon Cloud Security

🔒 Falcon Cloud Security provides end-to-end protection for AI development pipelines by embedding AI detection into CI/CD workflows, scanning container images, and surfacing AI-related packages and CVEs in real time. It extends visibility to cloud model services — including AWS SageMaker and Bedrock, Azure AI, and Google Vertex AI — revealing model provenance, dependencies, and API usage. Runtime inventory ties build-time detections to live containers so teams can prioritize fixes, govern models, and maintain delivery velocity without compromising security.

read more →

Mon, September 22, 2025

Agentic AI Risks and Governance: A Major CISO Challenge

⚠️ Agentic AI is proliferating inside enterprises, embedding autonomous agents into development, customer support, process automation, and employee workflows. Security experts warn these systems create substantial visibility and governance gaps: organizations often do not know where agents run, what data they access, or how independent their actions are. Key risks include excessive autonomy, uncontrolled data sharing among agents, third-party integration vulnerabilities, and the potential for agents to enable or mimic multi-stage attacks. CISOs should prioritize real-time observability, strict governance, secure-by-design development, and cross-functional coordination to mitigate these threats.

read more →

Sat, September 20, 2025

Researchers Find GPT-4-Powered MalTerminal Malware

🛡️ SentinelOne researchers disclosed MalTerminal, a Windows binary that integrates OpenAI GPT-4 via a deprecated chat completions API to dynamically generate either ransomware or a reverse shell. The sample, presented at LABScon 2025 and accompanied by Python scripts and a defensive utility called FalconShield, appears to be an early — possibly pre-November 2023 — example of LLM-embedded malware. There is no evidence it was deployed in the wild, suggesting a proof-of-concept or red-team tool. The finding highlights operational risks as LLMs are embedded into offensive tooling and phishing chains.

read more →

Fri, September 19, 2025

ShadowLeak zero-click exfiltrates Gmail via ChatGPT Agent

🔒 Radware disclosed a zero-click vulnerability dubbed ShadowLeak in OpenAI's Deep Research agent that can exfiltrate Gmail inbox data to an attacker-controlled server via a single crafted email. The flaw enables service-side leakage by causing the agent's autonomous browser to visit attacker URLs and inject harvested PII without rendering content or user interaction. Radware reported the issue in June; OpenAI fixed it silently in August and acknowledged resolution in September.

read more →

Fri, September 19, 2025

Automating Alert Triage and SOP Execution with AI Platform

🤖 Tines published a prebuilt workflow that automates security alert triage by using AI agents to identify alert types, find relevant SOPs in Confluence, and execute remediation steps across integrated tools. The two-agent design creates structured case records, documents every action, and notifies on-call staff via Slack. The workflow supports integrations such as CrowdStrike, Okta, VirusTotal and others, and is available in Tines' Community Edition for testing.
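
The two-agent pattern can be sketched generically: a classifier agent feeds an SOP-executing agent that records every action on the case. The stub rules and step names below are hypothetical illustrations and do not reflect Tines' actual APIs.

```python
from dataclasses import dataclass, field

@dataclass
class CaseRecord:
    alert: dict
    alert_type: str = "unknown"
    actions: list[str] = field(default_factory=list)  # audit trail of every step

def classify_alert(alert: dict) -> str:
    """Agent 1 (sketched here as a rule stub): identify the alert type.
    In practice this would be an LLM call grounded in the raw alert."""
    if "failed_login" in alert.get("signature", ""):
        return "credential_attack"
    return "unknown"

def execute_sop(case: CaseRecord, sop_steps: list[str]) -> None:
    """Agent 2: walk the retrieved SOP, documenting each remediation step."""
    for step in sop_steps:
        case.actions.append(f"executed: {step}")  # real steps would call CrowdStrike/Okta/etc.

alert = {"signature": "failed_login_burst", "user": "jdoe"}
case = CaseRecord(alert=alert, alert_type=classify_alert(alert))
execute_sop(case, ["lock account in IdP", "notify on-call via Slack"])
print(case.alert_type, case.actions)
```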

read more →

Fri, September 19, 2025

Attackers Use AI Platforms to Generate Fake CAPTCHAs

🔐 Trend Micro researchers report cybercriminals are using AI-powered site builders like Lovable, Vercel and Netlify to rapidly create convincing fake CAPTCHA pages. Seen since January 2025 with a sharp escalation from February to April, these pages make phishing links appear legitimate and can help evade automated scanners by presenting a CAPTCHA before redirecting users to credential-stealing sites. Recommended mitigations include employee education, redirect-chain analysis and monitoring trusted domains for abuse.
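
A minimal redirect-chain check might look like the sketch below, which follows HTTP-level hops and flags chains that start on a site-builder domain; it is a heuristic only, the hostnames and URL are placeholders, and JavaScript redirects triggered after the CAPTCHA click would need a headless browser instead.

```python
import requests

# Low-friction hosting domains of the kind named in the report (placeholders).
BUILDER_HOSTS = ("lovable.app", "netlify.app", "vercel.app")

def redirect_chain(url: str) -> list[str]:
    """Follow a link and return every hop so the full chain can be inspected."""
    resp = requests.get(url, allow_redirects=True, timeout=10)
    return [r.url for r in resp.history] + [resp.url]

def looks_suspicious(chain: list[str]) -> bool:
    """Heuristic: a chain that starts on a trusted builder domain but hops elsewhere."""
    return any(host in chain[0] for host in BUILDER_HOSTS) and len(chain) > 1

chain = redirect_chain("https://example.netlify.app/verify")  # placeholder URL
if looks_suspicious(chain):
    print("review:", " -> ".join(chain))
```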

read more →

Fri, September 19, 2025

OpenAI's $4 GPT Go Plan Poised to Expand Regions Soon

🚀 OpenAI has started expanding its $4 GPT Go plan beyond India, rolling out nudges to free-account users in Indonesia and India and signaling broader regional availability in the coming weeks. Product pages already list pricing in USD, EUR and GBP, suggesting a possible U.S. launch. GPT Go grants access to GPT-5, expanded messaging and uploads, faster image creation, longer memory and limited deep research; GPT Plus ($20) and Pro ($200) tiers provide increasingly advanced capabilities and higher limits.

read more →

Thu, September 18, 2025

Source-of-Truth Authorization for RAG Knowledge Bases

🔒 This post presents an architecture to enforce strong, source-of-truth authorization for Retrieval-Augmented Generation (RAG) knowledge bases using Amazon S3 Access Grants with Amazon Bedrock. It explains why vector DB metadata filtering is insufficient—permission changes can be delayed and complex identity memberships are hard to represent—and recommends validating permissions at the data source before returning chunks to an LLM. The blog includes a practical Python walkthrough for exchanging identity tokens, retrieving caller grant scopes, filtering returned chunks, and logging withheld items to reduce the risk of sensitive data leaking into LLM prompts.
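
A condensed sketch of that pattern, assuming IAM Identity Center identities and the boto3 S3 Access Grants API, might look like the following; the account ID, metadata key, and logging are placeholders, not the blog's exact code.

```python
import boto3

s3control = boto3.client("s3control")
ACCOUNT_ID = "111122223333"  # placeholder AWS account id

def caller_grant_scopes(identity_id: str) -> list[str]:
    """Ask S3 Access Grants (the source of truth) which prefixes this caller
    may read, instead of trusting metadata copied into the vector store."""
    resp = s3control.list_access_grants(
        AccountId=ACCOUNT_ID,
        GranteeType="DIRECTORY_USER",     # assumes IAM Identity Center users
        GranteeIdentifier=identity_id,
    )
    return [g["GrantScope"] for g in resp.get("AccessGrantsList", [])]

def filter_chunks(chunks: list[dict], scopes: list[str]) -> list[dict]:
    """Return only chunks whose source S3 URI sits under a granted scope,
    logging withheld items so near-misses stay auditable."""
    allowed = []
    for chunk in chunks:
        uri = chunk["metadata"]["source_uri"]       # hypothetical metadata key
        if any(uri.startswith(scope.rstrip("*")) for scope in scopes):
            allowed.append(chunk)
        else:
            print(f"withheld chunk from {uri}")     # swap for structured audit logs
    return allowed
```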

read more →

Thu, September 18, 2025

OpenAI enhances ChatGPT Search to rival Google AI results

🔎 OpenAI has rolled out an update to ChatGPT Search that improves accuracy, reliability, and link summarization to reduce hallucinations and make answers easier to verify. The search now better detects shopping intent, surfacing products when appropriate while keeping results focused for other queries, and it improves link summaries so users can follow back to sources. Answers are reformatted for quicker comprehension without sacrificing detail. OpenAI also added a GPT-5 Thinking toggle with adjustable 'juice' effort levels; the changes are rolling out gradually.

read more →

Thu, September 18, 2025

OpenAI adds user control over GPT-5 Thinking model options

⚙️ OpenAI is rolling out a toggle that lets Plus, Pro, and Business subscribers choose how much "thinking" the GPT-5 Thinking model performs, trading off speed, cost, and depth. The simpler toggle UI replaces a tested slider and exposes internal "juice" effort levels — for example, Standard (juice=18) and Extended (64). Pro users also get Light (5) for very fast replies and Heavy (200) for the model's maximum reasoning depth.

read more →

Thu, September 18, 2025

ShadowLeak: AI agents can exfiltrate data undetected

⚠️ Researchers at Radware disclosed a vulnerability called ShadowLeak in the Deep Research module of ChatGPT that lets hidden, attacker-crafted instructions embedded in emails coerce an AI agent into exfiltrating sensitive data. The indirect prompt-injection technique hides commands using tiny fonts, white-on-white text or metadata and instructs the agent to encode and transmit results (for example, Base64-encoded lists of names and credit card numbers) to an attacker-controlled URL. Radware says the key risk is that exfiltration can occur from the model’s cloud backend, making detection by the affected organization very difficult; OpenAI was notified and implemented a fix, and Radware found the patch effective in subsequent tests.

read more →

Thu, September 18, 2025

How CISOs Can Build Effective AI Governance Programs

🛡️ AI's rapid enterprise adoption requires CISOs to replace inflexible bans with living governance that both protects data and accelerates innovation. The article outlines three practical components: gaining ground truth visibility with AI inventories, AIBOMs and model registries; aligning policies to the organization's speed so governance is executable; and making governance sustainable by provisioning secure tools and rewarding compliant behavior. It highlights SANS guidance and training to help operationalize these approaches.
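
As a rough illustration of what one AIBOM/inventory record might capture, here is a hypothetical, minimal schema; real AIBOM formats are richer and still evolving.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class AIBOMEntry:
    """One inventory row: a hypothetical, minimal AI bill-of-materials record."""
    model_name: str
    provider: str
    version: str
    data_classes: list[str]  # what data classes the model may touch
    owner: str               # accountable team, so policy maps to a person

entry = AIBOMEntry("gpt-4o", "OpenAI", "2024-08-06", ["public", "internal"], "sec-eng")
print(json.dumps(asdict(entry), indent=2))  # feeds the registry / audit pipeline
```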

read more →

Thu, September 18, 2025

Mind the Gap: TOCTOU Vulnerabilities in LLM-Enabled Agents

⚠️ A new study, “Mind the Gap,” examines time-of-check to time-of-use (TOCTOU) flaws in LLM-enabled agents and introduces TOCTOU-Bench, a 66-task benchmark. The authors demonstrate practical attacks such as malicious configuration swaps and payload injection and evaluate defenses adapted from systems security. Their mitigations—prompt rewriting, state integrity monitoring, and tool-fusing—achieve up to 25% automated detection and materially reduce the attack window and executed vulnerabilities.
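
One simple realization of the paper's state-integrity idea is to fingerprint the artifact at time-of-check and re-verify it at time-of-use; the sketch below is illustrative, not the authors' code.

```python
import hashlib
from pathlib import Path

def fingerprint(path: Path) -> str:
    """Hash the artifact the agent validated, so a later swap is detectable."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def checked_execute(config: Path, run) -> None:
    digest = fingerprint(config)          # time-of-check
    # ... agent plans, calls other tools, time passes: the TOCTOU window ...
    if fingerprint(config) != digest:     # time-of-use: re-verify integrity
        raise RuntimeError("artifact changed between check and use; aborting")
    run(f"executing with {config}")

# Usage: checked_execute(Path("deploy.yaml"), print)
```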

read more →

Wed, September 17, 2025

Securing Remote MCP Servers on Google Cloud Platform

🔒 A centralized proxy architecture on Google Cloud can secure remote Model Context Protocol (MCP) servers by intercepting tool calls and enforcing consistent policies across deployments. Author Lanre Ogunmola outlines five core MCP risks — unauthorized tool exposure, session hijacking, tool shadowing, token theft and authentication bypass — and recommends an MCP proxy (Cloud Run, GKE, or Apigee) integrated with Cloud Armor, Secret Manager, and identity services for access control, secret scanning, and monitoring. The post emphasizes layered defenses including Model Armor for prompt/response screening and centralized logging to reduce blind spots and operational overhead.
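
In miniature, the proxy's policy-enforcement point might look like the sketch below, which intercepts MCP "tools/call" JSON-RPC requests against a hypothetical allowlist; real deployments would layer in the identity, logging, and Model Armor screening the post describes.

```python
import json

ALLOWED_TOOLS = {"search_docs", "get_ticket"}  # hypothetical per-team policy

def proxy_tool_call(raw: str, forward) -> str:
    """Intercept an MCP JSON-RPC request, enforce tool policy, then forward.
    Sketch only: sessions, auth, and audit logging are omitted."""
    req = json.loads(raw)
    if req.get("method") == "tools/call":
        tool = req.get("params", {}).get("name")
        if tool not in ALLOWED_TOOLS:
            return json.dumps({
                "jsonrpc": "2.0", "id": req.get("id"),
                "error": {"code": -32001, "message": f"tool '{tool}' denied by policy"},
            })
    return forward(raw)  # pass through to the upstream MCP server
```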

read more →