All news in category "AI and Security Pulse"
Fri, September 26, 2025
Microsoft Photos adds AI Auto-Categorization on Windows
🤖 Microsoft is testing a new AI-powered Auto-Categorization capability in Microsoft Photos on Windows 11, rolling out to Copilot+ PCs across all Windows Insider channels. The feature automatically groups images into predefined folders — screenshots, receipts, identity documents, and notes — using a language-agnostic model that recognizes document types regardless of image language. Users can locate categorized items via the left navigation pane or Search bar, manually reassign categories, and submit feedback to improve accuracy. Microsoft has not yet clarified whether image processing happens locally or is sent to its servers.
Fri, September 26, 2025
How Scammers Use AI: Deepfakes, Phishing and Scams
⚠️ Generative AI is enabling scammers to produce highly convincing deepfakes, authentic-looking phishing sites, and automated voice bots that facilitate fraud and impersonation. Kaspersky explains how techniques such as AI-driven catfishing and “pig butchering” scale emotional manipulation, while browser AI agents and automated callers can inadvertently vouch for or even complete fraudulent transactions. The post recommends concrete defenses: verify contacts through separate channels, refuse to share codes or card numbers, request live verification during calls, limit AI agent permissions, and use reliable security tools with link‑checking.
Fri, September 26, 2025
Hidden Cybersecurity Risks of Deploying Generative AI
⚠️ Organizations eager to deploy generative AI often underestimate the cybersecurity risks, from AI-driven phishing to model manipulation and deepfakes. The article, sponsored by Acronis, warns that many firms—especially smaller businesses—lack processes to assess AI security before deployment. It urges embedding security into development pipelines, continuous model validation, and unified defenses across endpoints, cloud and AI workloads.
Fri, September 26, 2025
Generative AI Infrastructure Faces Growing Cyber Risks
🛡️ A Gartner survey found 29% of security leaders reported generative AI applications in their organizations were targeted by cyberattacks over the past year, and 32% said prompt-structure vulnerabilities had been deliberately exploited. Chatbot assistants are singled out as particularly vulnerable to prompt-injection and hostile prompting. Additionally, 62% of companies experienced deepfake attacks, often combined with social engineering or automated techniques. Gartner recommends strengthening core controls and applying targeted measures for each new risk category rather than pursuing radical overhauls. The survey of 302 security leaders was conducted March–May 2025 across North America, EMEA and Asia‑Pacific.
Fri, September 26, 2025
The Dawn of the Agentic SOC: Reimagining Security Now
🔐 At Fal.Con 2025, CrowdStrike CEO George Kurtz outlined a shift from reactive SOCs to an agentic model where intelligent agents reason, decide, act, and learn across domains. CrowdStrike introduced seven AI agents within its Charlotte framework for exposure prioritization, malware analysis, hunting, search, correlation rules, data transformation and workflow generation, and is enabling customers to build custom agents. The company highlights a proprietary "data moat" of trillions of telemetry events and annotated MDR threat data as the foundation for training agents, and announced the acquisition of Pangea to protect AI agents and launch AIDR (AI Detection and Response). The vision places humans as orchestrators overseeing fleets of agents, accelerating detection and response while preserving accountability.
Thu, September 25, 2025
Adapting Enterprise Risk Management for Generative AI
🛡️ This post explains how to adapt enterprise risk management frameworks to safely scale cloud-based generative AI, combining governance foundations with practical controls. It emphasizes the cloud as the foundational infrastructure and identifies differences from on‑premises models that change risk profiles and vendor relationships. The guidance maps traditional ERMF elements to AI-specific controls across fairness, explainability, privacy/security, safety, controllability, veracity/robustness, governance, and transparency, and references tools such as Amazon Bedrock Guardrails, SageMaker Clarify, and the ISO/IEC 42001 standard to operationalize those controls.
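For illustration, a minimal sketch of attaching a Bedrock Guardrail at inference time, assuming AWS credentials are configured and a guardrail has already been created; the guardrail ID, version, and model ID below are placeholders.

```python
# Minimal sketch: invoking a Bedrock model with a pre-created guardrail attached.
# Assumes AWS credentials are configured and a guardrail already exists;
# the guardrail ID/version and model ID below are placeholders.
import json

import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock.invoke_model(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # placeholder model
    guardrailIdentifier="gr-EXAMPLE123",               # placeholder guardrail ID
    guardrailVersion="1",
    body=json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 256,
        "messages": [{"role": "user", "content": "Summarize our AI use policy."}],
    }),
)

print(json.loads(response["body"].read()))
```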
Thu, September 25, 2025
Enabling Enterprise Risk Management for Generative AI
🔒 This article frames responsible generative AI adoption as a core enterprise concern and urges business leaders, CROs, and CIAs to embed controls across the ERM lifecycle. It highlights unique risks—non‑deterministic outputs, deepfakes, and layered opacity—and maps mitigation approaches using AWS CAF for AI, ISO/IEC 42001, and the NIST AI RMF. The post advocates enterprise‑level governance rather than project‑by‑project fixes to sustain innovation while managing harm.
Thu, September 25, 2025
Malicious MCP Server Update Exfiltrated Emails to Developer
⚠️ Koi Security has reported that a widely used Model Context Protocol (MCP) implementation, Postmark MCP Server by @phanpak, introduced a malicious change in version 1.0.16 that silently copied emails to an external server. The package, distributed via npm and embedded into hundreds of developer workflows, had more than 1,500 weekly downloads. Users who installed v1.0.16 or later are advised to remove the package immediately and rotate any potentially exposed credentials.
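For readers checking their own projects, a minimal sketch that scans a package-lock.json for the affected versions; the npm package name postmark-mcp follows Koi Security's report, and the simple version parse ignores pre-release tags.

```python
# Sketch: check a project's package-lock.json for the affected package/version.
# The package name "postmark-mcp" and the >=1.0.16 cutoff follow Koi Security's
# report; adjust both if your advisory source differs.
import json
from pathlib import Path

AFFECTED = "postmark-mcp"
CUTOFF = (1, 0, 16)

def parse(version: str) -> tuple:
    # Naive semver parse; ignores pre-release/build tags.
    return tuple(int(p) for p in version.split("-")[0].split("."))

lock = json.loads(Path("package-lock.json").read_text())
for path, meta in lock.get("packages", {}).items():
    if path.endswith(f"node_modules/{AFFECTED}"):
        version = meta.get("version", "0.0.0")
        if parse(version) >= CUTOFF:
            print(f"{path}@{version}: affected - remove it and rotate credentials")
        else:
            print(f"{path}@{version}: predates the malicious release")
```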
Thu, September 25, 2025
AI Coding Assistants Elevate Deep Security Risks Now
⚠️ Research and expert interviews indicate that AI coding assistants cut trivial syntax errors but increase more costly architectural and privilege-related flaws. Apiiro found AI-generated code produced fewer shallow bugs yet more misconfigurations, exposed secrets, and larger multi-file pull requests that overwhelm reviewers. Experts urge preserving human judgment, adding integrated security tooling, strict review policies, and traceability for AI outputs to avoid automating risk at scale.
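As a sketch of what "integrated security tooling" can look like in practice, here is a minimal pre-merge secret scan; the regex patterns are illustrative only, and real pipelines should pair a dedicated scanner with human review.

```python
# Minimal sketch of a pre-merge secret scan for AI-generated code.
# The patterns are illustrative, not exhaustive; production pipelines should
# use a dedicated scanner (e.g. gitleaks) plus human review.
import re
import sys

PATTERNS = {
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "Private key header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "Generic token": re.compile(r"(?i)(api[_-]?key|secret)\s*[:=]\s*['\"][^'\"]{16,}['\"]"),
}

def scan(path: str) -> int:
    hits = 0
    with open(path, encoding="utf-8", errors="ignore") as fh:
        for lineno, line in enumerate(fh, 1):
            for label, pattern in PATTERNS.items():
                if pattern.search(line):
                    print(f"{path}:{lineno}: possible {label}")
                    hits += 1
    return hits

if __name__ == "__main__":
    total = sum(scan(p) for p in sys.argv[1:])
    sys.exit(1 if total else 0)  # nonzero exit blocks the merge
```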
Wed, September 24, 2025
OpenAI Is Testing GPT-Alpha, a GPT-5-Based AI Agent
🧪 OpenAI is internally testing a new AI agent, GPT-Alpha, built on a special GPT-5 variant and briefly exposed to users in an accidental push. A screenshot shared on X showed an 'Agent with Truncation' listing under Alpha Models, and the agent's system prompt outlines capabilities to browse the web, generate and edit images, write, run, and debug code, and create or edit documents, spreadsheets, and slides. OpenAI says the agent uses GPT-5 for advanced reasoning and tool use and may initially be offered as a paid feature due to increased compute demands.
Wed, September 24, 2025
GenSec CTF at DEF CON: Accelerating AI in Security
🔒 At DEF CON 33, Google and Airbus hosted the GenSec Capture the Flag (CTF) to promote human–AI collaboration and accelerate adoption of AI in cybersecurity workflows. Nearly 500 participants completed introductory challenges, 23% used AI for security for the first time, and 85% found the event useful for learning practical AI applications. The CTF also featured Sec-Gemini as an optional assistant in the UI; 77% of respondents rated it very or extremely helpful, and organizers are incorporating feedback into future iterations.
Wed, September 24, 2025
Responsible AI Bot Principles to Protect Web Content
🛡️ Cloudflare proposes five practical principles to guide responsible AI bot behavior and protect web publishers, users, and infrastructure. The framework stresses public disclosure, reliable self-identification (moving toward cryptographic verification such as Web Bot Auth), a declared single purpose for crawlers, and respect for operator preferences via robots.txt or headers. Operators must also avoid deceptive or high-volume crawling, and Cloudflare invites multi-stakeholder collaboration to refine and adopt these norms.
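A minimal sketch of the self-identification and operator-preference principles using only the Python standard library; the bot name and URLs are placeholders, and the cryptographic Web Bot Auth step is not shown.

```python
# Sketch of two principles: a self-identifying crawler that checks robots.txt
# before fetching. The bot name and target URL are placeholders; Web Bot Auth
# (cryptographic identification) is not shown here.
from urllib import robotparser, request

BOT_UA = "ExampleAIBot/1.0 (+https://example.com/bot)"  # hypothetical bot
url = "https://example.com/some/page"

rp = robotparser.RobotFileParser("https://example.com/robots.txt")
rp.read()

if rp.can_fetch(BOT_UA, url):
    req = request.Request(url, headers={"User-Agent": BOT_UA})
    with request.urlopen(req) as resp:
        print(resp.status, len(resp.read()), "bytes")
else:
    print("Disallowed by robots.txt; skipping", url)
```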
Tue, September 23, 2025
The AI Fix Episode 69: Oddities, AI Songs and Risks
🎧 In episode 69 of The AI Fix, Graham Cluley and Mark Stockley mix lighthearted oddities with substantive AI developments. The hosts discuss viral “brain rot” videos, an AI‑generated J‑Pop song, Norway’s experiment trusting $1.9 trillion to an AI investor, and Florida’s use of robotic rabbits to deter Burmese pythons. The show also highlights its first AI feedback, a merch sighting, and data on ChatGPT adoption, while reflecting on uneven geographic and enterprise AI uptake and recent academic research.
Tue, September 23, 2025
Two-Thirds of Businesses Hit by Deepfake Attacks in 2025
🛡️ A Gartner survey finds 62% of organisations experienced a deepfake attack in the past 12 months, with common techniques including social-engineering impersonation and attacks on biometric verification. The report also shows 32% of firms faced attacks on AI applications via prompt manipulation. Gartner’s Akif Khan urges integrating deepfake detection into collaboration tools and strengthening controls through awareness training, simulations and application-level authorisation with phishing-resistant MFA. Vendor solutions are emerging but remain early-stage, so operational effectiveness is not yet proven.
Tue, September 23, 2025
Self-Driving IT Security: Preparing for Autonomous Defense
🛡️ IT security is entering a new era where autonomy augments human defenders, moving beyond scripted automation to adaptive, AI-driven responses. Traditional playbooks and scripts are limited because they only follow defined rules, while attackers continuously change tactics. Organizations must adopt self-driving security systems that combine real-time telemetry, machine learning, and human oversight to improve detection, reduce response time, and manage risk.
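To make the contrast with scripted playbooks concrete, here is a toy sketch that learns a baseline from telemetry and routes outliers to an analyst; the synthetic data, feature choices, and scikit-learn model are illustrative assumptions.

```python
# Sketch of the adaptive idea: learn a baseline from telemetry instead of
# hard-coding thresholds, and route outliers to a human review queue.
# Synthetic data and feature choices are illustrative; assumes scikit-learn.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Features per login event: [hour of day, MB transferred, failed attempts]
baseline = np.column_stack([
    rng.normal(10, 2, 1000),   # daytime logins
    rng.normal(5, 1.5, 1000),  # modest transfers
    rng.poisson(0.2, 1000),    # rare failures
])

model = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

new_events = np.array([
    [11, 5.2, 0],    # ordinary
    [3, 220.0, 9],   # off-hours bulk transfer with failures
])
for event, verdict in zip(new_events, model.predict(new_events)):
    if verdict == -1:
        print("escalate to analyst:", event)  # human stays in the loop
    else:
        print("auto-clear:", event)
```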
Tue, September 23, 2025
CISO’s Guide to Rolling Out Generative AI at Scale
🔐 Selecting an AI platform is necessary but insufficient; successful enterprise adoption hinges on how the system is introduced, integrated, and supported. CISOs must publish a clear, accessible AI use policy that defines permitted behaviors, off-limits data, and auditing expectations. Provision access by default using SSO and SCIM, pair rollout with vendor-led demos and role-focused training, and provide living user guides. Build an AI champions network, harvest practical productivity use cases, limit unmanaged public tools, and keep governance proactive and supportive.
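A minimal sketch of what "provision access by default" can look like over SCIM 2.0 (RFC 7644); the endpoint, token, and the vendor's exact schema support are assumptions to check against your platform's documentation.

```python
# Sketch of default provisioning via SCIM 2.0 (RFC 7644). The endpoint,
# bearer token, and the vendor's exact schema support are assumptions;
# check your AI platform's SCIM documentation.
import requests

SCIM_BASE = "https://ai-platform.example.com/scim/v2"  # placeholder
TOKEN = "REDACTED"                                      # placeholder

payload = {
    "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
    "userName": "jdoe@example.com",
    "name": {"givenName": "Jane", "familyName": "Doe"},
    "active": True,
}

resp = requests.post(
    f"{SCIM_BASE}/Users",
    json=payload,
    headers={"Authorization": f"Bearer {TOKEN}",
             "Content-Type": "application/scim+json"},
    timeout=10,
)
resp.raise_for_status()
print("provisioned:", resp.json().get("id"))
```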
Tue, September 23, 2025
Six Novel Ways to Apply AI in Cybersecurity Defense
🛡️ AI is being applied across security operations in novel ways to predict, simulate, and deter attacks. Experts from BforeAI, NopalCyber, Hughes, XYPRO, AirMDR, and Kontra outline six approaches — predictive scoring, GAN-driven attack simulation, AI analyst assistants, micro-deviation detection, automated triage and response, and proactive generative deception — that aim to reduce alert fatigue, accelerate investigations, and increase attacker costs. Successful deployments depend on accurate ground truth data, continuous model updates, and significant compute and engineering investment.
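As one concrete illustration (proactive deception only, the simplest of the six), here is a sketch that generates decoy AWS-style credentials to plant as honeytokens; the keys are random mimicry tied to an alerting rule, not real credentials.

```python
# Sketch of one of the six approaches: proactive deception via decoy
# credentials (honeytokens). The AKIA-style format is mimicry only; the
# keys are random and map to an alerting rule, not to a real account.
import secrets
import string

def make_decoy_key() -> dict:
    alphabet = string.ascii_uppercase + string.digits
    access_key = "AKIA" + "".join(secrets.choice(alphabet) for _ in range(16))
    secret_key = secrets.token_urlsafe(30)
    return {"aws_access_key_id": access_key, "aws_secret_access_key": secret_key}

decoy = make_decoy_key()
# Plant in a config file an intruder would read; any later use of this
# key ID in audit logs (or against a canary endpoint) should page the SOC.
print(f"[default]\naws_access_key_id = {decoy['aws_access_key_id']}\n"
      f"aws_secret_access_key = {decoy['aws_secret_access_key']}")
```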
Mon, September 22, 2025
AI-Powered Phishing Uses Fake CAPTCHA Pages to Evade Detection
🤖 AI-driven phishing campaigns are increasingly using convincing fake CAPTCHA pages to bypass security filters and trick users into revealing credentials. Trend Micro found these AI-generated pages hosted on developer platforms such as Lovable, Netlify, and Vercel, with activity observed since January and a renewed spike in August. Attackers exploit low-friction hosting, platform credibility, and AI coding assistants to rapidly clone brand-like pages that first present a CAPTCHA, then redirect victims to credential-harvesting forms. Organizations should combine behavioural detection, hosting-provider safeguards, and phishing-resistant authentication to reduce risk.
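A naive heuristic sketch of the hosting-pattern signal: flag brand keywords in subdomains of the free platforms Trend Micro observed. The brand list is a placeholder, and real detection would add behavioural and reputation signals.

```python
# Naive heuristic sketch: flag brand keywords appearing in subdomains of the
# free hosting platforms Trend Micro observed. Brand list is a placeholder;
# real detection should combine behavioural and reputation signals.
from urllib.parse import urlparse

FREE_HOSTS = (".netlify.app", ".vercel.app", ".lovable.app")
BRANDS = ("microsoft", "office365", "paypal", "docusign")  # examples only

def looks_suspicious(url: str) -> bool:
    host = urlparse(url).hostname or ""
    return host.endswith(FREE_HOSTS) and any(b in host for b in BRANDS)

for url in [
    "https://paypal-secure-login.netlify.app/captcha",
    "https://my-cooking-blog.vercel.app/",
]:
    print(url, "->", "FLAG" if looks_suspicious(url) else "ok")
```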
Mon, September 22, 2025
Protect AI Development Using Falcon Cloud Security
🔒 Falcon Cloud Security provides end-to-end protection for AI development pipelines by embedding AI detection into CI/CD workflows, scanning container images, and surfacing AI-related packages and CVEs in real time. It extends visibility to cloud model services — including AWS SageMaker and Bedrock, Azure AI, and Google Vertex AI — revealing model provenance, dependencies, and API usage. Runtime inventory ties build-time detections to live containers so teams can prioritize fixes, govern models, and maintain delivery velocity without compromising security.
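CrowdStrike's implementation is proprietary; purely as a generic sketch of the build-time idea, the snippet below flags AI-related packages in a requirements.txt so they can be queued for provenance and CVE review (the package list is illustrative).

```python
# Generic sketch of the build-time idea (not CrowdStrike's implementation):
# flag AI-related packages in a requirements.txt so reviewers can check
# provenance and known CVEs before the image ships. Package list is illustrative.
from pathlib import Path

AI_PACKAGES = {"torch", "tensorflow", "transformers", "langchain", "openai"}

def flag_ai_deps(requirements: str) -> None:
    for line in Path(requirements).read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        # Strip version pins and extras to recover the bare package name.
        name = line.split("==")[0].split(">=")[0].split("[")[0].lower()
        if name in AI_PACKAGES:
            print(f"AI dependency found: {line} -> queue for CVE/provenance review")

flag_ai_deps("requirements.txt")
```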
Mon, September 22, 2025
Agentic AI Risks and Governance: A Major CISO Challenge
⚠️ Agentic AI is proliferating inside enterprises, embedding autonomous agents into development, customer support, process automation, and employee workflows. Security experts warn these systems create substantial visibility and governance gaps: organizations often do not know where agents run, what data they access, or how independent their actions are. Key risks include risky autonomy, uncontrolled data sharing among agents, third-party integration vulnerabilities, and the potential for agents to enable or mimic multi-stage attacks. CISOs should prioritize real-time observability, strict governance, secure-by-design development, and cross-functional coordination to mitigate these threats.
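As a sketch of one observability primitive, here is a decorator that writes an audit record for every tool call an agent makes; the record fields and JSON-lines sink are assumptions, and production systems would ship these events to a SIEM.

```python
# Sketch of a basic observability primitive: audit-log every tool call an
# agent makes. The record fields and the JSON-lines sink are assumptions;
# production systems would ship these to a SIEM.
import functools
import json
import time

def audited(tool):
    @functools.wraps(tool)
    def wrapper(*args, **kwargs):
        record = {
            "ts": time.time(),
            "tool": tool.__name__,
            "args": repr(args),
            "kwargs": repr(kwargs),
        }
        try:
            result = tool(*args, **kwargs)
            record["status"] = "ok"
            return result
        except Exception as exc:
            record["status"] = f"error: {exc}"
            raise
        finally:
            with open("agent_audit.jsonl", "a") as log:
                log.write(json.dumps(record) + "\n")
    return wrapper

@audited
def send_email(to: str, body: str) -> str:
    return f"sent to {to}"  # stand-in for a real agent tool

send_email("user@example.com", "quarterly report")
```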