Category Banner

All news in category "AI and Security Pulse"

Wed, October 22, 2025

Face Recognition Failures Affect Nonstandard Faces

⚠️ Bruce Schneier highlights how facial recognition systems frequently fail people with nonstandard facial features, producing concrete barriers to services and daily technologies. Those interviewed report being denied access to public and financial services and encountering nonfunctional phone unlocking and social media filters. The author argues the root cause is often design choices by engineers who trained models on a narrow range of faces and calls for inclusive design plus accessible backup systems when biometric methods fail.

read more →

Wed, October 22, 2025

Four Bottlenecks Slowing Enterprise GenAI Adoption

🔒 Since ChatGPT’s 2022 debut, enterprises have rapidly launched GenAI pilots but struggle to convert experimentation into measurable value — only 3 of 37 pilots succeed. The article identifies four critical bottlenecks: security & data privacy, observability, evaluation & migration readiness, and secure business integration. It recommends targeted controls such as confidential compute, fine‑grained agent permissions, distributed tracing and replay environments, continuous evaluation pipelines and dual‑run migrations, plus policy‑aware integrations and impact analytics to move pilots into reliable production.

read more →

Tue, October 21, 2025

DeepSeek Privacy and Security: What Users Should Know

🔒 DeepSeek collects extensive interaction data — chats, images and videos — plus account details, IP address and device/browser information, and retains it for an unspecified period under a vague “retain as long as needed” policy. The service operates under Chinese jurisdiction, so stored chats may be accessible to local authorities and have been observed on China Mobile servers. Users can disable model training in web and mobile Data settings, export or delete chats (export is web-only), or run the open-source model locally to avoid server-side retention, but local deployment and deletion have trade-offs and require device protections.

read more →

Tue, October 21, 2025

The AI Fix #73: Gemini gambling, poisoning LLMs and fallout

🧠 In episode 73 of The AI Fix, hosts Graham Cluley and Mark Stockley explore a sweep of recent AI developments, from the rise of AI-generated content to high-profile figures relying on chatbots. They discuss research suggesting Google Gemini exhibits behaviours resembling pathological gambling and report on a Gemma-style model uncovering a potential cancer therapy pathway. The show also highlights legal and security concerns— including a lawyer criticised for repeated AI use, generals consulting chatbots, and techniques for poisoning LLMs with only a few malicious samples.

read more →

Tue, October 21, 2025

Securing AI in Defense: Trust, Identity, and Controls

🔐 AI promises stronger cyber defense but expands the attack surface if not governed properly. Organizations must secure models, data pipelines, and agentic systems with the same rigor applied to critical infrastructure. Identity is central: treat every model or autonomous agent as a first‑class identity with scoped credentials, strong authentication, and end‑to‑end audit logging. Adopt layered controls for access, data, deployment, inference, monitoring, and model integrity to mitigate threats such as prompt injection, model poisoning, and credential leakage.

read more →

Mon, October 20, 2025

AI-Powered Phishing Detection: Next-Gen Security Engine

🛡️ Check Point introduces a continuously trained AI engine that analyzes website content, structure, and authentication flows to detect phishing with high accuracy. Integrated with ThreatCloud AI, it delivers protection across Quantum gateways, Harmony Email, Endpoint, and Harmony Mobile. The model learns from millions of domains and real-time telemetry to adapt to new evasion techniques. Early results indicate improved detection of brand impersonation and credential-harvesting pages.

read more →

Mon, October 20, 2025

Agentic AI and the OODA Loop: The Integrity Problem

🛡️ Bruce Schneier and Barath Raghavan argue that agentic AIs run repeated OODA loops—Observe, Orient, Decide, Act—over web-scale, adversarial inputs, and that current architectures lack the integrity controls to handle untrusted observations. They show how prompt injection, dataset poisoning, stateful cache contamination, and tool-call vectors (e.g., MCP) let attackers embed malicious control into ordinary inputs. The essay warns that fixing hallucinations is insufficient: we need architectural integrity—semantic verification, privilege separation, and new trust boundaries—rather than surface patches.

read more →

Mon, October 20, 2025

Agent Factory Recap: Evaluating Agents, Tooling, and MAS

📡 This recap of the Agent Factory podcast episode, hosted by Annie Wang with guest Ivan Nardini, explains how to evaluate autonomous agents using a practical, full-stack approach. It outlines what to measure — final outcomes, chain-of-thought, tool use, and memory — and contrasts measurement techniques: ground truth, LLM-as-a-judge, and human review. The post demonstrates a 5-step debugging loop using the Agent Development Kit (ADK) and describes how to scale evaluation to production with Vertex AI.

read more →

Mon, October 20, 2025

ChatGPT privacy and security: data control guide 2025

🔒 This article examines what ChatGPT collects, how OpenAI processes and stores user data, and the controls available to limit use for model training. It outlines region-specific policies (EEA/UK/Switzerland vs rest of world), the types of data gathered — from account and device details to prompts and uploads — and explains memory, Temporary Chats, connectors and app integrations. Practical steps cover disabling training, deleting memories and chats, managing connectors and Work with Apps, and securing accounts with strong passwords and multi-factor authentication.

read more →

Sat, October 18, 2025

OpenAI Confirms GPT-6 Not Shipping in 2025; GPT-5 May Evolve

🤖 OpenAI says GPT-6 will not ship in 2025 but continues to iterate on its existing models. The company currently defaults to GPT-5 Auto, which dynamically routes queries between more deliberative reasoning models and the faster GPT-5-instant variant. OpenAI has issued multiple updates to GPT-5 since launch. After viral analyst claims that GPT-6 would arrive by year-end, a pseudonymous OpenAI employee and company representatives denied those reports, leaving room for interim updates such as a potential GPT-5.5.

read more →

Fri, October 17, 2025

Generative AI and Agentic Threats in Phishing Defense

🔒 Generative AI and agentic systems are transforming phishing and smishing into precise, multilingual, and adaptive threats. What were once rudimentary scams now leverage large language models, voice cloning, and autonomous agents to craft personalized attacks at scale. For CISOs and security teams this represents a strategic inflection point that demands updated detection, user education, and cross-functional incident response.

read more →

Fri, October 17, 2025

Preparing for AI, Quantum and Other Emerging Risks

🔐 Cybersecurity must evolve to meet rapid advances in agentic AI, quantum computing, low-code platforms and proliferating IoT endpoints. The author argues organizations should move from static defenses to adaptive, platform-based security that uses automation, continuous monitoring and AI-native protection to match attackers' speed. He urges early planning for post-quantum cryptography and closer collaboration with partners so security enables — rather than hinders — innovation.

read more →

Fri, October 17, 2025

Identity Security: Your First and Last Line of Defense

⚠️ Enterprises now face a reality where autonomous AI agents run with system privileges, executing code and accessing sensitive data without human oversight. Fewer than 4 in 10 AI agents are governed by identity security policies, creating serious visibility and control gaps. Mature identity programs that use AI-driven identity controls and real-time data sync deliver stronger ROI, reduced risk, and operational efficiency. CISOs must move IAM from compliance checkbox to strategic enabler.

read more →

Thu, October 16, 2025

CISOs Brace for an Escalating AI-versus-AI Cyber Fight

🔐AI-enabled attacks are rapidly shifting the threat landscape, with cybercriminals using deepfakes, automated phishing, and AI-generated malware to scale operations. According to Foundry's 2025 Security Priorities Study and CSO reporting, autonomous agents can execute full attack chains at machine speed, forcing defenders to adopt AI as a copilot backed by rigorous human oversight. Organizations are prioritizing human risk, verification protocols, and training to counter increasingly convincing AI-driven social engineering.

read more →

Thu, October 16, 2025

Microsoft Adds Copilot Actions for Agentic Windows Tasks

⚙️ Microsoft is introducing Copilot Actions, a Windows 11 Copilot feature that allows AI agents to operate on local files and applications by clicking, typing, scrolling and using vision and advanced reasoning to complete multi-step tasks. The capability will roll out to Windows Insiders in Copilot Labs, extending earlier web-based actions introduced in May. Agents run in isolated Agent Workspaces tied to standard Windows accounts, are cryptographically signed, and the feature is off by default.

read more →

Thu, October 16, 2025

Architectures, Risks, and Adoption of AI-SOC Platforms

🔍 This article frames the shift from legacy SOCs to AI-SOC platforms, arguing leaders must evaluate impact, transparency, and integration rather than pursue AI for its own sake. It outlines four architectural dimensions—functional domain, implementation model, integration architecture, and deployment—and prescribes a phased adoption path with concrete vendor questions. The piece flags key risks including explainability gaps, data residency, vendor lock-in, model drift, and cost surprises, and highlights mitigation through governance, human-in-the-loop controls, and measurable POCs.

read more →

Wed, October 15, 2025

58% of CISOs Boost AI Security Budgets in 2025 Nationwide

🔒 Foundry’s 2025 Security Priorities Study finds 58% of organizations plan to increase spending on AI-enabled security tools next year, with 93% already using or researching AI for security. Security leaders report agentic and generative AI handling tier-one SOC tasks such as alert triage, log correlation, and first-line containment. Executives stress the need for governance—audit trails, human-in-the-loop oversight, and model transparency—to manage risk while scaling defenses.

read more →

Wed, October 15, 2025

Ultimate Prompting Guide for Veo 3.1 on Vertex AI Preview

🎬 This guide introduces Veo 3.1, Google Cloud's improved generative video model available in preview on Vertex AI, and explains how to move beyond "prompt and pray" toward deliberate creative control. It highlights core capabilities—high-fidelity 720p/1080p output, variable clip lengths, synchronized dialogue and sound effects, and stronger image-to-video fidelity. The article presents a five-part prompting formula and detailed techniques for cinematography, soundstage direction, negative prompting, and timestamped scenes. It also describes advanced multi-step workflows that combine Gemini 2.5 Flash Image to produce consistent characters and controlled transitions, and notes SynthID watermarking and certain current limitations.

read more →

Wed, October 15, 2025

OpenAI Sora 2 Launches in Azure AI Foundry Platform

🎬 Azure AI Foundry now includes OpenAI's Sora 2 in public preview, providing developers with realistic video generation from text, images, and video inputs inside a unified, enterprise-ready environment. The integration offers synchronized multilingual audio, physics-based world simulation, and fine-grained creative controls for shots, scenes, and camera angles. Microsoft highlights enterprise-grade security, input/output content filters, and availability via API starting today at $0.10 per second for 720×1280 and 1280×720 outputs.

read more →

Wed, October 15, 2025

MCPTotal Launches Platform to Secure Enterprise MCPs

🔒 MCPTotal today launched a comprehensive platform designed to help organizations adopt and secure Model Context Protocol (MCP) servers with centralized hosting, authentication and credential vaulting. Its hub-and-gateway architecture functions as an AI-native firewall to monitor MCP traffic, enforce policies in real time, and provide a vetted catalog of hundreds of secure MCP servers. Employees can safely connect models to business systems like Slack and Gmail while security teams gain visibility, guardrails, auditing and multi-environment coverage to reduce supply chain, prompt-injection, rogue-server and data-exfiltration risks.

read more →