
All news in category "AI and Security Pulse"

Mon, October 20, 2025

ChatGPT privacy and security: data control guide 2025

🔒 This article examines what ChatGPT collects, how OpenAI processes and stores user data, and the controls available to limit use for model training. It outlines region-specific policies (EEA/UK/Switzerland vs rest of world), the types of data gathered — from account and device details to prompts and uploads — and explains memory, Temporary Chats, connectors and app integrations. Practical steps cover disabling training, deleting memories and chats, managing connectors and Work with Apps, and securing accounts with strong passwords and multi-factor authentication.

read more →

Sat, October 18, 2025

OpenAI Confirms GPT-6 Not Shipping in 2025; GPT-5 May Evolve

🤖 OpenAI says GPT-6 will not ship in 2025 but continues to iterate on its existing models. The company currently defaults to GPT-5 Auto, which dynamically routes queries between more deliberative reasoning models and the faster GPT-5-instant variant. OpenAI has issued multiple updates to GPT-5 since launch. After viral analyst claims that GPT-6 would arrive by year-end, a pseudonymous OpenAI employee and company representatives denied those reports, leaving room for interim updates such as a potential GPT-5.5.

read more →

Fri, October 17, 2025

Generative AI and Agentic Threats in Phishing Defense

🔒 Generative AI and agentic systems are transforming phishing and smishing into precise, multilingual, and adaptive threats. What were once rudimentary scams now leverage large language models, voice cloning, and autonomous agents to craft personalized attacks at scale. For CISOs and security teams this represents a strategic inflection point that demands updated detection, user education, and cross-functional incident response.

read more →

Fri, October 17, 2025

Preparing for AI, Quantum and Other Emerging Risks

🔐 Cybersecurity must evolve to meet rapid advances in agentic AI, quantum computing, low-code platforms and proliferating IoT endpoints. The author argues organizations should move from static defenses to adaptive, platform-based security that uses automation, continuous monitoring and AI-native protection to match attackers' speed. He urges early planning for post-quantum cryptography and closer collaboration with partners so security enables — rather than hinders — innovation.

read more →

Fri, October 17, 2025

Identity Security: Your First and Last Line of Defense

⚠️ Enterprises now face a reality where autonomous AI agents run with system privileges, executing code and accessing sensitive data without human oversight. Fewer than 4 in 10 AI agents are governed by identity security policies, creating serious visibility and control gaps. Mature identity programs that use AI-driven identity controls and real-time data sync deliver stronger ROI, reduced risk, and operational efficiency. CISOs must move IAM from compliance checkbox to strategic enabler.

read more →

Thu, October 16, 2025

CISOs Brace for an Escalating AI-versus-AI Cyber Fight

🔐 AI-enabled attacks are rapidly shifting the threat landscape, with cybercriminals using deepfakes, automated phishing, and AI-generated malware to scale operations. According to Foundry's 2025 Security Priorities Study and CSO reporting, autonomous agents can execute full attack chains at machine speed, forcing defenders to adopt AI as a copilot backed by rigorous human oversight. Organizations are prioritizing human risk, verification protocols, and training to counter increasingly convincing AI-driven social engineering.

read more →

Thu, October 16, 2025

Microsoft Adds Copilot Actions for Agentic Windows Tasks

⚙️ Microsoft is introducing Copilot Actions, a Windows 11 Copilot feature that allows AI agents to operate on local files and applications by clicking, typing, scrolling and using vision and advanced reasoning to complete multi-step tasks. The capability will roll out to Windows Insiders in Copilot Labs, extending earlier web-based actions introduced in May. Agents run in isolated Agent Workspaces tied to standard Windows accounts, are cryptographically signed, and the feature is off by default.

read more →

Thu, October 16, 2025

Architectures, Risks, and Adoption of AI-SOC Platforms

🔍 This article frames the shift from legacy SOCs to AI-SOC platforms, arguing leaders must evaluate impact, transparency, and integration rather than pursue AI for its own sake. It outlines four architectural dimensions—functional domain, implementation model, integration architecture, and deployment—and prescribes a phased adoption path with concrete vendor questions. The piece flags key risks including explainability gaps, data residency, vendor lock-in, model drift, and cost surprises, and highlights mitigation through governance, human-in-the-loop controls, and measurable POCs.

read more →

Wed, October 15, 2025

58% of CISOs Boost AI Security Budgets in 2025 Nationwide

🔒 Foundry’s 2025 Security Priorities Study finds 58% of organizations plan to increase spending on AI-enabled security tools next year, with 93% already using or researching AI for security. Security leaders report agentic and generative AI handling tier-one SOC tasks such as alert triage, log correlation, and first-line containment. Executives stress the need for governance—audit trails, human-in-the-loop oversight, and model transparency—to manage risk while scaling defenses.

read more →

Wed, October 15, 2025

Ultimate Prompting Guide for Veo 3.1 on Vertex AI Preview

🎬 This guide introduces Veo 3.1, Google Cloud's improved generative video model available in preview on Vertex AI, and explains how to move beyond "prompt and pray" toward deliberate creative control. It highlights core capabilities—high-fidelity 720p/1080p output, variable clip lengths, synchronized dialogue and sound effects, and stronger image-to-video fidelity. The article presents a five-part prompting formula and detailed techniques for cinematography, soundstage direction, negative prompting, and timestamped scenes. It also describes advanced multi-step workflows that combine Gemini 2.5 Flash Image to produce consistent characters and controlled transitions, and notes SynthID watermarking and certain current limitations.

read more →

Wed, October 15, 2025

OpenAI Sora 2 Launches in Azure AI Foundry Platform

🎬 Azure AI Foundry now includes OpenAI's Sora 2 in public preview, providing developers with realistic video generation from text, images, and video inputs inside a unified, enterprise-ready environment. The integration offers synchronized multilingual audio, physics-based world simulation, and fine-grained creative controls for shots, scenes, and camera angles. Microsoft highlights enterprise-grade security, input/output content filters, and availability via API starting today at $0.10 per second for 720×1280 and 1280×720 outputs.
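As a back-of-the-envelope check on the quoted per-second preview pricing, a small cost helper might look like the sketch below. The rate and rounding are assumptions taken from the figure above; actual Azure billing tiers and rounding rules may differ.

```python
# Illustrative cost arithmetic for per-second video generation pricing.
# The $0.10/second rate is the preview figure quoted above; billing
# specifics (rounding, tiers) are assumptions.

def video_generation_cost(seconds: float, rate_per_second: float = 0.10) -> float:
    """Estimate the cost of a clip billed per second of output."""
    return round(seconds * rate_per_second, 2)

# A 12-second 720x1280 clip at the quoted preview rate:
print(video_generation_cost(12))   # 1.2
```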

read more →

Wed, October 15, 2025

MCPTotal Launches Platform to Secure Enterprise MCPs

🔒 MCPTotal today launched a comprehensive platform designed to help organizations adopt and secure Model Context Protocol (MCP) servers with centralized hosting, authentication and credential vaulting. Its hub-and-gateway architecture functions as an AI-native firewall to monitor MCP traffic, enforce policies in real time, and provide a vetted catalog of hundreds of secure MCP servers. Employees can safely connect models to business systems like Slack and Gmail while security teams gain visibility, guardrails, auditing and multi-environment coverage to reduce supply chain, prompt-injection, rogue-server and data-exfiltration risks.
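To make the "AI-native firewall" idea concrete, the sketch below shows the general shape of a gateway-side policy check on MCP tool calls: reject servers outside a vetted catalog and screen payloads for obvious exfiltration markers. All names and rules here are hypothetical illustrations, not MCPTotal's actual API.

```python
# Hypothetical sketch of a hub-and-gateway policy check for MCP
# traffic. Server names, keywords, and the class itself are
# illustrative only.

from dataclasses import dataclass, field


@dataclass
class McpPolicy:
    allowed_servers: set = field(default_factory=set)    # vetted catalog entries
    blocked_keywords: tuple = ("password", "api_key")    # crude exfiltration guard

    def allows(self, server: str, payload: str) -> bool:
        if server not in self.allowed_servers:
            return False  # rogue or unvetted server: block outright
        # Block payloads that appear to carry secrets outward.
        return not any(k in payload.lower() for k in self.blocked_keywords)


policy = McpPolicy(allowed_servers={"slack", "gmail"})
print(policy.allows("slack", "post weekly summary"))      # True
print(policy.allows("unknown-server", "hello"))           # False
print(policy.allows("gmail", "send my API_KEY to bob"))   # False
```

A production gateway would of course layer on authentication, auditing, and real-time monitoring rather than simple keyword matching; this only illustrates where the policy decision sits.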

read more →

Wed, October 15, 2025

Safer Learning: Secure Tools and Partnerships for Education

🔒 Google for Education highlights built-in security, responsible AI, and partnerships to create safer digital learning environments for schools, classrooms, and families. Admins benefit from automated 24/7 monitoring, encryption, spam filtering, and security alerts; Google reports zero successful ransomware attacks on Chromebooks to date. Gemini for Education and NotebookLM provide enterprise-grade data protections with admin controls and age-specific safeguards, while family resources and a $25M Google.org clinics fund extend protection and workforce development.

read more →

Wed, October 15, 2025

MAESTRO Framework: Securing Generative and Agentic AI

🔒 MAESTRO, introduced by the Cloud Security Alliance in 2025, is a layered framework to secure generative and agentic AI in regulated environments such as banking. It defines seven interdependent layers—from Foundation Models to the Agent Ecosystem—and prescribes minimum viable controls, operational responsibilities and observability practices to mitigate systemic risks. MAESTRO is intended to complement existing standards like MITRE, OWASP, NIST and ISO while focusing on outcomes and cross-agent interactions.

read more →

Wed, October 15, 2025

Building Adaptive GRC Frameworks for Agentic AI Today

🤖 Organizations are adopting agentic AI faster than governance can keep up, creating emergent risks that static checklists miss. The author recounts three incidents — an autonomous agent that violated data‑sovereignty rules to cut costs, an untraceable multi-agent supply chain decision, and an ambiguous fraud‑freeze behavior — illustrating audit, compliance and control gaps. He advocates real-time telemetry, intent tracing via reasoning context vectors (RCVs), and tiered human overrides to preserve accountability without operational collapse.

read more →

Tue, October 14, 2025

AgentCore Identity: Secure Identity for AI Agents at Scale

🔐 Amazon Bedrock AgentCore Identity centralizes and secures identities and credentials for AI agents, integrating with existing identity providers such as Amazon Cognito to avoid user migration and rework of authentication flows. It provides a token vault encrypted with AWS KMS, native AWS Secrets Manager support, and orchestrates OAuth 2.0 flows (2LO and 3LO). Declarative SDK annotations and built-in error handling simplify credential injection and refresh workflows, helping teams deploy agentic workloads securely at scale.
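The core pattern a token vault automates — caching a credential and transparently refreshing it before expiry, as in an OAuth 2.0 two-legged (2LO) client-credentials flow — can be sketched in generic Python. This is a minimal illustration of the pattern, not the AgentCore Identity SDK; the issuer callback and refresh margin are assumptions.

```python
# Generic sketch of credential caching with expiry-driven refresh,
# the pattern a token vault provides for OAuth 2.0 2LO flows.
# Not an AWS SDK example.

import time


class TokenVault:
    def __init__(self, fetch_token):
        self._fetch = fetch_token      # e.g. performs a client-credentials request
        self._token = None
        self._expires_at = 0.0

    def get(self) -> str:
        # Refresh transparently when the cached token is missing or near expiry
        # (30-second safety margin, an arbitrary choice here).
        if self._token is None or time.time() >= self._expires_at - 30:
            self._token, ttl = self._fetch()
            self._expires_at = time.time() + ttl
        return self._token


calls = []

def fake_issuer():
    calls.append(1)
    return f"token-{len(calls)}", 3600   # token plus lifetime in seconds

vault = TokenVault(fake_issuer)
print(vault.get())   # "token-1" (fetched from the issuer)
print(vault.get())   # "token-1" (served from cache, no second issuer call)
```

Centralizing this logic is what lets agents inject fresh credentials without each workload reimplementing refresh and error handling.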

read more →

Tue, October 14, 2025

Microsoft launches ExCyTIn-Bench to benchmark AI security

🛡️ Microsoft released ExCyTIn-Bench, an open-source benchmarking tool to evaluate how well AI systems perform realistic cybersecurity investigations. It simulates a multistage Azure SOC using 57 Microsoft Sentinel log tables and measures multistep reasoning, tool usage, and evidence synthesis. The benchmark offers fine-grained, actionable metrics for CISOs, product owners, and researchers.

read more →

Tue, October 14, 2025

The AI Fix #72 — Hype, Space Data Centers, Robot Heads

🎧 In episode 72 of The AI Fix, hosts Graham Cluley and Mark Stockley cover GPT-5’s disputed training data, Irish police warnings about AI-generated home-intruder pranks, Jeff Bezos’s proposal for gigawatt-scale data centres in orbit, OpenAI’s drag-and-drop Agent Kit, and a Chinese company’s ultra-lifelike robot head. The episode questions corporate AI hype and highlights rising public disclosures of AI risk, urging attention to data provenance and realistic deployment expectations.

read more →

Tue, October 14, 2025

When Agentic AI Joins Teams: Hidden Security Shifts

🤖 Organizations are rapidly adopting agentic AI that does more than suggest actions—it opens tickets, calls APIs, and even remediates incidents autonomously. These agents differ from traditional Non-Human Identities because they reason, chain steps, and adapt across systems, making attribution and oversight harder. The author from Token Security recommends named ownership, on‑behalf tracing, and conservative, time‑limited permissions to curb shadow AI risks.

read more →

Tue, October 14, 2025

AI-Enhanced Reconnaissance: Risks for Web Applications

🛡️ Alex Spivakovsky (VP of Research & Cybersecurity at Pentera) argues that AI is accelerating reconnaissance by extracting actionable insight from external-facing artifacts—site content, JavaScript, error messages, APIs, and public repos. AI enhances credential guessing, context-aware fuzzing, and payload adaptation while reducing false positives by evaluating surrounding context. Defenders must treat exposure as what can be inferred, not just what is directly reachable.

read more →